Abstract
In general, the problem of building optimal convolutional codes under a given criterion is hard, especially when restrictions on the field size are imposed. In this paper, we confront the challenge of constructing an optimal 2D convolutional code for communication over an erasure channel. We propose a general construction method for these codes. Specifically, we provide an optimal construction with respect to the decoding method presented in the literature.
MSC:
94B10; 94B35
1. Introduction
Two-dimensional (2D) convolutional codes are suited for applications where data are organized in a two-dimensional grid, such as images. The theory of 2D convolutional codes is a generalization of the theory of one-dimensional (1D) convolutional codes, but it is much more involved. These codes were introduced in [1] and in [2] by Fornasini and Valcher, respectively. In [1], the authors considered 2D convolutional codes constituted by sequences indexed over the discrete plane with values in a vector space over a finite field, and they established the algebraic properties of these codes. In [2], the authors studied 2D convolutional codes constituted by sequences of compact support. An important property of a code is its distance, since it measures the code's error-correction capability. In [3], the authors defined the free distance of a 2D convolutional code, established an upper bound for this distance, and then presented some constructions of 2D convolutional codes with an optimal free distance. A generalization of these codes, called nD convolutional codes, was first introduced in [4,5] and then further developed in [6,7,8]. However, decoding these kinds of codes is a barely explored topic, which we address in this paper. We will consider transmission over an erasure channel. Unlike the symmetric channel, where errors might occur randomly, over these channels the receiver knows which symbols were erased during transmission and that the received symbols are correct. These channels are particularly suitable for modeling communication systems such as real-time streaming applications or situations where certain packets are more crucial than others. Convolutional codes are particularly convenient for erasure channels due to their ability to work with selected parts of the received data, thereby adapting the correction process to the location of the erasures.
There exist only two decoding algorithms for 2D convolutional codes over the erasure channel [9,10]. In [9], the authors made use of the parity-check matrices of the code, while the authors of [10] employed the encoders of the code.
The decoding algorithm presented in [9] considers specific neighborhoods around the erasures that allow one to decode these erasures using 1D convolutional codes. These 1D codes are projections of the 2D convolutional code, and they are obtained by considering 1D parity-check matrices built from only some of the coefficients of a parity-check matrix of the 2D code. However, the authors did not provide any construction in which the corresponding 1D projections yield optimal efficiency for this decoding algorithm. In this paper, we discuss this problem and present several constructions of 2D convolutional codes whose associated 1D convolutional codes are optimal or quasi-optimal.
2. Preliminaries
In this section, we recall the main background on 1D and 2D convolutional codes required to contextualize the constructions introduced in this paper. We also recall the decoding algorithm over an erasure channel presented in [9].
We denote by F[z] the ring of polynomials in the indeterminate z with coefficients in a finite field F. By F[z1, z2], we denote the ring of polynomials in the two indeterminates z1 and z2 with coefficients in F.
2.1. 1D Convolutional Codes
Definition 1.
An convolutional code is an -submodule of of rank k. A matrix whose columns constitute a basis of is called a generator matrix of , i.e.,
The vector is the codeword corresponding to the information sequence .
Given a convolutional code , any two of its generator matrices, and , are equivalent in the sense that they differ by right multiplication by a unimodular matrix (an invertible polynomial matrix with a polynomial inverse or, equivalently, a polynomial matrix whose determinant is a non-zero constant), i.e.,
A convolutional code is non-catastrophic when it admits a right prime generator matrix , i.e., if with and , then must be a unimodular matrix. A polynomial matrix is said to be left prime if its transpose is right prime. Note that, since two generator matrices of a convolutional code differ by right multiplication by a unimodular matrix, if a code admits one right prime generator matrix, then all of its generator matrices are right prime, and the code is called non-catastrophic.
The degree of an convolutional code is the maximum degree of the full-size minors of any of its generator matrices. Note that the degree can be computed from any generator matrix since, as noted above, two generator matrices of a code differ by right multiplication by a unimodular matrix. A matrix is said to be column-reduced if the sum of its column degrees equals the maximum degree of its full-size minors. A polynomial matrix is said to be row-reduced if its transpose is column-reduced. An (n, k) convolutional code with degree δ is said to be an (n, k, δ) convolutional code.
Another important matrix associated with some convolutional codes is the (full row rank) parity-check matrix, denoted . This matrix is a generator matrix of the dual code of ; therefore, we can describe as its kernel as follows:
The parity-check matrix plays a central role in the decoding process for both block codes and convolutional codes. Although a block code always admits a parity-check matrix, this is not the case for convolutional codes. In [11], it was shown that a convolutional code admits a parity-check matrix if and only if it is non-catastrophic. Moreover, any non-catastrophic convolutional code admits a left prime and row-reduced parity-check matrix [12].
When transmitting information through erasure channels, we may lose parts of the information. Convolutional codes have been proven to perform well over these channels [13].
In [13], a decoding algorithm for these kinds of channels was presented; it is based on solving linear systems of equations. Suppose that we receive the codeword , that the first coefficients are correct (have no erasures), and that the coefficients afterward may have erasures. Let be a parity-check matrix of the code. We define the matrix as follows:
Then, . By reordering these equations we can obtain a linear system of the form
where and are the columns of and the coordinates of that correspond to the erasures, respectively, and , where and are defined analogously for the correctly received information. Therefore, we can correct the erasures by solving a linear system with as many unknowns as there are erasures in the received codeword. Note that we have considered only part of the coefficients of , i.e., the coefficients . We refer to this sequence as a window, and we speak of a window of a given size.
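As a concrete illustration of this decoding step, the following sketch solves such a system over a small prime field: the columns of the parity-check matrix at the erased positions form the coefficient matrix, and the contribution of the correctly received symbols moves to the right-hand side. The matrix `H`, the field size `p`, and the erasure positions below are illustrative choices, not taken from the paper.

```python
def solve_mod_p(A, b, p):
    """One solution of A x = b over GF(p), p prime, by Gaussian elimination."""
    A = [row[:] for row in A]
    b = b[:]
    m, n = len(A), len(A[0])
    row, pivots = 0, []
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r][col] % p), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        b[row], b[piv] = b[piv], b[row]
        inv = pow(A[row][col], p - 2, p)  # Fermat inverse, p prime
        A[row] = [x * inv % p for x in A[row]]
        b[row] = b[row] * inv % p
        for r in range(m):
            if r != row and A[r][col] % p:
                f = A[r][col]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[row])]
                b[r] = (b[r] - f * b[row]) % p
        pivots.append(col)
        row += 1
    x = [0] * n
    for r, col in enumerate(pivots):
        x[col] = b[r] % p
    return x

def recover_erasures(H, v, erased, p):
    """Solve H_e * x = -H_c * v_c (mod p): the unknowns are the erased
    coordinates of v; the known coordinates move to the right-hand side."""
    m = len(H)
    known = [j for j in range(len(v)) if j not in erased]
    He = [[H[i][j] for j in erased] for i in range(m)]
    rhs = [(-sum(H[i][j] * v[j] for j in known)) % p for i in range(m)]
    return solve_mod_p(He, rhs, p)
```

For instance, over GF(5) the vector (4, 4, 1, 0) lies in the kernel of H = [[1, 2, 3, 4], [0, 1, 1, 1]], and erasing its second and fourth coordinates leaves a uniquely solvable 2-by-2 system.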
Notice that, in order to correct the erasures produced in a codeword, we need a previous safe space where all coefficients have been correctly recovered. In the construction of the system (2), the vectors preceding the first coefficient with some erasure constitute this safe space.
Once we have a method for correcting the erasures produced by a channel, we may want to know the correction capability that a code can achieve. This capability is described in terms of the distances of the code. For convolutional codes, there exist two different distances: the free distance and the column distance. The Hamming weight of a vector is given by the expression
where is the number of non-zero coordinates of .
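In code, the Hamming weight is simply a count of the non-zero coordinates; a minimal sketch:

```python
def hamming_weight(v):
    """wt(v): the number of non-zero coordinates of the vector v."""
    return sum(1 for c in v if c != 0)
```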
Definition 2.
Let be an convolutional code, then the
free distance of is
The next theorem establishes an upper bound on the free distance of a convolutional code, and it is called the generalized Singleton bound.
Theorem 1
([14]). Let be an convolutional code. Then,
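In its usual explicit form (for an (n, k, δ) code, with the parameters elided in the statement above), the generalized Singleton bound reads (n − k)(⌊δ/k⌋ + 1) + δ + 1; for δ = 0, it reduces to the Singleton bound n − k + 1 for block codes. A sketch, assuming this standard form:

```python
def generalized_singleton_bound(n, k, delta):
    """Upper bound on the free distance of an (n, k, delta) convolutional
    code, in its usual statement: (n-k)*(floor(delta/k) + 1) + delta + 1."""
    return (n - k) * (delta // k + 1) + delta + 1
```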
The free distance of a convolutional code determines its error- and erasure-correction capability once the whole codeword has been received.
An advantage of convolutional codes over block codes is that they permit partial decoding, i.e., we can start recovering erasures even though we do not have the complete codeword. This correction capability is measured by the column distances, which are defined for non-catastrophic convolutional codes [15].
Given a vector and with , we define the truncation of to the interval as .
Definition 3.
Let be an non-catastrophic convolutional code, then the j-column distance of is defined by
The following inequalities can be directly deduced from the previous definitions [15]:
In addition to the free distance having a bound, the column distances also have a Singleton-like bound, which is given by the following theorem.
Theorem 2
([15]). Let be an non-catastrophic convolutional code. Then, for , we have
Moreover, if for some , then for .
As seen in (3), the column distances cannot be greater than the free distance. In fact, there exists an integer
for which the column distance can equal the free distance, i.e., the bound can be attained up to this value of the index but not beyond [15]. A convolutional code whose column distances attain the bound up to this index is called a maximum distance profile (MDP) code. Next, we detail a characterization of these codes in terms of their generator and parity-check matrices. For that, we need the definition of a superregular matrix.
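With the usual parameters (n, k, δ), the integer in question is L = ⌊δ/k⌋ + ⌊δ/(n − k)⌋, and the column-distance bound of Theorem 2 is (n − k)(j + 1) + 1 for j = 0, …, L. The following sketch computes both, assuming these standard formulas (the symbols themselves are elided above):

```python
def mdp_parameters(n, k, delta):
    """Return L = floor(delta/k) + floor(delta/(n-k)) and the
    column-distance bounds (n-k)*(j+1) + 1 for j = 0..L."""
    L = delta // k + delta // (n - k)
    bounds = [(n - k) * (j + 1) + 1 for j in range(L + 1)]
    return L, bounds
```

For a (2, 1, 2) code, this gives L = 4, and the bound at j = L coincides with the generalized Singleton bound on the free distance.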
The determinant of a square matrix over a field is given by
where denotes the symmetric group. We refer to the summands of this expression as the terms of the determinant, and to their factors as the components of a term. We say that a term is trivial if at least one of its components is zero, that is, if the term itself is zero. Now, let A be a square submatrix of a matrix B; if all the terms of the determinant of A are trivial, then we say that it is a trivially zero minor of B, or simply a trivial minor of B.
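The notions of trivial term and trivially zero minor can be checked directly from the Leibniz expansion. The brute-force sketch below (over the integers as a stand-in for a finite field, and feasible only for tiny matrices) declares a matrix superregular when every square submatrix that is not trivially zero has a non-zero determinant:

```python
import math
from itertools import combinations, permutations

def _terms(M):
    """Leibniz terms of det(M) as (sign, list of components)."""
    n = len(M)
    for s in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        yield (-1) ** inversions, [M[i][s[i]] for i in range(n)]

def is_trivially_zero(M):
    """A minor is trivially zero when every term has a zero component."""
    return all(any(c == 0 for c in comps) for _, comps in _terms(M))

def is_superregular(B):
    """Every square submatrix that is not trivially zero must be non-singular."""
    m, n = len(B), len(B[0])
    for size in range(1, min(m, n) + 1):
        for rows in combinations(range(m), size):
            for cols in combinations(range(n), size):
                M = [[B[r][c] for c in cols] for r in rows]
                if not is_trivially_zero(M):
                    det = sum(sign * math.prod(comps) for sign, comps in _terms(M))
                    if det == 0:
                        return False
    return True
```

For example, [[1, 0], [1, 1]] is superregular (its only zero minor, the entry 0, is trivially zero), whereas [[1, 1], [1, 1]] is not, since its determinant is a non-trivial minor that vanishes.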
Definition 4
([16,17]). Let and , then B is superregular if all of its minors that are not trivially zero are non-zero.
Theorem 3
([15]). Let be a non-catastrophic convolutional code with a right prime and column-reduced generator matrix , as well as a left prime and row-reduced parity-check matrix , then the following statements are equivalent:
- 1.
- is an MDP code
- 2.
- , where for , and it also has the property that every full-size minor that is not trivially zero is non-zero.
- 3.
- , where for , and it also has the property that every full-size minor that is not trivially zero is non-zero.
Lemma 1
([13]). Let be an MDP convolutional code. If, in any sliding window of length (L + 1)n, at most (n − k)(L + 1) erasures occur, then we can completely recover the transmitted sequence.
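Using the standard parameters, the lemma's condition is that every sliding window of (L + 1)·n consecutive symbols contains at most (n − k)(L + 1) erasures. A sketch that checks an erasure pattern against this condition (positions are symbol indices; the interface is an illustrative choice):

```python
def mdp_window_condition(erased_positions, n, k, L, length):
    """Check the sliding-window condition of Lemma 1: every window of
    (L+1)*n consecutive symbols contains at most (n-k)*(L+1) erasures."""
    window, max_erasures = (L + 1) * n, (n - k) * (L + 1)
    erased = set(erased_positions)
    return all(
        sum(1 for i in range(start, start + window) if i in erased) <= max_erasures
        for start in range(max(1, length - window + 1))
    )
```

For an MDP code with n = 2, k = 1, L = 4, a window spans 10 symbols and tolerates 5 erasures, so a burst of 5 erasures is recoverable while a burst of 6 is not.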
This family of optimal codes requires a great deal of regularity in the sense of the previous theorem, which causes constructions of MDP convolutional codes to usually require large base fields. From the applied point of view, this is a concern. In the last few years, the efforts of the community have been directed at improving this situation, either by giving different constructions [18,19,20,21] or by finding bounds [22,23]. Research has also led to the development of MDP convolutional codes over finite rings [24].
When MDP codes cannot perform a correction due to an accumulation of erasures, we have to consider some of the packets lost and continue until a safe space is found again. To address this situation, Reverse-MDP convolutional codes were defined in [13,25,26,27]. While the usual MDP convolutional codes can only carry out forward correction, i.e., their decoding direction is from left to right, reverse-MDP codes also allow backward correction, i.e., correction from right to left.
Proposition 1
([26], Prop 2.9). Let be an convolutional code with a right prime and column-reduced generator matrix . Let be the matrix obtained by replacing each entry of with , where is the i-th column degree of . Then, is a right prime and column-reduced generator matrix of an convolutional code, and
We call the reverse code of . Similarly, we denote the parity-check matrix of by .
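Concretely, the entry-wise operation of Proposition 1 reverses each polynomial entry's coefficient vector relative to its column degree (z^ν · g(1/z)). A sketch, representing polynomials as ascending coefficient lists (this representation is an illustrative choice):

```python
def reverse_generator(G):
    """Entry-wise reversal of Proposition 1: the entry g_ij(z) in column i
    becomes z**nu_i * g_ij(1/z), where nu_i is the i-th column degree.
    Polynomials are ascending coefficient lists, e.g. [1, 2] = 1 + 2z."""
    def deg(p):
        return max((e for e, c in enumerate(p) if c != 0), default=0)
    cols = list(zip(*G))
    nus = [max(deg(p) for p in col) for col in cols]
    out = []
    for row in G:
        new_row = []
        for i, p in enumerate(row):
            padded = list(p) + [0] * (nus[i] + 1 - len(p))
            new_row.append(padded[::-1])  # coefficient reversal w.r.t. nu_i
        out.append(new_row)
    return out
```

For a 2-by-1 generator with column degree 1, the entry 1 + 2z becomes 2 + z, and the entry z becomes the constant 1.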
Definition 5.
Let be an MDP convolutional code. We can then say that is a reverse-MDP convolutional code if the reverse code is also an MDP code.
In [13], it was proven that reverse-MDP codes can perform correction from right to left as efficiently as an MDP code does from left to right. However, to perform a backward correction, a safe space is required on the right side of the erasures. This can be easily seen in the following example.
Example 1.
Let us assume that we are communicating through an erasure channel, that we have recovered the information correctly up to an instant t, and that we later receive the following pattern:
where ★ indicates that the corresponding component has been erased and means that the component has been correctly received. If we use an MDP convolutional code, we cannot perform a correction due to the accumulation of erasures at the beginning of the sequence. Nevertheless, if we consider a reverse-MDP convolutional code and take the first 60 symbols of as a safe space, then we can correct the erasures in . We can repeat this method by taking the first 60 symbols in and recovering the section , that is, we build the linear system as in (2) by considering the following sets of symbols:
This example shows that reverse-MDP convolutional codes can perform better than MDP convolutional codes when transmitting over an erasure channel, since we can exploit their backward correction capability. However, this approach depends on there existing, somewhere in the transmission, a long enough sequence of correctly received symbols to play the role of a safe space for one of the decoding directions. The following codes are aimed at solving the situation in which such a safe space cannot be found.
Definition 6
([13]). Let be a parity-check matrix of an convolutional code and , then we obtain the matrix
which is called a partial parity-check matrix of . Moreover, is said to be a
complete-MDP
convolutional code if, for any of its parity-check matrices , every full-size minor of that is non-trivially zero is non-zero.
Complete-MDP convolutional codes have an extra feature on top of being able to perform corrections as MDP or reverse-MDP convolutional codes do. If, in the process of decoding, the correction cannot be accomplished due to the accumulation of too many erasures, the decoder can compute a new safe space and continue with the process as soon as a correctable sequence of symbols is found.
Theorem 4
([13], Theorem 6.6). Let a code sequence from a complete-MDP convolutional code be given. If, in a window of size , there are not more than erasures, and if they are distributed in such a way that between position 1 and and between positions and , for , there are no more than erasures, then the full correction of all symbols in this interval will be possible. In particular, a new safe space can be computed.
For more constructions, examples, and further content on complete-MDP convolutional codes, we refer the reader to [12,13,17].
Example 2.
Again, let us assume that we are using an erasure channel. In this case, we are not able to recover some of the previous symbols and we thus receive the following pattern:
Note that, if we use an MDP or reverse-MDP convolutional code, we require a safe space of 48 symbols to correct the erasures in either direction, and such a space cannot be found.
Nevertheless, if we use a complete-MDP convolutional code, we have one more feature at our disposal. We can compute a new safe space by using Theorem 4, that is, find a window of size where not more than 25 erasures occur. In the received pattern of erasures, we can find the following sequence:
Once these erasures are recovered, we have a new safe space and can perform the usual correction.
In [13], an algorithm was given to recover the information over an erasure channel by performing corrections in both directions, i.e., forward and backward.
We provide Algorithm 1, a new version of the abovementioned algorithm that includes the case in which the code can decode by using the complete-MDP property. We maintain the notation of [13], that is, the value 0 means that a symbol or a sequence of symbols has not been received, and the value 1 that it has been correctly received. The function findzeros returns a vector with the positions of the zeros in its argument; forward, backward, and complete are the forward, backward, and complete recovering functions, respectively. Note that these functions use the parity-check matrices of and to recover the erasures that appear in within a window of size (when necessary).
2.2. 2D Convolutional Codes
Definition 7
([1]). A 2D finite support convolutional code of rate is a free -submodule of with rank k.
As for the 1D case, a full column rank polynomial matrix , whose columns form a basis for the code, is such that we can express it as follows:
which is called a generator matrix of .
A polynomial matrix is said to be right factor prime if, whenever it factors as with and , the factor is unimodular (i.e., has a polynomial inverse). Again, similarly to the 1D case, if the code admits a right factor prime generator matrix, then it can be defined by using a full rank polynomial parity-check matrix as follows:
Algorithm 1 Decoding algorithm for complete-MDP codes
Since we are going to deal with a situation in which the elements of the codewords are distributed in the plane , we will consider the order given by
The codeword and the matrix are represented with their coefficients in this order, respectively, as follows:
and
Note that this depiction allows us to see the kernel representation in a more detailed manner as
where if or . It is also possible to denote this product using constant matrices
where
is a vector in , and is a matrix over . An example of this matrix corresponding to a parity-check matrix is presented in Figure 1. It is easy to see that it does not follow the same construction pattern as the partial parity-check matrix in the 1D case. Note that all the matrix coefficients of appear in all the columns following the previously established order ≺, with the particularity that, for in the block columns with indices , the coefficients with for are separated from the matrices with by t zero blocks.
Figure 1.
Matrix H.
Consider now that has been transmitted over an erasure channel. We define the support of as the set of indices of the coefficients of that are non-zero, i.e.,
Let be the set of indices of the support of in which there are erasures in the corresponding coefficients of , so that is the set of indices of the support corresponding to the coefficients that were correctly received. For the sake of simplicity, and if the context allows it, we will denote as . In Figure 2, an example of erasures distributed in the plane is presented.
Figure 2.
Erasures (red dots) distributed in .
Since we have the kernel representation (4) of the code in a block fashion, we can consider the equivalent linear system , where and denote the submatrices of whose block columns are indexed by and , respectively. Correspondingly, and refer to the subvectors of whose block rows are indexed by and , respectively. In order to recover the erasures, we solve the linear system by considering the erased components of as the unknowns. Note that is known.
As pointed out in [9], this system is massive even when the parameters of the code are small. The authors of that paper developed an algorithm to deal with this situation and recover the information more efficiently. They proposed to decode by sets of lines, that is, to choose a set of erasures on a line (horizontal (), vertical (), or diagonal ()), define a neighborhood for this set of erasures, and then use all the correct information within the related coefficients to construct and solve a linear system.
Next, we describe this method for the correction of erasures on horizontal lines; the methods for vertical and diagonal lines are analogous. Let us consider a subset of , , whose subindices lie on a horizontal line with the vertical coordinate s, that is,
With this in mind we can “rewrite” (4) as
where—similarly to the above— and indicate the submatrices of that are indexed block-wise by and , respectively, and where and are defined analogously. (Note that may contain erasures.)
Definition 8.
Let be a horizontal window of length , i.e.,
for some . We then define the following neighborhood of as
Example 3.
Figure 3.
, neighborhood of .
Note that the neighborhood is constituted by an aligned set of triangles. The main role of the above-defined neighborhoods in the decoding process is described in the next set of results from [9].
Lemma 2
([9]). Let with . Suppose that is a transmitted codeword, that is the support of its coefficients with erasures, and that is the support of the coefficients with erasures distributed on a horizontal line in in a window W of length such that . Consider and as in (4), as well as and as in (6). Define the vector by selecting the coefficients of with , and define as a submatrix of accordingly. Then, it holds that
where
is an matrix.
It is easy to see that the structure of the matrix in (8) is the same as that of the partial parity-check matrix of a 1D convolutional code. Taking this similarity into account, we define the 1D convolutional code associated with a 2D convolutional code as follows:
where
with column distances , .
Lemma 3
([9]). Let with . Suppose that is a transmitted codeword, that is the support of its coefficients with erasures, and that is the support of the coefficients with erasures distributed on a horizontal line in in a window W of length such that . If contains only indices in (and not in ), we have
where is known and is as in (8). Moreover, consider the 1D convolutional code with a parity-check matrix as defined in (10) and with a column distance . If there exists such that at most erasures occur in any consecutive components of , then we can completely recover the vector .
The requirement that contains just the erasures in plays the role of the safe space for the 1D convolutional codes.
Example 4.
Let be a 2D convolutional code with a parity-check matrix as follows:
We define the 1D convolutional codes , where
Let us assume that we receive the pattern of erasures that are shown in Figure 4 (red dots), and let us consider the erasures on the first line (big red dots), i.e.,
and set . To correct , we use the code by Lemma 3. To do so, we build a system as in (3), where
and
Figure 4.
Erasures (red dots) distributed in .
Lemma 3 states that one can perform a partial correction on a line by using the column distances of the 1D convolutional code employed. In Example 5, we can see how this partial recovery is performed.
Example 5.
Let be a 2D convolutional code with a parity-check matrix . Let us assume that we receive the pattern of erasures shown in Figure 4 (red dots). As mentioned above, we will try to recover the horizontal lines from bottom to top; therefore, we first try to recover the erasures on the first line (big red dots), i.e., . Since we have some erasures on the top line, we will consider the code associated with the neighborhood (i.e., the whole gray area).
Assume that the 1D convolutional code has the column distances , , , and . Note that the total amount of erasures satisfies ; therefore, we have to consider a window of size (the big red dots and the black dot). Following the proof of Lemma 3 in [9], we pick the first equations from the system and obtain the following system:
As proven in [13], once we solve this system, we can successfully recover the erasures in . We then shift the window and repeat the process until the recovery is complete.
As mentioned at the beginning of this section, analogous methods can be described by taking lines of erasures in a vertical or diagonal fashion. In [9], it was proposed that, to decode a set of erasures in the grid , one first carries out the horizontal decoding as explicitly described above, in the bottom-to-top direction, until no further correction is possible. Afterward, the algorithm is executed on the vertical lines of erasures, correcting them in the left-to-right direction until no more corrections can be made. Finally, the same procedure is followed with diagonal lines of erasures, and the cycle is repeated until none of the three directions can recover further information.
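The cycling over the three line orientations can be sketched as a fixed-point loop. Here, `try_line` is a hypothetical stand-in for the 1D line decoder of Lemma 3: it receives an orientation and the current erasure set and returns the erasures it could not fix (the names and interface are illustrative, not from [9]).

```python
def decode_2d(erasures, try_line):
    """Cycle horizontal / vertical / diagonal line corrections until no
    direction makes further progress; return the unrecoverable erasures."""
    progress = True
    while progress and erasures:
        progress = False
        for direction in ("horizontal", "vertical", "diagonal"):
            remaining = try_line(direction, erasures)
            if len(remaining) < len(erasures):
                erasures, progress = remaining, True
    return erasures

def toy_try_line(direction, erasures):
    """Toy stand-in for the 1D line decoder: each direction can only fix
    erasures lying on one specific line of the grid."""
    keep = {
        "horizontal": lambda p: p[1] != 0,    # fixes the line y = 0
        "vertical":   lambda p: p[0] != 0,    # fixes the line x = 0
        "diagonal":   lambda p: p[0] != p[1], # fixes the diagonal x = y
    }[direction]
    return {p for p in erasures if keep(p)}
```

With the toy decoder, the erasures on the bottom line, the left line, and the diagonal are removed over one cycle, and the loop stops once only off-line erasures remain.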
3. Construction of 2D Convolutional Codes from 1D Convolutional Codes
In this section, we propose a construction of 2D convolutional codes from a 1D convolutional code.
Let be an non-catastrophic 1D convolutional code with a parity-check matrix . Consider the 2D convolutional code , which is defined by the parity-check matrix
We consider the 1D convolutional codes associated with (named horizontal, vertical, and diagonal), which are constructed according to their line orientations as follows:
- Horizontal: ;
- Vertical: ;
- Diagonal: ,
where the matrices and are defined by
Naturally, at this point, one wants to know whether the optimal properties of are inherited by the line-associated codes , , and . In particular, we are interested in the complete-MDP property. In general, these codes do not inherit this property, as the next example shows.
Example 6.
In [28], Almeida and Lieb presented a description of all complete-MDP convolutional codes over the smallest possible fields. Let be one of these convolutional codes over , defined by the parity-check matrix , where , which is a complete-MDP (as stated in Theorem 13 of [28]). Note that the partial parity-check matrices for the vertical, horizontal, and diagonal associated codes are
respectively, and the parameters of these codes are . For , a minor that is not trivially zero and yet vanishes can be obtained by considering columns 1, 5, and 6:
which means that is not a complete-MDP. By similar reasoning, we see that and are also not complete-MDP. As such, in this case, no associated code maintains the complete-MDP property.
Next, we consider a 1D complete-MDP code, defined in [17], as follows:
Lemma 4
([17]). For , with , let
and let be constructed out of rows and columns of
Then, we have and for
Theorem 5
([17]). With the notation from the preceding lemma, choose i.e., as well as
For , set and Then, those rows of , whose indices lie in I, form the partial parity-check matrix of an () complete-MDP convolutional code if the characteristic of the field is greater than
Let be the 1D complete-MDP convolutional code defined in Theorem 5 with the degree . In the construction, it is assumed that . The associated line codes defined from have the parameters .
The next theorem shows that the associated vertical 1D convolutional code is also a complete-MDP.
Theorem 6.
Let be a 2D convolutional code that was constructed as in (12) by using a 1D complete-MDP convolutional code , as shown in Theorem 5 with the parity-check matrix , where . Then, the associated line code with the parity-check matrix
is an 1D complete-MDP convolutional code.
Proof.
The proof follows since every full-size minor of that is not trivially zero is a non-trivial minor of , which is a superregular matrix by Lemma 4. In fact, this is how Lieb proved Theorem 5 in [17]: the proof shows that the conditions on the indices of the columns of the partial parity-check matrix that describe non-trivially zero full-size minors are equivalent to the conditions outlined in Lemma 4. Moreover, both sets of conditions are equivalent to , where is chosen such that . Now, to prove the theorem, it is sufficient to observe that the columns of still satisfy , where is chosen such that . It is worth remembering that is obtained from by removing all the block-rows (of size ) and block-columns (of size n) with odd indices. □
Although the horizontal and diagonal associated line codes do not maintain optimality as the vertical one does, these codes still satisfy a weaker optimality property.
Theorem 7.
Let be a 2D convolutional code, constructed as in (12), by using a 1D complete-MDP convolutional code , as represented in Theorem 5 with the parity-check matrix , where . Then, the horizontal and the reverse diagonal associated line codes are expressed with the parity-check matrices
respectively, which are 1D convolutional codes with the column distance
Proof.
By ([15], Prop 2.1), we need to show that the matrices
have the property that every full-size minor that is not trivially zero is non-zero. Note that can be viewed as the submatrix of the partial parity-check matrix of formed by the first rows and columns. In this context, since is superregular, we can deduce that, within , every non-trivial full-size minor is non-zero.
For , note that it is exactly the submatrix of formed by the first rows and the columns with indices in the set , and the conditions over the columns of the non-trivial minors imply the condition of Lemma 4. □
While it has been demonstrated in this section that the associated line 1D convolutional codes do not necessarily inherit the complete-MDP property of the 1D convolutional code , there exist codes for which this inheritance occurs. The next section presents a construction of a 2D convolutional code from a complete-MDP 1D convolutional code whose associated line 1D convolutional codes are also complete-MDP. These types of codes are the ones that perform best with the decoding procedure under consideration, as we will see in Section 5.
4. An Optimal Construction
In this section, we present the construction of a 2D convolutional code that is optimal in the sense that all of its associated line codes satisfy the complete-MDP property. We will use a complete-MDP 1D convolutional code introduced in [17].
Theorem 8
([17]). Let with and , and let α be a primitive element of a finite field with . Then, with
for is the parity-check matrix of an complete-MDP convolutional code.
In order to prove the optimality of our constructions, we will need the next result, which states that the matrices considered in Theorem 8 are superregular.
Proposition 2
([16]). Let α be a primitive element of a finite field , and let be a matrix over with the following properties:
- 1.
- If , then for a positive integer ;
- 2.
- If , then for any , or for any ;
- 3.
- If , , and , then ;
- 4.
- If , , and , then .
Suppose N is greater than any exponent of α appearing as a non-trivial term of any minor of B, then B is superregular.
It is worth mentioning that if two matrices A and B are equivalent (i.e., A can be obtained from B by row and column linear transformations), and one of them satisfies the hypotheses of the proposition, then both of them are superregular.
Theorem 9.
Let be a 2D convolutional code constructed as in (12) by using a 1D complete-MDP convolutional code , as in Theorem 8, with the parity-check matrix , where . Then, the associated line codes , , and are 1D complete-MDP convolutional codes.
Proof.
Recall that the associated line 1D convolutional codes , , and have the parity-check matrices
To see that , , and are 1D complete-MDP codes, we need to prove that the non-trivial zero full-size minors of the matrices
are non-zero, where . To this end, we will see that they satisfy the conditions of Proposition 2, i.e., that they are superregular.
First, since is a complete-MDP code, we have
which has the property that all of its full-size minors that are not trivially zero are non-zero. Note that we can see as a submatrix of obtained by removing all the block-rows and block-columns with odd indices. Now, it is easy to see that the full-size minors of that are not trivially zero are non-trivial minors of , and hence they are non-zero.
Finally, following the same argument as in the proof of Theorem 8 in [17], if we permute the columns of and so that they appear in reverse order, then we obtain
which keeps the same terms for the non-trivially zero full-size minors. It is easy to see that these matrices fulfill the conditions of Proposition 2 and are therefore superregular. □
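The final step of the proof relies on the fact that reversing the column order of a matrix changes every full-size minor only by a fixed sign, so non-zero minors remain non-zero. A small numeric check of this fact (over the integers, for simplicity):

```python
from itertools import permutations

def det(M):
    # Leibniz expansion; fine for the small matrices used here
    n, total = len(M), 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += sign * prod
    return total

def reverse_columns(M):
    # reversing n columns multiplies the determinant by (-1)^(n(n-1)/2)
    return [row[::-1] for row in M]
```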
Example 7.
Consider the 1D convolutional code over , which was constructed as in Theorem 8, having the parity-check matrix with , , and . Note that in order to obtain a complete-MDP code, . The 2D convolutional code obtained by following the proposed construction has the parity-check matrix
and its associated line codes have the following parity-check matrices:
Note that for and , its associated line codes are therefore . Looking at (14), it is clear that is a submatrix of ; on the other hand, and cannot be seen as submatrices of .
5. Decoding Algorithm
In this section, we provide a decoding algorithm for 2D convolutional codes whose associated line 1D convolutional codes are complete-MDP. We will use Algorithm 1, introduced in the previous section, to perform the corrections over the lines (horizontal, vertical, and diagonal).
First, we present a sorting algorithm to obtain the lines over which the erasure correction will be performed. Depending on the orientation of the line to be corrected, this procedure provides the set of coefficients of the received codeword for the line decoding presented in the previous section. The mergesort function sorts the set according to the order ≺ defined above.
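The order ≺ itself is defined in [9] and is not reproduced here; as an illustration only, the sketch below assumes a lexicographic order on 2D coordinates (first by anti-diagonal i + j, then by i), which is one natural choice for sweeping lines, and sorts with a standard mergesort:

```python
def precedes(a, b):
    # assumed order: compare anti-diagonal i + j first, then the row index i
    return (a[0] + a[1], a[0]) < (b[0] + b[1], b[0])

def mergesort(items):
    # classic top-down mergesort keyed by the comparator above
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = mergesort(items[:mid]), mergesort(items[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if precedes(left[0], right[0]) else right.pop(0))
    return out + left + right
```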
This algorithm chooses the first horizontal line that contains an erasure (i.e., the line where s is the minimum integer such that ), as well as the first vertical line that contains an erasure or the last diagonal line with erasures. In this way, we can correct the lines as explained above. Algorithm 2 depicts, in pseudo-code, the procedure explained in [9] and above.
In the proposed routine, we start by extracting the coordinates with erasures , and then use Algorithm 3 to obtain the sequence of vectors to be corrected by Algorithm 1. In this case, Algorithm 1 proceeds analogously, but builds the linear systems as in Lemma 3. After each decoding pass, depending on the number of erasures recovered, the routine changes orientation or returns to horizontal decoding. When no more erasures can be corrected in any orientation (), it stops.
Algorithm 2: Decoding 2D convolutional codes
Algorithm 3: Choosing the set of coefficients to be corrected
Very recently, an alternative decoding algorithm for 2D convolutional codes was presented in [10]. This method also follows the idea of decoding in two directions: horizontal and vertical. Although the overall strategy is similar, the performance and the algebraic properties used are very different. Let be the generator matrix of a 2D convolutional code, which can be expressed as follows:
where , and . Then, we can encode the message
to the codeword
with
Note that, for this description, the coefficients of form the j-th horizontal line and, analogously, the coefficients determine the j-th vertical line.
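Since the explicit matrices above are not reproduced here, the following sketch only illustrates, under assumed shapes (an n × k coefficient matrix of G per monomial z1^a z2^b and a length-k message vector per monomial), how the encoding c(z1, z2) = G(z1, z2) u(z1, z2) amounts to a 2D convolution of coefficient arrays over F_p; all names are illustrative:

```python
def matvec(M, v, p):
    # matrix-vector product over F_p
    return [sum(M[r][c] * v[c] for c in range(len(v))) % p for r in range(len(M))]

def encode_2d(G, u, p):
    # G: dict (a, b) -> n x k coefficient matrix of z1^a z2^b
    # u: dict (c, d) -> length-k message vector at (c, d)
    # returns dict (i, j) -> length-n codeword vector, i.e., the 2D convolution
    #   c[i][j] = sum over a+c=i, b+d=j of G[a][b] @ u[c][d]  (mod p)
    cw = {}
    for (a, b), Gab in G.items():
        for (c, d), ucd in u.items():
            i, j = a + c, b + d
            v = matvec(Gab, ucd, p)
            acc = cw.setdefault((i, j), [0] * len(v))
            cw[(i, j)] = [(x + y) % p for x, y in zip(acc, v)]
    return cw
```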
There are two main differences with respect to the algorithm described in this paper. The first is the decoding behavior on a given line. While Algorithm 1 attempts forward, backward, and complete-MDP (if possible) decoding for a given line, the method from [10] performs partial corrections that may leave gaps of erasures, and when it later tries to fill those gaps, it always does so in the forward direction. This process of “filling the gaps” is performed by correcting one point at a time, which may make the algorithm less time-efficient.
The second difference is the requirement of a neighborhood. Whereas our algorithm needs a safe neighborhood to perform a correction, this new approach does not require one. This gives it more plasticity; that is, it can correct a line while the adjacent ones still contain erasures. This feature makes the new approach more flexible against different erasure patterns.
Example 8.
Let us consider Example 3 in [10]. Figure 5 shows the pattern of erasures (∗) considered in that example. We will use this example to understand how differently the two algorithms behave. Assume that we received a codeword (with erasures) , where ∗ denotes an erasure and for . Applying the algorithm in [10], the order of decoding is as follows: first, by horizontal decoding, , and were recovered; then, by vertical decoding, and were recovered; finally, by horizontal decoding, we obtained the rest of and .
Figure 5.
Pattern of the erasures proposed in [10], Example 3.
If our decoding method is employed, we can also recover the information, but in the following different order:
1. We started with horizontal decoding and found , and .
2. Since we could not decode the second line due to an excessive accumulation of erasures, we changed to vertical decoding. Here, we could compute and .
3. Again, we could not continue due to the number of erasures; as such, we moved to diagonal decoding and recovered .
4. Finally, we switched back to horizontal decoding. Through this approach, we could decode the rest of the erasures one line at a time.
In this particular example, in which neither the partial decoding of [10] nor the complete-MDP property of our algorithm were used, it is easy to see that both algorithms behave similarly. When a more complex pattern of erasures occurs and these features are needed, the performance of the two algorithms may differ. With this in mind, we conjecture that, whenever a correction is possible with both routines, their efficiency depends on whether this “special feature” is needed in each case; that is, if one algorithm needs its own special property and the other does not, the second one will be more time-efficient. Deciding which of these algorithms works more efficiently, and under which circumstances, requires a deeper and more complex analysis that is beyond the aim of this paper.
6. Conclusions and Further Research
Throughout this paper, we introduced a general construction of 2D convolutional codes by using 1D convolutional codes. Following this scheme, we proposed two constructions based on 1D complete-MDP convolutional codes, one of which is optimal for the decoding procedure defined in [9]. We also described and complemented the decoding algorithm proposed in [9] by providing its pseudo-code and by adding the possibility of performing the decoding using the complete-MDP property.
Further research is possible in several directions. Having proposed a general construction of 2D convolutional codes based on 1D convolutional codes, we have a starting point to investigate whether an n-dimensional (nD) convolutional code can be constructed by using lower-dimensional convolutional codes. In the same direction, we could address the problem of decoding such nD codes by using lower-dimensional decoding methods. Moreover, the construction of decoding algorithms for 2D convolutional codes over an erasure channel remains an interesting topic for future research, since only two such algorithms currently exist in the literature.
Author Contributions
All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding
The authors Raquel Pinto and Carlos Vela were supported by The Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT—Fundação para a Ciência e a Tecnologia) (references UIDB/04106/2020 and UIDP/04106/2020): https://doi.org/10.54499/UIDB/04106/2020 and https://doi.org/10.54499/UIDP/04106/2020.
Data Availability Statement
No data required.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Fornasini, E.; Valcher, M.E. Algebraic aspects of two-dimensional convolutional codes. IEEE Trans. Inf. Theory 1994, 40, 1068–1082. [Google Scholar] [CrossRef]
- Valcher, M.E.; Fornasini, E. On 2D finite support convolutional codes: An algebraic approach. Multidim. Syst. Signal Proc. 1994, 5, 231–243. [Google Scholar] [CrossRef]
- Climent, J.-J.; Napp, D.; Perea, C.; Pinto, R. Maximum Distance Separable 2D Convolutional Codes. IEEE Trans. Inf. Theory 2016, 62, 669–680. [Google Scholar] [CrossRef]
- Gluesing-Luerssen, H.; Rosenthal, J.; Weiner, P. Duality between multidimensional convolutional codes and systems. In Advances in Mathematical Systems Theory; Colonius, F., Helmke, U., Wirth, F., Praetzel-Wolters, D., Eds.; Birkhäuser: Boston, MA, USA, 2000; pp. 135–150. [Google Scholar]
- Weiner, P.A. Multidimensional Convolutional Codes. Ph.D. Thesis, University of Notre Dame, Notre Dame, IN, USA, 1998. [Google Scholar]
- Charoenlarpnopparut, C. Applications of Gröbner bases to the structural description and realization of multidimensional convolutional code. Sci. Asia 2009, 35, 95–105. [Google Scholar] [CrossRef]
- Charoenlarpnopparut, C.; Tantaratana, S. Algebraic approach to reduce the number of delay elements in the realization of multidimensional convolutional code. In Proceedings of the 47th IEEE International Midwest Symposium Circuits and Systems (MWSCAS 2004), Hiroshima, Japan, 25–28 July 2004; pp. 529–532. [Google Scholar]
- Kitchens, B. Multidimensional convolutional codes. SIAM J. Discret. Math. 2002, 15, 367–381. [Google Scholar] [CrossRef]
- Climent, J.-J.; Napp, D.; Pinto, R.; Simões, R. Decoding of 2D convolutional codes over an erasure channel. Adv. Math. Commun. 2016, 10, 179–193. [Google Scholar] [CrossRef][Green Version]
- Lieb, J.; Pinto, R. A decoding algorithm for 2D convolutional codes over the erasure channel. Adv. Math. Commun. 2023, 17, 935–959. [Google Scholar] [CrossRef]
- York, E.V. Algebraic Description and Construction of Error Correcting Codes: A Linear Systems Point of View. Ph.D. Thesis, University of Notre Dame, Notre Dame, IN, USA, 1997. [Google Scholar]
- Lieb, J.; Pinto, R.; Rosenthal, J. Convolutional Codes. In A Concise Encyclopedia of Coding Theory; Huffman, W.C., Kim, J.-L., Solé, P., Eds.; CRC Press: Boca Raton, FL, USA, 2021; pp. 197–225. [Google Scholar]
- Tomás, V.; Rosenthal, J.; Smarandache, R. Decoding of Convolutional Codes over the Erasure Channel. IEEE Trans. Inf. Theory 2012, 58, 90–108. [Google Scholar] [CrossRef]
- Rosenthal, J.; Smarandache, R. Maximum distance separable convolutional codes. Appl. Algebra Eng. Commun. Comput. 1999, 10, 15–32. [Google Scholar] [CrossRef]
- Gluesing-Luerssen, H.; Rosenthal, J.; Smarandache, R. Strongly-MDS convolutional codes. IEEE Trans. Inf. Theory 2006, 52, 584–598. [Google Scholar] [CrossRef]
- Almeida, P.J.; Napp, D.; Pinto, R. Superregular matrices and applications to convolutional codes. Linear Algebra Its Appl. 2016, 499, 1–25. [Google Scholar] [CrossRef]
- Lieb, J. Complete MDP convolutional codes. J. Algebra Its Appl. 2019, 18, 1950105. [Google Scholar] [CrossRef]
- Alfarano, G.N.; Napp, D.; Neri, A.; Requena, V. Weighted Reed-Solomon codes. Linear Multilinear Algebra 2023, 72, 841–874. [Google Scholar] [CrossRef]
- Chen, Z. Convolutional Codes with a Maximum Distance Profile Based on Skew Polynomials. IEEE Trans. Inf. Theory 2023, 68, 5178–5184. [Google Scholar] [CrossRef]
- Luo, G.; Cao, X.; Ezerman, M.F.; Ling, S. A Construction of Maximum Distance Profile Convolutional Codes with Small Alphabet Sizes. IEEE Trans. Inf. Theory 2023, 69, 2983–2990. [Google Scholar] [CrossRef]
- Napp, D.; Smarandache, R. Constructing strongly-MDS convolutional codes with maximum distance profile. Adv. Math. Commun. 2016, 10, 275–290. [Google Scholar] [CrossRef]
- Lieb, J. Necessary field size and probability for MDP and complete MDP convolutional codes. Des. Codes Cryptogr. 2019, 87, 3019–3043. [Google Scholar] [CrossRef]
- Chen, Z. A lower bound on the field size of convolutional codes with a maximum distance profile and an improved construction. IEEE Trans. Inf. Theory 2023. [Google Scholar] [CrossRef]
- Alfarano, G.N.; Gruica, A.; Lieb, J.; Rosenthal, J. Convolutional codes over finite chain rings, MDP codes and their characterization. Adv. Math. Commun. 2022, 17, 1–22. [Google Scholar] [CrossRef]
- Tomás, V.; Rosenthal, J.; Smarandache, R. Reverse-maximum distance profile convolutional codes over the erasure channel. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems, MTNS 2010, Budapest, Hungary, 5–9 July 2010; pp. 1212–2127. [Google Scholar]
- Hutchinson, R. The existence of strongly MDS convolutional codes. SIAM J. Control Optim. 2008, 47, 2812–2826. [Google Scholar] [CrossRef][Green Version]
- Massey, J.L. Reversible codes. Inf. Control 1964, 77, 369–380. [Google Scholar] [CrossRef]
- Almeida, P.J.; Lieb, J. Complete j-MDP Convolutional Codes. IEEE Trans. Inf. Theory 2020, 66, 7348–7359. [Google Scholar] [CrossRef]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).