Reed–Solomon codes are the basis of many applications such as secret sharing [1], distributed storage [2], private information retrieval [4] and the analysis of cryptographic hardness [5]. The most widely used tool for decoding Reed–Solomon codes is the key equation of Berlekamp [6], and the milestone algorithms that solve it are the Berlekamp–Massey algorithm [7] and the adaptation of the Euclidean algorithm by Sugiyama et al. [8]. Their connections are analyzed in [9]. This paper presents a new unified treatment of the key equation, the Sugiyama–Euclidean algorithm and the Berlekamp–Massey algorithm for correcting errors and erasures in Reed–Solomon codes.
Section 2 presents a revisited key equation for both erasures and errors, using the symmetry between polynomials and their reciprocals as well as the symmetry between dual and primal codes. In the new key equation, as opposed to the classical one, there is no need for computations modulo a power of the indeterminate, and the correction polynomials reveal the error locations rather than their inverses. Section 3 gives a way to solve the new key equation using the Euclidean algorithm. We show how the Berlekamp–Massey algorithm can be obtained by reorganizing the Euclidean algorithm. Hence, the whole paper is, in fact, a simple presentation of the Berlekamp–Massey algorithm as a restructured Euclidean algorithm.
2. Symmetric Key Equation
2.1. Reed–Solomon Codes
Suppose that F_q is a finite field of q elements and suppose that α is a primitive element of F_q. Let n = q − 1. Each vector u = (u_0, …, u_{n−1}) is identified with the polynomial u(x) = u_0 + u_1x + ⋯ + u_{n−1}x^{n−1}. The evaluation of u at a is then denoted u(a). The cyclic code of length n generated by the polynomial is classically referred to as a (primal) Reed–Solomon code. Its dimension is k. On the other hand, the cyclic code of length n generated by the polynomial is referred to as a dual Reed–Solomon code. Its dimension is k as well. The minimum distance of both codes is d = n − k + 1. The codes are related by the equality
The vector space is mapped bijectively onto itself by the map taking to . For a vector the vector is defined componentwise as . Symmetrically, if , then . In particular, .
Due to this bijective map, algorithms for correcting errors and erasures for primal Reed–Solomon codes are also applicable to dual Reed–Solomon codes and vice versa. Indeed, if the codeword at minimum distance from a received vector u differs from u by a vector of errors e, then the codeword at minimum distance from the received vector differs from by the vector of errors .
2.2. Decoding for Errors and Erasures
Suppose that a noisy channel adds t errors and erases s other components of a transmitted codeword . Let u be the received word after replacing the erased positions by 0 and let . The erasure locator polynomial is defined as while the error locator polynomial is defined as . We remark that while is known directly from the received word, is not a priori known. The error evaluator polynomial is defined as The error positions can be identified by while the error values, as well as the erased values, can be derived from the analogue of the Forney formula [13].
Notice that in the traditional setting, the roots of the locator polynomial are not the error positions but their inverses. Hence, in the new setting we take the reciprocals of the polynomials of the traditional setting, thus establishing a symmetry between the two versions. Also, the classical Forney formula involves the evaluator polynomial and the derivative of the locator polynomial evaluated at the inverses of the error positions, while in the new setting we use the error positions directly.
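As an illustration of this symmetry, once a locator polynomial is known, the error positions can be read off directly as its roots by exhaustive evaluation over the field (a Chien-type search). Below is a minimal sketch over the toy prime field GF(7); the locator polynomial and its roots are illustrative choices, not an example from the paper.

```python
# Root search for a locator polynomial over GF(p): in the symmetric
# setting, the roots ARE the error positions (not their inverses).
p = 7  # small prime field as a stand-in for the code's field

def poly_eval(coeffs, x, p):
    """Evaluate a polynomial (lowest-degree coefficient first) at x mod p."""
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % p
    return acc

# Illustrative locator with roots 3 and 5: (x - 3)(x - 5) = x^2 - 8x + 15,
# which is 1 + 6x + x^2 mod 7.
locator = [1, 6, 1]

roots = [x for x in range(p) if poly_eval(locator, x, p) == 0]
print(roots)  # the error positions: [3, 5]
```

For the field sizes used in practice, this brute-force evaluation is replaced by the usual optimized Chien search, but the principle is the same.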
Finally, the polynomial is called the syndrome polynomial of e.
We can compute directly,
The general term of S is , but we only know from a received word the values . For this reason we use the truncated syndrome polynomial defined as The degree of the polynomial is at most . One consequence of this bound is that the reciprocal polynomials , and the polynomial satisfy the well-known Berlekamp key equation . Theorem 1 uses the bound on the degree of to derive a symmetric key equation for dual Reed–Solomon codes. To prove it, we first need the next two lemmas.
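The fact that only finitely many syndrome values are available from the received word can be seen concretely: codewords vanish at the defining roots, so evaluating the received word there depends only on the error vector. A minimal sketch over GF(7) with primitive element 3; the generator, message and error below are illustrative choices, not the paper's example.

```python
# Syndrome values from the received word: since codewords vanish at the
# defining roots alpha^1, ..., alpha^(n-k), evaluating the received word
# there depends only on the error vector.
p = 7          # toy prime field GF(7)
alpha = 3      # a primitive element of GF(7)

def poly_eval(coeffs, x, p):
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % p
    return acc

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

# Generator with roots alpha^1 = 3 and alpha^2 = 2:
g = poly_mul([p - 3, 1], [p - 2, 1], p)   # (x - 3)(x - 2) mod 7
c = poly_mul(g, [1, 1], p)                # a codeword: g(x) * (1 + x)
e = [0, 0, 5, 0, 0, 0]                    # a single error at position 2
u = [(ci + ei) % p for ci, ei in zip(c + [0] * (6 - len(c)), e)]

# Known syndrome values: u(alpha^i) = e(alpha^i) for i = 1, 2.
syndromes = [poly_eval(u, pow(alpha, i, p), p) for i in (1, 2)]
assert syndromes == [poly_eval(e, pow(alpha, i, p), p) for i in (1, 2)]
```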
Lemma 2.
Suppose that f is a polynomial of with . Suppose that for a given the polynomial has no term of degree . Then α is a root of f.
Proof. The Euclidean division of f by gives a polynomial of degree smaller than that satisfies . Then . On the one hand, the product has no term of degree . On the other hand, the coefficient of of degree is exactly . Hence, if has no term of degree , then necessarily . ☐
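The division step invoked in this proof has a simple computational form: dividing f by (x − a) by synthetic division yields a quotient together with the constant remainder f(a) (the polynomial remainder theorem). A minimal sketch over the toy field GF(7); the polynomial is illustrative, not one from the paper.

```python
# Dividing f by (x - a) over GF(p) leaves the constant remainder f(a),
# computable by synthetic division (one Horner pass).
p = 7  # toy prime field

def divide_by_linear(f, a, p):
    """Divide f (lowest-degree coefficient first) by (x - a) over GF(p).
    Returns (quotient, remainder) with remainder = f(a)."""
    q = [0] * (len(f) - 1)
    acc = 0
    for i in range(len(f) - 1, -1, -1):
        acc = (acc * a + f[i]) % p
        if i > 0:
            q[i - 1] = acc
    return q, acc

f = [1, 6, 1]                    # 1 + 6x + x^2 over GF(7)
q, r = divide_by_linear(f, 3, p)
# Check f = (x - 3) q + r by direct evaluation at every field element.
for x in range(p):
    fx = sum(c * pow(x, i, p) for i, c in enumerate(f)) % p
    qx = sum(c * pow(x, i, p) for i, c in enumerate(q)) % p
    assert fx == ((x - 3) * qx + r) % p
```

Here r = 0 because 3 is a root of f, matching the lemma's conclusion that a vanishing remainder forces a to be a root.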
Lemma 3.
Suppose that f is a polynomial of with such that the terms of degree of are all zero. Then is a divisor of f.
Proof. Suppose that the terms of degree are all zero. Suppose was not erased and . We have and consequently the term of degree is 0. Then, Because of the restriction on the degree of f, neither of the last two summands has a term of degree . Since the term of degree of is 0, so is the term of degree of . By Lemma 2, must be a divisor of f. Since j was chosen arbitrarily such that and was not erased, we conclude that must divide f. ☐
Theorem 1 (Symmetric key equation).
Suppose that a number s of erasures occurred together with a number of at most errors. Then the polynomials and Ω are uniquely determined by the conditions
Proof. It is easy to see that satisfy conditions 1, 2, 3. It follows from the previous lemmas that satisfy condition 4. Conversely, suppose that satisfy conditions 3 and 4. We will prove that the terms of degrees are all zero. Then, by Lemma 3, and because , it can be deduced that is a divisor of f. Indeed, write By condition 4, the degree of the first term in this sum is less than . By condition 3, . By condition 4, and by condition 3, . So, the terms of degrees are all zero. Suppose now that there exists By condition 4, and as just seen, . Consequently, . Now condition 1 and condition 2 imply and so and . ☐
3. Solving the Symmetric Key Equation
We first approach the case in which only erasures occurred. In this case , , and can be directly derived from the key equation of Theorem 1. Indeed, the polynomial is exactly the sum of those monomials of of degree at least , divided by the monomial .
Consider now the case in which errors and erasures occurred simultaneously. The extended Euclidean algorithm applied to the quotient polynomial and the divisor polynomial gives and two polynomials and satisfying . At each intermediate step of the Euclidean algorithm, a new remainder and two polynomials and such that are computed, in such a way that the degree of decreases at each step. Truncating the Euclidean algorithm at a proper point, we can obtain two polynomials and satisfying that the degree of is smaller than . The next algorithm is a truncated version of the Euclidean algorithm. It satisfies that, for all , and .
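The truncation idea can be sketched as follows, over the toy prime field GF(7) and with illustrative polynomials (the paper's actual quotient and divisor polynomials are not reproduced here). The loop maintains the cofactor of the second argument, so that the current remainder is congruent to that cofactor times the second argument modulo the first, and stops as soon as the remainder degree falls below a given bound.

```python
# Truncated extended Euclidean algorithm over GF(p)[x], p prime.
# Polynomials are coefficient lists, lowest degree first.
p = 7

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def deg(f):
    return len(f) - 1  # deg([]) = -1 for the zero polynomial

def poly_sub(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] = c % p
    for i, c in enumerate(b):
        out[i] = (out[i] - c) % p
    return trim(out)

def poly_mul(a, b):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_divmod(a, b):
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], p - 2, p)  # inverse of the leading coefficient
    while deg(a) >= deg(b):
        c, d = (a[-1] * inv) % p, deg(a) - deg(b)
        q[d] = c
        a = poly_sub(a, [0] * d + [(c * x) % p for x in b])
    return trim(q), a

def truncated_eea(a, b, bound):
    """Euclidean steps on (a, b), tracking only the cofactor of b,
    stopped as soon as the remainder degree drops below `bound`."""
    r0, r1 = a[:], b[:]
    g0, g1 = [], [1]
    while deg(r1) >= bound:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, r
        g0, g1 = g1, poly_sub(g0, poly_mul(q, g1))
    return g1, r1

# Toy run: a = x^4 (the fixed modulus), b = 3 + 6x + 2x^3, bound = 2.
g, r = truncated_eea([0, 0, 0, 0, 1], [3, 6, 0, 2], 2)
# Invariant: r is congruent to g*b modulo a, and deg(r) < 2 as required.
assert trim(poly_mul(g, [3, 6, 0, 2])[:4]) == r
```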
|Algorithm 1: Euclidean Algorithm|
or, equivalently, in matrix form,
For every integer i larger than or equal to consider the matrix It is easy to check that the polynomial is monic. In the algorithm one can replace the update step by the next multiplication.
where the polynomial is the quotient of the division of . Furthermore, if and the update step becomes One can see that the ’s are the leading coefficients of the left-most, top-most polynomials in the previous product of all the previous matrices. This follows from the fact that is monic. Define as the (changing) leading coefficients of the left-most, top-most element in the product of all the previous matrices. It follows that
Let us label the matrices in the previous product:
Now, we define
Let us now see that, for all , the polynomials and are monic. Indeed, is monic, and it follows by induction, using the definition of the matrices , that is monic for all i. Now, all the matrices have determinant equal to 1. This implies that is constant for all i and equals . In particular, since , we deduce that for every i, the polynomial is monic.
Algorithm 2 computes the matrices until .
|Algorithm 2: Single Coefficient Euclidean Algorithm.|
Due to the fact that the polynomials are monic, after each step with a negative value of p the new updated value of p coincides with the previous one but with opposite sign, and the same happens for . Taking this into account, we join each step with a negative value of p with the next step. We obtain
This adjustment keeps unaltered. It can be stated as follows
At this point we observe that we only need to keep the polynomials (and ) because we need their leading coefficients (the ’s). The next lemma proves that these leading coefficients can be obtained independently of the polynomials . This allows the iterative computation of the polynomials while dispensing with the polynomials .
Lemma 4. Proof.
The result is obvious for . Since we joined two steps before Algorithm 3, the degree of the remainder is at most for every . Consequently all terms of cancel with terms of , and must have leading term equal to either a term of , or a term of , or a sum of a term of and a term of .
On the other hand, the algorithm computes only while . In particular, . Let us show that in this case the degree of the leading term of is strictly larger than the degree of . Indeed, all the matrices in the algorithm have determinant equal to 1, which implies that . ☐
|Algorithm 3: Refactored Single Coefficient Euclidean Algorithm|
if or then
We now transform Algorithm 3 so that instead of keeping the remainders we keep only their degrees. For this we use the values satisfying, at each step, that , .
Algorithm 4 is exactly the Berlekamp–Massey algorithm applied to the recurrence for all . This linear recurrence is a consequence of the equality and the fact that is a polynomial and, hence, its terms of negative order in its expression as a Laurent series in are all zero.
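For reference, here is a generic textbook version of the Berlekamp–Massey algorithm over a prime field, applied to a toy linear recurrence. This is the classical algorithm that Algorithm 4 instantiates; the variable names and the small field GF(7) are illustrative choices, not the paper's notation.

```python
# Textbook Berlekamp–Massey over GF(p): finds the shortest linear
# recurrence (connection polynomial C of LFSR length L) generating s.
p = 7

def berlekamp_massey(s):
    C, B = [1], [1]    # current / previous connection polynomials
    L, m, b = 0, 1, 1  # L = LFSR length, m = steps since last update
    for n in range(len(s)):
        # discrepancy between s_n and the prediction of the current LFSR
        d = s[n] % p
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = (d * pow(b, p - 2, p)) % p  # d / b in GF(p)
        T = C[:]
        C = C + [0] * max(0, len(B) + m - len(C))
        for i, Bi in enumerate(B):         # C <- C - (d/b) * x^m * B
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:                     # the LFSR length must grow
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

# Sequence obeying s_n = 3*s_{n-1} + 2*s_{n-2} (mod 7):
s = [1, 1]
for _ in range(8):
    s.append((3 * s[-1] + 2 * s[-2]) % p)

C, L = berlekamp_massey(s)
# Recovers the connection polynomial 1 - 3x - 2x^2, i.e. [1, 4, 5] mod 7,
# of length L = 2.
```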
|Algorithm 4: Berlekamp–Massey Algorithm|
if or then