Article

High-Dimensional Cross Parity Codes and Parities from Lower Than (d − 1)-Dimensional Hyperplanes

Faculty of Mathematics and Computer Science, FernUniversität in Hagen, 58084 Hagen, Germany
Computers 2025, 14(5), 161; https://doi.org/10.3390/computers14050161
Submission received: 6 November 2024 / Revised: 15 April 2025 / Accepted: 25 April 2025 / Published: 26 April 2025

Abstract

Cross parity codes are mostly used as 2-dimensional codes, and sometimes as 3-dimensional codes. We argue that higher dimensions can help to reduce the number of parity bits and thus deserve further investigation. As a start, we investigate parities from (d − 2)-dimensional hyperplanes in d-dimensional parity codes, instead of the usual parities from (d − 1)-dimensional hyperplanes.

1. Introduction

Cross parity codes have been used for a long time [1] as part of channel coding to achieve forward error correction, due to their simplicity and fast correction circuits, although the number of parity bits is higher than for other block codes [2]. Mostly, 2-dimensional parity codes have been used with row and column parities, and they have been extended to correct at least a fraction of 2-bit errors [3]. We are not aware of a systematic study of higher dimensions; only the authors of [2] mention that they performed experiments, and [1] mentions that more than three dimensions are possible.
As the minimum code weight is restricted to min(4, d + 1), it seems useless to investigate dimensions d > 3; yet there are other parameters to be considered as well, such as the number of parity bits.
We therefore first investigate the optimal parity code dimension for a given data size, with respect to the number of parity bits. As the results indicate that higher dimensions can be useful, we investigate higher dimensions with regard to the question of whether they can correct a part of the 2-bit errors. To this end, we consider a variant that is only useful in higher dimensions: while the normal d-dimensional cross parity code computes parities from (d − 1)-dimensional hyperplanes, we consider parities from (d − 2)-dimensional hyperplanes. We have not been able to locate literature that investigates this variant. The first dimension where this is useful is d = 3. Figure 1 depicts 64 = 4³ data bits arranged in a 3-dimensional cube. On the left side, the usual cross parity code with 3 · 4 = 12 parity bits from 2-dimensional planes is shown. On the right side, 1-dimensional parities are used, resulting in 3 · 4² = 48 parity bits. While the number of parity bits is much higher, the promise is that a larger fraction of 2-bit errors can be corrected.
Our analysis indicates that using higher-dimensional parity codes can help to reduce the number of parity bits, and that with the proposed variant all 2-bit errors can be corrected, in contrast to 2-dimensional parity codes.
The remainder of this article is structured as follows. Section 2 provides some background on cross parity codes. Section 3 demonstrates that cross parity codes of higher dimensions can reduce the number of parity bits needed and should therefore be investigated further. Section 4 investigates a variant that becomes possible with higher dimensions: computing the parities not from (d − 1)-dimensional hyperplanes as usual, but using (d − 2)-dimensional hyperplanes. Section 5 reports on simulation experiments. Section 6 discusses related work, and Section 7 provides conclusions and an outlook for future work.

2. Background

In this research, we consider error-correction codes, i.e., codes that add redundancy to information to be transmitted, as part of channel coding to achieve forward error correction. The sender adds the redundant information to form the message to be transmitted. The receiver of the message, i.e., the information plus the redundant information, can detect whether an error, in the form of one or several bit flips for binary information (on which we will focus), has occurred during transmission. For a limited number of bit flips, the receiver can even correct these errors, i.e., recover the original information. In particular, we consider systematic codes that leave the information unchanged and merely add redundancy bits.
Let N be the number of information or data bits, to which T redundancy bits are added. Then, each data word x ∈ {0,1}^N is extended by redundancy bits c(x) ∈ {0,1}^T to form a code word (x, c(x)), i.e., a message of length N + T bits to be transmitted. Let C = {(x, c(x)) | x ∈ {0,1}^N} be the set of code words. The code distance d_min is the minimum Hamming distance between any two different code words. Then, at least d_min bit flips during transmission are necessary to convert a code word into another code word. If fewer than d_min bit errors occur, this will be detected, as the receiver will not receive a valid code word, i.e., the receiver receives a tuple (x̃, c̃) for which c̃ ≠ c(x̃). Also, if fewer than d_min/2 bit flips occur, these errors can be corrected by the receiver by finding the unique code word (x, c(x)) with a Hamming distance of less than d_min/2 to the received tuple (x̃, c̃). Block codes are often characterized by a tuple of message length, information length, and code distance, possibly extended by the size of the alphabet used (2 in our case of binary information), i.e., (N + T, N, d_min)_2.
As an example, we consider a (3,1,3)_2 code, i.e., N = 1 data bit and T = 2 redundancy bits. The redundancy bits simply replicate the data bit. Then, only the two valid code words 000 and 111 exist, with Hamming distance d_min = 3. If 1 or 2 bit errors occur, e.g., errors on the first and last bits convert 000 into 101, then this can be detected, as no valid code word is received. However, in our example ⌊d_min/2⌋ = 1, and therefore this 2-bit error cannot be corrected: the unique code word with Hamming distance 1 from 101 is 111, so the receiver would decode to the wrong code word.
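This example can be reproduced in a few lines of Python (an illustrative sketch, not part of the original presentation):

```python
def encode(bit):
    """(3,1,3) repetition code: the two redundancy bits replicate the data bit."""
    return (bit, bit, bit)

def hamming(a, b):
    """Hamming distance between two equal-length tuples."""
    return sum(u != v for u, v in zip(a, b))

codewords = [encode(0), encode(1)]               # 000 and 111
assert hamming(codewords[0], codewords[1]) == 3  # d_min = 3

received = (1, 0, 1)                             # 000 with first and last bit flipped
assert received not in codewords                 # the 2-bit error is detected ...
nearest = min(codewords, key=lambda c: hamming(c, received))
assert nearest == (1, 1, 1)                      # ... but decoded to the wrong word
```

Nearest-codeword decoding picks 111 (distance 1) instead of the sent 000 (distance 2), illustrating why only ⌊d_min/2⌋ = 1 error is correctable.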
As adding redundancy consumes part of the available bandwidth and thus reduces the effective bandwidth for message information, the code rate of an error-correction code [4], defined as N / ( N + T ) , is an important measure (next to the code distance, which defines the number of correctable and detectable errors) to compare codes. If the code rate is close to 1, then the overhead from the redundancy bits is not high.
A final measure to compare codes is the decoding time, i.e., the receiver’s effort to check if an error occurred during transmission and to correct it if possible. For some codes, the decoding can be quite complex and might even necessitate a software algorithm, cf. for instance the Berlekamp–Massey algorithm for Reed–Solomon Codes [4], while for cross parity codes (see below) it is very short and can be performed in hardware, which allows applications like error correction in memories.
An important class of error-correction block codes are linear codes, which can be defined via a binary (N + T) × N matrix M, where for systematic codes the top part is the N × N identity matrix, so that (x, c(x)) = M · x.
Please note that in many cases, one cannot choose the data size N arbitrarily. Hamming codes are linear codes where N is of the form N = 2^t − t − 1 and T = t redundancy bits are added [4], resulting in a message of length 2^t − 1 and a code rate of 1 − t/(2^t − 1), which is close to 1. Here d_min = 3, i.e., one error can be corrected and two errors can be detected. Our (3,1,3)-code above is the simplest form of a Hamming code. For BCH codes with 2-bit error correction (so-called double-error-correcting or DEC-BCH codes) [5], N = 2^t − 2t − 1 data bits can be used and T = 2t redundancy bits are added. Please note that the number N of data bits is always slightly smaller than a power of 2, which means that the case N = 2^u, i.e., N a power of 2, must be treated by using a code for 2^(u+1) − 2(u + 1) − 1 data bits, where the remaining data bits are assumed to be 0, and there are 2(u + 1) redundancy bits.
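The parameter formulas above are easy to evaluate. The following sketch (function names are our own) computes data length, redundancy, and code rate for Hamming and DEC-BCH codes:

```python
def hamming_params(t):
    """Hamming code: N = 2^t - t - 1 data bits, T = t redundancy bits."""
    N = 2 ** t - t - 1
    return N, t, N / (N + t)

def dec_bch_params(t):
    """DEC-BCH code: N = 2^t - 2t - 1 data bits, T = 2t redundancy bits."""
    N = 2 ** t - 2 * t - 1
    return N, 2 * t, N / (N + 2 * t)

# The (3,1,3)-code is the smallest Hamming code (t = 2):
assert hamming_params(2) == (1, 2, 1 / 3)
# Covering N = 2^12 = 4096 data bits with a DEC-BCH code needs t = 13,
# i.e., 26 redundancy bits (the value used in Section 3):
N, T, rate = dec_bch_params(13)
assert N >= 4096 and T == 26
```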
Let N = n^d for integers n ≥ 2 and d ≥ 3. Let I = {0, …, n−1}, D = {0, …, d−1} and D^(2) = {(k₁, k₂) ∈ D² | k₁ < k₂}.
Let x ∈ {0,1}^(n^d) be binary data of N bits in total, arranged in a d-dimensional cube of side length n. Each data bit is denoted as x_{i_0,…,i_{d−1}}, where i_j ∈ I for j ∈ D; in short: x_{i_j : j ∈ D}.
The usual cross parity code defines d · n parity bits via all possible (d − 1)-dimensional hyperplanes, of which there are n in each dimension, with d dimensions to choose from:

p_{k,l} = Σ_{i_j ∈ I, j ∈ D∖{k}} x_{i_j : j<k, l, i_j : j>k} = Σ_{(i_0,…,i_{k−1},i_{k+1},…,i_{d−1}) ∈ I^(d−1)} x_{i_0,…,i_{k−1}, l, i_{k+1},…,i_{d−1}}

where k ∈ D, l ∈ I, and Σ means addition modulo 2, i.e., exclusive or. The number of redundancy bits T = d · n is thus proportional to N^(1/d) and notably higher than for Hamming or BCH codes. The resulting code rate is n^d/(n^d + d · n) and thus depends on the chosen dimension; see the next section.
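As an illustration, the d · n parities can be computed directly from the definition. In the following sketch (our own representation: data bits in a dictionary keyed by index tuples), p[k][l] is the XOR over the hyperplane i_k = l:

```python
from itertools import product

def cross_parities(x, n, d):
    """Usual cross parity code: one parity bit p[k][l] per (d-1)-dimensional
    hyperplane, i.e., the XOR of all data bits whose k-th index equals l."""
    p = [[0] * n for _ in range(d)]
    for idx in product(range(n), repeat=d):
        for k in range(d):
            p[k][idx[k]] ^= x[idx]
    return p

# 64 data bits in a 4x4x4 cube give 3 * 4 = 12 parity bits (Figure 1, left).
n, d = 4, 3
x = {idx: (idx[0] + 2 * idx[1] + 3 * idx[2]) % 2
     for idx in product(range(n), repeat=d)}       # arbitrary example data
p = cross_parities(x, n, d)
assert sum(len(row) for row in p) == d * n

# Cross-check one plane parity against a direct computation:
plane = 0
for i1 in range(n):
    for i2 in range(n):
        plane ^= x[(0, i1, i2)]
assert p[0][0] == plane
```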
All 1-bit errors can be corrected and 2-bit errors can be detected, as the minimum code weight is min(d + 1, 4) [2]. Yet, for d = 2, no 2-bit errors can be corrected. This, however, can be achieved via an extension with a few additional parity bits [3].

3. Choosing Dimension

Wong and Shea [2] give the minimum code weight as min(d + 1, 4), which seems to suggest that only dimensions d = 2 and d = 3 are of interest, and there is also no research known to us that systematically investigates parity codes of higher dimensions.
For a given N, the number of parity bits is solely a function of d. If we consider N a constant and f(d) = d · N^(1/d) a function of real d with domain [2, log₂ N], then we can calculate the derivative f′(d) = N^(1/d) − ln N · N^(1/d)/d and see that there is a d* in the domain with f′(d*) = 0, namely d* = ln N, where the sign of f′ changes from negative to positive, i.e., f is minimal at d*. As d really is an integer, the minimum is reached at either ⌊d*⌋ or ⌈d*⌉. Still, not all integral values of d might be possible, because n = N^(1/d) must be integral as well, or at least nearly integral, so as not to waste too many bits through using Ñ = ñ^d with ñ = ⌈N^(1/d)⌉ and ignoring Ñ − N bits.
As an example, we might consider again the N = 64 data bits from Figure 1. If we arrange them in d = 2 dimensions, we obtain n = 8 and thus d · n = 2 · 8 = 16 parity bits, i.e., more than for d = 3, where we had 12 parity bits (cf. Section 1). Yet, an arrangement in d = 4 dimensions with side length 3 is also possible. As 3⁴ = 81, we waste some bits, yet the number of parity bits is still 4 · 3 = 12, and the "superfluous" bits could be ignored by assuming them to be 0 or could be used to add further redundancy. While d = 5 seems not to be interesting, as 2⁵ = 32 is too small and 3⁵ cannot be better than 3⁴, we finally consider d = 6, leading to a cube of side length 2 with 2 · 6 = 12 parity bits. A difference from smaller dimensions is that for d = 6, each parity bit is computed over 32 data bits, while for d = 3, each parity bit is computed over 16 data bits. The code rates for d = 2, 3, 4, 6 are 0.80, 0.84, 0.84, 0.84, respectively.
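These feasibility considerations can be checked numerically. The following sketch (our own illustration, not code from the paper) evaluates f(d) = d · N^(1/d) for N = 64; the derivative vanishes at d* = ln 64 ≈ 4.16, consistent with d = 3, 4 and 6 all ending up with 12 parity bits once integral side lengths are enforced:

```python
import math

def f(d, N):
    """Parity bit count d * N^(1/d) for N data bits arranged in d dimensions."""
    return d * N ** (1 / d)

# f'(d) = N^(1/d) * (1 - ln(N)/d) vanishes at d* = ln N:
N = 64
d_star = math.log(N)
assert 4.1 < d_star < 4.2

# Feasible arrangements of (about) 64 bits and their parity-bit counts:
assert abs(f(2, N) - 16) < 1e-9      # 8 x 8 square
assert abs(f(3, N) - 12) < 1e-9      # 4 x 4 x 4 cube
assert 4 * 3 == 12                   # d = 4, side length 3 (3^4 = 81 >= 64)
assert abs(f(6, N) - 12) < 1e-9      # 2^6 cube
```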
The following examples demonstrate that already for quite moderate values of N, the minimum number of parity bits is reached in higher dimensions, and the same holds if we use modified cross parity codes that add 2^d parity bits computed from half planes (called quadrants for d = 2 [3]) to be able to correct some of the 2-bit errors.
For N = 2^10 = 1024, at least d = 2 and d = 5 are possible. The minimum is reached at d* ≈ 7, and the numbers of parity bits are 64 for d = 2 yet only 20 for d = 5, resulting in code rates of 0.94 and 0.98, respectively. For N = 2^12 = 4096, at least d = 2, 3, 4, 6 are possible, as d = 5 would lead to ñ = 6 and Ñ = 7776, i.e., almost twice as large as N. The minimum is reached at d* ≈ 8.3; the function values are given in the upper row of Table 1 together with the achieved code rates.
If we add the 2^d parity bits to obtain the extended cross parity code, the function g(d) = f(d) + 2^d has the derivative g′(d) = f′(d) + 2^d · ln 2, which also changes sign from negative to positive in the domain. For N = 1024, the number of parity bits is 68 for d = 2 and 52 for d = 5. For N = 4096, the minimum is reached at d* ≈ 3.85. The numbers of parity bits are shown in the lower row of Table 1 together with the achieved code rates.
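The lower row of Table 1 can be reproduced directly. The sketch below assumes, as in g(d) above, that the extension contributes exactly 2^d additional parity bits:

```python
def ext_parity_bits(n, d):
    """Extended cross parity code: d*n hyperplane parities plus 2^d half-plane
    parities, i.e., g(d) = f(d) + 2^d evaluated at an integral side length n."""
    return d * n + 2 ** d

# N = 4096 arranged as n^d for the feasible dimensions (lower row of Table 1):
table = {d: ext_parity_bits(n, d) for d, n in [(2, 64), (3, 16), (4, 8), (6, 4)]}
assert table == {2: 132, 3: 56, 4: 48, 6: 88}
```

The corresponding code rates follow as N/(N + T) for each entry.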
This observation indicates that parity codes of higher dimension deserve further investigation, e.g., with respect to their merits for extended cross parity codes.
We also see that these higher-dimensional cross parity codes are not impractical. While the code rates do not reach the level of a 2-bit correcting BCH code (26 redundancy bits for 4096 data bits, achieving code rate 0.994), they are also not far away and lead to simpler decoding functions.

4. Parities from (d − 2)-Dimensional Hyperplanes

The usual cross parity codes compute parities from (d − 1)-dimensional hyperplanes, which seems plausible for d = 2 and d = 3. However, as the previous section suggests, higher dimensions might also be of interest. Then, the question arises of whether parities from lower-dimensional hyperplanes also have some merit, although their number will be higher than in the usual cross parity code.
In this article, we consider parities from (d − 2)-dimensional hyperplanes. As an example, one may imagine line parities in a cube for d = 3 (cf. Figure 1, right), which cover the three visible surfaces, i.e., there are 3n² parity bits.
Formally, we define d(d − 1)/2 · n² parity bits via

p_{k₁,k₂,l₁,l₂} = Σ_{i_j ∈ I, j ∈ D∖{k₁,k₂}} x_{i_j : j<k₁, l₁, i_j : k₁<j<k₂, l₂, i_j : j>k₂} = Σ_{(i_0,…,i_{k₁−1},i_{k₁+1},…,i_{k₂−1},i_{k₂+1},…,i_{d−1}) ∈ I^(d−2)} x_{i_0,…,i_{k₁−1}, l₁, i_{k₁+1},…,i_{k₂−1}, l₂, i_{k₂+1},…,i_{d−1}}

where (k₁, k₂) ∈ D^(2) and l₁, l₂ ∈ I.
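A direct implementation of this definition is short. The following Python sketch (the dictionary keyed by (k1, k2, l1, l2) is our own representation) computes the d(d − 1)/2 · n² parity bits:

```python
from itertools import combinations, product

def low_dim_parities(x, n, d):
    """(d-2)-dimensional parities: one bit per dimension pair (k1, k2) and
    coordinate pair (l1, l2), the XOR of all data bits with i_{k1} = l1
    and i_{k2} = l2; binom(d, 2) * n^2 bits in total."""
    p = {(k1, k2, l1, l2): 0 for k1, k2 in combinations(range(d), 2)
         for l1 in range(n) for l2 in range(n)}
    for idx in product(range(n), repeat=d):
        for k1, k2 in combinations(range(d), 2):
            p[(k1, k2, idx[k1], idx[k2])] ^= x[idx]
    return p

# d = 3, n = 4: line parities as on the right of Figure 1.
n, d = 4, 3
x = {idx: (idx[0] ^ idx[1] ^ idx[2]) & 1 for idx in product(range(n), repeat=d)}
p = low_dim_parities(x, n, d)
assert len(p) == 3 * n * n       # binom(3,2) * 4^2 = 48 parity bits
```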
Please note that in the following, we do not consider bit errors on parity bits, and hence do not specify how these are protected. As the parity bits can be protected in the same manner as the data bits (yet with smaller size and dimension), we consider this acceptable.
If a single bit error occurs in bit x_{i_0,…,i_{d−1}}, then it shows in d(d − 1)/2 parities: for each (k₁, k₂) ∈ D^(2), it shows in p_{k₁,k₂,i_{k₁},i_{k₂}}.
More formally, the receiver computes

Δp_{k₁,k₂,l₁,l₂} = p_{k₁,k₂,l₁,l₂} + Σ_{i_j ∈ I, j ∈ D∖{k₁,k₂}} x_{i_j : j<k₁, l₁, i_j : k₁<j<k₂, l₂, i_j : j>k₂}    (1)

for each (k₁, k₂) ∈ D^(2) and l₁, l₂ ∈ I, and for each (i_0, …, i_{d−1}) ∈ I^d computes

Δx_{i_0,…,i_{d−1}} = Π_{(k₁,k₂) ∈ D^(2)} Δp_{k₁,k₂,i_{k₁},i_{k₂}}.    (2)
For completeness, i.e., to separate from multi-bit errors, one could additionally multiply with the complements Δp̄_{k₁,k₂,l₁,l₂}, where l₁ ≠ i_{k₁} or l₂ ≠ i_{k₂}, or both, for all (k₁, k₂) ∈ D^(2).
In the case of a single bit error in bit x_{i_0,…,i_{d−1}}, we have Δx_{i_0,…,i_{d−1}} = 1 and Δx_{i′_0,…,i′_{d−1}} = 0 for all (i′_0, …, i′_{d−1}) ∈ I^d ∖ {(i_0, …, i_{d−1})}.
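The receiver's computation can be sketched as follows; Δp follows Equation (1) and Δx follows Equation (2). Variable names are our own, and an all-zero data word is used so that all sent parities are 0:

```python
from itertools import combinations, product

def syndromes(p_sent, x_received, n, d):
    """Delta-p as in Equation (1): sent parities XOR parities re-computed from
    the received data; Delta-x as in Equation (2): the product (logical AND)
    of the binom(d,2) Delta-p values a given bit position contributes to."""
    dp = dict(p_sent)
    for idx in product(range(n), repeat=d):
        for k1, k2 in combinations(range(d), 2):
            dp[(k1, k2, idx[k1], idx[k2])] ^= x_received[idx]
    dx = {idx: int(all(dp[(k1, k2, idx[k1], idx[k2])]
                       for k1, k2 in combinations(range(d), 2)))
          for idx in product(range(n), repeat=d)}
    return dp, dx

n, d = 4, 3
zeros = {idx: 0 for idx in product(range(n), repeat=d)}       # all-zero data word
p = {(k1, k2, l1, l2): 0 for k1, k2 in combinations(range(d), 2)
     for l1 in range(n) for l2 in range(n)}                   # its parities are all 0
received = dict(zeros)
received[(1, 2, 3)] = 1                                       # single bit error
dp, dx = syndromes(p, received, n, d)
assert sum(dp.values()) == 3                                  # binom(3,2) parities flip
assert [idx for idx, v in dx.items() if v] == [(1, 2, 3)]     # error position recovered
```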
Assume as an example that a 2-bit error occurs at two bits on a line in one of the dimensions, w.l.o.g. x_{2,i_1,…,i_{d−1}} and x_{3,i_1,…,i_{d−1}}. The errors show in parities Δp_{0,k₂,2,i_{k₂}} and Δp_{0,k₂,3,i_{k₂}} for each k₂ ∈ D ∖ {0}. These errors can be distinguished from 1-bit errors and thus corrected.
Now assume the general case that a 2-bit error occurs at x_{i_0,…,i_{d−1}} and x_{i′_0,…,i′_{d−1}}, where i_j ≠ i′_j for j ∈ J ⊆ D and i_j = i′_j for j ∈ D ∖ J. Please note that J ≠ ∅, because otherwise (i_0, …, i_{d−1}) = (i′_0, …, i′_{d−1}); yet all sets J with 1 ≤ |J| ≤ d are possible. The case |J| = 1 has been treated in the example; thus, we assume |J| ≥ 2 in the sequel.
For k₁, k₂ ∈ D ∖ J with k₁ < k₂ (if |D ∖ J| ≥ 2), all Δp_{k₁,k₂,l₁,l₂} = 0, as for l₁ = i_{k₁} and l₂ = i_{k₂} both errors are counted, and none are counted otherwise.
For k₁, k₂ ∈ J with k₁ < k₂, Δp_{k₁,k₂,i_{k₁},i_{k₂}} = 1 and Δp_{k₁,k₂,i′_{k₁},i′_{k₂}} = 1, while Δp_{k₁,k₂,l₁,l₂} = 0 for (l₁, l₂) ≠ (i_{k₁}, i_{k₂}) and (l₁, l₂) ≠ (i′_{k₁}, i′_{k₂}).
For k₁ ∈ J and k₂ ∈ D ∖ J with k₁ < k₂ (if |D ∖ J| ≥ 1), Δp_{k₁,k₂,i_{k₁},i_{k₂}} = 1 and Δp_{k₁,k₂,i′_{k₁},i_{k₂}} = 1, while Δp_{k₁,k₂,l₁,l₂} = 0 for (l₁, l₂) ≠ (i_{k₁}, i_{k₂}) and (l₁, l₂) ≠ (i′_{k₁}, i_{k₂}) (please note that i_{k₂} = i′_{k₂} here).
The same holds for k₂ ∈ J and k₁ ∈ D ∖ J with k₁ < k₂.
Thus, for each k ∈ J, the following 2(d − 1) parities have value 1: Δp_{k₁,k,i_{k₁},i_k} = 1 and Δp_{k₁,k,i′_{k₁},i′_k} = 1 for k₁ = 0, …, k − 1, and Δp_{k,k₂,i_k,i_{k₂}} = 1 and Δp_{k,k₂,i′_k,i′_{k₂}} = 1 for k₂ = k + 1, …, d − 1.
For each pair (k₁, k₂) ∈ D^(2) with k₁, k₂ ∈ J, the parities Δp_{k₁,k₂,…} are counted twice, so that the total number of parities with value 1 is 2(d − 1)|J| − |J|(|J| − 1).
From the number of Δ p with value 1, the different sizes of J can be distinguished, and for a particular size of J, the different possibilities for J that correspond to the different error positions can be distinguished. Thus, 2-bit errors are correctable.
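The count 2(d − 1)|J| − |J|(|J| − 1) can be verified exhaustively for a small instance. In the sketch below (our own illustration), the set of flipped syndrome positions is modeled via symmetric set difference, since coinciding parity contributions cancel:

```python
from itertools import combinations, product
from math import comb

def fired_parities(e1, e2, d):
    """Syndrome positions (k1, k2, l1, l2) with Delta-p = 1 for a 2-bit error
    at positions e1 and e2 (coinciding contributions cancel, as XOR)."""
    fired = set()
    for e in (e1, e2):
        for k1, k2 in combinations(range(d), 2):
            fired ^= {(k1, k2, e[k1], e[k2])}   # set XOR models parity flips
    return fired

# Exhaustively confirm the count 2(d-1)|J| - 2*binom(|J|, 2) for d = 4, n = 4:
n, d = 4, 4
e1 = (0, 1, 2, 3)
for e2 in product(range(n), repeat=d):
    if e2 == e1:
        continue
    J = sum(a != b for a, b in zip(e1, e2))     # |J| = number of differing dimensions
    assert len(fired_parities(e1, e2, d)) == 2 * (d - 1) * J - 2 * comb(J, 2)
```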
We illustrate this with our example from the right side of Figure 1. For one error, e.g., on the upper right front corner, three line parity bits (one per dimension) do not match the re-computed parities, cf. the left side of Figure 2. If two errors occur in a line, e.g., on the upper right front corner and the bit left of it, four line parity bits do not match the re-computed parities, cf. the middle of Figure 2. If two errors occur that are not in a line, e.g., the two neighbors of the upper right corner, then six parity bits are affected, cf. the right side of Figure 2. If we switch the error bits to the other two data bits in the 2 × 2 square on the top right, then the affected parities in blue and green would remain the same, yet the red parities would differ.
The code rates of such cross parity codes from (d − 2)-dimensional hyperplanes for different values of N, together with the dimension d_opt that leads to the best code rate for the given N, are shown in Table 2. At least for the largest value of N, the code rates are competitive with DEC-BCH codes (0.945 vs. 0.994), given that decoding for cross parity codes is quite fast.

5. Simulation Experiments

We implemented a simulation for n = 4 and d = 3, i.e., N = 4³ = 64 data bits. We systematically injected all possible 1-bit and 2-bit errors and computed the error syndromes. We found that each syndrome is unique, i.e., all 1-bit and 2-bit errors can be recognized and distinguished and are thus correctable.
The simulation starts by computing the parity bits of a data word at the sender, i.e., by computing the code word. Then, all 64 possible 1-bit error positions and all 2016 possible 2-bit error positions are enumerated. Each error is injected at the pre-computed position, the modified code word is given to the receiver, and the parity bits are re-computed by the receiver. More precisely, in each case the receiver computes and stores the 48 Δp values according to Equation (1) and the 64 Δx values according to Equation (2). When this has been performed for all errors, the receiver checks whether for any pair of errors the subsets of Δp values equal to 1 coincide. If this occurred, those two errors could not be distinguished. As it does not occur, the receiver can match each subset with the position(s) of the injected error(s) that lead to it. If the same subset occurs in the future, the receiver flips the data bits at the error positions and thus corrects the error.
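The simulation can be sketched compactly. Since the code is linear, the syndrome depends only on the error positions and not on the data word, so it suffices to enumerate sets of error positions (a simplification of the procedure described above; names are our own):

```python
from itertools import combinations, product

n, d = 4, 3
pairs = list(combinations(range(d), 2))
positions = list(product(range(n), repeat=d))    # the 64 data bit positions

def syndrome(errors):
    """Frozenset of Delta-p positions with value 1; for a linear code the
    syndrome depends only on the error positions, not on the data word."""
    fired = set()
    for e in errors:
        for k1, k2 in pairs:
            fired ^= {(k1, k2, e[k1], e[k2])}
    return frozenset(fired)

seen = {}
for errs in list(combinations(positions, 1)) + list(combinations(positions, 2)):
    s = syndrome(errs)
    assert s not in seen, f"syndrome clash: {errs} vs {seen[s]}"
    seen[s] = errs
assert len(seen) == 64 + 2016    # every 1-bit and 2-bit error is distinguishable
```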

6. Related Work

There are only a few works that deal with cross parity codes.
While Rubinoff [1] already mentions n-dimensional codes, he neither discusses the choice of dimension nor the use of lower-dimensional hyperplanes for parities.
Hsiao and Tou [6] compare parity codes with other error-correcting codes but do not seem to go beyond three dimensions.
Wong and Shea [2] mention that they performed experiments with higher dimensions, but they also discuss neither the choice of dimension nor the use of lower-dimensional hyperplanes for parities.
Poolakkaparambil et al. [7] combine other types of block codes with cross parity codes, yet also seem not to discuss higher dimensions.
Interestingly, Vertat and Dudacek [8] define 3-dimensional parity codes with 2-dimensional planes of 1-dimensional line parities, yet do not analyze them; instead, they construct 5- and 6-dimensional parity codes as 2- and 3-dimensional arrangements, respectively, of those 3-dimensional building blocks with additional parity blocks, i.e., they perform a hierarchical construction.
Standard textbooks like Lin and Costello [4] or Tomlinson et al. [9] do not discuss them in detail.

7. Conclusions

Cross parity codes have been used for a long time, but a systematic investigation of parity codes in higher dimensions seems to be missing. As a first result, we have demonstrated that higher dimensions can help to reduce the number of parity bits and achieve competitive code rates, and are thus a useful field for further studies, especially in view of extended cross parity codes that can correct at least part of the 2-bit errors.
Beyond this, we have investigated a variant that is only possible for higher dimensions. Instead of computing parity bits from (d − 1)-dimensional hyperplanes, we presented parities from (d − 2)-dimensional hyperplanes. While this increases the number of parity bits, we demonstrated that, in contrast to 2-dimensional parity codes, 2-bit errors can be corrected, and code rates are competitive with DEC-BCH codes, at least for large N.
Future work will comprise more detailed investigations of higher-dimensional codes with respect to their behavior under burst errors, their potential for extensions like [3], and the possibilities to reduce the number of parity bits, e.g., by using (d − 2)-dimensional hyperplanes only in some dimensions and (d − 1)-dimensional hyperplanes, which contribute fewer parity bits, in the remaining dimensions. This might be further improved by moving from cubes to cuboids, i.e., having different side lengths in different dimensions: a smaller side length in the dimensions with (d − 2)-dimensional hyperplanes will help to further reduce the number of parity bits.

Funding

This research received no external funding.

Data Availability Statement

The dataset is available on request from the author.

Acknowledgments

I thank Michael Gössel and Georg Duchrau for helpful discussions on cross parity codes.

Conflicts of Interest

The author has no competing interests to declare that are relevant to the content of this article.

References

  1. Rubinoff, M. n-dimensional codes for detecting and correcting multiple errors. Commun. ACM 1961, 4, 545–551. [Google Scholar] [CrossRef]
  2. Wong, T.F.; Shea, J.M. Multi-dimensional parity-check codes for bursty channels. In Proceedings of the 2001 IEEE International Symposium on Information Theory, Washington, DC, USA, 24–29 June 2001; p. 123. [Google Scholar] [CrossRef]
  3. Duchrau, G.; Gössel, M. Modified Cross Parity Codes for Adjacent Double Error Correction. In Proceedings of the Architecture of Computing Systems—36th International Conference, ARCS 2023, Athens, Greece, 13–15 June 2023; Proceedings; Lecture Notes in Computer Science. Goumas, G.I., Tomforde, S., Brehm, J., Wildermann, S., Pionteck, T., Eds.; Springer: Cham, Switzerland, 2023; Volume 13949, pp. 94–102. [Google Scholar] [CrossRef]
  4. Lin, S.; Costello, D.J., Jr. Error Control Coding—Fundamentals and Applications; Prentice Hall Computer Applications in Electrical Engineering Series; Prentice Hall: Upper Saddle River, NJ, USA, 1983. [Google Scholar]
  5. Nordmann, P.; Gössel, M. A New DEC/TED Code for Fast Correction of 2-Bit-Errors. In Proceedings of the 25th IEEE International Symposium on On-Line Testing and Robust System Design, IOLTS 2019, Rhodes, Greece, 1–3 July 2019; Gizopoulos, D., Alexandrescu, D., Papavramidou, P., Maniatakos, M., Eds.; IEEE: Piscataway, NJ, USA, 2019; pp. 171–175. [Google Scholar] [CrossRef]
  6. Hsiao, M.Y.; Tou, J.T. Application of Error-Correcting Codes in Computer Reliability Studies. IEEE Trans. Reliab. 1969, R-18, 108–118. [Google Scholar] [CrossRef]
  7. Poolakkaparambil, M.; Mathew, J.; Jabir, A.M.; Mohanty, S.P. Low complexity cross parity codes for multiple and random bit error correction. In Proceedings of the Thirteenth International Symposium on Quality Electronic Design, ISQED 2012, Santa Clara, CA, USA, 19–21 March 2012; Bowman, K.A., Gadepally, K.V., Chatterjee, P., Budnik, M.M., Immaneni, L., Eds.; IEEE: Piscataway, NJ, USA, 2012; pp. 57–62. [Google Scholar] [CrossRef]
  8. Vertat, I.; Dudacek, L. Multidimensional Cross Parity Check Codes as a Promising Solution to CubeSat Low Data Rate Downlinks. In Proceedings of the 2019 International Conference on Applied Electronics (AE), Pilsen, Czech Republic, 10–11 September 2019; pp. 1–5. [Google Scholar] [CrossRef]
  9. Tomlinson, M.; Tjhai, C.J.; Ambroze, M.A.; Ahmed, M.; Jibril, M. Error-Correction Coding and Decoding; Springer: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
Figure 1. Cube with N = 64 = 4³ data bits (black) arranged in d = 3 dimensions. (Left): with usual parities from 3 planes (green, blue, red), each with 2 dimensions. (Right): with 1-dimensional parities (green, blue, red for the different dimensions).
Figure 2. Cubes of 64 = 4³ data bits (black) arranged in d = 3 dimensions with parity bits in red, green and blue for the 3 dimensions, errors indicated in grey and affected parity bits encircled. (Left): one error on the top right corner. (Middle): two errors in a line. (Right): two errors not in a line.
Table 1. Number of parity bits and code rates for N = 4096 data bits depending on dimension d, with (g(d)) and without (f(d)) extended parities.
Dim. d | Parity Bits f(d) | Code Rate | Ext. Parity Bits g(d) | Code Rate
2      | 128              | 0.969     | 132                   | 0.969
3      | 48               | 0.988     | 56                    | 0.986
4      | 32               | 0.992     | 48                    | 0.988
6      | 24               | 0.994     | 88                    | 0.978
Table 2. Code rates for N = 64, 256, 1024 and 4096 data bits and cross parity codes using (d − 2)-dimensional data hyperplanes with code-rate-optimal dimension d_opt ∈ {3, …, 6} for the given N.
No. Data Bits N | d_opt | Code Rate
64              | 3     | 0.571
256             | 4     | 0.727
1024            | 5     | 0.865
4096            | 6     | 0.945
Keller, J. High-Dimensional Cross Parity Codes and Parities from Lower Than (d − 1)-Dimensional Hyperplanes. Computers 2025, 14, 161. https://doi.org/10.3390/computers14050161