Article

Reduction and Efficient Solution of ILP Models of Mixed Hamming Packings Yielding Improved Upper Bounds

1 Faculty of Informatics, Eötvös Loránd University, 1117 Budapest, Hungary
2 HUN-REN Wigner Research Centre for Physics, 1121 Budapest, Hungary
3 Institute of Physics, University of Pécs, 7624 Pécs, Hungary
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2633; https://doi.org/10.3390/math13162633
Submission received: 27 May 2025 / Revised: 25 July 2025 / Accepted: 14 August 2025 / Published: 16 August 2025

Abstract

We consider mixed Hamming packings, addressing the maximal cardinality of codes with a given minimum codeword Hamming distance. We do not rely on any algebraic structure of the alphabets. We extend known integer linear programming (ILP) models of the problem so that they become efficiently tractable using standard ILP solvers. This is achieved by adapting the concept of contact graphs from classical continuous sphere packing problems to the present discrete context, resulting in a reduction technique which enables the efficient solution of the models as well as their decomposition into smaller subproblems. Based on our calculations, we provide a systematic summary of all lower and upper bounds for packings in the smallest Hamming spaces. The known results are reproduced, with some bounds found to be sharp, and the upper bounds improved in some cases.

1. Introduction

Hamming packings have been broadly studied in the literature and have many applications. In coding theory, they are known as unrestricted error-correcting codes. Mixed Hamming packings are codes in which the characters of codewords come from alphabets of different cardinalities. In the present contribution, we consider mixed Hamming packings.

1.1. Related Work

Given a Hamming space, the determination of the maximal cardinality of a Hamming packing with a given minimal distance is a key question in coding theory. It has been studied in numerous contributions in the literature; the broader topic is covered by the excellent monographs of Stinson [1] and Etzion [2]. A good overview is also provided in the Concise Encyclopedia of Coding Theory [3].
Most of the works related to the topic assume an algebraic structure on the alphabets, like groups or Galois fields. Usually, lower bounds are corollaries of explicit constructions, like in [4,5,6,7,8,9,10], while upper bounds are derived from estimations, discussed, e.g., in [6,11,12,13,14,15,16]. Exact values are known only in a few marginal cases, e.g., for maximum distance codes or perfect codes [4,7,17,18,19] or for codes in certain small Hamming spaces [6,8,12,20,21].
Meanwhile, a Hamming space does not necessarily assume any algebraic structure: codewords can be considered as a set of strings, i.e., n-tuples of alphabet elements, forming a metric space with the Hamming distance. Such an approach is more general in that it makes fewer assumptions about the structure. On the other hand, it is also more restrictive, as it rules out the application of many successful and advanced techniques. The particular questions concerning Hamming packings addressed in the present contribution, for instance, the kind of results that can be found in Brouwer's online collection [21], do not depend on the possible algebraic structures. Indeed, we can find other works, e.g., [22] or [9], which do not rely on algebraic structures.
Mixed codes, i.e., those in which the cardinality of the alphabets at character positions can be different, constitute a relevant subset of coding problems with open research questions (see also Section 3.3 of [3]). Our results relate to ball packing bounds of mixed codes, similar to the results in [22,23].

1.2. Our Contribution

We consider Hamming spaces without assuming any algebraic structure on the alphabets. We consider mixed codes, i.e., codes in which the cardinalities of the alphabets at different positions may differ. Hence, by a Hamming packing, or simply a packing, we mean a mixed Hamming packing, and by a code, we mean a mixed code throughout this paper.
We extend a known [24], simple, but inefficient integer linear programming (ILP) model for finding maximum packings with constraints that make the model useful in practical computations. Some of our constraints consist of fixing variables, thereby reducing the size of the model. The considered ILP models, with or without our constraints, have the same optima; hence, they are equivalent to an exhaustive search.
The above extension is achieved by adapting the notion of the contact graph ($CG$), introduced originally for Tammes' classic sphere packing problem on $S^2$ [25], to our code-theoretic setting. The contact graph helps with the classification of packings. As part of our main result, we show that for any code, there exists a code with the same cardinality and a connected contact graph. We give a constructive algorithmic proof. The aforementioned extension of the ILP models is achieved using this proposition and its corollaries. State-of-the-art ILP solvers can solve our extended model efficiently in many cases.
This paper is organized as follows. In Section 2, we introduce the notation and definitions. In Section 3, we recapitulate certain relevant facts on bounds and on constructing optimal packings in particular cases. In Section 4, we describe the ILP models we study. Section 5 contains our main results for the size reduction of the models. Section 6 presents our computational results. We describe some selected cases in which we have found improved bounds to showcase the use of our methods in detail, while the corresponding Appendix A provides a systematic summary of optimal or best known codes, particularly in the smallest mixed Hamming spaces. In Section 7, the results are summarized and conclusions are drawn.

2. Notation and Conventions

We consider $k$-ary alphabets without any algebraic structure. Yet, we will denote alphabets by integers $\mathbb{Z}_k = \{0, 1, \ldots, k-1\}$ for the sake of notational convenience. The elements of the Cartesian product $H(k_1, k_2, \ldots, k_n) = \mathbb{Z}_{k_1} \times \mathbb{Z}_{k_2} \times \cdots \times \mathbb{Z}_{k_n} =: \mathbb{Z}_{k_1} \mathbb{Z}_{k_2} \cdots \mathbb{Z}_{k_n}$ are called words. We can assume without loss of generality that $2 \le k_1 \le k_2 \le \cdots \le k_n < \infty$.
Let us denote the length of words by $n$, also referred to as the dimension of $H$. From now on, $H$ without any argument will denote the set of possible codewords with a fixed parameter set $(k_1, \ldots, k_n)$, whenever the set of parameters is clear from the context.
To simplify the notation of words, the elements of the Cartesian products will be concatenated so that the words are $n$-strings of numbers. The positions $j \in \{1, \ldots, n\}$ within a codeword are termed coordinates, and the $j$-th element (character) of $w \in H$ is denoted by $w[j]$. A subset $C \subseteq H$ is termed a code; its elements are the codewords. As an example, the binary–ternary–quaternary mixed codes are those with $k_j \le 4$ for all $j : 1 \le j \le n$; thus, they consist of $n$ characters from binary, ternary, and quaternary alphabets.
Certain codes are considered equivalent. Let us consider the permutations $\pi_H$ of the coordinates which do not mix coordinates of alphabets with different cardinalities. That is, if the coordinate $j$ is moved to $\pi_H(j)$ by the permutation, then the alphabets at $j$ and $\pi_H(j)$ are of the same cardinality. Further, let $\pi_j$ be a permutation (also known as a relabelling) of the alphabet at coordinate $j$. We shall call two codes equivalent if they can be transformed into each other with the composition of all these permutations.
Given two words, $w \in H$ and $w' \in H$, their (Hamming) distance is the number of positions at which they differ:
$$d_H(w, w') = \sum_{j=1}^{n} \operatorname{sgn} \left| w[j] - w'[j] \right|.$$
(Note that the previous formula subtracts the characters to shorten the description; it could be formulated without it.) The set $H$ of all possible codewords, together with the Hamming distance $d_H(\cdot, \cdot)$ as a metric, is termed a Hamming space.
For a code $C \subseteq H$, the minimum distance of the code is the minimum of $d_H(w, w')$ over all distinct $w, w' \in C$:
$$d_H(C) = \min_{\substack{w \ne w' \\ w, w' \in C}} d_H(w, w').$$
Similarly, we can introduce the distance of two codes:
$$d_H(C, C') = \min_{w \in C,\; w' \in C'} d_H(w, w').$$
Notice that $d_H(C) \ge d_H(C, C)$. Among all codes having a given minimum distance $d$, there exists at least one which has maximal cardinality. Such a code is termed an optimal (or maximum) Hamming packing. We will refer to it as a Hamming packing in what follows; that is, by a Hamming packing, we will always mean an optimal Hamming packing unless otherwise stated. In addition, packings that are not necessarily optimal but are not strictly contained in any other packing are referred to as maximal (Hamming) packings.
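As a quick illustration (not part of the paper's code), these distance notions translate directly into Python; `hamming`, `min_distance`, and `code_distance` are hypothetical helper names:

```python
from itertools import combinations

def hamming(w, v):
    """d_H(w, v): the number of coordinates in which the two words differ."""
    assert len(w) == len(v)
    return sum(1 for a, b in zip(w, v) if a != b)

def min_distance(code):
    """d_H(C): the minimum distance over all distinct pairs of codewords."""
    return min(hamming(w, v) for w, v in combinations(code, 2))

def code_distance(c1, c2):
    """d_H(C, C'): the minimum distance between words of the two codes."""
    return min(hamming(w, v) for w in c1 for v in c2)

# The two-word binary repetition code in Z_2^3 has minimum distance 3:
print(min_distance([(0, 0, 0), (1, 1, 1)]))  # → 3
```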
The term ‘packing’ comes from a geometric interpretation; considering the codewords as points, with the metric generated by the Hamming distance, a Hamming packing is indeed an extremal packing in the geometric sense. The maximal cardinality of a Hamming packing with minimum distance $d$ is defined as
(4) $N_{k_1, k_2, \ldots, k_s}(\alpha_1, \alpha_2, \ldots, \alpha_s; d) \equiv N(H; d) = \max_{d_H(C) \ge d} |C|,$
where we have introduced a more accurate and a simplified notation; we will use both notations.
Finally, it will be convenient to use the notion of spheres. The sphere with centre $w$ and radius $r$ can be defined as
$$S(w, r) = \{v : d_H(v, w) = r\}.$$
The concept of spheres can be generalized to arbitrary subsets $C \subseteq H$ as centres:
$$S(C, r) = \{w : d_H(C, \{w\}) = r\}.$$
Note that the sphere of an arbitrary subset might be empty, even in nontrivial cases.
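To make the sphere notions concrete, here is a small Python sketch (hypothetical helper names; full enumeration is only feasible for tiny spaces):

```python
from itertools import product

def hamming(w, v):
    return sum(1 for a, b in zip(w, v) if a != b)

def space(ks):
    """All words of the mixed Hamming space Z_{k_1} x ... x Z_{k_n}."""
    return list(product(*(range(k) for k in ks)))

def sphere(H, w, r):
    """S(w, r): words at distance exactly r from the centre word w."""
    return [v for v in H if hamming(v, w) == r]

def sphere_of_set(H, C, r):
    """S(C, r): words whose distance to the subset C is exactly r."""
    return [v for v in H if min(hamming(v, w) for w in C) == r]

H = space((2, 2, 2))
print(len(sphere(H, (0, 0, 0), 1)))  # → 3
```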

3. Known Bounds and Properties

Let us summarize certain bounds and basic properties related to the cardinality of maximum Hamming packings that we will use; these are well known from the literature.
Proposition 1.
Komamiya–Joshi–Singleton bound [17,26,27]: Let $H = \mathbb{Z}_{k_1} \mathbb{Z}_{k_2} \cdots \mathbb{Z}_{k_n}$, where $k_1 \le k_2 \le \cdots \le k_n$. Then, for any $1 \le d \le n$: $N(H; d) \le k_1 k_2 k_3 \cdots k_{n+1-d}$.
Proof. 
The pigeonhole principle can be applied. For any code with minimal distance $d$, restricting to any $n + 1 - d$ coordinates, the corresponding parts of each pair of codewords from the code must differ in at least one position; otherwise, this pair could only realize a distance of at most $d - 1$. This property holds for every such subset of coordinates, including the one belonging to the alphabets with minimal cardinalities.    □
A code will be called KJS sharp if it saturates its Komamiya–Joshi–Singleton bound. Remarkably, in the cases $d = 1, 2$, or $n$, optimal codes are KJS sharp, and direct optimal code constructions are trivial; several special cases are mentioned in [9]. From now on, we suppose that $3 \le d \le n - 1$. The KJS sharpness is a natural generalization of the maximum distance separable (MDS [17,28]) property in spaces over uniform alphabets.
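The bound itself is a one-liner; the sketch below (hypothetical `kjs_bound`) computes the product of the $n+1-d$ smallest alphabet cardinalities:

```python
from math import prod

def kjs_bound(ks, d):
    """Komamiya-Joshi-Singleton bound: the product of the n+1-d smallest
    alphabet cardinalities upper-bounds N(H; d)."""
    ks = sorted(ks)
    n = len(ks)
    assert 1 <= d <= n
    return prod(ks[: n + 1 - d])

# For H = Z_2 Z_3 Z_3 Z_4 and d = 3, the bound is the product of the
# two smallest alphabet sizes:
print(kjs_bound((2, 3, 3, 4), 3))  # → 6
```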
Next, we recapitulate a proposition which yields upper bounds for smaller spaces by truncating a code at a certain position.
Proposition 2
([8,9]). Let $H = \mathbb{Z}_{k_1} \mathbb{Z}_{k_2} \cdots \mathbb{Z}_{k_n}$ for a given $2 \le k_1 \le k_2 \le \cdots \le k_n$, and let $H_j = \mathbb{Z}_{k_1} \mathbb{Z}_{k_2} \cdots \mathbb{Z}_{k_j - 1} \cdots \mathbb{Z}_{k_n}$ for all $1 \le j \le n$. Then, the following two statements hold: $N(H_j; d) \le N(H; d)$ and $N(H_j; d) \ge \frac{k_j - 1}{k_j} N(H; d)$.
Proof. 
The first part is trivial: every code $C \subseteq H_j$ is also a code in $H$. Conversely, consider an optimal packing $C$ in $H$ such that $N(H; d) = |C|$. For the characters at coordinate $j$, using the pigeonhole principle, $\min_{i \in \mathbb{Z}_{k_j}} |\{w \in C : w[j] = i\}| \le \frac{|C|}{k_j}$. Let $i$ attain this minimum, and let $C' \subseteq C$ be the subset code obtained by the following truncation: $C' = \{w \in C : w[j] \ne i\}$. This has an equivalent code $C''$ in which $w[j] \le k_j - 2$ for all $w \in C''$; hence, $C'' \subseteq H_j$, and $|C''| \ge \frac{k_j - 1}{k_j} |C|$.    □
We note that the second inequality is commonly known [22], especially for the $k_j = 3$ case, which is particularly important for football pool systems. The proof is the same for other values of $k_j$; it is the straightforward generalization of item (iv) of Proposition 4.3 from the work of Brouwer et al. [8]. Notice that even in the case of $k_j = 2$, the truncation still has a reasonable meaning and provides an estimate for a 1-codimensional (i.e., one-letter-shorter) optimal Hamming packing:
$$N(\mathbb{Z}_{k_1} \mathbb{Z}_{k_2} \cdots \mathbb{Z}_{k_{j-1}} \mathbb{Z}_{k_{j+1}} \cdots \mathbb{Z}_{k_n}; d) \ge \frac{1}{k_j} N(\mathbb{Z}_{k_1} \mathbb{Z}_{k_2} \cdots \mathbb{Z}_{k_{j-1}} \mathbb{Z}_{k_j} \mathbb{Z}_{k_{j+1}} \cdots \mathbb{Z}_{k_n}; d).$$
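The truncation used in the proof of Proposition 2 can be sketched in Python as follows (hypothetical `truncate`; it drops the codewords carrying the least frequent character value at coordinate $j$ and relabels so that the remaining alphabet is $\{0, \ldots, k_j - 2\}$):

```python
from collections import Counter

def truncate(code, j, kj):
    """Truncation from the proof of Proposition 2: remove the codewords whose
    j-th character equals the least frequent value i, then relabel so that the
    surviving characters lie in {0, ..., kj-2}. By the pigeonhole principle,
    at least (kj-1)/kj of the codewords survive."""
    counts = Counter(w[j] for w in code)
    i = min(range(kj), key=lambda v: counts.get(v, 0))  # least frequent value
    kept = [w for w in code if w[j] != i]
    # relabelling at coordinate j: map the freed top value kj-1 onto slot i
    relabel = lambda c: i if c == kj - 1 else c
    return [w[:j] + (relabel(w[j]),) + w[j + 1:] for w in kept]

# Truncating a ternary coordinate (kj = 3) of a toy code:
print(truncate([(0, 0), (0, 1), (1, 2)], 1, 3))  # → [(0, 1), (1, 0)]
```

Since the relabelling is a permutation of the alphabet at a single coordinate, all Hamming distances within the surviving code are preserved.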
Corollary 1.
Consider a fixed $d$ and two Hamming spaces $H$ and $H'$ with the same dimension. Assume that $k_j = k'_j$ for all $1 \le j \le n + 1 - d$, that $k_j \le k'_j$ holds, and that $H$, the smaller space, has a KJS sharp packing. Then the same packing is also optimal in the bigger space $H'$.
This means that having a KJS sharp packing in a space also determines the optimal packing in certain bigger spaces. Let us now discuss codes of length 4.
Proposition 3.
For $n = 4$, $2 \le k_1 \le k_2 \le k_3 \le k_4$ and $k_2 \ne 6$, $k_2 \ge 3$, the optimal value $N_{k_1, k_2, k_3, k_4}(1, 1, 1, 1; d = 3) = k_1 k_2$, which meets its Komamiya–Joshi–Singleton (KJS) bound.
The constructive proofs known from the literature are based on the existence of mutually orthogonal Latin squares (MOLS), which can be found in [1], wherein for the case of $k_1 = k_2 = k_3 = k_4$, Theorems 10.21, 6.38, and 6.51 in [1] are applied as MOLS. Due to the sharpness of the KJS bounds, codes truncated at the position with the smallest cardinality remain KJS sharp feasible maximal packings in the truncated Hamming space. The remaining marginal cases for $n = 4$ and $k_2 \in \{2, 6\}$ are also known and can be enumerated (we will also list one optimal code per case in Section 6): $N_2(4; d=3) = 2$, $N_{2,3}(3,1; d=3) = 3$, $N_{2,3}(2,2; d=3) = 4$, $N_{2,4}(3,1; d=3) = 4$, $N_{5,6}(1,3; d=3) = 30$, $N_6(4; d=3) = 34$, $N_{6,7}(3,1; d=3) = 36$.
Let us briefly summarize some known algebraic constructions for families of optimal codes having a length of 5 (these are based on Theorems 10.21, 6.38, and 6.51 in [1]).
Proposition 4.
Suppose that $2 \le k_1 \le k_2 \le k_3 \le k_4 \le k_5$, $n = 5$, and suppose that there exist 3 MOLS of order $k_2$. Then the optimal value $N_{k_1, k_2, k_3, k_4, k_5}(1, 1, 1, 1, 1; d = 4) = k_1 k_2$.
Proposition 4 covers all but the following finitely many cases: $k_2 \in \{2, 3, 6, 10\}$. For the latter, mostly partial results are known, while the optima are unknown. Remarkably, the results in [29,30,31] show that there exist 3 MOLS of order $k_2$ for every $k_2 \ge 11$.
Proposition 5.
Suppose that $2 \le k_1 \le k_2 \le k_3 \le k_4 \le k_5$, $n = 5$, $k_3 \ge 4$, and $k_3 \ne 4s + 2$. Then, for the optimal value, we have $N_{k_1, k_2, k_3, k_4, k_5}(1, 1, 1, 1, 1; d = 3) = k_1 k_2 k_3$.
The proof relies on the existence of orthogonal arrays of strength 3 and length 5 in [32] (denoted by $OA(3, 5, m)$ therein), where $m = k_3$, and the alphabet truncation mentioned in Proposition 2 is applicable to the respective coordinates belonging to $k_1, k_2$. Proposition 5 covers all cases except the following: $k_3 = 4s + 2$ for certain infinite classes of $s$ values, based on the new construction methods of orthogonal arrays in [33].
Corollary 2.
Suppose that $s \ge 1$, $s \in \mathbb{Z}$ is given. Then, after applying constructions from Proposition 5 with parameter $k_3 = 4s + 3$ and estimations from Proposition 2, we establish the following KJS sharp optimal values:
(8) $N_{(4s+2),(4s+3)}(1, 4; d = 3) = (4s+2)(4s+3)^2$, $\quad N_{(4s+2),(4s+3)}(2, 3; d = 3) = (4s+2)^2(4s+3)$, $\quad N_{(4s+2),(4s+3)}(3, 2; d = 3) = (4s+2)^3$.
The statement also holds for $k_3 = 4$, which yields $N_{3,4}(3, 2; d = 3) = 27$. We can omit an infinite number of Hamming space families, some with huge alphabet cardinalities, from our consideration by setting the following bound on the cardinality of the maximal alphabet: $\max_{j=1}^{n} k_j < \text{KJS}$, or formally: $k_m < \prod_{j=1}^{n+1-d} k_j$ for all $1 \le m \le n$. If this bound is violated, an optimal code can be altered so that all codewords have different values at position $n$. Hence, the character at this position can be eliminated, and thus, the code can be constructed from an optimal code of the resulting smaller Hamming space.
This observation leads us to a reasonable number of cases for small | H | values up to 600 and occasionally for some bigger spaces. Our strategy was to fix a d, n, and a KJS bound as a product of fixed cardinalities, and then determine all small spaces with their optimal packing below the given KJS bound using Corollary 1.

4. ILP Models for Hamming Packing

Let $H$ be a given Hamming space and $d \in \mathbb{Z}^+$. Let $x_a$ denote a Boolean indicator variable for each possible codeword $a \in H$ so that $x_a = 1$ iff $a$ is in the maximum Hamming packing $C$. It is well known [24] that a $C$ can be trivially obtained from the following integer linear program as any of its primal optimal solutions:
(9) $\max \sum_{a \in H} x_a \quad \text{s.t.} \quad x_a + x_b \le 1 \;\; \forall a, b \in H : 1 \le d_H(a, b) \le d - 1, \quad x \in \{0, 1\}^{|H|}.$
The objective ensures the maximality of the packing, whereas the inequality constraints forbid the simultaneous selection of points too close to each other. Later, we will make use of the monotonicity property of this ILP: if in a feasible solution an element of value 1 is set to 0, the resulting solution is also feasible. In other words, every subset of a feasible packing is feasible.
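For very small spaces, the optimum of model (9) can be checked against plain exhaustive search; the following sketch (hypothetical `N_bruteforce`) enumerates subsets of $H$ from the largest downward:

```python
from itertools import combinations, product

def hamming(w, v):
    return sum(1 for a, b in zip(w, v) if a != b)

def N_bruteforce(ks, d):
    """Optimum of model (9) by exhaustive search: the largest subset of H in
    which every pair of distinct words has distance >= d. Tiny spaces only."""
    H = list(product(*(range(k) for k in ks)))
    for size in range(len(H), 1, -1):
        for cand in combinations(H, size):
            if all(hamming(w, v) >= d for w, v in combinations(cand, 2)):
                return size
    return 1 if H else 0  # a single word is always a feasible packing

# N(Z_2 Z_2 Z_2; d = 2) = 4, attained by the even-weight binary code:
print(N_bruteforce((2, 2, 2), 2))  # → 4
```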
The simple model in Equation (9) is inefficient for determining $N_{k_1, k_2, \ldots, k_s}(\alpha_1, \alpha_2, \ldots, \alpha_s; d)$ due to the exponential scaling of the number of variables. Here, we propose some modifications to improve the efficiency. Without loss of generality, it can be assumed that the all-zero codeword $z$ is always chosen; hence, $z \in C$. Under this assumption, the model can be simplified by omitting the variables corresponding to the inner part of the $z$-centred ball with radius $d - 1$, leading to
(10) $\max \sum_{a \in H} x_a \quad \text{s.t.} \quad x_z = 1, \quad x_a = 0 \;\; \forall a : 1 \le d_H(a, z) \le d - 1, \quad x_a + x_b \le 1 \;\; \forall a, b \in H : 1 \le d_H(a, b) \le d - 1, \quad x \in \{0, 1\}^{|H|}.$
For practical reasons, the obsolete binary variables can be removed from the model. This results in a significant decrease in the memory consumption of the presolvers used by ILP solvers, as confirmed by our experience. Thus, the model can be written in the form
(11) $\max\; 1 + \sum_{\substack{a \in H \\ d_H(a, z) \ge d}} x_a \quad \text{s.t.} \quad x_z = 1, \quad x_a + x_b \le 1 \;\; \forall a, b \in H \setminus B(z, d - 1) : 1 \le d_H(a, b) \le d - 1, \quad x \in \{0, 1\}^{|H|}.$
Note that developing certain linear programming models in order to estimate bounds for the cardinalities of extremal Hamming packings is a fairly old idea; see e.g., Delsarte’s results [11]. The ILP model we have introduced here, however, has the advantage that it is an equivalent declarative reformulation of the exhaustive search [24].

5. Reducing ILPs Using Contact Graphs

In this section, we use contact graphs. We show that every Hamming packing C can be modified so that the resulting C packing will have a connected C G ( C ) contact graph and | C | = | C | . Furthermore, under certain restrictions on the cardinalities of the alphabets, there exists a maximum packing with a connected contact graph with a minimal vertex degree of at least 2. Let us first define contact graphs and Hamming graphs.
Definition 1.
The contact graph $CG(C)$ of a Hamming packing $C$ (with fixed minimal distance $d$) is the graph whose vertices correspond to codewords in $C$, and there is an edge between a pair of nodes $a$ and $b$ iff $d_H(a, b) = d$.
Definition 2.
The Hamming graph $H(C, d)$ of a subset of a Hamming space for distance $d$ is the graph whose vertices correspond to codewords in $C$, and there is an edge between a pair of nodes $a$ and $b$ iff $d_H(a, b) \le d$.
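Both graphs are straightforward to build explicitly; the sketch below (hypothetical helpers) constructs $CG(C)$ as an adjacency map and checks its connectivity by breadth-first search:

```python
from itertools import combinations

def hamming(w, v):
    return sum(1 for a, b in zip(w, v) if a != b)

def contact_graph(code, d):
    """CG(C): an edge joins two codewords at distance exactly d."""
    adj = {w: set() for w in code}
    for w, v in combinations(code, 2):
        if hamming(w, v) == d:
            adj[w].add(v)
            adj[v].add(w)
    return adj

def is_connected(adj):
    """Breadth-first search from an arbitrary vertex."""
    start = next(iter(adj))
    seen, frontier = {start}, [start]
    while frontier:
        frontier = [v for u in frontier for v in adj[u] if v not in seen]
        seen.update(frontier)
    return len(seen) == len(adj)

C = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)]
print(is_connected(contact_graph(C, 2)))  # → True
```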
Note that for any Hamming packing $C$, $CG(C)$ is a subgraph of $H(C, d)$, and the latter is a subgraph of $H(H, d)$. Now, we show that for any code with a given minimal distance and any $d$ not exceeding that minimal distance, one can modify the code by replacing a single codeword so that the distance $d$ is realized in the new code. Formally, this can be represented by the following proposition.
Proposition 6.
For every $0 < d \in \mathbb{Z}$ and for every $C \subseteq H$ with $|C| \ge 2$ and $d \le d_H(C)$, there exists a code $C' \subseteq H$ such that $d = d_H(C')$, $|C'| = |C|$, and $|C \cap C'| \ge |C| - 1$.
Proof. 
The proof is constructive. According to the assumptions, there exists a pair of points $\{v, w\} \subseteq C$ on which $d_H(C) = d_H(v, w)$ is realized. Let $\{v, w\}$ be a fixed, arbitrarily chosen such pair. If $d_H(w, C \setminus \{w\}) = d$, then let $C' = C$, and the proof is completed. Otherwise, change a single character of $w$ in which it differs from $v$, say at coordinate $j$. This results in the codeword $w' = (w[1], w[2], \ldots, w[j-1], v[j], w[j+1], \ldots, w[n])$. The resulting code $C^{(1)} = C \setminus \{w\} \cup \{w'\}$ preserves cardinality, and $d_H(C^{(1)}) = d_H(C) - 1$ holds. This procedure can be repeated from the beginning with the codeword pair $(v, w')$, which now realizes the minimal distance in $C^{(1)}$. The repetition will modify $w'$ again; thus, it is the only codeword to change. The procedure is iterated, always modifying the same codeword; for each iteration, $d_H(C^{(k+1)}) = d_H(C^{(k)}) - 1$ holds. Thus, the iteration can be continued until $d_H(C^{(m+1)}) = d$, and we let $C' = C^{(m+1)}$.    □
This proposition has an important corollary:
Corollary 3.
$N(H; d) = \max_{d_H(C) = d} |C|$.
Recall that the maximization in the definition in Equation (4) is over codes with $d_H(C) \ge d$; this can be sharpened to equality, i.e., Proposition 6 ensures that the lower values are covered. The idea of the previous proof can be employed to prove a useful proposition regarding contact graphs: codes can be transformed into codes of the same cardinality with no isolated vertices.
Proposition 7.
For every $0 < d \in \mathbb{Z}$ and for every $C \subseteq H$, if $|C| \ge 2$ and $d \le d_H(C)$ hold, then there is a code $C' \subseteq H$ such that $|C'| = |C|$ and $CG(C')$ does not have any isolated vertices.
Proof. 
Let $w \in C$ loop through all isolated vertices (i.e., those for which $d_H(w, C \setminus \{w\}) > d$ holds). Using the algorithm in the proof of Proposition 6, replace each such $w$ with a $w'$ so that $d_H(w', C \setminus \{w\}) = d$. In this way, each vertex will have at least one edge.    □
This proposition has strong consequences, resulting in useful additional constraints to the ILP model that determines N ( H ; d ) .
Corollary 4.
The ILP in Equations (9)–(11) can be extended with the constraints
(13) $\forall w \in H : \; x_w \le \sum_{w' : d_H(w, w') = d} x_{w'}.$
These constraints literally mean that for every selected codeword, there is another one selected at a distance of exactly $d$, i.e., there are no isolated vertices in the contact graph of the code. Given a lower bound $s$ for the objective (typically an initial feasible solution to start with), we can also formulate dual bounds: if we impose an upper bound $s + k$ on the objective and the optimum stays below it, then the optimum of the original problem has been found.
Corollary 5.
Let us extend the ILP in Equation (9) (possibly extended with the constraints in Equation (13)) with the following upper bound on the objective function:
(14) $(s \le)\; \sum_{w \in H} x_w \le s + k,$
where $k \ge 1$ is any fixed integer. If the optimum of the extended ILP is at most $s + k - 1$, the optimum of the original ILP (with or without (13)) has also been found.
Otherwise, if the extended ILP saturates the arbitrarily chosen upper bound, the obtained solution may not be optimal for the original problem. If, however, the constraint is not saturated, the optimum has been found.
Proof. 
Due to the monotonicity of feasible packings, removing one codeword from any feasible solution vector yields another feasible vector. For the ILP in Equation (9), as a feasibility problem, this means that changing one element from 1 to 0 in any feasible solution vector $x$ preserves feasibility. (In this ILP, all the constraints are of the form $x_w + x_v \le 1$; hence, decreasing the left-hand side by one results in constraints that are still satisfied for the originally feasible solutions.) Hence, for all objective values below the optimum, there exist feasible solutions of the model in (9). Because of Proposition 7, the same holds when extending the model with the constraints in Equation (13). Hence, if the ad hoc bound $s + k$ is bigger than the actual optimum, it will not be active.    □
The bound $(s + k)$ is referred to as a dual bound; it is typically chosen to be the best known bound or a plausible conjecture. According to our practical experience with solvers, starting the solution process with a best known initial solution having cardinality $s$ and setting the dual bound $(s + 1)$ dramatically reduces the memory footprint. This is a practical way to verify whether $s$ is an optimum. This will be illustrated in Section 6.1. Due to the symmetry, e.g., that every nonempty code $C \subseteq H$ has an equivalent one containing $z = \overline{00\cdots0}$, the models in Equation (10) or Equation (11) can be used in the previous corollaries instead of Equation (9).
Now, we present one of our main results.
Proposition 8.
Every Hamming packing $C$ with $|C| \ge 2$ and minimal distance $d$ can be transformed into another Hamming packing $C'$ with the same number of codewords and the same minimal distance, whose contact graph $CG(C')$ is connected.
Proof. 
The desired transformation can be carried out using Algorithm 1 (in which $\sqcup$ denotes a disjoint union).
Algorithm 1 Transforming $C$ to $C'$ with the same number of codewords and minimal distance, and connected $CG(C')$
  1: INPUT: $C \subseteq H$ nonempty Hamming packing, $d \in \mathbb{Z}$, $d > 0$ distance
  2: START
  3: pick a random $\hat{w} \in C$, then partition $C =: C_1 \sqcup C_2$, where $C_1 = \{\hat{w}\}$ and $C_2 = C \setminus C_1$
  4: while $C_2 \ne \emptyset$ do
  5:   while $\exists\, w, w' : w \in C_1, w' \in C_2 : d_H(w, w') = d$ do
  6:     update $C_1$: $C_1 := C_1 \cup \{w'\}$
  7:     update $C_2$: $C_2 := C_2 \setminus \{w'\}$
  8:   end while
  9:   if ($C_2 \ne \emptyset$) and ($d_H(C_1, C_2) > d$) then
 10:     sort the elements of $C_2$ in increasing order of their distance from $C_1$
 11:     let $d'$ denote: $d' := d_H(C_1, C_2)$
 12:     select a smallest element $w'$ from $C_2$
 13:     select an arbitrary $w : w \in C_1$, where $d_H(w, w') = d'$
 14:     select an arbitrary index $j$, where $w[j] \ne w'[j]$
 15:     denote $a := w'[j]$, $b := w[j]$
 16:     for each $y \in C_2$ do
 17:       if $y[j] = a$ then
 18:         $y[j] := b$
 19:       else if $y[j] = b$ then
 20:         $y[j] := a$
 21:       end if
 22:     end for
 23:   end if
 24: end while
 25: STOP, OUTPUT: $C' := C_1$
The proposition is proven if Algorithm 1 terminates in a finite number of steps. The algorithm partitions the set of codewords $C$ into subsets $C_1$ and $C_2$. The subset $C_1$ initially contains an arbitrarily chosen single codeword, and codewords from the complementary set $C_2$ which are exactly at a distance $d$ from an element of $C_1$ are gradually moved to $C_1$ in the first inner loop in lines 5–8. If the inner loop runs out of movable elements, then the distance $d'$ of the two sets is at least $d + 1$. In this case, we pick a pair of elements $w$ and $w'$ so that $d_H(w, w') = d'$, which always exists. Next, we modify the code so that the distances within $C_1$ and also within $C_2$ are preserved, while the distance between $C_1$ and $C_2$ decreases by one. This is always possible (see the described steps in the algorithm) and results in a code with the same cardinality. For the outer loop, and thus for the whole algorithm, to terminate, the set $C_2$ has to be emptied. Observe also that
  • Code $C$ remains a $d$-feasible Hamming packing at the end of each loop in lines 8, 22, and 24;
  • $CG(C_1)$ is a connected graph at line 8;
  • Once $d'$ reaches $d + 1$ at line 11, $C_2$ must lose one of its elements, which will move to $C_1$ by the end of the while loop at line 8;
  • After exiting the for loop (line 22), $CG(C_2)$ does not change; no Hamming distance between elements of $C_2$ is altered;
  • At the end of the for loop, the distance between $C_1$ and $C_2$ decreases by exactly one; as the distance was larger than $d$ at the beginning of the loop, $C_1 \cup C_2$ still cannot contain any pair at a distance less than $d$;
  • In lines 8, 22, and 24, either the cardinality of $C_2$ or $d_H(C_1, C_2)$ is strictly decreasing.
As the cardinality of all the involved sets, including that of $C_2$, is finite (as our Hamming space is finite), $d_H(C_1, C_2) \le n$ holds, and the cardinality of $C_2$ monotonically decreases during the iterations, $C_2$ will become empty, and thus, the algorithm will terminate in a finite number of steps. In the worst case, the maximal number of iterations is $(|C| - 1)(n + 1 - d)$, which arises when only one codeword is moved from $C_2$ to $C_1$ in each step. □
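Algorithm 1 can be sketched in Python as follows (hypothetical `connect_contact_graph`; this is an illustration, not the authors' implementation, and choices that Algorithm 1 leaves arbitrary are made deterministically here):

```python
from itertools import combinations

def hamming(w, v):
    return sum(1 for a, b in zip(w, v) if a != b)

def connect_contact_graph(code, d):
    """Sketch of Algorithm 1: returns a packing with the same cardinality and
    minimal distance whose contact graph (edges at distance exactly d) is
    connected."""
    C1, C2 = {code[0]}, set(code[1:])
    while C2:
        # lines 5-8: absorb everything at contact distance d from C1
        moved = True
        while moved:
            moved = False
            for w2 in sorted(C2):
                if any(hamming(w1, w2) == d for w1 in C1):
                    C1.add(w2)
                    C2.discard(w2)
                    moved = True
        if C2:
            # lines 9-22: bring C2 one step closer by swapping two character
            # values at a single coordinate (a relabelling applied to C2 only)
            _, w1, w2 = min((hamming(a, b), a, b) for a in C1 for b in C2)
            j = next(i for i in range(len(w1)) if w1[i] != w2[i])
            a, b = w2[j], w1[j]
            swap = {a: b, b: a}
            C2 = {y[:j] + (swap.get(y[j], y[j]),) + y[j + 1:] for y in C2}
    return sorted(C1)
```

On a disconnected example such as $\{000000, 110000, 001111\}$ with $d = 2$, the sketch returns a three-word packing with minimal distance 2 whose contact graph is connected.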
To demonstrate the usefulness of Proposition 8, we show that a lemma by Plotkin [34] arises as a simple corollary.
Corollary 6. 
$N_2(n, d) = 2 \iff \frac{2n}{3} < d \le n$.
Proof. 
The case of $d \le n$ is straightforward in one direction: $C = \{\overline{00\cdots0}, \overline{11\cdots1}\} \subseteq H$ with $d_H(C) = n \ge d$ provides the lower bound $2 \le N_2(n, d)$. For the nontrivial cases, let us apply Proposition 8 and suitable permutations on $C$. Suppose, without loss of generality, that $|C| = 3 \le N_2(n, d)$, and denote the elements by $C = \{z, w, v\}$. Let the minimal distance $d$ be realized on the pair $z = \overline{0^n}$ and $w = \overline{0^{n-d} 1^d}$ (here exponents denote character repetitions). Using the fact that the code's contact graph is connected, we may suppose that the third word $v$ is connected to the all-zero $z$; hence, $v$ must have weight $d$. It follows that, with suitable permutations, it takes the form $v = \overline{0^{n-d-x} 1^x 1^{d-x} 0^x}$. The distance must satisfy $d \le d_H(v, w)$, and since $d_H(v, w) = 2x$, we obtain that $d \le 2x$ is necessary. In the case when $d > \frac{2n}{3}$, or, equivalently, $\frac{n}{3} > n - d$, from $n - d - x \ge 0$ we obtain $\frac{n}{3} > n - d \ge x$; hence, $d > \frac{2n}{3} > 2x \ge d$, which is a contradiction. Hence, in this case, no $v$ exists which meets all the requirements. The remaining case is $d \le \frac{2n}{3}$. Let us choose $x = n - d$. Considering again that $d_H(v, w) = 2x$, it follows that $2x = 2n - 2d \ge 2 \cdot \frac{3d}{2} - 2d = d$, implying that $d_H(v, w) = 2x \ge d$, and so, $v$ is a feasible word far enough from $w$. □
Let us now establish a key property for mixed codes using characters from alphabets of size at least 3. This is our second main proposition.
Theorem 1.
Let $H$ be a Hamming space with $k_1 \ge 3$. Then every optimal Hamming packing $C \subseteq H$ with $|C| \ge 3$ and minimal distance $d$ can be transformed into another Hamming packing $C'$ with the same number of codewords and the same minimal distance, whose contact graph $CG(C')$ is connected and has a minimal vertex degree of at least 2.
Proof. 
According to Proposition 8, for a given $d \in \mathbb{Z}$, there exists a maximum packing $C \subseteq H$ with $|C| \ge 3$ that has a connected contact graph $CG(C)$. If every $v \in V(CG(C))$ fulfils $\deg(v) \ge 2$, the proof is complete. Otherwise, suppose the contrary, whereby every maximum packing with a connected contact graph has at least one vertex with degree 1. As the number of maximum packings is finite, we can consider one of these, $C$, with a minimal number of 1-degree vertices (see Figure 1 for an illustration). Let $w_0$ be a vertex of $C$ with $\deg(w_0) = 1$, and let us denote its only neighbour by $z$. From now on, we allow permuting the coordinates of the symbols, even without maintaining the ordering of the cardinalities of the alphabets. By such permutations, we can assume, without loss of generality, that $z$ and $w_0$ are brought to the following equivalent forms: $z = \overline{0^n}$ and $w_0 = \overline{0^{n-d} 1^d}$ (exponents denoting character repetitions). In this way, the differences between $z$ and $w_0$ are realized in the rightmost coordinates. Now, for every $1 \le k \le d$, let $w_k = \overline{0^{n-d} 1^{d-k} 2^k}$. These codewords are in the Hamming space because each alphabet contains at least 3 elements.
The following consideration is illustrated in Figure 1. Observe that for $0 < k < d$: $w_k \notin C$, because $d_H(C \setminus \{w_k\}, w_k) \le d_H(w_0, w_k) = k < d$. Also, $w_d \notin C$; otherwise, $d_H(w_0, w_d) = d$ would imply that $\deg(w_0) \ge 2$, which is a contradiction. Let $D_k = d_H(C \setminus \{w_0, z\}, w_k)$ for $0 \le k \le d$. It is well known that $D_0 > d$ and $D_d < d$, and $|D_k - D_{k-1}| \le 1$ for every $k > 0$, because if at most one coordinate of a word changes, then its distance to a fixed set can change by at most 1. This implies that there must exist a $k$ with $D_k = d$. Replacing $w_0$ with the corresponding word $w_k$ in $C$ yields $C' = (C \setminus \{w_0\}) \cup \{w_k\}$. Contact graph vertex degrees are not decremented for members of $C'$ other than $w_k$, and because $\deg(w_k) \ge 2$, the code $C'$ contains one fewer codeword with a degree of 1, thus contradicting the minimality of the number of 1-degree nodes in $CG(C)$. □
Corollary 7.
The ILP in Equation (9) can be extended with the following constraints if H is a product of nonbinary alphabets:
(15) ∀ w ∈ H :  2 x_w ≤ ∑_{w′ ∈ H : d_H(w, w′) = d} x_{w′} .
This constraint prescribes the minimal vertex degree according to the theorem. Notice that Equations (14) and (15) cannot be applied together to determine N ( H ; d ) without further consideration, unless it is proven independently that the dual bound s + k in (14) is correct for the given case.
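As an illustration of how such minimum-degree constraints can be generated, the following is a minimal sketch over an assumed toy space Z_3 × Z_3 × Z_3 with d = 2 (a product of nonbinary alphabets, as Corollary 7 requires); it is not the production model, only the constraint-generation pattern:

```python
from itertools import product

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

# Assumed toy instance: H = Z_3 x Z_3 x Z_3, minimum distance d = 2.
H = list(product(range(3), repeat=3))
d = 2

# One constraint per word w:
#   2 * x_w  <=  sum of x_{w'} over all w' with d_H(w, w') = d,
# i.e. a selected codeword must have at least two contacts at distance d.
constraints = {w: [v for v in H if d_H(w, v) == d] for w in H}

# In this space every word has 3 * 2^2 = 12 words at distance exactly 2,
# so each constraint has enough candidate neighbours.
assert all(len(nbrs) == 12 for nbrs in constraints.values())
```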
There is a nontrivial example which illustrates that the alphabet cardinality and the maximality criteria are necessary in Theorem 1. It is known [35] that N_2(5; d=3) = 4, and in this particular case, the optimal packing and any of its 3-element subsets are unique up to isomorphism; thus, their contact graphs are also uniquely determined, namely C_4 and the tree on 3 vertices, respectively. As the minimal distance d is odd and the alphabets are all binary, the contact graphs of these packings are necessarily triangle-free, violating the minimal degree criterion for the codes of size 3.
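The quoted value N_2(5; d = 3) = 4 [35] is small enough to be confirmed by exhaustive search; a brute-force sketch:

```python
from itertools import product, combinations

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_packing(code, d):
    """True iff all pairwise Hamming distances are at least d."""
    return all(d_H(a, b) >= d for a, b in combinations(code, 2))

words = list(product((0, 1), repeat=5))  # the binary 5-cube, 32 words

# Some 4-subset attains minimum distance 3, but no 5-subset does,
# confirming N_2(5; d = 3) = 4.
assert any(is_packing(c, 3) for c in combinations(words, 4))
assert not any(is_packing(c, 3) for c in combinations(words, 5))
```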

6. Computational Results

Our computational approach can be summarized as follows. We start with the ILP in Equation (9) and include the additional constraints in Equation (13). Next, we branch on equivalence classes of small subgraphs of the contact graph, usually of 2 or 3 nodes/codewords. We then apply a reduction technique per case, which consists of fixing variables. The key idea of the reduction is the following: in each branch, having chosen the initial pair of codewords, the words that are at a distance less than d from either member of the chosen pair can be eliminated from the model by fixing the respective variables to zero. This is one of the most common operations performed by ILP presolvers, and it leads to a drastic reduction in the size of the respective ILP models. That it suffices to fix at most 2 codewords in all distinct possible ways is a corollary of Proposition 6. To perform branching, 3 or more codewords are initially added to the models in all possible nonisomorphic ways; pruning is then conducted using the connectedness of the contact graph, which can be assumed by Proposition 8. In our particular calculations, we have used various combinations of the techniques described so far, together with the ILP models in Equation (11) and constraints based on Corollary 13.
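The per-branch variable fixing can be sketched as follows (assuming, for illustration, the binary 5-cube with d = 3 and one fixed contact pair; the actual branches of the paper are built analogously over mixed alphabets):

```python
from itertools import product

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

# Assumed toy branch: binary 5-cube, d = 3, and an initial contact pair.
H = list(product((0, 1), repeat=5))
d = 3
w0, w1 = (0, 0, 0, 0, 0), (1, 1, 1, 0, 0)
assert d_H(w0, w1) == d            # the pair forms a contact graph edge

fixed_one = {w0, w1}               # variables fixed to 1 in this branch
fixed_zero = {w for w in H if w not in fixed_one
              and min(d_H(w, w0), d_H(w, w1)) < d}   # cannot coexist
free = [w for w in H if w not in fixed_one | fixed_zero]

# The branch's ILP shrinks from 32 variables to the 6 free ones.
assert (len(H), len(fixed_zero), len(free)) == (32, 24, 6)
```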
Dual bounds based on Corollary 5, i.e., Equation (14), were used in most of the cases, but not, for instance, for N_6(4; d=3), N_3(5; d=3), N_{4,3}(1,4; d=3), N_{5,3}(1,4; d=3), N_{6,3}(1,4; d=3), N_{4,3}(1,5; d=4), or N_{4,3}(1,5; d=5). In these cases, Equation (11) and additional constraints based on Corollary 7 were used. These constraints enforce that for each selected codeword, at least two further codewords at a distance of exactly d from it are also selected. This is the most important implication of Theorem 1, and it resulted in a significant decrease in computational time in these cases.
We performed our experiments using two different hardware/software configurations:
  • 256 GB RAM; two AMD EPYC 7302 16-core processors (AMD, Santa Clara, CA, USA), totalling 32 cores; IBM CPLEX version 22.1.0.0 [36].
  • 8 GB RAM; Apple M1 (ARM) processor with 8 cores (TSMC, Hsinchu, Taiwan); SCIP version 9.0 [37].
For the sake of completeness, we provide a summary of known bounds for a number of smaller Hamming spaces and distances in Appendix A. Importantly, we reproduced all these results using the methods described here, and all the listed bounds are sharp. In what follows, we describe in detail the cases in which our method has led to an improvement over existing results, showcasing its typical use.

6.1. Calculating N 2 , 3 , 4 ( 5 , 1 , 1 ; d = 3 ) = 24 and N 2 , 3 , 5 ( 5 , 1 , 1 ; d = 3 ) = 28

The cases N_{2,3,4}(5,1,1; d=3) = 24 and N_{2,3,5}(5,1,1; d=3) = 28 are similar to each other, and for both, our approach has found exact values that were not previously known. We describe them here to illustrate one of the typical applications of our method.
We start from the ILP model in Equation (11) and apply Corollary 13 to both models: each feasible solution must include, for each codeword, another one within a distance d. This is taken into account in practice by adding the constraint in Equation (13). Altogether, the result can be obtained by solving the model in Equation (11) together with this additional constraint.
For the case N_{2,3,4}(5,1,1; d=3), we have employed an additional ad hoc simplification. There is a feasible packing that has at least 3 codewords with 0s in their ternary and quaternary positions, and, by the pigeonhole principle, any feasible packing with more than 24 codewords must contain 3 codewords agreeing in these positions. Formally, we have set the following equalities:
(16) x 0000000 = 1 , x 0000111 = 1 , x 0011001 = 1 ,
without loss of generality. This also means that the truncated codewords must form a feasible packing in the 5-dimensional binary Hamming space; such words can be arranged uniquely up to isomorphism due to [22,35]. After starting from this initial feasible solution and fixing the 3 corresponding variables, we have solved the resulting ILP. The memory consumption of the solution process on the AMD platform is displayed in Figure 2; the growth and collapse of the branch-and-bound tree can be observed. The calculation took more than a day. On the same problem, SCIP was more efficient, solving it within an hour.
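The three fixed words of Equation (16) can be checked directly; a small sketch (with the quaternary and ternary coordinates written first, matching the N_{4,3,2}(1,1,5) ordering used in the figures):

```python
from itertools import combinations

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

# The three words fixed in Equation (16): quaternary and ternary
# coordinates first, followed by the five binary coordinates.
fixed = ["0000000", "0000111", "0011001"]
d = 3

# All three agree (all-zero) on the nonbinary coordinates: the
# pigeonhole choice among the 4 * 3 = 12 possible nonbinary prefixes
# of a packing with more than 24 codewords.
assert all(w[:2] == "00" for w in fixed)
# They form a feasible packing, and so do their binary truncations in
# the 5-dimensional binary Hamming space.
assert all(d_H(a, b) >= d for a, b in combinations(fixed, 2))
assert all(d_H(a[2:], b[2:]) >= d for a, b in combinations(fixed, 2))
```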
The calculation of N_{2,3,5}(5,1,1; d=3) was performed in two attempts using the AMD platform. In the first attempt, no dual bounds were set, and the solver aborted after exhausting all available memory in the growth phase of the branch-and-bound tree, as displayed in Figure 3. In the second attempt, we restarted the solver with the best primal value of 28 found previously, and a dual bound limit of 29 was set using Corollary 5. (Note that completing the first attempt would not have been necessary; in our experience, the solver finds the primal optimum rather fast, so it could have been stopped earlier.) With this setting, the solver ran to completion and stopped with an integer optimal solution. The memory footprint was significantly lower (see Figure 4).

6.2. Calculating N 2 , 4 ( 7 , 1 ; d = 3 ) = 32

We used the ARM platform only for the case N_{2,4}(7,1; d=3). First, we obtained a primal solution of 32 within an hour, applying the ILP model in Equation (11) with additional constraints based on Corollary 13. The obtained solution had an interesting asymmetric property: it contained five codewords with trailing zeros in the last three binary positions. It was therefore promising to restart the solver with an objective target of 33, either finding a larger code or disproving its existence and thereby establishing the dual bound of 32. In summary, if there is a code C with |C| ≥ 33, then by the pigeonhole principle we can assume without loss of generality that C contains at least five codewords of the form α_1α_2α_3α_4α_5 000 (the most significant character α_1 can still be chosen from four elements).
All distinct cases can be classified, and without loss of generality, it can be assumed that each inequivalent case has a code containing one of the following pentaplets:
  • 00000000, 00111000, 11011000, 21101000, 31110000,
  • 00000000, 00111000, 11100000, 21010000, 31001000,
  • 00000000, 01111000, 11100000, 10011000, 21010000,
  • 00000000, 01111000, 11100000, 21010000, 30011000,
  • 00000000, 01111000, 11100000, 21010000, 31001000.
The corresponding five constraint pentaplets per case,
(17) x_00000000 = x_00111000 = x_11011000 = x_21101000 = x_31110000 = 1 ,
(18) x_00000000 = x_00111000 = x_11100000 = x_21010000 = x_31001000 = 1 ,
(19) x_00000000 = x_01111000 = x_11100000 = x_10011000 = x_21010000 = 1 ,
(20) x_00000000 = x_01111000 = x_11100000 = x_21010000 = x_30011000 = 1 ,
(21) x_00000000 = x_01111000 = x_11100000 = x_21010000 = x_31001000 = 1 ,
were used to formulate smaller, solvable subproblems. All subproblems were solved in at most 6 h per instance. Notice that the case in which the constraints defined in Equation (19) disallow one value at the quaternary position is a result of Östergård [22]: the 1-error-correcting code in the space Z_3 × Z_2 × Z_2 × Z_2 × Z_2 with size at least 5 is unique.
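Each of the five pentaplets listed above can be verified to be a feasible packing on its own, with the required trailing zeros; a short check:

```python
from itertools import combinations

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

# The five inequivalent pentaplets (quaternary coordinate first, then
# seven binary coordinates); each must itself be a packing with minimum
# distance d = 3 and zeros in the last three binary positions.
pentaplets = [
    ["00000000", "00111000", "11011000", "21101000", "31110000"],
    ["00000000", "00111000", "11100000", "21010000", "31001000"],
    ["00000000", "01111000", "11100000", "10011000", "21010000"],
    ["00000000", "01111000", "11100000", "21010000", "30011000"],
    ["00000000", "01111000", "11100000", "21010000", "31001000"],
]
for p in pentaplets:
    assert all(w.endswith("000") for w in p)
    assert all(d_H(a, b) >= 3 for a, b in combinations(p, 2))
```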
As a corollary, using the upper bound of 32 and the fact that we determined the KJS-sharp value N_{2,8}(7,1; d=3) = 64 using either GLPK (version 5.0) [38] or SCIP (version 9.0), Proposition 2 could be applied to establish three new exact optimal values, N_{2,m}(7,1; d=3) = 8m for m ∈ {5, 6, 7}.

7. Summary and Outlook

We have introduced a reduction technique for finding optimal mixed Hamming packings. It is based on adapting the notion of contact graphs, motivated by continuous sphere packing problems, to the discrete setting. This enables us to extend known but inefficient ILP models of the problem with constraints that make them practically useful and efficiently solvable.
Our ILPs can be competitive with other known numerical methods. For instance, while the calculations for N_{2,3}(7,1; d=3) = 26 and N_{2,3}(4,3; d=3) = 28 in [22] took on the order of years of CPU time, our approach required only days. Moreover, we have found some previously unknown exact values, for instance, N_{2,3,4}(5,1,1; d=3) = 24 and N_{2,3,5}(5,1,1; d=3) = 28.
Our approach can work for bigger problem instances and can be extended to constant-weight binary Hamming packings. For larger instances, however, approaches based on algebraic structures will outperform our simple approach, as it is not capable of exploiting the properties of subobjects or performing automated pruning based on these.
Meanwhile, in spite of the limited number of variables in the models, they remain challenging for classical solvers. They can also be easily reformulated as Quadratic Unconstrained Binary Optimization (QUBO) problems using penalty terms. The resulting QUBO instances are small, and thus they can serve as benchmark problems, even for the currently available noisy intermediate-scale quantum (NISQ) computing devices.
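A minimal sketch of such a penalty-based QUBO reformulation, on an assumed toy instance (the binary 3-cube with d = 3): each packing constraint x_u + x_v ≤ 1 for a conflicting pair becomes a quadratic penalty P·x_u·x_v.

```python
from itertools import product, combinations

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

# Assumed toy instance: binary 3-cube, d = 3, penalty weight P = 2.
H = list(product((0, 1), repeat=3))
d, P = 3, 2.0
idx = {w: i for i, w in enumerate(H)}

Q = {(i, i): -1.0 for i in range(len(H))}       # maximize sum of x_w
for u, v in combinations(H, 2):
    if d_H(u, v) < d:                           # conflicting pair
        Q[(idx[u], idx[v])] = P                 # penalty P * x_u * x_v

def energy(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

# Brute-force ground state: the minimum energy -2 selects an antipodal
# pair, reproducing N_2(3; d = 3) = 2.
best = min(product((0, 1), repeat=len(H)), key=energy)
assert energy(best) == -2.0 and sum(best) == 2
```

Any penalty weight P > 1 suffices here, since violating a constraint must cost more than the unit reward of selecting one extra codeword.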

Author Contributions

Conceptualization, P.N., M.K.; methodology, P.N., M.K.; software, P.N.; validation, P.N., M.K.; formal analysis, P.N.; investigation, P.N.; writing—original draft preparation, M.K. and P.N.; writing—review and editing, M.K. and P.A.; visualization, P.N.; supervision, M.K. and P.A.; funding acquisition, P.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Research, Development, and Innovation Office of Hungary under the project no. TKP2021-NVA-04, and the Quantum Information National Laboratory of Hungary (grant No. 2022-2.1.1-NL-2022-00004). This project has received funding from the European Union under grant agreement No. 101081247 (QCIHungary project).

Data Availability Statement

The codes for which the bounds are summarized in Appendix A are available in a machine-parsable ASCII text file at the URL https://gitlab.wigner.hu/naszvadi.peter/codes (accessed on 26 May 2025).

Acknowledgments

The hardware resources were provided by the Wigner Scientific Computing Laboratory (WSCLAB). We thank Miklós Pintér (Corvinus University, Budapest) for running the CPLEX calculations, and Sándor Szabó (University of Pécs) and András Bodor (HUN-REN Wigner RCP) for useful discussions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CG    contact graph
CPU    central processing unit
ILP    integer linear program
KJS    Komamiya–Joshi–Singleton
MDS    maximum distance separable (code)
MOLS    mutually orthogonal Latin squares
OA    orthogonal array
RAM    random access memory

Appendix A. Summary of Bounds

The following systematic list covers all lower and upper bounds for packings in the smallest Hamming spaces, with |H| < 600 and 3 ≤ d ≤ n − 1 (code lengths 4 ≤ n ≤ 8). The bounds have been calculated by us from the improved ILP models. Many of them were known; the relevant citations are indicated for the nontrivial cases. Equality is noted if and only if the listed code is optimal. The cases in which our approach has produced direct results not known before are typeset in bold; they are discussed in detail in Section 6.1 and Section 6.2. A detailed enumeration of all these codes is provided as research data for this paper in a machine-parsable format, available under the URL https://gitlab.wigner.hu/naszvadi.peter/codes (accessed on 26 May 2025), including the codewords themselves and the distribution of codeword distances within each code.

Appendix A.1. Codes with Length 4, Distance 3

N 2 ( 4 ; d ) = 2 [8].
N 2 , 3 ( 3 , 1 ; d ) = 3 [8].
N 2 , 3 ( 2 , 2 ; d ) = 4 [8], KJS sharp.
N 2 , 4 ( 3 , 1 ; d ) = 4 , KJS sharp.
N 5 , 6 ( 1 , 3 ; d ) = 30 , KJS sharp.
N 6 ( 4 ; d ) = 34 , which cannot meet its KJS bound, due to the nonexistence of a pair of M O L S ( 6 ) , shown by Tarry [39]. The constructive lower bound was already known [40].
N 6 , 7 ( 3 , 1 ; d ) = 36 , KJS sharp.

Appendix A.2. Codes with Length 5, Distance 3

N 2 ( 5 ; d ) = 4 [8].
N 2 , 3 ( 4 , 1 ; d ) = N 2 , 3 ( 3 , 2 ; d ) = 6 [8].
N 2 , 4 ( 4 , 1 ; d ) = 8 , known perfect mixed code, KJS sharp.
N 2 , 3 ( 2 , 3 ; d ) = 9 [8].
N 2 , 3 , 4 ( 2 , 2 , 1 ; d ) = 11 .
N 2 , 3 , 4 ( 2 , 1 , 2 ; d ) = 12 , KJS sharp.
N 2 , 3 , 5 ( 2 , 2 , 1 ; d ) = 12 , KJS sharp.
N 2 , 3 , 4 ( 1 , 3 , 1 ; d ) = 14 .
N 2 , 3 , 5 ( 1 , 3 , 1 ; d ) = 15 .
N 2 , 3 , 4 ( 1 , 2 , 2 ; d ) = 18 , KJS sharp.
N 2 , 3 , 6 ( 1 , 3 , 1 ; d ) = 18 , KJS sharp.
N 3 ( 5 ; d ) = 18 [8].
N 3 , 4 ( 4 , 1 ; d ) = N 3 , 5 ( 4 , 1 ; d ) = 21 .
N 3 , 6 ( 4 , 1 ; d ) = 24 .
N 3 , 4 ( 3 , 2 ; d ) = 27 , KJS sharp, applied Corollary 2 for N 4 ( 5 ; d = 3 ) = 64 [9].
N 3 , 7 ( 4 , 1 ; d ) = 27 , KJS sharp.

Appendix A.3. Codes with Length 5, Distance 4

N 5 , 6 ( 1 , 4 ; d ) = 30 , KJS sharp.
N 6 , 7 ( 3 , 2 ; d ) = 36 , KJS sharp.
N 9 , 10 ( 1 , 4 ; d ) = 90 , KJS sharp, constructed using 3 M O L S ( 10 ) from Brouwer [41].
N 10 , 12 ( 4 , 1 ; d ) = 100 , KJS sharp, constructed using 3 M O L S ( 10 ) from Egan and Wanless [42].

Appendix A.4. Codes with Length 6, Distance 3

N 2 , 4 ( 5 , 1 ; d ) = 9 .
N 2 , 5 ( 5 , 1 ; d ) = 11 .
N 2 , 6 ( 5 , 1 ; d ) = 12 .
N 2 , 3 , 4 ( 4 , 1 , 1 ; d ) = 13 .
N 2 , 7 ( 5 , 1 ; d ) = 14 .
N 2 , 4 ( 4 , 2 ; d ) = 16 , KJS sharp.
N 2 , 3 , 5 ( 4 , 1 , 1 ; d ) = 16 , KJS sharp.
N 2 , 8 ( 5 , 1 ; d ) = 16 , KJS sharp.
N 2 , 3 , 4 ( 3 , 2 , 1 ; d ) = 18 .
N 2 , 3 , 5 ( 3 , 2 , 1 ; d ) = 21 .
N 2 , 3 , 4 ( 3 , 1 , 2 ; d ) = 24 , KJS sharp.
N 2 , 3 , 6 ( 3 , 2 , 1 ; d ) = 24 , KJS sharp.
N 2 , 3 , 4 ( 2 , 3 , 1 ; d ) = 27 .
N 2 , 3 , 4 ( 2 , 2 , 2 ; d ) = 36 , KJS sharp.
N 2 , 3 , 6 ( 2 , 3 , 1 ; d ) = 36 , KJS sharp.
N 2 , 4 ( 3 , 3 ; d ) = 32 , KJS sharp.

Appendix A.5. Codes with Length 6, Distance 4

N 2 ( 6 ; d ) = N 2 , 3 ( 5 , 1 ; d ) = N 2 , 7 ( 5 , 1 ; d ) = 4 , the first two were known [8].
N 2 , 3 ( 4 , 2 ; d ) = N 2 , 3 ( 3 , 3 ; d ) = N 2 , 3 , 7 ( 3 , 2 , 1 ; d ) = 6 , the first two were known [8].
N 2 , 4 ( 4 , 2 ; d ) = 8 , KJS sharp.
N 2 , 3 ( 2 , 4 ; d ) = N 2 , 3 , 4 ( 2 , 3 , 1 ; d ) = 8 , the first was known [8].
N 2 , 3 , 5 ( 2 , 3 , 1 ; d ) = N 2 , 3 , 9 ( 2 , 3 , 1 ; d ) = 9 .
N 2 , 3 , 4 ( 2 , 2 , 2 ; d ) = 10 .
N 2 , 3 , 4 , 5 ( 2 , 2 , 1 , 1 ; d ) = N 2 , 3 , 4 , 11 ( 2 , 2 , 1 , 1 ; d ) = 11 .
N 2 , 3 , 4 ( 2 , 1 , 3 ; d ) = 12 , KJS sharp.
N 2 , 3 , 5 ( 2 , 2 , 2 ; d ) = 12 , KJS sharp.
N 2 , 3 , 4 ( 1 , 4 , 1 ; d ) = N 2 , 3 , 5 ( 1 , 4 , 1 ; d ) = N 2 , 3 , 6 ( 1 , 4 , 1 ; d ) = 12 .
N 2 , 3 , 4 ( 1 , 3 , 2 ; d ) = 14 .
N 3 ( 6 ; d ) = N 3 , 4 ( 5 , 1 ; d ) = 18 , the first was known [8].

Appendix A.6. Codes with Length 6, Distance 5

N 2 , 3 , 4 ( 1 , 4 , 1 ; d ) = N 3 , 4 ( 5 , 1 ; d ) = N 2 , 3 , 4 ( 1 , 3 , 2 ; d ) = N 2 , 3 , 5 ( 1 , 4 , 1 ; d ) = 4 .

Appendix A.7. Codes with Length 7, Distance 3

N 2 , 4 ( 6 , 1 ; d ) = 18 .
N 2 , 5 ( 6 , 1 ; d ) = 20 .
N 2 , 3 , 4 ( 5 , 1 , 1 ; d ) = 24 .
N 2 , 6 ( 6 , 1 ; d ) = 24 .
N 2 , 7 ( 6 , 1 ; d ) = 28 .
N 2 , 3 , 5 ( 5 , 1 , 1 ; d ) = 28 .
N 2 , 4 ( 5 , 2 ; d ) = 32 , KJS sharp.
N 2 , 3 , 6 ( 5 , 1 , 1 ; d ) = 32 , KJS sharp.
N 2 , 8 ( 6 , 1 ; d ) = 32 , KJS sharp.
N 2 , 3 , 4 ( 4 , 2 , 1 ; d ) = 36 .

Appendix A.8. Codes with Length 7, Distance 4

N 2 , 4 ( 6 , 1 ; d ) = N 2 , 9 ( 6 , 1 ; d ) = 8 .
N 2 , 3 , 4 ( 5 , 1 , 1 ; d ) = N 2 , 3 , 9 ( 5 , 1 , 1 ; d ) = 8 .
N 2 , 4 ( 5 , 2 ; d ) = N 2 , 4 , 9 ( 5 , 1 , 1 ; d ) = 9 .
N 2 , 5 ( 5 , 2 ; d ) = N 2 , 5 , 6 ( 5 , 1 , 1 ; d ) = 11 .
N 2 , 3 , 4 ( 4 , 2 , 1 ; d ) = N 2 , 3 , 5 ( 4 , 2 , 1 ; d ) = N 2 , 3 , 6 ( 4 , 2 , 1 ; d ) = 12 .
N 2 , 3 , 4 ( 4 , 1 , 2 ; d ) = N 2 , 3 , 4 , 5 ( 4 , 1 , 1 , 1 ; d ) = 13 .

Appendix A.9. Codes with Length 7, Distance 5

N 2 , 4 ( 6 , 1 ; d ) = 4 .
N 2 , 3 , 4 ( 4 , 2 , 1 ; d ) = 4 .
N 2 , 5 , 6 ( 5 , 1 , 1 ; d ) = 4 .
N 2 , 4 , 7 ( 5 , 1 , 1 ; d ) = 4 .
N 2 , 3 , 4 ( 4 , 1 , 2 ; d ) = 5 .
N 2 , 3 , 5 ( 4 , 2 , 1 ; d ) = 5 .
N 2 , 3 , 4 ( 3 , 3 , 1 ; d ) = 6 .
N 2 , 3 , 4 , 5 ( 4 , 1 , 1 , 1 ; d ) = 6 .
N 2 , 3 , 6 ( 4 , 2 , 1 ; d ) = 6 .

Appendix A.10. Codes with Length 8, Distance 3

N 2 , 3 ( 7 , 1 ; d ) = 26 [8].
N 2 , 4 ( 7 , 1 ; d ) = 32 .
N 2 , 5 ( 7 , 1 ; d ) = 40 .
N 2 , 6 ( 7 , 1 ; d ) = 48 .
N 2 , 7 ( 7 , 1 ; d ) = 56 .
N 2 , 8 ( 7 , 1 ; d ) = 64 , KJS sharp.

Appendix A.11. Codes with Length 8, Distance 4

N 2 , 4 ( 7 , 1 ; d ) = 16 .
N 2 , 3 , 4 ( 6 , 1 , 1 ; d ) = 16 .
N 2 , 5 ( 7 , 1 ; d ) = 16 .
N 2 , 3 , 5 ( 6 , 1 , 1 ; d ) = 16 .
N 2 , 6 ( 7 , 1 ; d ) = 16 .
N 2 , 7 ( 7 , 1 ; d ) = 16 .

Appendix A.12. Codes with Length 8, Distance 5

N 2 , 4 ( 7 , 1 ; d ) = 5 .
N 2 , 3 , 4 ( 6 , 1 , 1 ; d ) = 6 .
N 2 , 5 ( 7 , 1 ; d ) = 6 .
N 2 , 6 ( 7 , 1 ; d ) = 6 .
N 2 , 3 , 5 ( 6 , 1 , 1 ; d ) = 7 .
N 2 , 7 ( 7 , 1 ; d ) = 7 .

Appendix A.13. Codes with Length 8, Distance 6

N 2 , 4 ( 7 , 1 ; d ) = N 2 , 5 ( 7 , 1 ; d ) = N 2 , 6 ( 7 , 1 ; d ) = N 2 , 7 ( 7 , 1 ; d ) = 2 .
N 2 , 3 , 4 ( 6 , 1 , 1 ; d ) = N 2 , 3 , 5 ( 6 , 1 , 1 ; d ) = 3 .

References and Note

  1. Stinson, D.R. Combinatorial Designs: Constructions and Analysis; Springer: New York, NY, USA, 2004. [Google Scholar] [CrossRef]
  2. Etzion, T. Perfect Codes and Related Structures; World Scientific: Singapore, 2022. [Google Scholar] [CrossRef]
  3. Huffman, W.C.; Kim, J.L.; Solé, P. (Eds.) Concise Encyclopedia of Coding Theory; Chapman and Hall/CRC: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  4. Virtakallio, J. A football pool system with 729 columns. Published in 3 parts in Veikkaus-Lotto (Finnish Football Magazine), in 3 volumes 27, 28 and 33, in 1947. ISSN 0784-7114. (In Finnish)
  5. Best, M.; Brouwer, A. The triply shortened binary Hamming code is optimal. Discret. Math. 1977, 17, 235–245. [Google Scholar] [CrossRef]
  6. Best, M.; Brouwer, A.; MacWilliams, F.; Odlyzko, A.; Sloane, N. Bounds for binary codes of length less than 25. IEEE Trans. Inf. Theory 1978, 24, 81–93. [Google Scholar] [CrossRef]
  7. Etzion, T.; Greenberg, G. Constructions for perfect mixed codes and other covering codes. IEEE Trans. Inf. Theory 1993, 39, 209–214. [Google Scholar] [CrossRef]
  8. Brouwer, A.; Hämäläinen, H.; Östergård, P.; Sloane, N. Bounds on mixed binary/ternary codes. IEEE Trans. Inf. Theory 1998, 44, 140–161. [Google Scholar] [CrossRef]
  9. Bogdanova, G.T.; Brouwer, A.E.; Kapralov, S.N.; Östergård, P.R.J. Error-correcting codes over an alphabet of four elements. Des. Codes Cryptogr. 2001, 23, 333–342. [Google Scholar] [CrossRef]
  10. Laaksonen, A.; Östergård, P.R.J. New Lower Bounds on Error-Correcting Ternary, Quaternary and Quinary Codes. In Coding Theory and Applications, Proceedings of the 5th International Castle Meeting, ICMCTA 2017, Vihula, Estonia, 28–31 August 2017; Springer International Publishing: Cham, Switzerland, 2017; pp. 228–237. [Google Scholar] [CrossRef]
  11. Delsarte, P. Bounds for Unrestricted Codes, by Linear Programming. Philips Res. Rep. 1972, 27, 272–289. [Google Scholar]
  12. van Wee, G. Bounds on packings and coverings by spheres in q-ary and mixed Hamming spaces. J. Comb. Theory Ser. A 1991, 57, 117–129. [Google Scholar] [CrossRef]
  13. van Lint, J.; van Wee, G. Generalized bounds on binary/ternary mixed packing and covering codes. J. Comb. Theory Ser. A 1991, 57, 130–143. [Google Scholar] [CrossRef]
  14. Perkins, S.; Sakhnovich, A.; Smith, D. On an upper bound for mixed error-correcting codes. IEEE Trans. Inf. Theory 2006, 52, 708–712. [Google Scholar] [CrossRef]
  15. Gijswijt, D.; Schrijver, A.; Tanaka, H. New upper bounds for nonbinary codes based on the Terwilliger algebra and semidefinite programming. J. Comb. Theory Ser. A 2006, 113, 1719–1731. [Google Scholar] [CrossRef]
  16. Litjens, B. Semidefinite bounds for mixed binary/ternary codes. Discret. Math. 2018, 341, 1740–1748. [Google Scholar] [CrossRef]
  17. Singleton, R. Maximum distance q-nary codes. IEEE Trans. Inf. Theory 1964, 10, 116–118. [Google Scholar] [CrossRef]
  18. Tietäväinen, A. On the Nonexistence of Perfect Codes over Finite Fields. SIAM J. Appl. Math. 1973, 24, 88–96. [Google Scholar] [CrossRef]
  19. Östergård, P.R.J.; Pottonen, O.; Phelps, K.T. The Perfect Binary One-Error-Correcting Codes of Length 15: Part II—Properties. IEEE Trans. Inf. Theory 2010, 56, 2571–2582. [Google Scholar] [CrossRef]
  20. Best, M. Binary codes with a minimum distance of four (Corresp.). IEEE Trans. Inf. Theory 1980, 26, 738–742. [Google Scholar] [CrossRef]
  21. Brouwer, A.E. Mixed Binary/Ternary Codes. 2018. Available online: https://www.win.tue.nl/~aeb/codes/23codes.html (accessed on 5 April 2023).
  22. Östergård, P.R. Classification of binary/ternary one-error-correcting codes. Discret. Math. 2000, 223, 253–262. [Google Scholar] [CrossRef]
  23. Östergård, P.R.J. On Binary/Ternary Error-Correcting Codes with Minimum Distance 4. In Applied Algebra, Algebraic Algorithms and Error-Correcting Codes; Springer: Berlin/Heidelberg, Germany, 1999; pp. 472–481. [Google Scholar] [CrossRef]
  24. Östergård, P.R.J. Constructing combinatorial objects via cliques. In Surveys in Combinatorics 2005; Cambridge University Press: Cambridge, UK, 2005; pp. 57–82. [Google Scholar] [CrossRef]
  25. Schütte, K.; van der Waerden, B.L. Auf welcher Kugel haben 5, 6, 7, 8 oder 9 Punkte mit Mindestabstand Eins Platz? Math. Ann. 1951, 123, 96–124. [Google Scholar] [CrossRef]
  26. Komamiya, Y. Application of logical mathematics to information theory. In Proceedings of the 3rd Japan National Congress on Applied Mathematics, Tokyo, Japan, 9 September 1953; pp. 437–442. [Google Scholar]
  27. Joshi, D. A note on upper bounds for minimum distance codes. Inf. Control 1958, 1, 289–295. [Google Scholar] [CrossRef]
  28. MacWilliams, F.; Sloane, N. (Eds.) 11 MDS codes. In The Theory of Error-Correcting Codes; Elsevier: Amsterdam, The Netherlands, 1977; Volume 16, pp. 317–331. [Google Scholar] [CrossRef]
  29. Chowla, S.; Erdős, P.; Straus, E.G. On the Maximal Number of Pairwise Orthogonal Latin Squares of a Given Order. Can. J. Math. 1960, 12, 204–208. [Google Scholar] [CrossRef]
  30. Wang, S.; Wilson, R.M. A few more squares II. Congr. Numer. 1978, 21, 688. [Google Scholar]
  31. Todorov, D.T. Four Mutually Orthogonal Latin Squares of Order 14. J. Comb. Des. 2012, 20, 363–367. [Google Scholar] [CrossRef]
  32. Ji, L.; Yin, J. Constructions of new orthogonal arrays and covering arrays of strength three. J. Comb. Theory Ser. A 2010, 117, 236–247. [Google Scholar] [CrossRef]
  33. Li, D.; Cao, H. New results on orthogonal arrays OA(3,5,4n+2). J. Comb. Theory Ser. A 2024, 204, 105864. [Google Scholar] [CrossRef]
  34. Plotkin, M. Binary codes with specified minimum distance. IEEE Trans. Inf. Theory 1960, 6, 445–450. [Google Scholar] [CrossRef]
  35. Östergård, P.; Baicheva, T.; Kolev, E. Optimal binary one-error-correcting codes of length 10 have 72 codewords. IEEE Trans. Inf. Theory 1999, 45, 1229–1231. [Google Scholar] [CrossRef]
  36. IBM Corporation. CPLEX, version 22.1.0.0; IBM Corporation: Armonk, NY, USA, 2022. Available online: https://www.ibm.com/docs/en/icos/22.1.0 (accessed on 26 May 2025).
  37. Bolusani, S.; Besançon, M.; Bestuzheva, K.; Chmiela, A.; Dionísio, J.; Donkiewicz, T.; van Doornmalen, J.; Eifler, L.; Ghannam, M.; Gleixner, A.; et al. The SCIP Optimization Suite 9.0. Technical Report ZIB-Report 24-02-29, Optimization Online; Zuse Institute: Berlin, Germany, 2024; Available online: https://optimization-online.org/wp-content/uploads/2024/02/scipopt-90-2.pdf (accessed on 26 May 2025).
  38. Makhorin, A. GNU Linear Programming Kit, version 5.0. Free Software Foundation (FSF): Boston, MA, USA, 2020. Available online: https://www.gnu.org/software/glpk/ (accessed on 26 May 2025).
  39. Tarry, G. Le Problème des 36 Officiers; Secrétariat de L’Association Française pour L’avancement des Sciences: Paris, France, 1900. [Google Scholar]
  40. Horton, J. Sub-latin squares and incomplete orthogonal arrays. J. Comb. Theory Ser. A 1974, 16, 23–33. [Google Scholar] [CrossRef]
  41. Brouwer, A. Four mols of order 10 with a hole of order 2. J. Stat. Plan. Inference 1984, 10, 203–205. [Google Scholar] [CrossRef]
  42. Egan, J.; Wanless, I.M. Enumeration of MOLS of small order. Math. Comput. 2015, 85, 799–824. [Google Scholar] [CrossRef]
Figure 1. Orphan contact graph vertex improvement. See the last paragraph of the proof of Theorem 1 for a detailed explanation.
Figure 2. Memory consumption as a function of execution time for the case N 4 , 3 , 2 ( 1 , 1 , 5 ; d = 3 ) .
Figure 3. Memory consumption as a function of execution time for the case N 5 , 3 , 2 ( 1 , 1 , 5 ; d = 3 ) in the first, unsuccessful attempt. The process was killed because of memory exhaustion.
Figure 4. Memory consumption as a function of execution time for the case N 5 , 3 , 2 ( 1 , 1 , 5 ; d = 3 ) in the second attempt, i.e., after setting a dual bound.
