Article

Two-State Alien Tiles: A Coding-Theoretical Perspective

1 Institute of Network Coding, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
2 Department of Information Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
3 Department of Physics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
4 Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
5 Department of Mathematics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
6 Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2994; https://doi.org/10.3390/math10162994
Submission received: 22 June 2022 / Revised: 15 August 2022 / Accepted: 16 August 2022 / Published: 19 August 2022
(This article belongs to the Special Issue Codes, Designs, Cryptography and Optimization, 2nd Edition)

Abstract

Most studies on the switching game Lights Out and its variants focus on the solvability of given games or on counting the solvable games. When the game is viewed from a coding-theoretical perspective, however, further questions with natural interpretations in coding theory arise, such as finding the minimal number of lit lights among all solvable games apart from the solved game, or finding the minimal number of lit lights that the player can achieve from a given unsolvable game. These problems are usually computationally hard in general. This study considers a Lights Out variant called two-state Alien Tiles, in which clicking a light toggles all the lights in the same row and the same column. We investigate its properties, discuss several coding-theoretical problems about this game, and explore its use as an error-correcting code, including its optimality. The purpose of this paper is to propose ways of playing switching games in a think-outside-the-box manner, which benefits the recreational mathematics community.
MSC:
00A08; 94B05; 68Q25; 94B75; 11Y16

1. Introduction

The mathematics of Lights Out has been studied extensively over the past decades. The Lights Out game board is a rectangular grid of lights, where each light is either on (lit) or off (unlit). By performing a move, a selected light and its rectilinearly adjacent neighbors are toggled. At the beginning of the game, some lights have already been switched on. The player is asked to solve the game, i.e., to switch off all the lights, by performing a sequence of moves. Many variants, such as different toggle patterns, have been explored, e.g., in [1,2].
Regardless of the variant of Lights Out, most studies focused on the solvability (or the attainability) of given games or on the number of solvable (or attainable) games. These problems can usually be solved efficiently by elementary linear algebra and group theory, e.g., in [3,4]. More precisely, these problems are in the complexity class P. A brief introduction to algorithmic complexity can be found in the Supplementary Materials. In a nutshell, problems in P can be computed efficiently and are considered simple mathematical problems. Sometimes, the analysis may require other mathematical tools, such as Fibonacci polynomials [5] and cellular automata [6].
However, not every derived problem is known to admit an efficient algorithm. For example, the shortest solution problem asks for a solution of a game with the minimal number of moves, which is related to minimum distance decoding (or maximum likelihood decoding) in coding theory. In a more general setting, this problem was proven to be computationally hard [7,8]. Precisely, this problem is NP-hard and its decision version is NP-complete. Approximating the solution within a constant factor is also NP-hard [9,10]. Yet, it is possible to solve the shortest solution problem efficiently if certain special properties of the game can be detected and exploited.
The motivation of this study is inspired by the Gale–Berlekamp switching game, a Lights Out variant such that the player can either toggle an entire row or an entire column per move. Example moves of a 5 × 5 Gale–Berlekamp switching game are illustrated in Figure 1, where the gray cells indicate the toggle pattern. Despite the simple toggle patterns, Roth and Viswanathan [11] proved that finding the minimal number of lights that the player cannot switch off from a given Gale–Berlekamp switching game is NP-complete, although a linear time approximation algorithm was later discovered by Karpinski and Schudy [12].
This problem has a natural interpretation in coding theory and has attracted researchers to further investigate the game, e.g., [13,14,15]. In other words, when we view the game from a coding-theoretical perspective, we can ask more questions with natural counterparts in coding theory, such as:
  • Hamming weight. What is the minimal number of lit lights among all solvable games except the solved game?
  • Coset leader. Which game has the minimal number of lit lights that the player can achieve from a given game? What is this minimal number? How many such games can the player achieve?
  • Covering radius. Among all possible games, what is the maximal number of lit lights that remain after the player has minimized the number of lit lights?
  • Error correction. Which is the “closest” solvable game to a given unsolvable game, in the sense of toggling the minimal number of individual lights? Furthermore, is such a closest solvable game unique?
A brief introduction to coding theory can be found in the Supplementary Materials, in which we discuss the physical meaning of the aforementioned coding-theoretical terminologies. Again, not all of these problems have known efficient algorithms. For example, the first problem about the Hamming weight, which is also known as the minimum distance problem, is NP-hard in general [16].
Previous studies of switching games in recreational mathematics mostly focused on checking solvability, finding a way to solve a game, and counting the number of solvable games. The coding-theoretical view, on the other hand, enriches the ways to play the games, but the existing studies mostly focused on hardness results, e.g., for the Gale–Berlekamp switching game and its generalizations. One natural question to ask is as follows: Is there any commonly played switching game whose coding-theoretical problems are not computationally hard? We give an affirmative answer to this question; such a game is the switching game that we focus on in this study.
In this study, we consider a natural variant of Lights Out in which the toggle pattern is changed from a “small cross” to a “big cross” covering the entire row and the entire column. This variant is the two-state version of a game called Alien Tiles, where the original Alien Tiles has four states per light [17]. A comparison between the ordinary Lights Out and the two-state Alien Tiles is illustrated in Figure 2 and Figure 3. The two-state Alien Tiles admits a neat and simple parity condition, and the game has been discussed in various places on the Internet, such as [18,19]. Similar setups also appear as training questions in programming contests [20]. The solvability and the number of solvable games of arbitrary-state Alien Tiles have been answered by Maier and Nickel [21]. However, the coding-theoretical problems listed above were not investigated, and we answer these questions in this study.
This paper is organized as follows. We first discuss some general techniques for Lights Out and its two-state variants in Section 2. Next, we discuss the basic properties of two-state Alien Tiles in Section 3. We then solve our proposed coding-theoretical problems for the two-state Alien Tiles in Section 4. In Section 5, we apply the two-state Alien Tiles as an error-correcting code and investigate its optimality. Lastly, we discuss a state-decomposition technique to link the original four-state Alien Tiles with the two-state variant in Section 6, and then conclude this study in Section 7.

2. General Techniques for Lights Out and Its Two-State Variants

We now describe some existing techniques in the literature for tackling Lights Out and its two-state variants.

2.1. Model

A general two-state Lights Out game consists of two basic elements:
  • A game board of lights;
  • A set of toggle patterns for toggling certain lights.
When we toggle the lights according to a toggle pattern, we say we apply (or perform) a move.
We first model the game board. For convenience, we use 1 to represent a lit light and 0 to represent an unlit light. We consider an m × n grid of lights. Each possible grid can be represented by an m × n matrix over the binary field F_2. A notable property of F_2 is that addition has the same effect as subtraction.
We can represent each toggle pattern by an m × n grid of lights, where each light in the grid is on if and only if the toggle pattern toggles this light. For example, the gray cells in Figure 2 and Figure 3 are the lit lights in the toggle pattern when the central light is clicked. Similarly, each toggle pattern is represented by an m × n matrix over F 2 . Throughout the process, we can model the game board after performing a move by matrix addition, i.e., the original game board matrix plus the toggle pattern matrix. Figure 4 illustrates an example of the model in matrix form.
To simplify the notation, we vectorize each m × n matrix into an mn × 1 column vector by stacking the columns of the matrix on top of one another. The collection of all possible (vectorized) grids forms a vector space G = F_2^{mn}, which is called the game space. When it is clear from the context, we do not vectorize the matrix form of the game, for readability. In a similar manner, we can transform an arbitrary-shape game board into a vector; thus, a similar vector space model also works for non-rectangular game boards.
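To make the model concrete, the following minimal sketch (ours, not the authors'; it assumes NumPy and 0-based indices) represents a board as a 0/1 matrix over F_2, applies a two-state Alien Tiles move by matrix addition modulo 2, and vectorizes a board by stacking its columns.

```python
# A minimal sketch (not from the paper): board model over F_2 with NumPy,
# using 0-based indices.
import numpy as np

def toggle_pattern(m, n, i, j):
    """Toggle pattern of clicking light (i, j): all of row i and column j."""
    t = np.zeros((m, n), dtype=np.uint8)
    t[i, :] = 1
    t[:, j] = 1          # the clicked light itself ends up toggled once
    return t

def apply_move(board, i, j):
    """Matrix addition over F_2: board plus the toggle pattern, entrywise mod 2."""
    return (board + toggle_pattern(*board.shape, i, j)) % 2

def vectorize(board):
    """Stack the columns of an m x n board into an mn x 1 column vector."""
    return board.flatten(order="F").reshape(-1, 1)

g = np.zeros((3, 3), dtype=np.uint8)
g[1, 1] = 1                      # one lit light in the centre
print(apply_move(g, 0, 0))       # clicking (0, 0) toggles row 0 and column 0
print(vectorize(g).T)            # the vectorized game, shown as a row
```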
Definition 1
(Solvability). A game g ∈ G is solvable if and only if there exists a sequence of moves to switch g into a zero vector. The zero vector is also called a solved game.
The concepts of solvability and attainability are similar. We mention both terminologies here because some works, such as [21], consider attainability in lieu of solvability. A game g ∈ G is attainable if and only if there exists a sequence of moves that can switch a zero vector into g.
We define the solvable game space by the vector space S over F 2 spanned by the set of all vectorized toggle patterns. Note that S G . A game g is solvable if and only if g S . This is due to the nature of the game that:
  • Applying a move twice will not toggle any lights, i.e., applying the same move again will undo the move;
  • Every permutation of a sequence of moves toggles the same set of lights, i.e., the order of the moves is not taken into consideration.
It is easy to observe that solvability is equivalent to attainability in Lights Out and its two-state variants: the sequence of moves that solves a game can also attain the game.
For most Lights Out variants, including the two-state Alien Tiles, we can model the moves in this manner. There are totally m n toggle patterns, so there are m n possible moves. We can bijectively associate each move by a light in the grid. For the two-state Alien Tiles, the ( i , j ) -th light in the grid is associated with the move that toggles all the lights in the i-th row and the j-th column. That is, this move is represented by a binary matrix where only the ( i , j ) -th entry is 1. When we perform such a move, we also say we click on the ( i , j ) -th light.
The move space, denoted by K, is the vector space over F 2 spanned by the set of vectorized moves, where each vector in K corresponds to a sequence of (vectorized) moves, which is unique up to permutation. We remark that when there are totally m n toggle patterns, K = G .
For a more general setting, the number of toggle patterns may not be mn, for example, in the Gale–Berlekamp switching game. Although the physical meaning of “clicking” a light may not be valid, the concept of the move space still works in this mathematical model. In such a scenario, K may not equal G. Yet, the remaining discussions are still valid after changing certain lengths and dimensions of the vectors and the matrices.
We can relate K and G in the following way. Define a linear map ψ : K G that outputs a game by performing the moves stated in the input on a solved game. The image of ψ is S. Being a linear map, ψ is also a homomorphism. Figure 5 illustrates two examples of mapping a k K to a g G . We can also write ψ in its matrix form Ψ , where Ψ is an m n × m n matrix formed by juxtaposing all vectorized toggle patterns. That is, we have ψ ( k ) = Ψ k .
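As an illustration of the matrix form Ψ, the sketch below (our own; the helper names toggle_pattern and build_psi are ours) juxtaposes the vectorized toggle patterns into an mn × mn binary matrix and evaluates ψ(k) = Ψk over F_2.

```python
# A sketch (not from the paper) of the matrix form Psi: its columns are the
# vectorized toggle patterns, so psi(k) = Psi k over F_2.
import numpy as np

def toggle_pattern(m, n, i, j):
    t = np.zeros((m, n), dtype=int)
    t[i, :] = 1
    t[:, j] = 1
    return t

def build_psi(m, n):
    """Column j*m + i of Psi is the vectorized toggle pattern of light (i, j)."""
    cols = [toggle_pattern(m, n, i, j).flatten(order="F")
            for j in range(n) for i in range(m)]
    return np.array(cols, dtype=int).T          # an mn x mn matrix over F_2

m, n = 3, 4
Psi = build_psi(m, n)
k = np.zeros(m * n, dtype=int)
k[0] = 1                                        # click light (0, 0) once
g = (Psi @ k) % 2                               # the game psi(k)
print(g.reshape(m, n, order="F"))
```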

2.2. Linear Algebra Approach

Suppose we are given a k ∈ K such that ψ(k) = s for some s ∈ S; we can solve s by clicking the lights lit in k, because s + ψ(k) returns a zero vector due to the fact that subtraction is the same as addition in F_2. In other words, solving a game g ∈ G is equivalent to finding a k ∈ K such that ψ(k) = g. When we represent ψ in its matrix form Ψ, we have a system of linear equations Ψk = g. Since S = {ψ(k) : k ∈ K}, the dimension of the solvable game space S is the same as the dimension of the vector space spanned by the columns of Ψ. In other words, the number of solvable games equals 2^{rank(Ψ)}.
If Ψ is a full rank matrix, then every game in the game space is solvable, i.e., S = G . Kreh [1] coined the following terminology to describe this type of game.
Definition 2
(Completely Solvable). The game is completely solvable if and only if S = G , i.e., every g G is a solvable game.
We can find a sequence of moves to solve a game g ∈ G after Gaussian elimination is applied to solve Ψk = g. If the number of columns in Ψ is more than mn, then some toggle patterns must be linearly dependent on other toggle patterns. We only need to take at most mn (independent) columns for finding a k ∈ K that solves the game; hence, we can regard Ψ in its worst case as an mn × mn binary matrix. The problem formulation is as follows:
  • Problem:  Solvability Check and/or Finding a Solution for General Two-State Lights Out Variants;
  • Instance: An mn × 1 binary vector g ∈ G, and an mn × c binary matrix Ψ, with c ≤ mn;
  • Question:  Is there a k that satisfies Ψ k = g ?
  • Objective:  Find a k that satisfies Ψ k = g .
As c ≤ mn, we consider the worst case c = mn here. The complexity of Gaussian elimination is therefore O(m³n³), which is in P.
We can use Gaussian elimination to check the solvability of the game g by observing whether a solution of Ψk = g exists. Other than directly applying Gaussian elimination, there is another approach, as stated in [3], which is summarized as follows: The orthogonal complement of S, denoted by S^⊥, is the null space of Ψ. An unsolvable game, i.e., a game that is not in S, cannot be orthogonal to S^⊥. Therefore, we can determine the solvability of the game by calculating E^T g (where each column of E corresponds to a vector in a basis of S^⊥). The game g is solvable if and only if E^T g = 0. In fact, this technique is also applied in coding theory for detecting errors of a linear code.
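A possible implementation of this solvability check and solver is sketched below (ours, not the authors' code): plain Gaussian elimination over F_2 on the augmented matrix (Ψ | g), returning one move vector k with Ψk = g, or None when g is unsolvable.

```python
# A sketch (not the authors' code) of Gaussian elimination over F_2: it finds
# some k with Psi k = g (mod 2), or returns None when g is unsolvable.
import numpy as np

def solve_gf2(Psi, g):
    A = np.concatenate([Psi % 2, (g % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    rows, cols = A.shape
    pivot_cols, r = [], 0
    for c in range(cols - 1):                    # the last column is the RHS
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]            # move the pivot row into place
        for i in range(rows):                    # eliminate column c elsewhere
            if i != r and A[i, c]:
                A[i] ^= A[r]
        pivot_cols.append(c)
        r += 1
    if any(A[i, -1] for i in range(r, rows)):    # a "0 = 1" row: no solution
        return None
    k = np.zeros(cols - 1, dtype=np.uint8)
    for i, c in enumerate(pivot_cols):           # free variables are set to 0
        k[c] = A[i, -1]
    return k

# Example: the 2 x 2 board, whose Psi has columns equal to the toggle patterns.
Psi = np.array([[1, 1, 1, 0],
                [1, 1, 0, 1],
                [1, 0, 1, 1],
                [0, 1, 1, 1]], dtype=np.uint8)
g = np.array([1, 0, 0, 0], dtype=np.uint8)       # only light (0, 0) is lit
print(solve_gf2(Psi, g))                         # -> [1 1 1 0]
```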

2.3. Minimal Number of Moves (The Shortest Solution)

Consider ψ ( k ) = s , i.e., the game s can be solved by the move sequence k. Based on the property of the kernel (null space) of ψ , as denoted by ker ( ψ ) , that ψ ( u ) = 0 for all u ker ( ψ ) , we have ψ ( k + u ) = s for all u ker ( ψ ) due to homomorphism.
If we want to solve s with the minimal number of moves, we need to find the candidate u that minimizes the 1-norm ‖k + u‖₁. This move sequence k + u is then called the shortest solution of the solvable game s. If the dimension of ker(ψ) grows with m or n, then the size of ker(ψ) is exponentially large in m or n; thus, an exhaustive search among all possible u ∈ ker(ψ) is inefficient.
We now formulate the shortest solution problem for general two-state Lights Out variants (including the ordinary Lights Out). As a general problem, we need to support an arbitrary ker(ψ). The dimension of the kernel is no larger than mn; thus, the input for describing the kernel is a set of mn binary vectors of size mn × 1 in ker(ψ) that span the kernel. The problem is stated as follows:
  • Problem:  Shortest Solution Problem for General Two-State Lights Out Variants;
  • Instance: A set B of mn × 1 binary vectors in ker(ψ), where |B| = mn and the span of B is ker(ψ), and an mn × 1 binary vector k ∈ K;
  • Objective: Find a vector u ∈ ker(ψ) that minimizes ‖k + u‖₁.
Theorem 1.
The shortest solution problem for general two-state Lights Out variants is NP-hard.
Proof. 
We prove this theorem by considering the minimum distance decoding of a linear code in coding theory. Recall that the (function) problem of minimum distance decoding of a linear code is equivalent to the following: Given a parity check matrix H of the linear code and a (column vector) word k, the goal is to find the error (column) vector e with the smallest Hamming weight such that H(k − e) = 0. This is due to the property of the parity check matrix that Hc = 0 if and only if c is a codeword.
The reduction is as follows. Consider the span of the columns in H as the orthogonal complement of ker(ψ), that is, every u ∈ ker(ψ) is a codeword. Furthermore, suppose k is the move sequence that solves the game s = ψ(k). Write k = u + e, with u ∈ ker(ψ). Since ψ(k) = s = ψ(k − u) = ψ(e), we know that e is a move sequence that can solve s. That is, by finding the shortest solution e (which has the smallest Hamming weight), we have u = k − e ∈ ker(ψ); thus, H(k − e) = 0. This solves the minimum distance decoding problem, and such a problem is no harder than the shortest solution problem for general two-state Lights Out variants. As minimum distance decoding is NP-hard [7,8], the shortest solution problem for general two-state Lights Out variants is also NP-hard. □
Nevertheless, it is still possible to have an efficient algorithm for finding the shortest solution of a specific Lights Out variant. For example, if the game space is completely solvable, then ker(ψ) = {0}, so the move sequence obtained by the Gaussian elimination process is unique. Therefore, this problem is in P (in cubic time in terms of mn).
As a remark, we can also reduce this shortest solution problem to the minimum distance decoding problem in a similar manner, by treating the juxtaposition of the (column) vectors in the basis of the orthogonal complement of ker ( ψ ) as the parity check matrix H. This shows that the two problems are actually equivalent to each other.

3. Two-State Alien Tiles

We now discuss some specific techniques for the two-state Alien Tiles problem. Note that we can regard the game space G and the solvable game space S as abelian groups under vector addition. Then, we have S ⊴ G, i.e., S is a normal subgroup of G.
Definition 3
(Syndrome). Two games have the same syndrome if and only if they are in the same coset in the quotient group G / S .
One crucial component that will be applied into solving our coding-theoretical problems is the function that outputs the syndrome of a game. The quotient group G / S is isomorphic to the set of all syndromes.
We adopt the term “syndrome” in coding theory here because each coset represents an “error” from the solvable game space that cannot be recovered by any move sequences. In other words, an unsolvable game becomes solvable after we remove the “error” by toggling some individual lights.

3.1. Easy Games and Doubly Easy Games

For two-state Alien Tiles, we can click on any light on the game board; thus, K = G , i.e., the move space K equals the game space G. In other words, we can regard a game s G as a move in K and vice versa.
Definition 4
(Easy Games). A game s is called easy if and only if it can be solved by clicking the lights lit in s, i.e., ψ ( s ) = s .
Figure 6 illustrates an example of an easy two-state Alien Tiles game. The terminology “easy” is due to Torrence [22]. In other words, s is easy if vec(s) is an eigenvector of Ψ with eigenvalue 1. We also extend the terminology as follows.
Definition 5
(Doubly Easy Games). A game s is doubly easy if and only if it can be solved by first clicking the lights lit in s to obtain a game s + ψ ( s ) , then clicking the lights lit in the game s + ψ ( s ) .
That is, a game s is doubly easy if and only if:
s + ψ ( s ) + ψ ( s + ψ ( s ) ) = 0 .
Due to homomorphism and the property of the binary field that s = −s (i.e., s equals its additive inverse), we have:
$$s + \psi(s) + \psi(s + \psi(s)) = s + \psi\big(s + (s + \psi(s))\big) = s + \psi(\psi(s)) = 0 \iff \psi(\psi(s)) = s .$$
To write down Ψ explicitly, we consider the following. Denote by 𝟙 a , b and 𝟙 c an a × b and a c × c all-ones matrix, respectively. Similarly, denote by 0 a , b and 0 c an a × b and a c × c zero matrix, respectively. Let I c be a c × c identity matrix. By juxtaposing all (vectorized) toggle patterns, we obtain an m n × m n matrix:
$$\Psi = \begin{pmatrix} \mathbb{1}_m & I_m & \cdots & I_m \\ I_m & \mathbb{1}_m & \cdots & I_m \\ \vdots & \vdots & \ddots & \vdots \\ I_m & I_m & \cdots & \mathbb{1}_m \end{pmatrix} = \mathbb{1}_n \otimes I_m + I_n \otimes \mathbb{1}_m - I_{mn} .$$
To analyze doubly easy games, we need to apply ψ multiple times on a given k K . For any positive integer c and matrix A over F 2 , we write c A to denote ( c mod 2 ) A for simplicity. In addition, note that:
$$\mathbb{1}_c^2 = \begin{cases} 0_c & \text{if } c \text{ is even}, \\ \mathbb{1}_c & \text{if } c \text{ is odd}. \end{cases}$$
In other words, $\mathbb{1}_c^2 = c\,\mathbb{1}_c$. By making use of the mixed-product property of the Kronecker product:
( A B ) ( C D ) = ( A C ) ( B D )
where A, B, C, and D are matrices of suitable sizes that can form matrix products A C and B D , we can now evaluate the powers of Ψ (over F 2 ):
$$\begin{aligned}
\Psi^2 &= \mathbb{1}_n^2 \otimes I_m + \mathbb{1}_n \otimes \mathbb{1}_m - \mathbb{1}_n \otimes I_m + \mathbb{1}_n \otimes \mathbb{1}_m + I_n \otimes \mathbb{1}_m^2 - I_n \otimes \mathbb{1}_m - \mathbb{1}_n \otimes I_m - I_n \otimes \mathbb{1}_m + I_{mn} \\
&= n\,\mathbb{1}_n \otimes I_m + m\,I_n \otimes \mathbb{1}_m - I_{mn} , \\
\Psi^3 &= n\,(\mathbb{1}_n^2 \otimes I_m + \mathbb{1}_n \otimes \mathbb{1}_m - \mathbb{1}_n \otimes I_m) + m\,(\mathbb{1}_n \otimes \mathbb{1}_m + I_n \otimes \mathbb{1}_m^2 - I_n \otimes \mathbb{1}_m) - \mathbb{1}_n \otimes I_m - I_n \otimes \mathbb{1}_m + I_{mn} \\
&= n(n-1)\,\mathbb{1}_n \otimes I_m + m(m-1)\,I_n \otimes \mathbb{1}_m - \mathbb{1}_n \otimes I_m - I_n \otimes \mathbb{1}_m + I_{mn} + (m+n)\,\mathbb{1}_{mn} \\
&= \mathbb{1}_n \otimes I_m + I_n \otimes \mathbb{1}_m - I_{mn} + (m+n)\,\mathbb{1}_{mn} = \Psi + (m+n)\,\mathbb{1}_{mn} , \\
\Psi^4 &= \Psi^2 + (m+n)\,\Psi\,\mathbb{1}_{mn} = \Psi^2 + (m+n)(m+n-1)\,\mathbb{1}_{mn} = \Psi^2 .
\end{aligned}$$
That is, for any positive integer c ≥ 4, we have $\Psi^c = \Psi^{\,2 + (c \bmod 2)}$.
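The identities above are easy to verify numerically. The following sketch (ours; it rebuilds Ψ with the same helpers as before) checks Ψ³ = Ψ + (m + n)𝟙_{mn} and Ψ⁴ = Ψ² over F_2 for a few board sizes.

```python
# A numerical check (ours) of Psi^3 = Psi + (m + n) * J and Psi^4 = Psi^2
# over F_2, where J is the mn x mn all-ones matrix.
import numpy as np

def build_psi(m, n):
    cols = []
    for j in range(n):
        for i in range(m):
            t = np.zeros((m, n), dtype=int)
            t[i, :] = 1
            t[:, j] = 1
            cols.append(t.flatten(order="F"))
    return np.array(cols, dtype=int).T

for m, n in [(2, 2), (2, 3), (3, 3), (3, 5), (4, 6)]:
    Psi = build_psi(m, n)
    J = np.ones((m * n, m * n), dtype=int)
    P2 = (Psi @ Psi) % 2
    P3 = (P2 @ Psi) % 2
    P4 = (P3 @ Psi) % 2
    assert np.array_equal(P3, (Psi + (m + n) * J) % 2)
    assert np.array_equal(P4, P2)
print("identities verified for the tested board sizes")
```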

3.2. Even-by-Even Games

When both m and n are even, we have Ψ² = I_{mn}. In other words, Ψ⁻¹ = Ψ and ψ is an isomorphism; thus, S = K = G. This means that all games are solvable, i.e., every even-by-even two-state Alien Tiles game is completely solvable, and the number of solvable games is 2^{mn}.
Consider an arbitrary solvable game s S . There exists a unique k K such that ψ ( k ) = s . Then, we have k = ψ 1 ( s ) = ψ ( s ) , thus ψ ( ψ ( s ) ) = s . In other words, all solvable games are doubly easy. The size of ker ( ψ ) is 1 as it is a zero vector space. A similar result in the case of square grids was also discussed in [23]. An interesting property of even-by-even games is the duality that k K solves the game s S if and only if s K solves the game k S .
We can also view the solvability in another manner. If we know a way to toggle an arbitrary single light, we can repeatedly apply this strategy to switch off the lights one by one. This also means that all games are solvable. We can toggle a single light by clicking on all the lights in the same row and the same column (where the light at the intersection of the row and the column is clicked once only). This can be observed from ψ(ψ(s)) = s when s consists of only a single 1.
To compute k = ψ⁻¹(s) = ψ(s), the worst case (i.e., the case that takes the maximal number of computational steps) occurs when s is the all-ones game (a game that has all lights lit, which is an all-ones column vector of length mn). Every 1 in s toggles m + n − 1 lights. Therefore, the complexity to compute k is O(mn(m + n − 1)) = O(m²n + mn²), which is of the same order as solving the linear system Ψk = s directly by Gaussian elimination. On the other hand, due to the uniqueness of k, the minimal number of moves to solve s is ‖k‖₁ = ‖ψ(s)‖₁. To compute ‖k‖₁, we scan the status of the lights after we obtain k; thus, the complexity is O(mn(m + n − 1) + mn) = O(m²n + mn²).
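The closed-form solution for even-by-even boards can be checked directly, as in the sketch below (ours; it assumes NumPy): for a random game s, the move vector k = ψ(s) satisfies ψ(k) = s, so clicking the lights lit in ψ(s) solves s.

```python
# A sketch (ours) of the even-by-even closed form: the unique move vector
# solving a game s is k = psi(s), i.e. Psi s over F_2.
import numpy as np

def build_psi(m, n):
    cols = []
    for j in range(n):
        for i in range(m):
            t = np.zeros((m, n), dtype=int)
            t[i, :] = 1
            t[:, j] = 1
            cols.append(t.flatten(order="F"))
    return np.array(cols, dtype=int).T

m, n = 4, 6                                     # both dimensions even
Psi = build_psi(m, n)
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=m * n)              # an arbitrary game
k = (Psi @ s) % 2                               # the move sequence k = psi(s)
assert np.array_equal((Psi @ k) % 2, s)         # performing k attains/solves s
print("minimal number of moves:", int(k.sum()))
```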

3.3. Even-by-Odd (and Odd-by-Even) Games

Due to symmetry, we only need to discuss even-by-odd games. We first discuss how to solve these games.
Note that Ψ³ = Ψ + 𝟙_{mn}. For each solvable game s ∈ S, pick an arbitrary k ∈ K such that ψ(k) = s. Since ψ(ψ(ψ(k))) = ψ(ψ(s)), by applying the ψ map twice, we have:
$$\psi(\psi(s)) = \psi(k) + \mathbb{1}_{mn}\,k = s + \|k\|_1\,\mathbb{1}_{mn,1} .$$
By clicking the lit lights in s, we obtain the game s + ψ ( s ) . Afterwards, we click the lit lights in s + ψ ( s ) to obtain the game:
$$s + \psi(s) + \psi(s + \psi(s)) = s + \psi(\psi(s)) = \|k\|_1\,\mathbb{1}_{mn,1} ,$$
which is either a solved game or an all-ones game.
To see how the all-ones game is being solved, we consider the following: An arbitrary row of odd length n is selected, and every light in this row is clicked. In this way, every light in this row is toggled n times, and every light not belonging to this row is toggled once. Note that n mod 2 = 1 ; thus, every light in the game is toggled once. In other words, this move sequence can solve the all-ones game.
The (worst-case) complexity to compute s + ψ(s) is O(m²n + mn²). Given s + ψ(s), we have the same complexity for computing s + ψ(s) + ψ(s + ψ(s)). The complexity to click every light in an arbitrary row is O(mn²). Therefore, the overall complexity to solve a solvable game is O(m²n + mn²). We have the same complexity for finding the move sequence because the complexity of checking whether ‖k‖₁ mod 2 = 1 is O(mn), which is absorbed into O(m²n + mn²).
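The whole even-by-odd solving procedure fits in a few lines, as sketched below (our illustration, assuming NumPy, 0-based indices, m even and n odd): click the lit lights of s, click the lit lights of the result, and if the all-ones game remains, click every light of one row.

```python
# A sketch (ours, assuming m even and n odd, 0-based indices) of the solving
# procedure: click the lit lights twice; if the all-ones game remains, click
# every light of one row of odd length n.
import numpy as np

def apply_clicks(board, clicks):
    """XOR the toggle patterns of all clicked lights into the board."""
    out = board.copy()
    for i, j in zip(*np.nonzero(clicks)):
        out[i, :] ^= 1
        out[:, j] ^= 1
        out[i, j] ^= 1            # the clicked light is toggled once in total
    return out

def solve_even_by_odd(s):
    m, n = s.shape
    moves, g = [], s
    for _ in range(2):            # click the currently lit lights, twice
        moves.append(g.copy())
        g = apply_clicks(g, moves[-1])
    if g.all():                   # the all-ones game: click an entire row
        row = np.zeros((m, n), dtype=np.uint8)
        row[0, :] = 1
        moves.append(row)
        g = apply_clicks(g, row)
    return g, moves               # g is the zero board when s was solvable

m, n = 4, 5
rng = np.random.default_rng(1)
k = rng.integers(0, 2, size=(m, n), dtype=np.uint8)    # a random move sequence
s = apply_clicks(np.zeros((m, n), dtype=np.uint8), k)  # hence s is solvable
final, _ = solve_even_by_odd(s)
assert not final.any()
print("solved")
```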
Next, we discuss the move sequence to toggle all the lights in an arbitrary column, which will be used in the remaining discussion. Suppose we want to toggle the j-th column. Consider this move sequence: except for the (1, j)-th light, click every light in the j-th column and every light in the 1-st row. Based on such operations, we have the following:
  • The (1, j)-th light is toggled m + n − 2 times, where m + n − 2 is odd.
  • Each of the other lights in the j-th column is toggled m − 1 times, where m − 1 is odd.
  • Each of the other lights in the 1-st row is toggled n − 1 times, where n − 1 is even.
  • All other lights that are not in the j-th column and not in the 1-st row are toggled twice.
As a result, only the lights in the j-th column are toggled an odd number of times; hence, exactly the lights in the j-th column are toggled.
We notice that if we apply the above move sequence to every column, the resultant move sequence clicks every light outside the first row exactly once. Although this can solve the all-ones game, this move sequence is longer than the one we previously described. This suggests that the move sequence for solving a solvable game is not unique.
We now discuss the method to check whether a game is solvable or not. Denote by g i j the ( i , j ) -th light in the game g. Let r i ( g ) be the row parity of the i-th row in g, i.e.,
$$r_i(g) = \sum_{j=1}^{n} g_{ij} \bmod 2 .$$
Let r(g) = (r_1(g), r_2(g), …, r_m(g)) be a column vector over F_2, called the row parity vector. We can consider this row parity vector as the exclusive-or (XOR) of all the columns, i.e., the nim-sum of the columns.
Suppose we click an arbitrary light, say, the (y, x)-th light. Then, the value of g_{ij} is toggled for all i = y or j = x. We can see that the new r_y equals r_y + n = r_y + 1, since n is odd. For every i ≠ y, the new r_i becomes r_i + 1. That is, all parity bits in r are flipped. This is illustrated in Figure 7.
To map both row parity vectors into one invariant such that any move sequence acting on a game does not change the invariant, we define the invariant function r̃ : G → F_2^{(m−1)×1} by:
$$\tilde{r}(g) = (r_2(g), r_3(g), \ldots, r_m(g)) + r_1(g)\,\mathbb{1}_{m-1,1} = \begin{cases} (r_2(g), r_3(g), \ldots, r_m(g)) & \text{if } r_1(g) = 0, \\ (r_2(g), r_3(g), \ldots, r_m(g)) + \mathbb{1}_{m-1,1} & \text{if } r_1(g) = 1. \end{cases}$$
As every solvable game s ∈ S gives r̃(s) = 0_{m−1,1}, we have ‖s‖₁ mod 2 = ‖r(s)‖₁ mod 2 = 0, i.e., the number of lit lights in a solvable game is even. In addition, r̃(g) ≠ 0_{m−1,1} implies that the game g is not solvable. This suggests that not all games are solvable; for example, a game where only the (1, 1)-st light is lit has the invariant 𝟙_{m−1,1} ≠ 0_{m−1,1}.
On the other hand, any move sequence acting on a game in a coset C G / S gives a game in the same coset, because S is the span of all toggle patterns. Then, the following question arises: What is the relation between the invariant function and the cosets? To be more specific: Does the invariant function give a syndrome of the game? The answer is affirmative.
Theorem 2.
Let g be an even-by-odd game. r ˜ is a surjective map that maps g to its syndrome.
Proof. 
See Appendix A. □
The above theorem has the following consequences:
  • r ˜ ( g ) = 0 m 1 , 1 if and only if g S , because 0 m 1 , 1 is the syndrome of the solved game.
  • The number of cosets in G/S, i.e., the index [G : S], is 2^{m−1}, due to the surjectivity of r̃.
  • The number of solvable games, i.e., |S|, is |G|/[G : S] = 2^{mn−m+1}.
  • A game is solvable if and only if the parities of all rows are the same. The complexity of verifying the solvability is thus O ( m n ) .
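The solvability test and the syndrome map r̃ for even-by-odd games can be sketched as follows (our code, assuming NumPy): a game is solvable exactly when all row parities agree, i.e., when its syndrome is the zero vector.

```python
# A sketch (ours, assuming m even and n odd) of the O(mn) solvability test:
# a game is solvable iff its syndrome r_tilde is the zero vector, i.e. iff
# all row parities are equal.
import numpy as np

def syndrome_even_by_odd(board):
    r = board.sum(axis=1) % 2              # row parity vector r(g)
    return (r[1:] + r[0]) % 2              # r_tilde(g)

def is_solvable_even_by_odd(board):
    return not syndrome_even_by_odd(board).any()

g = np.zeros((4, 5), dtype=np.uint8)
g[0, 0] = 1                                # a single lit light
print(is_solvable_even_by_odd(g))          # False: the syndrome is all ones
```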
How to find the minimal number of moves for solving an even-by-odd game efficiently remains an open problem. To justify that exhaustive search for this problem is inefficient, we calculate the size of ker(ψ). By the rank–nullity theorem, we know that dim(ker(ψ)) = m − 1. Therefore, |ker(ψ)| = 2^{m−1}, which means that an exhaustive search cannot be performed in polynomial time.

3.4. Odd-by-Odd Games

When both m and n are odd, we have Ψ² = Ψ. In other words, we have ψ ∘ ψ = ψ. For each solvable game s ∈ S, pick an arbitrary k ∈ K such that ψ(k) = s. By applying ψ again, we obtain ψ(ψ(k)) = ψ(s). Using the fact that ψ(ψ(k)) = ψ(k), we have ψ(s) = ψ(k) = s. That is, all solvable games are easy. Similar to the above subsections, the worst-case complexity to solve the game is O(m²n + mn²). To find a move sequence k, we only need to output s directly; thus, the complexity is O(mn).
To check whether a game g is solvable or not, it is insufficient to only consider the row parity vector r ( g ) . Let c j ( g ) be the column parity of the j-th column in g, i.e.,
$$c_j(g) = \sum_{i=1}^{m} g_{ij} \bmod 2 ,$$
and let c(g) = (c_1(g), c_2(g), …, c_n(g)) be a row vector over F_2, called the column parity vector. When we click an arbitrary light, say, the (y, x)-th light, we can see that the new r_y equals r_y + n = r_y + 1 and the new c_x equals c_x + m = c_x + 1. For every i ≠ y, the new r_i is r_i + 1, and for every j ≠ x, the new c_j is c_j + 1. That is, all the parity bits of r(g) and c(g) are flipped. This is illustrated in Figure 8.
We call the pair (r(g), c(g)) the parity pair of the game g. We remark that not all combinations of parity pairs are valid. The exact criterion is stated in Theorem 3.
Theorem 3.
For odd-by-odd games, a parity pair (r(g), c(g)) is attained by some game g ∈ G if and only if ‖r(g)‖₁ − ‖c(g)‖₁ is even.
Proof. 
We first prove the “only if” part. The result is trivial for the solved game g = 0_{mn,1}. Consider an arbitrary non-zero game g ∈ G \ {0_{mn,1}}. When we toggle an arbitrary lit light in g, say, the (y, x)-th light, to form a new game g′, the row parity of the y-th row and the column parity of the x-th column are both flipped. That is, (‖r(g′)‖₁ − ‖c(g′)‖₁) mod 2 equals (‖r(g)‖₁ − ‖c(g)‖₁) mod 2. Inductively, the procedure is repeated until all lit lights are off. Then, (‖r(0_{mn,1})‖₁ − ‖c(0_{mn,1})‖₁) mod 2 is equal to (‖r(g)‖₁ − ‖c(g)‖₁) mod 2, where the former equals 0. In other words, g ∈ G implies that ‖r(g)‖₁ − ‖c(g)‖₁ is even.
For the “if” part, we prove by construction. Let {y_1, y_2, …, y_{‖r(g)‖₁}} be an index set such that r_y(g) = 1 if and only if y is in this set. Similarly, let {x_1, x_2, …, x_{‖c(g)‖₁}} be an index set such that c_x(g) = 1 if and only if x is in this set.
First, we construct a game such that only the (y_i, x_i)-th lights are lit, where i = 1, 2, …, min{‖r(g)‖₁, ‖c(g)‖₁}. If ‖r(g)‖₁ = ‖c(g)‖₁, then the proof is done. If ‖r(g)‖₁ > ‖c(g)‖₁, let T = {y_{‖c(g)‖₁+1}, y_{‖c(g)‖₁+2}, …, y_{‖r(g)‖₁}}. Then, we partition T into sets of size 2, denoted by P_1, P_2, …, P_{(‖r(g)‖₁ − ‖c(g)‖₁)/2}. For every i = 1, 2, …, (‖r(g)‖₁ − ‖c(g)‖₁)/2, let P_i = {a_i, b_i}, and we switch on the (a_i, j)-th and the (b_i, j)-th lights for an arbitrary j ∈ {1, 2, …, n}. Afterwards, the row parity vector of the game matches r(g). For each partition, we have switched on 2 lights in the same column, so the column parity of this column remains unchanged. That is, the parity pair of the resultant game is (r(g), c(g)). We can prove the case ‖c(g)‖₁ > ‖r(g)‖₁ in a similar manner by symmetry, and the constructed game lies in the prescribed game space. □
The construction in the above proof will also be applicable when we discuss the coding-theoretical problems. We illustrate an example of such a construction in Figure 9. In this example, we start from a solved game, i.e., all lights are off. In the first phase, we match the bits in the row parity vector with the bits in the column parity vector. This example has fewer 1 bits in the column parity vector, so all the 1 bits in the column parity vector are matched with some 1 bits in the row parity vector. The bits in the row parity vector that are matched are arbitrary. In Figure 9, the red bits are matched, followed by the blue bits. Each pair of matched bits indicates a location, which is marked by a crosshair of the same color in the game grid. By switching on the lights marked by the crosshairs, we proceed to the second phase.
The bits that are not matched must all fall into either the row or the column parity vector. In this example, there are four bits remaining in the row parity vector. We group the bits two by two arbitrarily. In Figure 9, the two cyan bits and the two brown bits are respectively grouped. For each group, we can select an arbitrary column (or an arbitrary row, if instead the unmatched bits fall into the column parity vector). The crosshairs of the same color mark the column selected by the same group. By switching on the lights marked by the crosshairs, the desired game is constructed.
We now discuss the invariant function. Similar to the r̃ in even-by-odd games, we define c̃ : G → F_2^{1×(n−1)} by:
$$\tilde{c}(g) = (c_2(g), c_3(g), \ldots, c_n(g)) + c_1(g)\,\mathbb{1}_{1,n-1} = \begin{cases} (c_2(g), c_3(g), \ldots, c_n(g)) & \text{if } c_1(g) = 0, \\ (c_2(g), c_3(g), \ldots, c_n(g)) + \mathbb{1}_{1,n-1} & \text{if } c_1(g) = 1. \end{cases}$$
The invariant function for odd-by-odd games, denoted by inv : G F 2 ( m 1 ) × 1 × F 2 1 × ( n 1 ) , is defined as:
inv ( g ) = ( r ˜ ( g ) , c ˜ ( g ) ) .
Again, any move sequence acting on a game never alters the invariant. Therefore, inv(g) ≠ (0_{m−1,1}, 0_{1,n−1}) implies that g is not solvable. In particular, a game with only the (1, 1)-st light lit has the invariant (𝟙_{m−1,1}, 𝟙_{1,n−1}) ≠ (0_{m−1,1}, 0_{1,n−1}).
Unlike the case of even-by-odd games, the number of lit lights in a solvable game s ∈ S can be either odd or even. For example, the all-ones game has an odd number of lit lights, and it can be solved by clicking all the lights so that every light is toggled m + n − 1 times, where m + n − 1 is odd. On the other hand, a solved game is a solvable game with no lit lights; hence, its number of lit lights is even.
Theorem 4.
Let g be an odd-by-odd game. inv is a surjective function that maps g to its syndrome.
Proof. 
See Appendix B. □
Similar to the discussion in the last subsection, the consequences of the above theorem include:
  • inv ( g ) = ( 0 m 1 , 1 , 0 1 , n 1 ) if and only if g S , because ( 0 m 1 , 1 , 0 1 , n 1 ) is the syndrome of the solved game.
  • The number of cosets in G/S, i.e., the index [G : S], is 2^{m+n−2}, due to the surjectivity of inv.
  • The number of solvable games, i.e., |S|, is |G|/[G : S] = 2^{mn−m−n+2}.
  • A game is solvable if and only if the parities of all rows and all columns are the same. The complexity of verifying its solvability is thus O ( m n ) .
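Analogously, the odd-by-odd solvability test reduces to checking that the syndrome inv(g) is zero, i.e., that all row parities agree and all column parities agree, as in the following sketch (ours, assuming NumPy).

```python
# A sketch (ours, assuming m and n odd) of the odd-by-odd solvability test:
# a game is solvable iff inv(g) = (r_tilde(g), c_tilde(g)) is zero, i.e. iff
# all row parities agree and all column parities agree.
import numpy as np

def inv_syndrome(board):
    r = board.sum(axis=1) % 2
    c = board.sum(axis=0) % 2
    return (r[1:] + r[0]) % 2, (c[1:] + c[0]) % 2

def is_solvable_odd_by_odd(board):
    r_tilde, c_tilde = inv_syndrome(board)
    return not r_tilde.any() and not c_tilde.any()

g = np.ones((3, 5), dtype=np.uint8)        # the all-ones game
print(is_solvable_odd_by_odd(g))           # True: click every light to solve it
```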
Seeking an efficient algorithm to find the minimal number of moves for solving an odd-by-odd game remains an open problem. For square grids, i.e., when n = m, the simplified problem was discussed in [24,25], but the answers there for odd-by-odd grids are certain exhaustive searches, which have to undergo the trial of 2^{2n−2} possibilities, where this number equals the size of ker(ψ) for odd-by-odd square grids as discussed in [23]. In the general case, we can easily show that |ker(ψ)| = 2^{m+n−2} by the rank–nullity theorem. This means that an exhaustive search on the kernel cannot be conducted efficiently.

4. Coding-Theoretical Perspective of Two-State Alien Tiles

The solvable game space S is a vector subspace of the game space G. That is, we can regard S as a linear code. Our task here is to find out the properties of S from the perspective of coding theory.

4.1. Hamming Weight

The basic parameter of an error-correcting code is the distance of the code. When the distance d is known, the error-detecting ability d − 1 and the error-correcting ability ⌊(d − 1)/2⌋ are then obtained. For a linear code, the distance of the code equals the Hamming weight of the code, which is defined as the minimal Hamming weight among all the non-zero codewords of the code. However, finding the Hamming weight of a general linear code is NP-hard [16]. For two-state Alien Tiles, we can determine the Hamming weight of the solvable game space with ease, via the use of the invariant functions.
Even-by-Even Games. These games are completely solvable, so a game with only one lit light has the minimal Hamming weight among all non-zero solvable games, i.e., the Hamming weight of such code is 1. This also means that even-by-even games have no error-detecting and correcting abilities.
Even-by-Odd Games. The syndrome of an even-by-odd solvable game is 0 m 1 , 1 . There are two possible row parity vectors (of size m × 1 ) that correspond to this syndrome, namely, 0 m , 1 and 𝟙 m , 1 .
We first consider the row parity vector 0_{m,1}. Each row has an even number of lit lights. As the Hamming weight of a code only considers non-zero codewords, we cannot have the row parity vector 0_{m,1} for any non-zero solvable game if n = 1. For n > 1, the smallest ‖g‖₁ equals 2, where 0_{mn,1} ≠ g ∈ S and r(g) = 0_{m,1}.
Now, we consider the row parity vector 𝟙_{m,1}. Each row has an odd number of lit lights. That is, the smallest ‖g‖₁ equals m, where 0_{mn,1} ≠ g ∈ S and r(g) = 𝟙_{m,1}. As m > 0 is even, the smallest possible m is 2. That is, we do not need to consider the row parity vector 𝟙_{m,1} unless n = 1.
As a summary, we have:
$$\text{For even-by-odd games}, \quad \min_{g \in S \setminus \{0_{mn,1}\}} \|g\|_1 = \begin{cases} m & \text{if } n = 1, \\ 2 & \text{otherwise}. \end{cases}$$
Odd-by-Odd Games. The syndrome of an odd-by-odd solvable game is the pair ( 0 m 1 , 1 , 0 1 , n 1 ) . If m = 1 , only two games are in S, which are the all-zeros and the all-ones games. Therefore, the Hamming weight of the code is n. If n = 1 , similarly, the Hamming weight is m. Both cases hold at the same time when m = n = 1 .
Now, consider the case with both m and n greater than 1, i.e., each of them is at least 3. There are two possible parity pairs (in F_2^{m×1} × F_2^{1×n}), namely, (0_{m,1}, 0_{1,n}) and (𝟙_{m,1}, 𝟙_{1,n}). For the former case, each row and each column has an even number of lit lights. In other words, the smallest ‖g‖₁, where 0_{mn,1} ≠ g ∈ S and (r(g), c(g)) = (0_{m,1}, 0_{1,n}), is 4. For the latter case, each row and each column has an odd number of lit lights. That is, the minimal number of lit lights is no fewer than max{m, n}.
When m = n , it is easy to see that the minimal number of lit lights can be achieved by a game represented by an identity matrix. If m n , without loss of generality, consider the case of n > m . An example to achieve the minimal number of lit lights max { m , n } is the game:
$$\left(\, I_m \;\middle|\; \begin{matrix} 0_{m-1,\,n-m} \\ \mathbb{1}_{1,\,n-m} \end{matrix} \,\right).$$
Note that unless m = n = 3, we have max{m, n} ≥ 5. Therefore, we reach the conclusion that:
$$\text{For odd-by-odd games}, \quad \min_{g \in S \setminus \{0_{mn,1}\}} \|g\|_1 = \begin{cases} n & \text{if } m = 1, \\ m & \text{if } n = 1, \\ 3 & \text{if } m = n = 3, \\ 4 & \text{otherwise}. \end{cases}$$
One may further combine the aforementioned results of all game sizes as follows:
$$\min_{g \in S \setminus \{0_{mn,1}\}} \|g\|_1 = \begin{cases} n & \text{if } m = 1, \\ m & \text{if } n = 1, \\ 3 & \text{if } m = n = 3, \\ (1 + (m \bmod 2))(1 + (n \bmod 2)) & \text{otherwise}. \end{cases}$$

4.2. Coset Leader

The quotient group G / S contains all the cosets, where each coset contains all games achieved by any possible finite move sequence from a certain setup of the game. Two games are in the same coset if and only if there exists a move sequence to transform one to another. We have discussed the technique to identify the coset that a game belongs to, mainly by finding the syndrome of the game, where each coset is associated with a unique syndrome.
Any game g ∈ C ∈ G/S with C ≠ S is unsolvable. In the view of coding theory, we are interested in the minimal number of individual lights that we need to toggle so that the game becomes solvable. From the algebraic point of view, we want to know how far the coset C and the solvable space S are separated. That is, the mission is to find a game e ∈ C with the smallest ‖e‖₁ such that g + e ∈ S. Such an e can be regarded as an error, so if we know the error pattern of each coset, an error-correcting code can be applied. In coding-theoretical terminology, this e is called the coset leader of the coset C.
Even-by-Even Games. As even-by-even games have no error-correcting ability, it is not an interesting problem to find out the coset leader. In fact, we have G / S = { S } = { G } , and thus the coset leader of the only coset is the solved game 0 m n , 1 , which contains no lit lights.
Even-by-Odd Games. We consider the syndrome r̃(g) = (r̃_2, r̃_3, …, r̃_m) ∈ F_2^{(m−1)×1} of a game g in a coset C. We have two possible row parity vectors, namely,
$$r = (0, \tilde r_2, \tilde r_3, \ldots, \tilde r_m) \quad \text{and} \quad r' = r + \mathbb{1}_{m,1} .$$
First, note that the row parities of different rows are independent of each other. If the row parity of a row is 0, then there is an even number of lit lights in this row, so the minimal number of lit lights in this row is 0. If the row parity is 1, then there is an odd number of lit lights in this row, where the minimal number is 1. In other words, the smallest ‖g‖₁ such that r(g) = r is ‖r‖₁. The situation is similar for r′.
Note that ‖r̃(g)‖₁ = ‖r‖₁ and ‖r′‖₁ = m − ‖r‖₁. As both r and r′ correspond to the same syndrome r̃(g), we know that:
$$\min_{g \in C} \|g\|_1 = \min\{\|r\|_1, \|r'\|_1\} = \min\{\|\tilde r(g)\|_1,\; m - \|\tilde r(g)\|_1\},$$
which can be computed in O(m) time if r̃(g) is known. The coset leader is an arbitrary game that belongs to $\arg\min_{g \in C} \|g\|_1$, for example:
$$\begin{cases} \left(\begin{array}{c|c} \begin{matrix} 0 \\ \tilde r(g) \end{matrix} & 0_{m,\,n-1} \end{array}\right) & \text{if } \|\tilde r(g)\|_1 \le m - \|\tilde r(g)\|_1, \\[2ex] \left(\begin{array}{c|c} \begin{matrix} 1 \\ \tilde r(g) + \mathbb{1}_{m-1,1} \end{matrix} & 0_{m,\,n-1} \end{array}\right) & \text{otherwise}. \end{cases}$$
The construction can be completed within O(mn) time. During the construction of the coset leader, whenever we need to put a lit light in a row, we can place it in an arbitrary column. Therefore, the number of coset leaders in the coset is $n^{\min\{\|\tilde r(g)\|_1,\; m - \|\tilde r(g)\|_1\}}$.
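A sketch of this coset-leader computation for even-by-odd games is given below (ours; it assumes NumPy and places the required lit lights in the first column, one arbitrary choice among the coset leaders counted above).

```python
# A sketch (ours, assuming m even and n odd) of one coset leader: take the
# lighter of the two row parity vectors compatible with the syndrome and put
# one lit light (here in the first column) in every row of odd parity.
import numpy as np

def coset_leader_even_by_odd(board):
    m, n = board.shape
    r = board.sum(axis=1) % 2
    r_tilde = (r[1:] + r[0]) % 2
    rpar = np.concatenate(([0], r_tilde))          # the row parity vector r
    if rpar.sum() > m - rpar.sum():                # the complement is lighter
        rpar = 1 - rpar
    leader = np.zeros((m, n), dtype=np.uint8)
    leader[rpar == 1, 0] = 1                       # one lit light per odd row
    return leader

g = np.zeros((4, 3), dtype=np.uint8)
g[0, 0] = 1                                        # an unsolvable game
e = coset_leader_even_by_odd(g)
print(int(e.sum()))                                # minimal lit lights: 1
```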
Odd-by-Odd Games. Consider the syndrome inv ( g ) = ( r ˜ ( g ) , c ˜ ( g ) ) . Write r ˜ ( g ) = ( r ˜ 2 , r ˜ 3 , , r ˜ m ) and c ˜ ( g ) = ( c ˜ 2 , c ˜ 3 , , c ˜ n ) . Let
r ¯ = ( 0 , r ˜ 2 , r ˜ 3 , , r ˜ m ) and c ¯ = ( 0 , c ˜ 2 , c ˜ 3 , , c ˜ n ) .
The syndrome can be induced by four parity pairs, namely,
( r ¯ , c ¯ ) , ( r ¯ + 𝟙 m , 1 , c ¯ ) , ( r ¯ , c ¯ + 𝟙 1 , n ) and ( r ¯ + 𝟙 m , 1 , c ¯ + 𝟙 1 , n ) .
According to Theorem 3, we know that only two out of the four parity pairs are valid, because the difference between the numbers of 1 bits in the row and the column parity vectors must be an even number. Let (r, c) be a valid parity pair. If the parity of a row/column is 0, then the minimal number of lit lights in this row/column is 0. If the parity is 1, then the minimal number is 1. Thus, the minimal number of lit lights is bounded below by max{‖r‖₁, ‖c‖₁}.
Now, we use the construction in the proof of Theorem 3 to construct a game g′ with parity pair (r, c). Note that the number of lit lights in g′ is max{‖r‖₁, ‖c‖₁}, which matches the lower bound. As there are two valid parity pairs, we need to see which one results in a game that has fewer lit lights. The one with fewer lit lights is the coset leader.
The overall procedure to find out the coset leader is as follows:
  • Generate ( r , c ) = ( r ¯ , c ¯ ) from inv ( g ) .
  • If ‖r‖₁ − ‖c‖₁ is not an even number, flip all the parity bits in either r or c.
  • If max{m − ‖r‖₁, n − ‖c‖₁} < max{‖r‖₁, ‖c‖₁}, then flip all the parity bits in both r and c.
  • Construct a game that has the parity pair (r, c) (by using the construction in the proof of Theorem 3); such a game is a coset leader.
As we need O ( m n ) time to construct such game, the complexities of other computation steps that take either O ( m ) or O ( n ) are being absorbed by this dominant term; thus, the overall complexity to construct a coset leader is O ( m n ) .
At the end of this construction, we can also obtain the minimal number of lit lights among all games in the coset, which is max{‖r‖₁, ‖c‖₁}. Let
$$P = \left(\|\tilde r(g)\|_1 - \|\tilde c(g)\|_1\right) \bmod 2 \in \{0, 1\} .$$
We can also directly calculate this number as follows, with c being flipped:
$$\min\Big\{\max\big\{\|\tilde r(g)\|_1,\; Pn + (-1)^P \|\tilde c(g)\|_1\big\},\; \max\big\{m - \|\tilde r(g)\|_1,\; (1-P)n + (-1)^{1-P} \|\tilde c(g)\|_1\big\}\Big\},$$
or equivalently (flipping r):
$$\min\Big\{\max\big\{Pm + (-1)^P \|\tilde r(g)\|_1,\; \|\tilde c(g)\|_1\big\},\; \max\big\{(1-P)m + (-1)^{1-P} \|\tilde r(g)\|_1,\; n - \|\tilde c(g)\|_1\big\}\Big\}.$$
This number can be calculated in O ( m + n ) time when inv ( g ) is given.
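The four-step procedure above can be sketched as follows (our code, assuming NumPy, 0-based indices, and m, n odd; the column and row chosen in the second phase are arbitrary and fixed to index 0 here).

```python
# A sketch (ours, assuming m and n odd, 0-based indices) of the coset-leader
# procedure for odd-by-odd games; the column/row used in the second phase is
# chosen arbitrarily (index 0 here).
import numpy as np

def coset_leader_odd_by_odd(board):
    m, n = board.shape
    rp = board.sum(axis=1) % 2
    cp = board.sum(axis=0) % 2
    r = np.concatenate(([0], (rp[1:] + rp[0]) % 2))    # row parity vector
    c = np.concatenate(([0], (cp[1:] + cp[0]) % 2))    # column parity vector
    if (int(r.sum()) - int(c.sum())) % 2 != 0:         # make the pair valid
        r = 1 - r
    if max(m - int(r.sum()), n - int(c.sum())) < max(int(r.sum()), int(c.sum())):
        r, c = 1 - r, 1 - c                            # complementary pair is lighter
    leader = np.zeros((m, n), dtype=np.uint8)
    rows, cols = list(np.flatnonzero(r)), list(np.flatnonzero(c))
    for y, x in zip(rows, cols):                       # phase 1: match row with column
        leader[y, x] = 1
    rest = rows[len(cols):] if len(rows) > len(cols) else cols[len(rows):]
    for a, b in zip(rest[0::2], rest[1::2]):           # phase 2: pair up leftovers
        if len(rows) > len(cols):
            leader[a, 0] = 1                           # two lights in one column
            leader[b, 0] = 1
        else:
            leader[0, a] = 1                           # two lights in one row
            leader[0, b] = 1
    return leader

g = np.zeros((3, 5), dtype=np.uint8)
g[0, 0] = 1                                            # an unsolvable game
print(int(coset_leader_odd_by_odd(g).sum()))           # minimal lit lights: 1
```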
Let (r, c) be the parity pair of a coset leader. We remark that the coset leader is non-unique when ‖r‖₁ > 1 or ‖c‖₁ > 1. Denote:
$$\alpha = \max\{\|r\|_1, \|c\|_1\} \quad \text{and} \quad \beta = \min\{\|r\|_1, \|c\|_1\} .$$
To count the number of coset leaders, we recall the idea of the construction that we have used. The first phase matches each 1 in r with a distinct 1 in c to minimize the number of lit lights. There are $\binom{\alpha}{\beta}\,\beta!$ combinations in total. If ‖r‖₁ = ‖c‖₁, then the construction is done, and the number of combinations $\binom{\alpha}{\beta}\,\beta! = \|r\|_1! = \|c\|_1!$ is achieved. Otherwise, we proceed to the second phase, which is described as follows:
The remaining unmatched 1’s must either be only in the row parity vector or only in the column parity vector. We group them two by two so that if we put each group in the same row/column, the row/column parity will not be toggled. If ‖r‖₁ > ‖c‖₁, then each group can choose any of the n columns. If ‖c‖₁ > ‖r‖₁, then each group can choose any of the m rows. By applying a technique similar to generating the Lagrange coefficients for Lagrange interpolation, we can combine the two cases into one formula. That is, each group has $\frac{\alpha - \|c\|_1}{\|r\|_1 - \|c\|_1}\,n + \frac{\alpha - \|r\|_1}{\|c\|_1 - \|r\|_1}\,m$ choices. It is not hard to see that we have enumerated all possibilities that can achieve the minimal number of lit lights. The number of ways to form the groups is:
$$\binom{\alpha-\beta}{2}\binom{\alpha-\beta-2}{2}\cdots\binom{2}{2} = \frac{(\alpha-\beta)!}{2^{\frac{\alpha-\beta}{2}}} .$$
Therefore, the number of possible coset leaders when ‖r‖₁ ≠ ‖c‖₁ is:
$$\binom{\alpha}{\beta}\,\beta!\;\frac{(\alpha-\beta)!}{2^{\frac{\alpha-\beta}{2}}}\left(\frac{\alpha-\|c\|_1}{\|r\|_1-\|c\|_1}\,n + \frac{\alpha-\|r\|_1}{\|c\|_1-\|r\|_1}\,m\right) = \frac{\alpha!}{2^{\frac{\alpha-\beta}{2}}}\left(\frac{\alpha-\|c\|_1}{\|r\|_1-\|c\|_1}\,n + \frac{\alpha-\|r\|_1}{\|c\|_1-\|r\|_1}\,m\right).$$
Combining the two cases, the number of possible coset leaders is:
$$\begin{cases} \|r\|_1! & \text{if } \|r\|_1 = \|c\|_1, \\[1ex] \dbinom{\alpha}{\beta}\,\beta!\;\dfrac{(\alpha-\beta)!}{2^{\frac{\alpha-\beta}{2}}}\left(\dfrac{\alpha-\|c\|_1}{\|r\|_1-\|c\|_1}\,n + \dfrac{\alpha-\|r\|_1}{\|c\|_1-\|r\|_1}\,m\right) & \text{otherwise}. \end{cases}$$

4.3. Covering Radius

The covering radius problem of the game asks for the maximal number of lit lights that remain, among all games, after the player minimizes the number of lit lights. With the understanding of cosets, the desired number is represented by:
$$\max_{C \in G/S}\; \min_{g \in C} \|g\|_1 .$$
Even-by-Even Games. All even-by-even games are solvable, so the covering radius is trivially 0.
Even-by-Odd Games. Recall that for each coset C,
$$\min_{g \in C} \|g\|_1 = \min\{\|\tilde r(g)\|_1,\; m - \|\tilde r(g)\|_1\} .$$
As the syndrome is a vector in F_2^{(m−1)×1}, and each possible vector in F_2^{(m−1)×1} corresponds to a distinct coset, we can calculate the covering radius of even-by-odd games as follows:
$$\max_{C \in G/S}\; \min_{g \in C} \|g\|_1 = \max_{b \in \{0, 1, \ldots, m-1\}} \min\{b,\; m-b\} = \frac{m}{2} .$$
Odd-by-Odd Games. Consider a syndrome (r̃, c̃) of a game in a coset C. Let b = ‖r̃‖₁ and a = ‖c̃‖₁. Each possible syndrome in F_2^{(m−1)×1} × F_2^{1×(n−1)} corresponds to a distinct coset; therefore, it suffices to consider all possible b ∈ {0, 1, …, m − 1} and a ∈ {0, 1, …, n − 1}. Note that we are considering the syndrome but not the parity pair, and therefore a − b can be either even or odd.
Case I: a − b is even. Then, we have:
$$\min_{g \in C} \|g\|_1 = \min\{\max\{b, a\},\; \max\{m-b,\; n-a\}\} .$$
To maximize this number among all cosets, we need to check both ( b , a ) = ( 0 , n 1 ) and ( b , a ) = ( m 1 , 0 ) , which gives min { n 1 , m } and min { m 1 , n } , respectively. That is, if we restrict ourselves to those a b that are even, the covering radius is:
$$\max\{\min\{n-1, m\},\; \min\{m-1, n\}\} = \min\{m, n\} - \delta_{m,n} ,$$
where δ m , n is the Kronecker delta, i.e.,
$$\delta_{m,n} = \begin{cases} 1 & \text{if } m = n, \\ 0 & \text{otherwise}. \end{cases}$$
Case II: a − b is odd. Then, we have:
$$\min_{g \in C} \|g\|_1 = \min\{\max\{b,\; n-a\},\; \max\{m-b,\; a\}\} .$$
To maximize this number among all cosets, we need to check both (b, a) = (1, 0) and (b, a) = (0, 1), which give min{n, m − 1} and min{n − 1, m}, respectively. That is, if we restrict ourselves to those a − b that are odd, the covering radius is max{min{n, m − 1}, min{n − 1, m}}, which is the same as when a − b is even.
Combining these two cases, the covering radius of odd-by-odd games can be represented as:
$$\max_{C \in G/S}\; \min_{g \in C} \|g\|_1 = \min\{m, n\} - \delta_{m,n} .$$

4.4. Error Correction

The error correction problem is to find a solvable game by toggling the fewest number of lights individually. Depending on the capability of error correcting, such a corrected game may not be unique.
Even-by-Even Games. All even-by-even games are solvable; therefore, there is no unsolvable game for discussion.
Even-by-Odd Games. When n > 1 , the Hamming distance is 2. Thus, the code has single error-detecting ability, but with no error-correcting ability.
When n = 1, the Hamming distance is an even number, namely m. Thus, the code has (m − 1)-error-detecting ability and ⌊(m − 1)/2⌋ = m/2 − 1 error-correcting ability. Note that the m × 1 game space only consists of two codewords, namely, 0_{m,1} and 𝟙_{m,1}.
Consider an arbitrary vector w ∈ F_2^{m×1}. If there are m/2 0’s and m/2 1’s in w, then we cannot correct the error because the Hamming distances from w to the two codewords are the same. If the number of 0’s in w is more than that of 1’s, then w is closer to 0_{m,1}. If the number of 1’s in w is more than that of 0’s, then w is closer to 𝟙_{m,1}. In the latter two cases, the number of errors is at most m/2 − 1; thus, the closest solvable game is unique. In fact, this code is known as a repetition code in coding theory.
Odd-by-Odd Games. When m > 1 and n = 1, the code is again a repetition code as previously mentioned. However, in this context, m is an odd number; therefore, the code has (m − 1)/2 error-correcting ability.
Consider an arbitrary vector w F 2 m × 1 . It is impossible to have the same number of 0’s and 1’s in w. Therefore, we can always correct w to the closest solvable game and such a solvable game is unique. A similar result holds for m = 1 and n > 1 based on symmetry. However, when m = n = 1 , the Hamming distance is 1, thus, the code has no error-detecting nor error-correcting abilities.
The non-trivial cases are the odd-by-odd games with m, n > 1. Suppose we have an unsolvable odd-by-odd game g ∈ G \ S. We first calculate the syndrome inv(g) to identify the coset that the game g belongs to. Then, we find an arbitrary coset leader e of that coset. Recall that a coset leader of a coset is a game in the coset with the minimal number of lit lights. Therefore, the game g − e is a closest solvable game.
The code distance of a 3 × 3 game is 3; thus, the code can detect up to 2 errors and correct 1 error. For games of any other (odd-by-odd) size, the distance is 4; therefore, the code can detect up to 3 errors and correct 1 error. This also implies that for any unsolvable odd-by-odd game g ∈ G \ S, the closest solvable game is unique if and only if the coset leader of the coset to which the game belongs has exactly one lit light.
In principle, we can use the syndrome of the game to directly locate the error when there is only 1 error. The procedure is as follows:
  • Calculate the syndrome inv(g) = ((r̃_2, r̃_3, …, r̃_m), (c̃_2, c̃_3, …, c̃_n)) of the given game g;
  • Initialize r = (0, r̃_2, r̃_3, …, r̃_m), c = (0, c̃_2, c̃_3, …, c̃_n), A = ‖r‖₁, and B = ‖c‖₁;
  • If A − B is odd, then let A = m − ‖r‖₁ and flip all the bits in r;
  • If max{A, B} = 0 or max{m − A, n − B} = 0, then the game g is already solvable, and this procedure is completed;
  • If max{A, B} > 1 and max{m − A, n − B} > 1, then there is more than one error, which implies that the errors cannot be uniquely corrected. Therefore, the procedure can again be terminated;
  • If max{A, B} > max{m − A, n − B}, then flip all the bits in both vectors r and c;
  • Let y and x be the indices such that the y-th entry in r is 1 and the x-th entry in c is 1. Then, the (y, x)-th light in g is the only error position.
The idea of the above procedure is similar to the construction of the coset leader. We first find a valid parity pair of g, i.e., one for which A − B is even. Next, we filter out the cases where the game has no errors or has more than one error. If 1 = max{A, B} ≤ max{m − A, n − B}, then the current parity pair has exactly one 1 in each of the row and column parity vectors. Otherwise, we have max{m − A, n − B} = 1; then, we flip both the row and column parity vectors such that there is exactly one 1 in both parity vectors. As a result, the coset leader has exactly one lit light in this case, where its location is indicated by the parity pair.
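The single-error location procedure can be sketched as follows (our code, assuming NumPy, 0-based indices, and odd m, n > 1); it returns the error position, returns None when the game is already solvable, and reports when more than one error is present.

```python
# A sketch (ours, assuming odd m, n > 1 and 0-based indices) of the
# single-error locator described above.
import numpy as np

def locate_single_error(board):
    m, n = board.shape
    rp = board.sum(axis=1) % 2
    cp = board.sum(axis=0) % 2
    r = np.concatenate(([0], (rp[1:] + rp[0]) % 2))
    c = np.concatenate(([0], (cp[1:] + cp[0]) % 2))
    A, B = int(r.sum()), int(c.sum())
    if (A - B) % 2 != 0:                       # make the parity pair valid
        r = 1 - r
        A = m - A
    if max(A, B) == 0 or max(m - A, n - B) == 0:
        return None                            # the game is already solvable
    if max(A, B) > 1 and max(m - A, n - B) > 1:
        raise ValueError("more than one error; cannot correct uniquely")
    if max(A, B) > max(m - A, n - B):          # the complementary pair is lighter
        r, c = 1 - r, 1 - c
    y = int(np.flatnonzero(r)[0])
    x = int(np.flatnonzero(c)[0])
    return y, x

g = np.ones((3, 5), dtype=np.uint8)            # the all-ones game is solvable
g[2, 3] ^= 1                                   # introduce a single error
print(locate_single_error(g))                  # -> (2, 3)
```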

5. As an Error-Correcting Code

In this section, we discuss the use of the solvable game space of a two-state Alien Tiles game as an error-correcting code. Although the general linear code technique based on the generator matrix and the parity check matrix also works, the corresponding matrices have a much more complicated form. Here, we describe a natural and simple way for encoding and decoding. We also discuss whether the code is an optimal linear code or not, where optimality means that the linear code has the maximal number of codewords among all possible linear codes under the same set of parameters, for example, field size, codeword length, and code distance.

5.1. Even-by-Odd Games

We first discuss the m × 1 games. An m × 1 game space only consists of two codewords, namely, 0_{m,1} and 𝟙_{m,1}, so it is simply a repetition code. The message that we can encode is either the bit 0 or 1. To encode, we simply map 0 to 0_{m,1} and map 1 to 𝟙_{m,1}. The way to decode was discussed in the last section, and is precisely described as follows: Let a, b ∈ {0, 1}, where a ≠ b. If the number of a’s in a to-be-decoded word is more than that of b’s, then we decode to a. If their numbers are equal, then we cannot uniquely decode the word.
Now, consider n > 1. All these games have single error-detecting ability but no error-correcting ability, because the code distance is 2. As the size of the solvable game space, i.e., the number of codewords, is |S| = 2^{mn−m+1}, we have mn − (mn − m + 1) = m − 1 bits acting as the redundancy for error detection. The message that we can encode is an (mn − m + 1)-bit string. Denote the message by (b_1, b_2, …, b_{mn−m+1}). A natural way to encode the message is as follows: We first consider the m × (n − 1) matrix:
$$D = \begin{pmatrix} b_1 & b_2 & \cdots & b_{n-1} \\ b_n & b_{n+1} & \cdots & b_{2n-2} \\ \vdots & \vdots & & \vdots \\ b_{mn-m-n+2} & b_{mn-m-n+3} & \cdots & b_{mn-m} \end{pmatrix}$$
and calculate its row parity vector $r = (r_1, r_2, \ldots, r_m)$. If $b_{mn-m+1} = r_1$, then we construct the game
$$\begin{pmatrix} r & D \end{pmatrix}.$$
Otherwise, we construct the game
$$\begin{pmatrix} r + \mathbb{1}_{m,1} & D \end{pmatrix}.$$
The row parity vector of the former game is $0_{m,1}$, and that of the latter game is $\mathbb{1}_{m,1}$. That is, both games are solvable. Thus, we have encoded the message.
Note that the $(1,1)$-st entry of the encoded game equals $b_{mn-m+1}$ in both cases. Therefore, we can perform the following steps to decode a game g:
  • Calculate the row parity vector r ( g ) of the game g.
  • If $r(g) \neq 0_{m,1}$ and $r(g) \neq \mathbb{1}_{m,1}$, then we have detected the existence of errors, but we cannot correct them; thus, the decoding procedure is terminated.
  • Otherwise, the game has no detectable errors, and the message can be recovered by reading off the matrix, i.e., the last $n - 1$ columns together with the $(1,1)$-st entry. A sketch of this encoder and decoder follows.
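The following is a minimal sketch of this encoder and decoder under the same list-of-lists representation as before; the names encode_even_odd and decode_even_odd are ours for illustration.

```python
def encode_even_odd(bits, m, n):
    """Encode an (mn - m + 1)-bit message into a solvable m x n game (m even, n odd, n > 1)."""
    assert len(bits) == m * n - m + 1
    D = [bits[i * (n - 1):(i + 1) * (n - 1)] for i in range(m)]   # m x (n-1) data block
    r = [sum(row) % 2 for row in D]                               # row parity vector of D
    col = r if bits[-1] == r[0] else [1 - x for x in r]           # place b_{mn-m+1} at (1,1)
    return [[col[i]] + list(D[i]) for i in range(m)]              # the game (col | D)

def decode_even_odd(g):
    """Detect (but not correct) errors, then read back the message."""
    m, n = len(g), len(g[0])
    r = [sum(row) % 2 for row in g]
    if any(r) and not all(r):
        return None                                               # errors detected
    return [g[i][j] for i in range(m) for j in range(1, n)] + [g[0][0]]
```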
Lastly, we discuss the optimality of the code. The optimality of the m × 1 games follows from the optimality of repetition codes, which can easily be verified by the Singleton bound [26].
Theorem 5.
For even-by-odd games (where n > 1 ), only the 2 × n games are optimal linear codes.
Proof. 
We first consider m = 2 . Recall that | S | = 2 2 n 1 . By applying the Singleton bound [26], we have:
A 2 ( 2 n , 2 ) 2 2 n 2 + 1 = 2 2 n 1 ,
where A q ( , d ) denotes the maximal number of codewords among all possible linear or non-linear codes with field size q, codeword length , and code distance d. As A 2 ( 2 n , 2 ) = | S | , we know that all 2 × n games are optimal linear codes.
Now, we consider m > 2 . If there exists a linear code having the same set of parameters, but of a greater number of codewords, we can conclude that the error-correcting code (ironically without error-correcting ability) formed by even-by-odd games (with n > 1 ) is not an optimal linear code. The existence of such code is shown by the following construction. Consider D = F 2 m n 1 , where m n 1 is odd. Take any two arbitrary a , b D such that their Hamming distance is 1. One of them has odd number of bits and the other has even number of bits. Without loss of generality, assume a has an odd number of bits. We append a parity bit, i.e., 1, at the end of a, and append a parity bit, i.e., 0, at the end of b. The extended words (vectors) are elements in F 2 m n , and their Hamming distance is 2. That is, by appending a parity bit to every word in D, we obtain a linear code that has the same set of parameters as the even-by-odd games when n > 1 . The number of codewords is 2 m n 1 , which is greater than | S | = 2 m n m + 1 . This proves the non-optimality of even-by-odd games, with m > 2 and n > 1 . □

5.2. Odd-by-Odd Games

When m = 1 or n = 1 (but not both), the code is a repetition code; thus, it is optimal. The way to encode and decode is similar to that of the m × 1 games in Section 5.1, except that the numbers of 0's and 1's in a word are never equal, since the length is odd. If m = n = 1, then there is no unsolvable game; thus, we do not need to consider this case.
In the remaining discussion of this sub-section, we only consider odd-by-odd games with $m, n \ge 3$.
Theorem 6.
For odd-by-odd games (where m , n 3 ), only the 3 × 3 games form an optimal linear code.
Proof. 
See Appendix C. □
Recall that for the 3 × 3 games, the Hamming distance of this code is 3. Thus, it is 2 error-detecting and 1 error-correcting. For games of larger sizes, i.e., $m, n \ge 3$ but not m = n = 3, the Hamming distance is 4. Thus, they are 3 error-detecting and 1 error-correcting. That is, we can only correct up to one error. Regardless of the optimality, we now describe a natural way to encode and decode.
The dimension of the solvable game space is $mn - m - n + 2$; thus, the message that we can encode is an $(mn - m - n + 2)$-bit string. Denote the message by $(b_1, b_2, \ldots, b_{mn-m-n+2})$. First, we put the message into a matrix of the form
$$\begin{pmatrix} b_{mn-m-n+2} & 0_{1,n-1} \\ 0_{m-1,1} & D \end{pmatrix}, \quad \text{where } D := \begin{pmatrix} b_1 & b_2 & \cdots & b_{n-1} \\ b_n & b_{n+1} & \cdots & b_{2n-2} \\ \vdots & \vdots & & \vdots \\ b_{mn-m-2n+3} & b_{mn-m-2n+4} & \cdots & b_{mn-m-n+1} \end{pmatrix}.$$
Let (r, c) be the parity pair of D. By the parity-difference property established earlier, the numbers of 1's in r and c are either both odd or both even. If the number of 1's in r (or c) is even, then the game
$$\begin{pmatrix} 0 & c \\ r & D \end{pmatrix}$$
has the parity pair $(0_{m,1}, 0_{1,n})$. If the number of 1's in r (or c) is odd, then the game
$$\begin{pmatrix} 1 & c \\ r & D \end{pmatrix}$$
has the parity pair $(0_{m,1}, 0_{1,n})$. In both of the above games, if the $(1,1)$-st entry is not equal to $b_{mn-m-n+2}$, then we click the $(1,1)$-st light so that all the lights in the first row and the first column are toggled. Combining these two cases, Table 1 shows the condition under which all the bits in the parity pair are flipped.
The table is actually the truth table of the exclusive OR (XOR). As addition in the binary field is the same as modulo-2 addition, we can write the desired codeword as
$$\begin{pmatrix} b_{mn-m-n+2} & c + (\|c\|_1 + b_{mn-m-n+2})\,\mathbb{1}_{1,n-1} \\ r + (\|r\|_1 + b_{mn-m-n+2})\,\mathbb{1}_{m-1,1} & D \end{pmatrix},$$
where $\|r\|_1$ and $\|c\|_1$ denote the numbers of 1's in r and c, reduced modulo 2.
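A minimal sketch of this encoder, again under a 0/1 list-of-lists representation; encode_odd_odd is our illustrative name.

```python
def encode_odd_odd(bits, m, n):
    """Encode an (mn - m - n + 2)-bit message into a solvable m x n game (m, n odd, >= 3)."""
    assert len(bits) == m * n - m - n + 2
    b = bits[-1]                                                        # the (1,1)-st entry
    D = [bits[i * (n - 1):(i + 1) * (n - 1)] for i in range(m - 1)]     # (m-1) x (n-1) block
    r = [sum(row) % 2 for row in D]                                     # row parities of D
    c = [sum(D[i][j] for i in range(m - 1)) % 2 for j in range(n - 1)]  # column parities of D
    flip = (sum(r) + b) % 2              # Table 1: flip iff the weight parity of r differs from b
    top = [b] + [(c[j] + flip) % 2 for j in range(n - 1)]
    return [top] + [[(r[i] + flip) % 2] + list(D[i]) for i in range(m - 1)]
```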
To decode a game, we can run the procedure as described in Section 4.4. The procedure has three types of output:
  • The first type of output indicates that the game is solvable. Therefore, we can directly extract the message $(b_1, b_2, \ldots, b_{mn-m-n+2})$.
  • The second type of output indicates that the game has more than one error, so the error correction is not unique. Thus, we consider the word as non-decodable.
  • The third type of output indicates the location of the single error. Let (y, x) be the location of the error; we can correct the error by flipping the (y, x)-th bit of the game grid. A sketch of this decoder is given after this list.
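A sketch of the corresponding decoder, reusing the locate_single_error sketch given in Section 4.4 above; decode_odd_odd is our illustrative name.

```python
def decode_odd_odd(g):
    """Correct up to one error, then read back the (mn - m - n + 2)-bit message."""
    m, n = len(g), len(g[0])
    res = locate_single_error(g)          # sketch from Section 4.4
    if res == "ambiguous":
        return None                       # more than one error: not uniquely decodable
    if res != "solvable":
        y, x = res
        g = [row[:] for row in g]
        g[y][x] ^= 1                      # flip the single erroneous bit
    return [g[i][j] for i in range(1, m) for j in range(1, n)] + [g[0][0]]
```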

5.3. Example

To demonstrate the error-correcting code, we provide an example in this sub-section. Consider the 3 × 3 games. Suppose we want to encode the message ( 1 , 1 , 0 , 1 , 0 ) . First, the matrix D that consists of the first four bits of the message is:
$$D = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$
Then, we fill the parity pair of D, which is $(r, c) = ((0, 1), (1, 0))$, into a 3 × 3 game:
$$\begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}.$$
As $\|r\|_1 \bmod 2 = 1$ and the last bit of the message is 0, according to Table 1, we need to flip the first row and the first column except the $(1,1)$-st entry. Thus, the encoded codeword is
$$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
Now, we introduce an error into the codeword. Note that the error can be located at any entry of the codeword, including the bits that are not message bits. For example, suppose the received word is
$$w = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
To correct such an error, we first calculate the syndrome of this game, which is $((1, 0), (1, 1))$. Next, we initialize $r = (0, 1, 0)$ and $c = (0, 1, 1)$. As $\|r\|_1 - \|c\|_1 = 1 - 2 = -1$ is odd, we flip the bits in r. Thus, r becomes $(1, 0, 1)$. Now, we have
$$2 = \max\{\|r\|_1, \|c\|_1\} > \max\{3 - \|r\|_1, 3 - \|c\|_1\} = 1,$$
so we flip both r and c. That is, we have $r = (0, 1, 0)$ and $c = (1, 0, 0)$. This pair of vectors indicates that the error is located at
$$e = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Afterwards, we can correct the error by computing $w + e$ and obtain the codeword
$$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
By directly reading this matrix, the decoded message ( 1 , 1 , 0 , 1 , 0 ) can be recovered.
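Using the illustrative functions sketched earlier, this example can be checked end to end:

```python
codeword = encode_odd_odd([1, 1, 0, 1, 0], 3, 3)   # [[0,0,1],[1,1,1],[0,0,1]]
received = [row[:] for row in codeword]
received[1][0] ^= 1                                # the single error from the example
assert decode_odd_odd(received) == [1, 1, 0, 1, 0]
```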

6. States Decomposition

One may wonder whether we can practically decompose the four states of the ordinary (four-state) Alien Tiles to obtain two two-state Alien Tiles games. We describe a states decomposition method below, where the number of states and the toggle patterns can be arbitrary. Suppose there are q states. We reuse Ψ to denote the matrix of all vectorized toggle patterns. Suppose q has P distinct prime factors. We can express q in canonical representation:
$$q = \prod_{i=1}^{P} p_i^{r_i},$$
where p i are distinct primes.
To determine whether the (vectorized) game g is solvable, it is equivalent to determine whether the linear system
$$\Psi k \equiv g \pmod{q}$$
is solvable. By applying the Chinese remainder theorem, we can decompose the system into
$$\Psi k \equiv g \pmod{p_1^{r_1}}, \quad \Psi k \equiv g \pmod{p_2^{r_2}}, \quad \ldots, \quad \Psi k \equiv g \pmod{p_P^{r_P}},$$
and combine the solutions of the above system into a unique modulo q solution afterwards. Our task becomes solving the linear system in the form of:
$$\Psi k \equiv g \pmod{p^r} \qquad (1)$$
where p is a prime. If r = 1 , we cannot further decompose the system, which means that we have to directly solve the problem.
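As an aside, the recombination step can be carried out componentwise with a small Chinese-remainder helper such as the following sketch (the function name is ours; for more than two prime powers, apply it iteratively; pow(q1, -1, q2) requires Python 3.8+):

```python
def crt_pair(a1, q1, a2, q2):
    # combine x ≡ a1 (mod q1) and x ≡ a2 (mod q2) into x mod q1*q2,
    # assuming gcd(q1, q2) = 1, which holds for distinct prime powers
    return (a1 + q1 * (((a2 - a1) * pow(q1, -1, q2)) % q2)) % (q1 * q2)

assert crt_pair(1, 2, 2, 3) == 5   # x ≡ 1 (mod 2) and x ≡ 2 (mod 3)
```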
Consider $r \ge 2$. The solution of Equation (1) is also a solution of the linear system
$$\Psi k \equiv g \pmod{p}.$$
Let k 0 be a solution of this system. We have:
$$\Psi k_0 - g = p z$$
for some integer vector z (after vectorization). Let k 0 + p y be a solution of Equation (1). We have:
$$\Psi(k_0 + p y) - g \equiv 0 \pmod{p^r} \;\Longleftrightarrow\; (\Psi k_0 - g) + p \Psi y \equiv 0 \pmod{p^r} \;\Longleftrightarrow\; p z + p \Psi y \equiv 0 \pmod{p^r}.$$
Dividing by p, we obtain
$$\Psi y \equiv -z \pmod{p^{r-1}}.$$
By repeatedly applying this procedure, we can reduce the power of the prime in the modulus until r = 1.
The main issue of this decomposition is that the solution $k_0$ may not be unique. After we reduce the prime power, we need to try the candidates $k_0 + u$, where $\Psi u \equiv 0 \pmod{p}$ (i.e., u acts as a zero game), to see which one leads to a solvable system; a wrong choice can lead to an unsolvable system. The same issue arises when we reduce the four-state Alien Tiles to the two-state ones.
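A compact sketch of this lifting loop, assuming a hypothetical black-box solver solve_mod_p(Psi, g, p) for the system modulo a prime p; it ignores the kernel-selection issue just described, so it may fail when the mod-p solution it picks is a wrong representative.

```python
import numpy as np

def solve_mod_prime_power(Psi, g, p, r, solve_mod_p):
    """Solve Psi k ≡ g (mod p**r) by solving modulo p and lifting, as described above."""
    Psi, g = np.asarray(Psi), np.asarray(g)
    if r == 1:
        return np.asarray(solve_mod_p(Psi, g, p)) % p
    k0 = np.asarray(solve_mod_p(Psi, g, p)) % p        # a solution modulo p
    z = (Psi @ k0 - g) // p                            # exact: Psi k0 - g = p z
    y = solve_mod_prime_power(Psi, (-z) % p ** (r - 1), p, r - 1, solve_mod_p)
    return (k0 + p * y) % p ** r
```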
However, if m and n are both even, then all four-state Alien Tiles games are solvable [4,21]. Consequently, the move sequence that solves an arbitrary game is unique (up to permutation of the moves). We do not need to be concerned about the choice of $k_0$, as every even-by-even two-state Alien Tiles game is solvable. A summary of the procedure for solving a four-state Alien Tiles game g by states decomposition is as follows (a code sketch is given after this list):
  • Find a move sequence k that solves the two-state Alien Tiles game $g \bmod 2$.
  • Calculate $z = \frac{1}{2}(\Psi k - g)$ without performing modulo.
  • Find a move sequence y that solves the two-state Alien Tiles game $z \bmod 2$.
  • The move sequence that solves the four-state Alien Tiles game g is $k + 2y$.
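A minimal sketch of this procedure for even-by-even boards, assuming a hypothetical two-state solver solve_two_state(game) that returns a 0/1 click matrix switching off the given two-state game; the toggle convention (each click adds 1 to every light in its row and its column, with the clicked light counted once) matches the worked example that follows.

```python
def solve_four_state(g, solve_two_state):
    """Return a click-count matrix (entries 0..3) that solves the four-state game g."""
    m, n = len(g), len(g[0])

    def effect(k):  # integer matrix Psi k, without modular reduction
        row = [sum(k[i]) for i in range(m)]
        col = [sum(k[i][j] for i in range(m)) for j in range(n)]
        return [[row[i] + col[j] - k[i][j] for j in range(n)] for i in range(m)]

    k = solve_two_state([[g[i][j] % 2 for j in range(n)] for i in range(m)])
    e = effect(k)
    z = [[(e[i][j] - g[i][j]) // 2 for j in range(n)] for i in range(m)]   # exact halving
    y = solve_two_state([[z[i][j] % 2 for j in range(n)] for i in range(m)])
    return [[(k[i][j] + 2 * y[i][j]) % 4 for j in range(n)] for i in range(m)]
```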
As an example, consider the four-state Alien Tiles:
$$g = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$
where the four states are mapped to 0, 1, 2, and 3, respectively. The first step is to solve the two-state game:
$$g \bmod 2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
The move sequence is
$$k = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
Next, we calculate (here $\Psi k$ is shown reduced modulo 4; this only changes z by multiples of 2, which does not affect $z \bmod 2$):
$$z = \frac{1}{2}(\Psi k - g) = \frac{1}{2}\left(\begin{pmatrix} 2 & 0 & 2 & 2 \\ 0 & 3 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 2 & 0 & 2 & 2 \end{pmatrix} - \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}\right) = \begin{pmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \end{pmatrix}.$$
After that, we solve another two-state game:
$$z \bmod 2 = \begin{pmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \end{pmatrix}.$$
The move sequence is:
$$y = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
Finally, we obtain the move sequence that solves the four-state game g, which is:
$$k + 2y = \begin{pmatrix} 0 & 0 & 2 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 2 & 3 & 1 \\ 0 & 0 & 2 & 0 \end{pmatrix}.$$

7. Conclusions

The study of Lights Out and its variants is mostly for theoretical interests and pedagogical purposes. The main goals and purposes of this paper are to stimulate the interests of the recreational mathematics community in investigating special features of switching games, apart from dealing with the solvability problems. Furthermore, this study can provide an entry point for recreational mathematicians to easily pick up and explore related switching game problems, even though they may not excel in coding theory and related mathematical disciplines.
In this paper, we investigated the properties of two-state Alien Tiles, including the solvability, invariants, etc., based on different settings. We also discussed the efficient methods to deal with coding-theoretical problems for the game, such as the Hamming weight, the coset leader, and the covering radius. We also demonstrated how to apply the game as an error-correcting code with a natural way to encode and decode and verified that particular game sizes can form optimal linear codes. An open problem that remains in this paper is the existence of an efficient method to find out the minimal number of moves (the shortest solution) in solving a solvable game. Lastly, we discussed the states decomposition method and left an open problem on the issue of kernel selection.
One future direction is to study the coding-theoretical problems on the ordinary Lights Out and also its other variants. Different variants possess different structures and have to be explored in depth. In particular, some variants may have easy solutions, e.g., the two-state Alien Tiles that we have discussed in this paper; while some of them may not, e.g., the Gale–Berlekamp switching game. In addition to constructing easy solutions, proving how hard a game problem is will also be a potential research direction.
Another future direction is to investigate more structural properties of the ordinary four-state Alien Tiles. In the four-state version, there is a subtle issue when we model the problem via linear algebra. Take the odd-by-odd games as an example: we can click all the lights twice in (a) two arbitrary distinct columns, (b) two arbitrary distinct rows, or (c) an arbitrary column followed by an arbitrary row, and keep an arbitrary game unchanged. The combination of these actions forms a kernel. As each light must be clicked an even number of times, the kernel is a free module rather than a vector space over $\mathbb{F}_4$, i.e., it is a module that has a basis. This is because multiplication in this finite field does not have the same effect as integer multiplication followed by reduction modulo 4. In particular, applying two double clicks is equivalent to not clicking at all; however, $2 \times 2 \neq 0$ in $\mathbb{F}_4$.
On top of this, we have observed that the states decomposition has an issue in finding a proper element in the kernel that leads to a solvable game in the lower state. However, this also means that we have reduced the search space by eliminating those elements in the kernel that lead to an unsolvable game in the lower state. A future direction is to investigate whether this decomposition can help in solving constraint programming problems like the one described in [27]. This problem is a recreational mathematics problem on Alien Tiles, which aims to find the solvable game that has the longest "shortest solution". Existing approaches reduce the search space mainly by symmetry breaking [28,29,30]. The use of our states decomposition together with symmetry breaking to obtain the desired solvable games (which may not be unique) is a potential research direction.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math10162994/s1, a brief introduction to algorithmic complexity and coding theory. References [7,8,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52] are cited in the Supplementary Materials.

Author Contributions

Conceptualization, H.H.F.Y.; methodology, H.H.F.Y., K.H.N. and S.K.M.; validation, H.H.F.Y., H.W.H.W. and H.W.L.M.; formal analysis, H.H.F.Y., K.H.N. and S.K.M.; investigation, H.H.F.Y., K.H.N. and S.K.M.; writing—original draft preparation, H.H.F.Y.; writing—review and editing, H.W.H.W. and H.W.L.M.; visualization, H.H.F.Y.; project administration, H.H.F.Y.; funding acquisition, H.H.F.Y., K.H.N. and H.W.L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Most technical details in this paper were included in the final year project of Hoover H. F. Yin for his B. Eng. degree in 2014 when he was with both the Department of Information Engineering and the Department of Mathematics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong. We thank Raymond W. Yeung for allowing Hoover H. F. Yin to explore freely in his final year project. We also thank Lap Chi Lau and Javad B. Ebrahimi for their discussion on the Gale-Berlekamp switching game. Lastly, we thank Linqi Guo for his insightful discussion.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
P: polynomial time
NP: non-deterministic polynomial time
XOR: exclusive OR

Appendix A. Proof of Theorem 2

This appendix outlines the proof of Theorem 2 for even-by-odd games, which states that r ˜ is a surjective function that maps a game to its syndrome. In the following, the game space G consists of all m × n games, where m is even and n is odd.
Recall that $\tilde{r}$ is an invariant function in the case of even-by-odd games, i.e., for any move sequence $k \in K$, we have $\tilde{r}(g) = \tilde{r}(g + \psi(k))$ for all $g \in G$. Furthermore, for any game $g \in C$ with $C \in G/S$, we have $g + \psi(k) \in C$.
If $\tilde{r}$ is surjective, we know that $[G : S] \ge 2^{m-1}$. Suppose we have a set M that consists of $2^{m-1}$ games. If, for every game $g \in G$, there is a move sequence $k \in K$ such that $g + \psi(k) \in M$, then we know that $[G : S] \le 2^{m-1}$. That is, if both of the above conditions are true, then we have $[G : S] = 2^{m-1}$. In addition, due to the property of a surjective map and the invariance properties, we know that $\tilde{r}$ gives a syndrome, and this completes the proof. In the remainder of this appendix, we prove the above two conditions.
First, we consider the following construction. Let $(\bar{r}_2, \bar{r}_3, \ldots, \bar{r}_m) \in \mathbb{F}_2^{(m-1) \times 1}$ be an arbitrary binary vector. We construct a game $\bar{g}$ with entries
$$\bar{g}_{ij} = \begin{cases} \bar{r}_i & \text{if } j = 1 \text{ and } i > 1, \\ 0 & \text{otherwise.} \end{cases}$$
That is,
$$\bar{g} = \begin{pmatrix} 0 & 0_{1,n-1} \\ \bar{r} & 0_{m-1,n-1} \end{pmatrix}, \quad \text{where } \bar{r} = (\bar{r}_2, \ldots, \bar{r}_m)^\top.$$
By the definition of r ˜ , we have r ˜ ( g ¯ ) = ( r ¯ 2 , r ¯ 3 , , r ¯ m ) . Thus, r ˜ is automatically surjective based on the construction.
For the second condition, we construct the set M, where $|M| = 2^{m-1}$, as follows:
$$M = \left\{ \begin{pmatrix} 0 & 0_{1,n-1} \\ v & 0_{m-1,n-1} \end{pmatrix} : v \in \mathbb{F}_2^{(m-1) \times 1} \right\}.$$
Consider an arbitrary game $g \in G$. Let $\bar{g}$ be the sub-game formed by removing the first column of g. Note that $\bar{g}$ is solvable because it is an even-by-even game. By applying the move sequence that solves $\bar{g}$ on g, we obtain a new game $\tilde{g}$, with the second to the n-th columns being all zero vectors. As a result, if $\tilde{g}_{11} = 0$, then $\tilde{g} \in M$.
Suppose $\tilde{g}_{11} = 1$; then we apply the move sequence that flips the first column (i.e., we click every light in the first column and the first row once, except the $(1,1)$-st light) of $\tilde{g}$. As a result, the $(1,1)$-st light is switched off. That is, after the above moves, the game is in M. This completes the proof of the second condition, and thus the entire proof of Theorem 2 is completed as well.

Appendix B. Proof of Theorem 4

This appendix outlines the proof of Theorem 4 for odd-by-odd games, which states that inv is a surjective function that maps g to its syndrome. Here, the game space G consists of all m × n games, where both m and n are odd.
The idea of the proof is the same as that for even-by-odd games in Appendix A, but the construction is a bit different. We first recall the following properties. For any game $g \in G$ and any move sequence $k \in K$, we have $\mathrm{inv}(g) = \mathrm{inv}(g + \psi(k))$. Hence, the game g must fall into one of the cosets in G/S. If $g \in C$, where $C \in G/S$, then $g + \psi(k) \in C$.
If inv is surjective, then we have $[G : S] \ge 2^{m+n-2}$. Suppose we have a set M that consists of $2^{m+n-2}$ games. If, for every game $g \in G$, there is a move sequence $k \in K$ such that $g + \psi(k) \in M$, then we have $[G : S] \le 2^{m+n-2}$. When the above two conditions are both true, we have $[G : S] = 2^{m+n-2}$. Due to the properties of a surjective map and invariance, we conclude that inv gives a syndrome, and the proof is completed. Similarly, we now proceed to the proof of the above two conditions.
Let $\tilde{R} = (\tilde{r}_2, \tilde{r}_3, \ldots, \tilde{r}_m) \in \mathbb{F}_2^{(m-1) \times 1}$ and $\tilde{C} = (\tilde{c}_2, \tilde{c}_3, \ldots, \tilde{c}_n) \in \mathbb{F}_2^{1 \times (n-1)}$ be two arbitrary binary vectors. We construct two games $A, B \in G$ with entries
$$A_{ij} = \begin{cases} \tilde{r}_i + \sum_{k=2}^{m} \tilde{r}_k & \text{if } j = 1 \text{ and } i > 1, \\ \sum_{k=2}^{m} \tilde{r}_k & \text{if } j = 1 \text{ and } i = 1, \\ 0 & \text{otherwise,} \end{cases} \qquad B_{ij} = \begin{cases} \tilde{c}_j + \sum_{k=2}^{n} \tilde{c}_k & \text{if } j > 1 \text{ and } i = 1, \\ \sum_{k=2}^{n} \tilde{c}_k & \text{if } j = 1 \text{ and } i = 1, \\ 0 & \text{otherwise.} \end{cases}$$
The summation in the definition of $A_{ij}$ means that if the number of 1's in $\tilde{R}$ is odd, then we flip all the entries of the first column. Similarly, we flip the whole first row of B. Thus, the number of lit lights in the first column of A and the number of lit lights in the first row of B are both even. The invariants of A and B are then respectively given by $\mathrm{inv}(A) = (\tilde{R}, 0_{1,n-1})$ and $\mathrm{inv}(B) = (0_{m-1,1}, \tilde{C})$.
To merge these two invariants, consider the game $g = A + B \in G$. There are four cases.
Case I: A 11 = B 11 = 0 . We have inv ( g ) = ( R ˜ , C ˜ ) .
Case II: A 11 = B 11 = 1 . Note that g 11 = 0 , and the numbers of lit lights in the first row and the first column of g are both odd. Therefore, we have:
$$r(g) = (0, \tilde{r}_2, \tilde{r}_3, \ldots, \tilde{r}_m) + \mathbb{1}_{m,1} \quad \text{and} \quad c(g) = (0, \tilde{c}_2, \tilde{c}_3, \ldots, \tilde{c}_n) + \mathbb{1}_{1,n}.$$
As r 1 ( g ) = c 1 ( g ) = 1 , by the definition of r ˜ and c ˜ , we have inv ( g ) = ( R ˜ , C ˜ ) .
Case III: A 11 = 1 and B 11 = 0 . Note that g 11 = 1 , and the number of lit lights in the first column is even, but that in the first row is odd. Therefore, we have:
$$r(g) = (0, \tilde{r}_2, \tilde{r}_3, \ldots, \tilde{r}_m) + \mathbb{1}_{m,1} \quad \text{and} \quad c(g) = (0, \tilde{c}_2, \tilde{c}_3, \ldots, \tilde{c}_n).$$
As r 1 ( g ) = 1 and c 1 ( g ) = 0 , we have inv ( g ) = ( R ˜ , C ˜ ) from the definitions of r ˜ and c ˜ .
Case IV: A 11 = 0 and B 11 = 1 . Note that g 11 = 1 , which is precisely the symmetry case of Case III. The number of lit lights in the first column is odd but that in the first row is even. Therefore, we have:
$$r(g) = (0, \tilde{r}_2, \tilde{r}_3, \ldots, \tilde{r}_m) \quad \text{and} \quad c(g) = (0, \tilde{c}_2, \tilde{c}_3, \ldots, \tilde{c}_n) + \mathbb{1}_{1,n}.$$
As r 1 ( g ) = 0 and c 1 ( g ) = 1 , we have inv ( g ) = ( R ˜ , C ˜ ) from the definitions of r ˜ and c ˜ .
Combining all four cases above, for any $(\tilde{R}, \tilde{C}) \in \mathbb{F}_2^{(m-1) \times 1} \times \mathbb{F}_2^{1 \times (n-1)}$, there exists a game $g \in G$ such that $\mathrm{inv}(g) = (\tilde{R}, \tilde{C})$. This implies that inv is a surjective map.
We now proceed to prove the second condition. Construct the set M, where $|M| = 2^{m+n-2}$, by
$$M = \left\{ \begin{pmatrix} 0 & w \\ v & 0_{m-1,n-1} \end{pmatrix} : v \in \mathbb{F}_2^{(m-1) \times 1},\; w \in \mathbb{F}_2^{1 \times (n-1)} \right\}.$$
Consider an arbitrary game $g \in G$. By removing the first row and the first column of g, we obtain an even-by-even sub-game $\bar{g}$. Being an even-by-even game, $\bar{g}$ is a solvable game. By applying the move sequence that solves $\bar{g}$ on g, we obtain another game $\tilde{g}$ of the form
$$\tilde{g} = \begin{pmatrix} u & w \\ v & 0_{m-1,n-1} \end{pmatrix},$$
where $u \in \mathbb{F}_2$, $v \in \mathbb{F}_2^{(m-1) \times 1}$, and $w \in \mathbb{F}_2^{1 \times (n-1)}$.
If u = 0, then $\tilde{g} \in M$. If not, then we click the $(1,1)$-st light, so that the first row and the first column are flipped. The resultant game is then in M. Therefore, the proof of the second condition is done, and Theorem 4 is proved as well.

Appendix C. Proof of Theorem 6

We first show the optimality of the linear code formed by the 3 × 3 games. Note that the Hamming distance of this code is 3; therefore, we first apply the Hamming bound [52] and obtain
$$A_2(9, 3) \le \frac{2^9}{\binom{9}{0} + \binom{9}{1}} = 51.2.$$
As the number of codewords of a linear code (the size of a vector space) must be a power of the size of the field, we have
$$B_2(9, 3) \le 2^{\lfloor \log_2 51.2 \rfloor} = 32,$$
where $B_q(\ell, d)$ is the linear-codes-only version of $A_q(\ell, d)$. The size of the solvable game space, i.e., the number of codewords, is $|S| = 2^{mn-m-n+2} = 2^5 = 32$, which means that the upper bound on $B_2(9, 3)$ is attained. This implies that the error-correcting code formed by the 3 × 3 games is an optimal linear code.
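This bound computation is easy to check numerically, e.g.:

```python
from math import comb, floor, log2

hamming_bound = 2**9 / (comb(9, 0) + comb(9, 1))   # 51.2
linear_bound = 2**floor(log2(hamming_bound))       # largest power of 2 not exceeding the bound
assert linear_bound == 32 == 2**5                  # attained by |S| = 2^5
```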
For games of larger sizes, i.e., $m, n \ge 3$ but not both m = n = 3, the Hamming distance is 4, and the number of codewords is $|S| = 2^{mn-m-n+2}$. We can show that this code is not optimal by exhibiting a linear code that has more than $|S|$ codewords, where the codeword length is mn and the code distance is 4. Such a linear code can be obtained by shortening an extended (binary) Hamming code.
Recall that an extended Hamming code is a linear code of codeword length $2^r$, dimension $2^r - r - 1$, and code distance 4, where $r \ge 2$ is an integer. Our goal is to show that there exist integers $r \ge 2$ and $t \ge 0$ such that $2^r - t = mn$ and $2^r - r - 1 - t > mn - m - n + 2$ are satisfied.
We proceed as follows: first, we show that there exist integers $r \ge 2$ and $t \ge 0$ satisfying $2^r - t = mn$ and $2^r - r - 1 - t = mn - m - n + 2$. In other words, we want to show that we can shorten an extended Hamming code to obtain a linear code that has the same set of parameters as that of the code formed by the game. To satisfy both conditions, we need $2^r - mn = 2^r - r - 1 - mn + m + n - 2$, which gives $r = m + n - 3 > 2$. Furthermore, $t = 2^{m+n-3} - mn$. We now show that $t \ge 0$ by induction.
Due to symmetry, we only consider the parameter m in the induction process. The base case is either m = 3, n = 5 or m = 5, n = 3. That is, $t = 2^5 - 15 = 17 \ge 0$. Assume that $2^{m+n-3} - mn \ge 0$ for some m and n. For the case of m + 2, we have
$$2^{(m+2)+n-3} - (m+2)n = (2^{m+n-3} - mn) + (2^{m+n-3} - 2n) + 2^{m+n-2} > 2(2^{m+n-3} - mn) + 2^{m+n-2} \ge 0.$$
Therefore, the propositions $r = m + n - 3 > 2$ and $t = 2^{m+n-3} - mn \ge 0$ are verified by induction.
Next, we show that there exist integers $2 \le r' < m + n - 3$ and $t' \ge 0$ satisfying $2^{r'} - t' = mn$ and $2^{r'} - r' - 1 - t' > mn - m - n + 2$. In particular, we let $r' = r - 1 \ge 2$. Then,
$$mn = 2^r - t = 2^{r'} - (t - 2^{r'}) = 2^{r'} - t',$$
where $t' = t - 2^{r'} = 2^{m+n-4} - mn$. We adopt a similar induction approach to show that $t' \ge 0$. The base case is $t' = 2^4 - 15 = 1 \ge 0$. Applying the induction step to the case of m + 2, we have $2^{(m+2)+n-4} - (m+2)n > 2(2^{m+n-4} - mn) + 2^{m+n-3} \ge 0$. Now, consider
$$2^{r'} - r' - 1 - t' = (2^r - r - 1 - t) + 1 = mn - m - n + 3.$$
Therefore, we have $2^{r'} - r' - 1 - t' = mn - m - n + 3 > mn - m - n + 2$. This shows that we can shorten an extended Hamming code to obtain a linear code that possesses the same codeword length and code distance as the code formed by the game, but with more than $|S|$ codewords. In other words, the code formed by the game is not optimal.

References

  1. Kreh, M. “Lights Out” and Variants. Am. Math. Mon. 2017, 124, 937–950. [Google Scholar] [CrossRef]
  2. The Mathematics of Lights Out. Available online: https://www.jaapsch.net/puzzles/lomath.htm (accessed on 20 March 2022).
  3. Anderson, M.; Feil, T. Turning Lights Out with Linear Algebra. Math. Mag. 1998, 71, 300–303. [Google Scholar] [CrossRef]
  4. Rhoads, G.C. A Group Theoretic Solution to the Alien Tiles Puzzle. J. Recreat. Math. 2007, 36, 92–103. [Google Scholar]
  5. Goldwasser, J.; Klostermeyer, W.; Trapp, G. Characterizing Switch-Setting Problems. Linear Multilinear Algebra 1997, 43, 121–135. [Google Scholar] [CrossRef]
  6. Sutner, K. The σ-Game and Cellular Automata. Am. Math. Mon. 1990, 97, 24–34. [Google Scholar]
  7. Berlekamp, E.; McEliece, R.; van Tilborg, H. On the Inherent Intractability of Certain Coding Problems. IEEE Trans. Inf. Theory 1978, 24, 384–386. [Google Scholar] [CrossRef]
  8. Barg, S. Some New NP-Complete Coding Problems. Probl. Peredachi Informatsii 1994, 30, 23–28. [Google Scholar]
  9. Stern, J. Approximating the Number of Error Locations within a Constant Ratio is NP-Complete. In Proceedings of the 10th International Symposium on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, San Juan de Puerto Rico, Puerto Rico, 10–14 May 1993; pp. 325–331. [Google Scholar]
  10. Arora, S.; Babai, L.; Stern, J.; Sweedyk, Z. The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations. J. Comput. Syst. Sci. 1997, 54, 317–331. [Google Scholar] [CrossRef]
  11. Roth, R.M.; Viswanathan, K. On the Hardness of Decoding the Gale-Berlekamp Code. IEEE Trans. Inf. Theory 2008, 54, 1050–1060. [Google Scholar] [CrossRef]
  12. Karpinski, M.; Schudy, W. Linear Time Approximation Schemes for the Gale-Berlekamp Game and Related Minimization Problems. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, Bethesda, MD, USA, 31 May–2 June 2009; pp. 313–322. [Google Scholar]
  13. Carlson, J.; Stolarski, D. The Correct Solution to Berlekamp’s Switching Game. Discret. Math. 2004, 287, 145–150. [Google Scholar] [CrossRef]
  14. Schauz, U. Colorings and Nowhere-Zero Flows of Graphs in Terms of Berlekamp’s Switching Game. Electron. J. Comb. 2011, 18. [Google Scholar] [CrossRef]
  15. Brualdi, R.A.; Meyer, S.A. A Gale-Berlekamp Permutation-Switching Problem. Eur. J. Comb. 2015, 44, 43–56. [Google Scholar] [CrossRef]
  16. Vardy, A. Algorithmic Complexity in Coding Theory and the Minimum Distance Problem. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, TX, USA, 4–6 May 1997; pp. 92–109. [Google Scholar]
  17. Pickover, C.; Mckechnie, C. Alien Tiles Official Web Page. Available online: http://www.alientiles.com/ (accessed on 31 March 2022).
  18. Lights Out Variant: Flipping the Whole Row and Column. Available online: https://math.stackexchange.com/questions/441571/lights-out-variant-flipping-the-whole-row-and-column (accessed on 7 May 2021).
  19. Strategy for Modified Lights Out. Available online: https://puzzling.stackexchange.com/questions/58374/strategy-for-modified-lights-out (accessed on 26 September 2020).
  20. The Pilots Brothers’ Refrigerator. Available online: http://poj.org/problem?id=2965 (accessed on 25 September 2021).
  21. Maier, P.; Nickel, W. Attainable Patterns in Alien Tiles. Am. Math. Mon. 2007, 114, 1–13. [Google Scholar] [CrossRef]
  22. Torrence, B. The Easiest Lights Out Games. Coll. Math. J. 2011, 42, 361–372. [Google Scholar] [CrossRef]
  23. Characterize the Nullspace of a Given Matrix in F2n×n. Available online: https://math.stackexchange.com/questions/1056944/characterize-the-nullspace-of-a-given-matrix-in-mathbbf-2n-times-n (accessed on 7 May 2021).
  24. Minimal Number of Moves Needed to Solve a “Lights Out” Variant. Available online: https://math.stackexchange.com/questions/1052609/minimal-number-of-moves-needed-to-solve-a-lights-out-variant (accessed on 7 May 2021).
  25. How Can I Further Optimize This Solver of a Variant of “Lights Out”? Available online: https://stackoverflow.com/questions/27436275/how-can-i-further-optimize-this-solver-of-a-variant-of-lights-out (accessed on 6 May 2021).
  26. Singleton, R. Maximum Distance q-nary Codes. IEEE Trans. Inf. Theory 1964, 10, 116–118. [Google Scholar] [CrossRef]
  27. 027: Alien Tiles Problem. Available online: https://www.csplib.org/Problems/prob027/ (accessed on 10 December 2021).
  28. Gent, I.; Linton, S.; Smith, B. Symmetry Breaking in the Alien Tiles Puzzle; Technical Report APES-22-2000; APES Research Group: Glasgow, UK, 2000. [Google Scholar]
  29. Gent, I.P.; Harvey, W.; Kelsey, T. Groups and Constraints: Symmetry Breaking during Search. In Proceedings of the International Conference on Principles and Practice of Constraint Programming, Ithaca, NY, USA, 9–13 September 2002; pp. 415–430. [Google Scholar]
  30. McDonald, I.; Smith, B. Partial Symmetry Breaking. In Proceedings of the International Conference on Principles and Practice of Constraint Programming, Ithaca, NY, USA, 9–13 September 2002; pp. 431–445. [Google Scholar]
  31. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 3rd ed.; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  32. Wegener, I. Complexity Theory: Exploring the Limits of Efficient Algorithms; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  33. Jaffe, A.M. The Millennium Grand Challenge in Mathematics. Not. AMS 2006, 53, 652–660. [Google Scholar]
  34. Cook, S. The Importance of the P versus NP Question. J. ACM 2003, 50, 27–29. [Google Scholar] [CrossRef]
  35. Fortnow, L. The Golden Ticket: P, NP, and the Search for the Impossible; Princeton University Press: Princeton, NJ, USA, 2013. [Google Scholar]
  36. Gasarch, W.I. Guest Column: The P=?NP Poll. ACM SIGACT News 2002, 33, 34–47. [Google Scholar]
  37. Gasarch, W.I. Guest Column: The Second P=?NP Poll. ACM SIGACT News 2012, 43, 53–77. [Google Scholar] [CrossRef]
  38. Gasarch, W.I. Guest Column: The Third P=?NP Poll. ACM SIGACT News 2019, 50, 38–59. [Google Scholar] [CrossRef]
  39. Huffman, W.C.; Pless, V. Fundamentals of Error-Correcting Codes; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  40. Ling, S.; Xing, C. Coding Theory: A First Course; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  41. Reed, I.S.; Solomon, G. Polynomial Codes over Certain Finite Fields. J. Soc. Ind. Appl. Math. 1960, 8, 300–304. [Google Scholar] [CrossRef]
  42. Cohen, G.; Honkala, I.; Litsyn, S.; Lobstein, A. Covering Codes; Elsevier: Amsterdam, The Netherlands, 1997. [Google Scholar]
  43. Hämäläinen, H.; Honkala, I.; Litsyn, S.; Östergård, P. Football Pools—A Game for Mathematicians. Am. Math. Mon. 1995, 102, 579–588. [Google Scholar] [CrossRef]
  44. Plotkin, M. Binary Codes with Specified Minimum Distance. IRE Trans. Inf. Theory 1960, 6, 445–450. [Google Scholar] [CrossRef]
  45. Gilbert, E.N. A Comparison of Signalling Alphabets. Bell Syst. Tech. J. 1952, 31, 504–522. [Google Scholar] [CrossRef]
  46. Varshamov, R.R. Estimate of the Number of Signals in Error Correcting Codes. Docklady Akademii Nauk SSSR 1957, 117, 739–741. [Google Scholar]
  47. Delsarte, P. An Algebraic Approach to the Association Schemes of Coding Theory. Ph.D. Thesis, Université Catholique de Louvain, Ottignies-Louvain-la-Neuve, Belgium, 1973. [Google Scholar]
  48. Schrijver, A. New Code Upper Bounds from the Terwilliger Algebra and Semidefinite Programming. IEEE Trans. Inf. Theory 2005, 51, 2859–2866. [Google Scholar] [CrossRef]
  49. Tietäväinen, A. On the Nonexistence of Perfect Codes over Finite Fields. SIAM J. Appl. Math. 1973, 24, 88–96. [Google Scholar] [CrossRef]
  50. Laeser, R.P.; McLaughlin, W.I.; Wolff, D.M. Engineering Voyager 2’s Encounter with Uranus. Sci. Am. 1986, 255, 36–45. [Google Scholar] [CrossRef]
  51. Forney, G. Generalized Minimum Distance Decoding. IEEE Trans. Inf. Theory 1966, 12, 125–131. [Google Scholar] [CrossRef]
  52. Hamming, R.W. Error Detecting and Error Correcting Codes. Bell Syst. Tech. J. 1950, 29, 147–160. [Google Scholar] [CrossRef]
Figure 1. Example moves of a 5 × 5 Gale–Berlekamp switching game.
Figure 2. Example moves of a 5 × 5 Lights Out game.
Figure 3. Example moves of a 5 × 5 two-state Alien Tiles game.
Figure 4. An example of the matrix model.
Figure 5. Two examples of mapping a sequence of moves in the move space K to a game in the game space G via the homomorphism ψ for two-state Alien Tiles.
Figure 6. An example of an easy two-state Alien Tiles game.
Figure 7. The row parity vector is flipped after performing a move.
Figure 8. Both the row and column parity vectors are flipped after performing a move.
Figure 9. An example of constructing a game from the parity pairs. (a) Bit-matching phase. (b) Bit-pairing phase. (c) Constructed game.
Table 1. The condition to flip all the bits in the parity pair (r, c) for encoding a message to an odd-by-odd game, where $m, n \ge 3$.

$\|r\|_1 \bmod 2$ (or $\|c\|_1 \bmod 2$) | $b_{mn-m-n+2}$ | Flip?
0 | 0 | No
0 | 1 | Yes
1 | 0 | Yes
1 | 1 | No