A Method of Constructing Measurement Matrix for Compressed Sensing by Chebyshev Chaotic Sequence

In this paper, the problem of constructing the measurement matrix in compressed sensing is addressed. In compressed sensing, a measurement matrix that offers good performance and easy hardware implementation is of particular interest. It has recently been shown that measurement matrices constructed from Logistic or Tent chaotic sequences satisfy the restricted isometry property (RIP) with a certain probability and are easy to implement in physical electric circuits. However, obtaining uncorrelated samples from these sequences requires a large sample distance, which implies large resource consumption. To solve this problem, we propose a method of constructing the measurement matrix from the Chebyshev chaotic sequence. The method effectively reduces the sample distance, and the proposed measurement matrix is proved to satisfy the RIP with high probability under the assumption that the sampled elements are statistically independent. Simulation results show that the proposed measurement matrix has reconstruction performance comparable to that of existing chaotic matrices for compressed sensing.


Introduction
Compressed sensing (CS) [1] samples the signal at a rate far lower than the Nyquist rate by utilizing the signal's sparsity. The original signal can be reconstructed from the sampled data by adopting corresponding reconstruction methods. As a new way of signal processing, CS reduces the amount of sampled data and the space of storage, which would significantly decrease the hardware complexity. CS has a broad application prospect in medical imaging [2], wideband spectrum sensing [3], dynamic mode decomposition [4], etc.
CS projects a high-dimensional K-sparse signal x ∈ R^{N×1} into a low-dimensional space through the measurement matrix Φ ∈ R^{M×N} (M < N) and obtains a set of incomplete measurements y ∈ R^{M×1} obeying the linear model

y = Φx. (1)

Equation (1) is an underdetermined equation which usually has an infinite number of solutions. However, with prior information on the signal sparsity and a condition imposed on Φ, x can be exactly reconstructed by solving the l_1 minimization problem [5]. The most commonly used condition on Φ is the restricted isometry property (RIP) [6].

Definition 1 (RIP). For any K-sparse vector x, if there is always a constant δ ∈ (0, 1) such that

(1 − δ)‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ)‖x‖_2^2, (2)

then Φ is said to satisfy the K-order RIP with δ.
The minimum of all constants satisfying inequality (2) is referred to as the restricted isometry constant (RIC) δ_K. Candès and Tao showed that exact reconstruction can be achieved from such measurements provided Φ satisfies the RIP.
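Since certifying the RIP exactly is intractable, a common sanity check is a Monte Carlo probe of the ratio ‖Φx‖_2^2/‖x‖_2^2 over random K-sparse vectors. The sketch below, which uses a Gaussian matrix and illustrative dimensions M = 40, N = 120, K = 8 (our choices, not prescribed here), can only provide a lower bound on the RIC, never a certificate:

```python
# Hedged sketch: empirically probing the RIP ratio ||Phi x||^2 / ||x||^2
# for random K-sparse vectors. A passing check is only evidence; certifying
# the RIP requires examining all C(N, K) supports, which is NP-hard.
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 40, 120, 8                       # illustrative dimensions
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian matrix as a stand-in

ratios = []
for _ in range(2000):
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)  # random K-sparse support
    x[support] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# delta must satisfy 1 - delta <= ratio <= 1 + delta for every K-sparse x,
# so the worst observed deviation is a lower bound on the RIC.
delta_est = max(abs(min(ratios) - 1), abs(max(ratios) - 1))
print(f"empirical RIP ratio range: [{min(ratios):.3f}, {max(ratios):.3f}]")
print(f"lower bound on the RIC:   {delta_est:.3f}")
```

The same probe can be run on any of the chaotic matrices discussed later by swapping in a different Φ.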
Constructing a proper measurement matrix is an essential issue in CS. The measurement matrix has significant influence not only on the reconstruction performance but also on the complexity of hardware implementation. Measurement matrices can be divided into random ones and deterministic ones. The Gaussian random matrix [7] and Bernoulli random matrix [8] are frequently used because they satisfy the RIP and are uncorrelated with most sparse domains. They guarantee good reconstruction performance but pose challenges for hardware, such as storage requirements and system design [9]. On the contrary, deterministic measurement matrices have the advantages not only of economical storage but also of convenience in engineering design. Nevertheless, commonly used deterministic measurement matrices such as the deterministic polynomial matrix [10] and the Fourier matrix [11] are correlated with certain sparse domains, which restricts their practical application.
To solve these problems, the Logistic chaotic sequence is employed in constructing the measurement matrix—called the Logistic chaotic measurement matrix—in [12]. With the innate pseudo-random property of a chaotic system, the Logistic chaotic measurement matrix possesses the advantages of random matrices and overcomes the shortcoming of the above deterministic matrices. This kind of matrix is respectively used for secure data collection in wireless sensor networks in [13] and speech signal compression in [14]. In [15], the Tent chaotic sequence is employed in constructing the measurement matrix. In [16], Zhou and Jing construct the measurement matrix with a composite chaotic sequence generated by combining Logistic and Tent chaos. Compared with the Gaussian random matrix, the chaotic measurement matrices have lower hardware complexity and better reconstruction performance. However, large sample distances (at least 15 [17,18]) are required to obtain uncorrelated samples from the above chaotic sequences in chaotic measurement matrix construction. A large amount of useless data is generated when the measurement matrix is large-scale, which wastes system resources. In [19], the Chebyshev chaotic sequence is transformed into a new sequence of elements obeying a Gaussian distribution. The new sequence is employed in constructing a measurement matrix that satisfies the RIP with high probability. This method avoids sampling the chaotic sequence, but the resulting measurement matrix does not significantly improve the reconstruction performance.
In this paper, we propose a method of constructing the measurement matrix by the Chebyshev chaotic sequence. The primary contributions are twofold:

•
We analyze the high-order correlations among the elements sampled from the Chebyshev chaotic sequence.

•
We use the sampled elements to construct a measurement matrix, termed the Chebyshev chaotic measurement matrix. Based on the assumption that the elements are statistically independent, we prove that the Chebyshev chaotic measurement matrix satisfies the RIP with high probability.
The remainder of this paper is organized as follows. In Section 2, we describe the expression of the Chebyshev chaotic sequence and analyze the high-order correlations among elements sampled from the Chebyshev chaotic sequence. In Section 3, we present the construction method of the Chebyshev chaotic measurement matrix and analyze its probability of satisfying the RIP. In Section 4, simulations are carried out to verify the effectiveness of the Chebyshev chaotic matrix. In the end, the conclusion is drawn.

Chebyshev Chaotic Sequence
Chaotic systems generate deterministic sequences by recursive methods. Such a sequence naturally enjoys properties that greatly resemble randomness. Since the sequence is reproducible and passes tests of randomness, it is often used to generate pseudo-random numbers [20]. Chebyshev chaos is a typical nonlinear dynamic chaos. Its one-dimensional expression is written as

x_{n+1} = cos(q · arccos(x_n)), x_n ∈ [−1, 1], (3)

where q (≥ 2) denotes the order number of Chebyshev chaos. With the initial value x_0, the Chebyshev chaotic sequence is produced by applying Equation (3) recursively. Owing to its excellent randomness, sensitivity to initial values, spatial ergodicity, and easy implementation in physical electric circuits [21], Chebyshev chaos is widely valued. Based on the fact that some regions of the state space are visited more frequently than others by the Chebyshev chaotic sequence, it is possible to associate an invariant probability density function, denoted ρ(x), with the chaotic attractor. ρ(x) is as follows [22]:

ρ(x) = 1/(π√(1 − x²)), x ∈ (−1, 1), (4)

whose t-th order moment is E[x^t] = C_t^{t/2}/2^t for even t and 0 for odd t, where C_t^{t/2} denotes the number of t/2-element subsets of a t-element set.
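The iteration in Equation (3) and the moment formula above can be checked numerically. The sketch below iterates the q = 8 map from an arbitrary initial value (x_0 = 0.3, our choice) and compares empirical even moments of the orbit against C_t^{t/2}/2^t:

```python
# Hedged sketch: iterating the Chebyshev map x_{n+1} = cos(q * arccos(x_n))
# and checking that long orbits follow the arcsine-type invariant density
# rho(x) = 1 / (pi * sqrt(1 - x^2)), whose even moments are C(t, t/2) / 2^t.
import numpy as np
from math import comb

def chebyshev_sequence(x0, q, n):
    """Generate n iterates of the q-order Chebyshev chaotic map."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = np.cos(q * np.arccos(x))
        seq[i] = x
    return seq

seq = chebyshev_sequence(x0=0.3, q=8, n=200_000)
for t in (2, 4, 6):
    empirical = np.mean(seq ** t)
    theoretical = comb(t, t // 2) / 2 ** t
    print(f"t={t}: empirical {empirical:.4f}, theoretical {theoretical:.4f}")
```

For t = 2, 4, 6 the theoretical values are 0.5, 0.375, and 0.3125, and a long orbit should reproduce them to a few decimal places.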

The Internal Randomness of Chebyshev Chaos
Internal randomness is one of the main characteristics of chaos. A chaotic system is stable on the whole, but it shows regional instability due to internal randomness. Regional instability is manifested in the sensitivity to initial conditions: the stronger the sensitivity, the stronger the internal randomness. The Lyapunov exponent [23] is a quantitative description of this sensitivity, characterizing the average rate of divergence between adjacent trajectories. The Lyapunov exponent of the one-dimensional discrete mapping system x_{n+1} = F(x_n) is written as

λ = lim_{n→∞} (1/n) Σ_{j=0}^{n−1} ln |F′(x_j)|, (5)

where F′(x_j) denotes the derivative of F evaluated at x_j. The trajectories gradually close up until reclosing when λ ≤ 0 and diverge when λ > 0. A system with λ > 0 is defined to be chaotic; the bigger the λ, the stronger the internal randomness. Setting x_0 = 0.5 and n = 10000, the change tendency of λ with the system parameters of the different chaotic maps is shown in Figure 1. The single control parameter is denoted as µ in Logistic chaos and p in Tent chaos. As can be seen from Figure 1, in Logistic chaos, λ reaches its maximum value of 0.69 when µ is 4; in Tent chaos, λ reaches its maximum value of 0.69 when p is 0.5; in Chebyshev chaos, λ is 0.69 when q = 2 and increases with q. Therefore, when q is greater than 2, the Lyapunov exponent of Chebyshev chaos is larger than those of Logistic and Tent chaos, which means stronger internal randomness. Considering that the hardware complexity increases exponentially with q, we set q = 8 so that Chebyshev chaos strikes a good balance between randomness and hardware implementation.
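The Lyapunov exponent of the Chebyshev map can be estimated directly from the sum in Equation (5); for the q-order map it should approach ln q (≈ 0.69 for q = 2, ≈ 2.08 for q = 8). A minimal sketch, using x_0 = 0.3 and a short burn-in (both our choices):

```python
# Hedged sketch: numerically estimating the Lyapunov exponent
# lambda = lim (1/n) * sum ln|F'(x_j)| for the Chebyshev map.
# For F(x) = cos(q * arccos(x)), F'(x) = q * sin(q * arccos(x)) / sqrt(1 - x^2),
# and the estimate should approach ln(q).
import numpy as np

def lyapunov_chebyshev(q, x0=0.3, n=20_000, burn_in=100):
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = np.cos(q * np.arccos(x))
    total = 0.0
    for _ in range(n):
        theta = np.arccos(x)
        total += np.log(abs(q * np.sin(q * theta) / np.sin(theta)))
        x = np.cos(q * theta)
    return total / n

for q in (2, 4, 8):
    print(f"q={q}: lambda = {lyapunov_chebyshev(q):.3f}, ln(q) = {np.log(q):.3f}")
```

The monotone growth of λ with q is what motivates choosing a larger order, subject to the hardware-cost trade-off noted above.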

Statistical Property and Sample Distance
Generally, the elements in the measurement matrix need to be independent of each other. However, the elements in chaotic sequences do not meet the requirements. Yu et al. [12] measured the independence between the elements sampled from the Logistic chaotic sequence through the high-order correlations. The elements are considered approximately independent if ideal high-order correlations are available. These "independent" elements are used to construct the chaotic measurement matrix and good reconstruction performance is obtained. To construct the measurement matrix with the Chebyshev chaotic sequence, the primary issue is how to get independent elements from the sequence. Inspired by [12], we measure the independence through the high-order correlations with a certain sample distance and have the following theorem.

Theorem 1. Denote X = {x_n, x_{n+1}, ..., x_{n+k}, ...} as the sequence generated by Equation (3) and the integer d as the sample distance. When q (≥ 2) is even, for arbitrary positive integers m_0, m_1 < q^d,

E[x_n^{m_0} x_{n+d}^{m_1}] = E[x_n^{m_0}] E[x_{n+d}^{m_1}]. (6)

Proof. See Appendix A.
With q = 8 and d = 5, Theorem 1 gives E[x_n^{m_0} x_{n+d}^{m_1}] = E[x_n^{m_0}] E[x_{n+d}^{m_1}] for all m_0, m_1 < 32768. In [12], the elements sampled from the Logistic chaotic sequence share the same high-order correlations with d = 15 and are considered approximately independent. In [17,18], independence test algorithms are applied to determine the sample distance of the Logistic and Tent chaotic sequences; the test procedures indicate that the sampled elements are statistically independent when d ≥ 15. Figures 2 and 3 illustrate the joint probability densities of x_n and x_{n+d}, denoted ρ(x_n, x_{n+d}), in the Logistic, Tent, and 8-order Chebyshev chaotic sequences when d takes the values 5 and 15. The smoother the surface of ρ(x_n, x_{n+d}), the more uniform the distribution of x_n and x_{n+d}, and the weaker the correlation between them. In Figure 2a,b, the surfaces of ρ(x_n, x_{n+d}) for Logistic and Tent have large fluctuations when d = 5, which means strong correlations between x_n and x_{n+d}. In Figure 2c,d, the surfaces for Logistic and Tent are relatively gentle, indicating weak correlations when d = 15. It can be seen intuitively from Figure 3 that the surface of ρ(x_n, x_{n+d}) for Chebyshev with d = 5 is almost the same as that in Figure 2c. These figures show that the elements sampled from the 8-order Chebyshev chaotic sequence with d = 5 share the same correlations as those sampled from the Logistic chaotic sequence with d = 15.
Therefore, to guarantee a very small correlation between the sampled elements, the sample distance required by the 8-order Chebyshev chaotic sequence is significantly smaller than those required by the Logistic and Tent chaotic sequences. According to Section 2.2, the internal randomness of 8-order Chebyshev chaos is stronger than that of Logistic and Tent chaos. The increase in internal randomness leads to a decrease in the correlation between x_n and x_{n+d}, and hence a decrease in the sample distance.
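The factorization of cross-moments claimed by Theorem 1 can be spot-checked empirically. The sketch below compares joint moments E[x_n^a · x_{n+d}^b] of the q = 8 orbit at distance d = 5 against the product of marginal moments, for a few small exponent pairs (our choices):

```python
# Hedged sketch: spot-checking low-order cross-moments of elements at
# distance d in the 8-order Chebyshev orbit against the factorized form
# E[x_n^a * x_{n+d}^b] = E[x_n^a] * E[x_{n+d}^b] suggested by Theorem 1.
import numpy as np

def chebyshev_sequence(x0, q, n):
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = np.cos(q * np.arccos(x))
        seq[i] = x
    return seq

q, d, n = 8, 5, 400_000
seq = chebyshev_sequence(0.3, q, n + d)
xn, xnd = seq[:n], seq[d:d + n]            # pairs at distance d
for a, b in ((1, 1), (2, 2), (2, 4)):
    joint = np.mean(xn ** a * xnd ** b)
    product = np.mean(xn ** a) * np.mean(xnd ** b)
    print(f"E[x^{a} y^{b}] = {joint:+.4f}  vs  E[x^{a}] E[y^{b}] = {product:+.4f}")
```

Agreement of the two columns up to Monte Carlo error is the numerical counterpart of the "gentle surface" behavior seen in Figure 3.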

Construction of Chebyshev Chaotic Measurement Matrix
Denote Z = {z 1 , z 2 · · · z n } as the sequence extracted from X with sample distance d. We use Z to construct a Chebyshev chaotic measurement matrix as shown in Algorithm 1.
√(2/M) is the normalization factor. Once the order number, initial value, and sample distance are set, Φ is determined.
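A minimal sketch of this construction follows: iterate the map, keep every d-th element, fill an M × N matrix with the MN kept samples, and scale by √(2/M). The row-major fill order is our assumption, since Algorithm 1 itself is not reproduced in full here:

```python
# Hedged sketch of the construction around Algorithm 1: sample the q-order
# Chebyshev sequence with distance d, fill an M x N matrix with the MN
# sampled elements, and scale by sqrt(2/M). The fill order (row-major) is
# an assumption; the exact ordering in Algorithm 1 may differ.
import numpy as np

def chebyshev_measurement_matrix(M, N, q=8, d=5, x0=0.3):
    samples = np.empty(M * N)
    x = x0
    for i in range(M * N):
        for _ in range(d):                 # keep only every d-th iterate
            x = np.cos(q * np.arccos(x))
        samples[i] = x
    return np.sqrt(2.0 / M) * samples.reshape(M, N)

Phi = chebyshev_measurement_matrix(M=40, N=120)
print(Phi.shape)  # (40, 120)
# With E[x^2] = 1/2, each column's squared norm has expectation
# M * (2/M) * (1/2) = 1, so column norms should concentrate near 1.
print(np.mean(np.linalg.norm(Phi, axis=0) ** 2))
```

The √(2/M) scaling is exactly what makes the squared column norms concentrate around 1, which is the normalization the RIP analysis below relies on.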

RIP Analysis
The RIP is a sufficient condition on the measurement matrix for the recovery guarantee. However, certifying the RIP is NP-hard, so we instead analyze the performance of the measurement matrix Φ by calculating the probability that Φ satisfies the RIP.
When d = 5, the adjacent elements of Z share the same high-order correlations as the elements sampled from the Logistic chaotic sequence with d = 15. In addition, the test procedures show that the sampled elements of the Logistic chaotic sequence are statistically independent when d = 15. Hence, we assume that the elements of Z are statistically independent. Based on this assumption, we calculate a lower bound on the probability that our proposed measurement matrix satisfies the RIP and show that this bound is close to 1 provided some parameters are suitably chosen. The main result is given in Theorem 2.
Theorem 2. The Chebyshev chaotic measurement matrix Φ ∈ R^{M×N} constructed by Algorithm 1 satisfies the K-order RIP with high probability provided K ≤ O(M/log(N/K)).
Before proving Theorem 2, we briefly summarize some useful notation. Denote (l_1, l_2, ..., l_K) as the positions of the non-zero elements in x and u = (u_1, u_2, ..., u_K)^T as the normalized vector composed of the non-zero elements of x. Φ_K is the submatrix of Φ that contains only the columns indexed by (l_1, l_2, ..., l_K). Rewrite Φ_K as Φ_K = √(1/M) (ϕ_1, ϕ_2, ..., ϕ_M)^T, where ϕ_m^T = (v_1, v_2, ..., v_K) denotes the vector formed by multiplying the elements positioned at (l_1, l_2, ..., l_K) in the m-th row of Φ by √2, and m = 1, 2, ..., M. Let s = Φ_K u, Q_m = ϕ_m^T u, and S = ‖s‖_2^2 = Σ_{m=1}^{M} Q_m^2 / M. Denote G_g = {|‖Φ_K u‖_2^2 − 1| ≥ δ} as one complementary event of the condition in inequality (2), where g = 1, 2, ..., C_N^K, and G = ∪_{g=1}^{C_N^K} G_g as the union of all possible complementary events. The idea of proving Theorem 2 is as follows: first, calculate the probability of G_g, denoted P(G_g); then, compute the probability of G, denoted P(G); finally, the probability of Φ satisfying the RIP is P_RIP = 1 − P(G).
From the definition of G_g, we have

P(G_g) = P(S ≥ 1 + δ) + P(S ≤ 1 − δ), (7)

where

P(S ≥ 1 + δ) = P(exp(αMS) ≥ exp(αM(1 + δ))) (8)

holds for any α > 0. According to the Markov inequality, the following inequality holds:

P(S ≥ 1 + δ) ≤ E[exp(αMS)] exp(−αM(1 + δ)) = E[exp(α Σ_{m=1}^{M} Q_m^2)] exp(−αM(1 + δ)). (9)

Noting that the elements of Φ_K are statistically independent of each other, it is evident that Q_l and Q_m are independent of each other for l ≠ m, l, m = 1, 2, ..., M. With E[exp(αQ_l^2)] = E[exp(αQ_m^2)], the inequality can be rewritten as

P(S ≥ 1 + δ) ≤ (E[exp(αQ_m^2)])^M exp(−αM(1 + δ)). (10)

In the same way, we have

P(S ≤ 1 − δ) ≤ (E[exp(−αQ_m^2)])^M exp(αM(1 − δ)). (11)

By expanding exp(−αQ_m^2) into a second-order Taylor polynomial with Lagrange remainder, exp(−αQ_m^2) = Σ_{t=0}^{2} (−αQ_m^2)^t / t! + R_2 holds, where R_2 is the Lagrange remainder. It is straightforward to verify that R_2 ≤ 0.
Accordingly, we have exp(−αQ_m^2) ≤ Σ_{t=0}^{2} (−αQ_m^2)^t / t! and

P(S ≤ 1 − δ) ≤ (E[Σ_{t=0}^{2} (−αQ_m^2)^t / t!])^M exp(αM(1 − δ)). (12)

To calculate the probabilities in inequalities (10) and (12), let us introduce some useful lemmas.

Lemma 1. Denote r_1, r_2 as i.i.d. random variables with the probability distribution in Equation (4). For any real numbers a, b, let c = (a^2 + b^2)/2; then for T ∼ N(0, 1) and all t ∈ N, we have

E[(a r_1 + b r_2)^{2t}] ≤ E[(√c T)^{2t}]. (13)

Proof. See Appendix B.
Recall that T ∼ N(0, 1). For arbitrary α ∈ [0, 1/2), the Taylor series expansion of E[exp(αT^2)] is

E[exp(αT^2)] = Σ_{t=0}^{∞} C_{2t}^{t} (α/2)^t = (1 − 2α)^{−1/2}. (14)

Applying Lemmas 2 and 3 gives that

E[exp(αQ_m^2)] ≤ E[exp(αT^2)] (15)

holds. Now, we calculate the probabilities in inequalities (10) and (12). Let α = δ/[2(1 + δ)]; we get

P(S ≥ 1 + δ) ≤ [(1 + δ) e^{−δ}]^{M/2}. (16)

As

(1 + δ) e^{−δ} ≤ exp(−δ^2/2 + δ^3/3), (17)

we therefore have

P(G_g) = P(S ≥ 1 + δ) + P(S ≤ 1 − δ) ≤ 2 exp(−M(δ^2/4 − δ^3/6)). (18)

According to Boole's inequality, we get

P(G) ≤ C_N^K · 2 exp(−M(δ^2/4 − δ^3/6)) ≤ 2 (eN/K)^K exp(−M(δ^2/4 − δ^3/6)). (19)

Let C_1 > 0 and C_1 M ≥ K log(N/K); we have

P(G) ≤ 2 exp(−M [δ^2/4 − δ^3/6 − C_1(1 + 1/log(N/K))]). (20)

Let C_2 = δ^2/4 − δ^3/6 − C_1(1 + 1/log(N/K)). Then, Φ satisfies the RIP with a probability of

P_RIP ≥ 1 − 2 exp(−C_2 M). (21)

Indeed, by choosing C_1 sufficiently small, we always have C_2 > 0 and a high P_RIP. For instance, with N = 512 and K = 5, the probability that Φ satisfies the K-order RIP with δ = 0.9 is no less than 95% when C_1 = 0.0589.
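The closing numerical example can be checked arithmetically. The sketch below uses the Chernoff-style constant C_2 = δ²/4 − δ³/6 − C_1(1 + 1/log(N/K)); this exact form is our reading of the proof chain and may differ in minor details from the paper's, but it reproduces the quoted ≈95% figure:

```python
# Hedged check of the numerical example: N = 512, K = 5, delta = 0.9,
# C1 = 0.0589, with M the smallest integer allowed by C1*M >= K*log(N/K).
# The constant C2 = delta^2/4 - delta^3/6 - C1*(1 + 1/log(N/K)) is our
# reconstruction of the proof's bookkeeping, not a quoted formula.
import math

N, K, delta, C1 = 512, 5, 0.9, 0.0589
logNK = math.log(N / K)
M = math.ceil(K * logNK / C1)              # smallest admissible M
c0 = delta ** 2 / 4 - delta ** 3 / 6       # Chernoff exponent constant
C2 = c0 - C1 * (1 + 1 / logNK)
P_RIP = 1 - 2 * math.exp(-C2 * M)
print(f"M = {M}, C2 = {C2:.4f}, P_RIP >= {P_RIP:.3f}")
```

With these numbers C_2 ≈ 0.0094 and the bound evaluates to roughly 0.95, matching the statement in the text.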

Results and Discussion
When a measurement matrix is applied in CS, it is always expected to lead to good reconstruction performance. In this section, we examine the performance of the Chebyshev chaotic measurement matrix and compare it with the Gaussian random matrix and the well-established similar matrices in [12,15,16,19] by presenting the empirical results obtained from a series of CS reconstruction simulations. For convenience, the matrices are denoted as Proposed, Gaussian, Logistic, Tent, Composite, and the matrix in [19]. The test includes the following steps: first, generate the synthetic sparse signals x and construct the measurement matrix Φ; then, obtain the measurement vector y by y = Φx; last, reconstruct the signals by approximating the solution of x_e = argmin ‖x‖_0 s.t. y = Φx.
The signal x adopted throughout this section contains only K non-zero entries, with length N = 120. The locations and amplitudes of the peaks are subject to a Gaussian distribution. The proposed measurement matrix is constructed by the method shown in Algorithm 1. The orthogonal matching pursuit (OMP) [24] algorithm is selected as the minimization approach, which iteratively builds up an approximation of x. The system parameters of Logistic and Tent chaos are 4 and 0.5, respectively, and the order of Chebyshev chaos is 8. The sample distance is set as d = 15 for Logistic, Tent, and Composite, and d = 5 for Proposed. Each experiment is performed over 1000 random sparse ensembles. The initial value x_0 is randomly set for each ensemble, and the performance is averaged over sequences with different initial values. Denote ε = ‖x_e − x‖_2/‖x‖_2 as the reconstruction error. The reconstruction is identified as a success, namely exact reconstruction, provided ε ≤ 10^{−6}. Denote P_suc as the percentage of successful reconstructions.
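The experimental pipeline just described can be sketched compactly. The loop below uses a Gaussian matrix as a stand-in for Φ and a plain OMP implementation (our own minimal version, not the exact routine from [24]); any of the chaotic matrices can be substituted without changing the rest:

```python
# Hedged sketch of the simulation pipeline: K-sparse signals of length
# N = 120, M = 40 measurements, recovery by a minimal OMP loop. A Gaussian
# matrix stands in for Phi; the chaotic matrices from the text drop in
# unchanged. Success = relative error below 1e-6, as in the text.
import numpy as np

rng = np.random.default_rng(7)

def omp(Phi, y, K):
    """Orthogonal matching pursuit: greedily select K atoms, refit by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

M, N, K, trials = 40, 120, 8, 200
successes = 0
for _ in range(trials):
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x_hat = omp(Phi, Phi @ x, K)
    successes += np.linalg.norm(x_hat - x) <= 1e-6 * np.linalg.norm(x)
P_suc = successes / trials
print(f"P_suc at M={M}, K={K}: {P_suc:.2f}")
```

The per-trial success indicator averaged over ensembles is exactly the P_suc statistic reported in the figures below.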

Case 1.
Comparison of the percentage of exact reconstruction, P_suc, in the noiseless case.
In this case, we conduct two separate CS experiments, first by fixing M = 40 and varying K from 2 to 14, and second by fixing K = 8 and varying M from 20 to 60. Figure 4 illustrates the change tendency of P_suc with K while M = 40 in the noiseless case. The figure shows that P_suc decreases as K increases. From inequalities (20) and (21), it can be seen that the upper bound of P(G) increases and the lower bound of P_RIP decreases when K increases. The probability of the measurement matrix satisfying the RIP is thus reduced, which works against exact reconstruction.

Figure 5 illustrates the change tendency of P_suc with M while K = 8 in the noiseless case. As can be seen from the figure, P_suc increases with M. According to inequalities (20) and (21), P(G) decreases and the lower bound of P_RIP increases when M increases. The increase in the probability of the measurement matrix satisfying the RIP is beneficial to exact reconstruction. Figures 4 and 5 reveal that the percentage of exact reconstruction of the proposed measurement matrix is significantly higher than those of Gaussian and the matrix in [19], slightly higher than that of Tent, and almost the same as those of Logistic and Composite.

Case 2. Comparison of the reconstruction error in the noisy case.
In this case, we consider the noisy model y = Φx + v, where v is a vector of additive Gaussian noise with zero mean. We conduct the CS experiment by fixing M = 40 and K = 8 and varying the signal-to-noise ratio (SNR) from 10 to 50. It can be seen from Figure 6 that the errors decrease as the SNR increases, and the error of the proposed measurement matrix is smaller than those of Gaussian and the matrix in [19], slightly smaller than that of Tent, and almost the same as those of Logistic and Composite. When noise is included in the measurements, the reconstruction errors increase greatly. In this case, reconstruction algorithms with anti-noise ability can be used, such as the adaptive iterative threshold (AIT) algorithm and the entropy minimization-based matching pursuit (EMP) algorithm, which can be found in [25,26] for details.

As shown in the simulations, when the Chebyshev chaotic matrix is applied in CS, good reconstruction performance is obtained. This coincides with the theoretical results of the previous subsection, where the Chebyshev chaotic matrix is shown to satisfy the RIP with high probability. Recall from Lemma 4 that E[exp(αQ_m^2)] ≤ E[exp(αT^2)], where T ∼ N(0, 1). It is clear that the maxima of P(S ≥ 1 + δ) and P(S ≤ 1 − δ) are attainable only if Q_m ∼ N(0, 1). That is to say, the lower bound on the probability that the proposed measurement matrix satisfies the RIP is no less than that of the Gaussian random matrix. As seen from the simulations, our proposed measurement matrix outperforms the Gaussian random matrix, so the results coincide with the theoretical analysis above. As mentioned in the Introduction, Zhu et al. [19] transformed the Chebyshev chaotic sequence into a new sequence of elements obeying a Gaussian distribution and applied the new sequence to construct the measurement matrix. In fact, that measurement matrix is similar to the Gaussian random matrix in terms of element distribution. This coincides with the simulations, in which the reconstruction performance of OMP with the matrix in [19] is similar to that of OMP with the Gaussian random matrix. Accordingly, our proposed measurement matrix outperforms the matrix in [19].
The simulations reveal that the proposed measurement matrix achieves the same reconstruction performance as Logistic, Tent, and Composite. In fact, these chaotic measurement matrices share a similar independence among elements and a similar probability of satisfying the RIP. To construct those measurement matrices, MN uncorrelated samples need to be extracted from the chaotic sequences at certain sample distances, so the length of the chaotic sequence is usually no less than MNd. The larger the sample distance, the longer the chaotic sequence, and the larger the resource consumption. Since the sample distance of the Chebyshev chaotic sequence is greatly reduced, the proposed measurement matrix effectively reduces the consumption of system resources.
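The resource argument above reduces to simple arithmetic. Taking the simulation dimensions M = 40 and N = 120 as an illustration, the MNd estimate gives:

```python
# Hedged back-of-the-envelope: the chaotic sequence must supply M*N samples
# taken every d iterations, so its length is roughly M*N*d. Reducing d from
# 15 to 5 cuts the generated (mostly discarded) iterates threefold.
M, N = 40, 120
for name, d in (("Logistic/Tent/Composite (d=15)", 15), ("Proposed Chebyshev (d=5)", 5)):
    print(f"{name}: sequence length = {M * N * d}")
```

At these dimensions the proposed matrix needs 24000 map iterations instead of 72000, and the gap grows linearly with the matrix size.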

Conclusions
In this paper, we propose a method of constructing a measurement matrix for compressed sensing from the Chebyshev chaotic sequence. We first show that the elements sampled from the Chebyshev chaotic sequence with a small sample distance have very small correlations. Then, we use these sampled elements to construct the measurement matrix and prove in detail that the matrix satisfies the RIP with high probability. With the innate pseudo-random property of a chaotic system, the proposed chaotic measurement matrix possesses the advantages of economical storage and convenience in engineering design. Moreover, the probability that the proposed measurement matrix satisfies the RIP is likely to be higher than that of the Gaussian random matrix, which results in better reconstruction performance; accordingly, the proposed measurement matrix outperforms the Gaussian random one. Compared with similar chaotic measurement matrices, the proposed measurement matrix effectively reduces the consumption of system resources without loss of reconstruction performance. Therefore, our method outperforms the existing approaches in terms of practical application.
Noting that j ∈ ((m_1 − 1)/2, (m_1 + 1)/2), the only choice of j is m_1/2, where m_1 must be even. Accordingly, h must be equal to m_0/2. However, h = m_0/2 is not an integer when m_0 is odd, which contradicts the requirement that h be an integer. Therefore, the hypothesis fails; that is, E[x_n^{m_0} x_{n+d}^{m_1}] = E[x_n^{m_0}] E[x_{n+d}^{m_1}]. In conclusion, the proof is completed.