Construction of Structured Random Measurement Matrices in Semi-Tensor Product Compressed Sensing Based on Combinatorial Designs

A random matrix requires large storage space and is difficult to implement in hardware, while a deterministic matrix suffers from large reconstruction error. To address these shortcomings, the objective of this paper is to find an effective method that balances these performances. Combining the advantages of the incidence matrices of combinatorial designs and of random matrices, this paper constructs a structured random matrix through the embedding operation of two seed matrices, one being the incidence matrix of a combinatorial design and the other obtained by Gram-Schmidt orthonormalization of a random matrix. Meanwhile, we provide a new model that applies the structured random matrices to semi-tensor product compressed sensing. Finally, experiments comparing the reconstruction performance of our matrices with that of several well-known matrices show that our matrices are more suitable for the reconstruction of one-dimensional signals and two-dimensional images.


Introduction
In the era of data explosion, with the ever-increasing amount of information, data acquisition, transmission, and storage devices face increasingly severe pressure. At the same time, data processing is accompanied by the risk of information disclosure. The loss of some data may threaten the safety of life and property, and data disclosure is now common. Therefore, in the era of big data, a new data processing technique is urgently needed to decrease the risk of data leakage during information processing and to relieve the pressure on hardware such as internal storage and sensors.
Compressed sensing (CS) theory can be used for signal acquisition, encoding, and decoding [1]. No matter the type of signal, a sparse or compressible representation always exists in the original domain or in some transform domain. During transmission, a number of linear projection values far lower than required by traditional Nyquist sampling can be used to achieve exact or high-probability reconstruction of the signal. For a discrete signal x ∈ R^n, the standard model of CS is

y = Φx,

where Φ ∈ R^{m×n} (m < n) is a measurement matrix, and y ∈ R^m is the corresponding measurement vector.
This shows that an n-dimensional vector x can be compressed into an m-dimensional vector y by CS, so the compression ratio θ can be expressed as θ = m/n. Given a measurement vector y, it is important to reconstruct x from the measurement matrix Φ; however, this problem is in general NP-hard [2]. If a signal x has at most k (k ≪ n) non-zero elements, then x is called k-sparse. Candès and Tao showed that if x is k-sparse and Φ satisfies the restricted isometry property (RIP), then x can be accurately reconstructed from y [3] by solving

min_{x ∈ R^n} ||x||_0  s.t.  y = Φx,

where ||x||_0 = |{i | x_i ≠ 0}|.
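The sampling step above can be sketched numerically. The following minimal example (sizes n = 256, m = 64, k = 8 are arbitrary choices of ours) builds a k-sparse signal and takes the linear projections y = Φx with a Gaussian measurement matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # ambient dimension, measurements, sparsity

# A k-sparse signal: k non-zero entries at random positions.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Gaussian measurement matrix and the linear projection y = Phi x.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

print(y.shape)                       # (64,): compression ratio theta = m/n = 0.25
```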
Another important criterion for measurement matrices in CS is coherence [4]. The coherence of Φ is defined as

µ(Φ) = max_{1 ≤ i < j ≤ n} |⟨Φ_i, Φ_j⟩| / (||Φ_i||_2 ||Φ_j||_2),

where ⟨Φ_i, Φ_j⟩ denotes the Hermitian inner product of the columns Φ_i and Φ_j. The coherence and the RIP of a matrix are related as follows: if Φ is a unit-norm matrix with µ = µ(Φ), then Φ satisfies the RIP of order k with δ_k ≤ µ(k − 1) for all k < 1/µ + 1. Furthermore, for an m × n matrix Φ, the coherence is bounded below by the Welch bound [5]

µ(Φ) ≥ sqrt((n − m) / (m(n − 1))).

A main problem in CS is to find deterministic constructions, based on coherence, that beat this square-root bound.
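As a quick numerical check of the bound above, the sketch below (sizes m = 16, n = 64 are arbitrary) estimates the coherence of a random unit-norm matrix and compares it with the Welch bound:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 16, 64

Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)        # normalize columns to unit norm

# Coherence: the largest |<Phi_i, Phi_j>| over distinct columns i != j.
G = np.abs(Phi.T @ Phi)
np.fill_diagonal(G, 0.0)
mu = G.max()

welch = np.sqrt((n - m) / (m * (n - 1)))
print(mu >= welch)                        # True: coherence never beats the Welch bound
```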
In CS theory, measurement matrices are vital both to guarantee the quality of signal sampling and to determine the difficulty of implementing compressed sampling in hardware. There are two main types of measurement matrices. The first is random matrices, including Gaussian matrices, Bernoulli matrices, partial Fourier matrices, and so on [6-11]. Although these matrices reconstruct the original signals well, they are hard to implement in hardware, and their elements require a lot of storage space. Some scholars have proposed using Toeplitz matrices as measurement matrices [12,13]; although Toeplitz matrices save some storage space, they are still difficult to implement in hardware. The second is deterministic matrices, which improve transmission efficiency and reduce storage space [14,15] but have large reconstruction errors. When constructing such matrices, once the system and construction parameters are fixed, the size and elements of the matrix are also fixed. DeVore used polynomials over the finite field F_p to construct measurement matrices in [16]. Li et al. gave a construction of sparse measurement matrices based on algebraic curves in [17]. The main tools for constructing deterministic measurement matrices are coding theory [18-22], geometry over finite fields [23-28], design theory [29-32], and so on.
Compared with CS, for signals of the same size, the advantage of semi-tensor product compressed sensing (STP-CS) is that the number of columns of the measurement matrix can be a factor of that required in CS, which greatly reduces the storage space of measurement matrices. Therefore, we are particularly interested in STP-CS. The main contribution of this paper is a construction of structured random matrices and their application to STP-CS. A structured random matrix is obtained by the embedding operation of two seed matrices, one deterministic and the other random. Once the system and construction parameters are fixed, the size of the matrix is determined, but its elements are arranged in a structured random manner. When transmitting or storing the matrix, only the system parameters, the construction parameters, and the random seed matrix need to be transmitted or stored, which improves transmission efficiency and reduces the storage scale relative to a fully random matrix. Compared with random matrices, structured random matrices overcome the disadvantage of large storage space and are relatively convenient for hardware implementation. Compared with deterministic matrices, structured random matrices have good reconstruction accuracy. Therefore, structured random matrices have great application value in the STP-CS model.
Aiming at the existing shortcomings (a random matrix needs large storage space and is difficult to implement in hardware, while a deterministic matrix has large reconstruction error), the objective of this paper is to find an effective method to balance these performances. The main contributions of our work are summarized as follows:
• A construction method for structured random matrices is given, in which one seed matrix is the incidence matrix of a combinatorial design and the other is obtained by Gram-Schmidt orthonormalization of a random matrix.
• An STP-CS model based on the structured random matrices is proposed.
• Experimental results indicate that our matrices are more suitable for the reconstruction of one-dimensional signals and two-dimensional images.
The differences between this paper and previous works [14,31] are as follows:
• The measurement matrices constructed in this paper are structured random matrices, while those constructed in [14,31] are deterministic matrices.
• This paper studies the STP-CS model, while [14] studies the block compressed sensing (BCS) model and [31] studies the CS model.
The remainder of the paper is organized as follows. Section 2 introduces some related background. Section 3 proposes a new model that applies the structured random matrices to STP-CS. Section 4 presents simulation experiments and analyzes and compares the performance of our matrices with several well-known matrices.

Projective Geometry
Let F_q be the finite field with q elements, where q is a prime power, and let F_q^{n+1} be the (n + 1)-dimensional row vector space over F_q, where n is a positive integer. The 1-dimensional, 2-dimensional, 3-dimensional, and n-dimensional vector subspaces of F_q^{n+1} are called points, lines, planes, and hyperplanes, respectively. In general, the (r + 1)-dimensional vector subspaces of F_q^{n+1} are called projective r-flats, or simply r-flats (0 ≤ r ≤ n). Thus, 0-flats, 1-flats, 2-flats, and (n − 1)-flats are points, lines, planes, and hyperplanes, respectively. If an r-flat, as a vector subspace, contains or is contained in an s-flat, then the r-flat is said to be incident with the s-flat. The set of points, i.e., the set of 1-dimensional vector subspaces of F_q^{n+1}, together with the r-flats (0 ≤ r ≤ n) and the incidence relation defined above, is called the n-dimensional projective space over F_q and is denoted by PG(n, F_q).

Balanced Incomplete Block Design
Definition 1. Let v, k, b, r, λ be positive integers with v ≥ k ≥ 2. Let X = {x_1, x_2, · · · , x_v} be a finite set and B = {B_1, B_2, · · · , B_b} a family of subsets of X, where x_1, · · · , x_v are called points and B_1, · · · , B_b are called blocks. The pair (X, B) is called a (v, k, λ)-balanced incomplete block design (BIBD) if
(1) each block contains exactly k (k < v) points;
(2) each point of X appears in exactly r blocks;
(3) each pair of distinct points is contained in exactly λ blocks.
If v = b (equivalently, r = k), then the design is symmetric. A symmetric BIBD is denoted by SBIBD for short.

Embedding Operation of Binary Matrix
Here, A is a d × n_1 matrix; in each row h_i of H, every element 1 is substituted by a distinct (previously unused) row of A, and every element 0 is substituted by the 1 × n_1 zero vector (0, 0, · · · , 0). The resulting matrix, written Φ = H ⋄ A, is an m × nn_1 matrix, where "⋄" denotes the embedding operation of the matrix A into the matrix H.
The specific process of the above embedding operation is shown in Figure 1.
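A minimal sketch of the embedding operation described above (the function name `embed` and the toy matrices are ours; the rule is that each 1 in a row of H is replaced by the next unused row of A and each 0 by a zero row):

```python
import numpy as np

def embed(H, A):
    """Embedding of A (d x n1) into the 0/1 matrix H (m x n): scanning each
    row of H left to right, every 1 becomes the next unused row of A and
    every 0 becomes the 1 x n1 zero vector, giving an m x (n*n1) matrix."""
    m, n = H.shape
    d, n1 = A.shape
    Phi = np.zeros((m, n * n1))
    for i in range(m):
        r = 0                              # index of the next unused row of A
        for j in range(n):
            if H[i, j] == 1:
                Phi[i, j * n1:(j + 1) * n1] = A[r]
                r += 1
    return Phi

# Toy example: H has two ones per row, A has two rows.
H = np.array([[1, 0, 1],
              [0, 1, 1]])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Phi = embed(H, A)                          # first row: [1, 2, 0, 0, 3, 4]
```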

Semi-Tensor Product Compressed Sensing
Definition 4. Let x be an np-dimensional row vector and y = [Y_1, · · · , Y_p]^T a p-dimensional column vector. Split x into p blocks x^1, · · · , x^p, each of size n. The semi-tensor product (STP) of x and y is defined as

x ⋉ y = Σ_{i=1}^{p} x^i Y_i ∈ R^{1×n}.

Definition 5. Let A ∈ R^{m×np} and B ∈ R^{p×q}; then the STP C = A ⋉ B consists of m × q blocks C = (C^{i,j}), where each block is

C^{i,j} = a^i ⋉ b_j,

with a^i the i-th row of A and b_j the j-th column of B.
For a signal x ∈ R^p and a measurement matrix Φ ∈ R^{m×n} (m < n) with n | p, the STP-CS model [36] is

y = Φ ⋉ x,

where y ∈ R^{mp/n}. Equivalently, the STP-CS model can be written using the Kronecker product as

y = (Φ ⊗ I_{p/n}) x,

where I_{p/n} is the (p/n) × (p/n) identity matrix, p/n is a positive integer, and "⊗" denotes the Kronecker product.
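The equivalence of the two formulations is easy to verify numerically. The sketch below (arbitrary sizes m = 3, n = 4, p = 8) also shows why STP-CS saves storage: only the small m × n factor Φ is kept, and a blockwise product reproduces (Φ ⊗ I_{p/n})x without ever forming the Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, t = 3, 4, 2                  # Phi is m x n; the signal length is p = n*t
Phi = rng.standard_normal((m, n))
x = rng.standard_normal(n * t)

# Kronecker formulation: y = (Phi kron I_{p/n}) x.
y1 = np.kron(Phi, np.eye(t)) @ x

# Blockwise (semi-tensor product) formulation: reshape x into n blocks of
# length t and multiply by the small matrix Phi directly.
y2 = (Phi @ x.reshape(n, t)).ravel()

print(np.allclose(y1, y2))         # True
```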

Construction of Structured Random Measurement Matrices in STP-CS
Compared with CS, for signals of the same size, the advantage of STP-CS is that the number of columns of the measurement matrix can be a factor of that required in CS, which greatly reduces the storage space of measurement matrices. Compared with general measurement matrices in STP-CS, the structured random matrices only require storing two seed matrices instead of the whole matrix. To sum up, structured random matrices need even less storage space in STP-CS. In this section, we give a new model that applies the structured random matrices to STP-CS.
3.1. Construction of a (q^2 + q + 1, q + 1, 1)-SBIBD

The 1-dimensional projective space over F_q has only q + 1 points, so it is of little interest here. Let us therefore start with the 2-dimensional projective plane PG(2, F_q). In PG(2, F_q), there are q^2 + q + 1 points and q^2 + q + 1 lines; every line contains q + 1 points and every point lies on q + 1 lines; any two distinct points are joined by exactly one line; and any two distinct lines intersect in exactly one point. It is easy to see that a finite projective plane of order q is a (q^2 + q + 1, q + 1, 1)-BIBD; since the number of points equals the number of lines, it is symmetric. Each block of this design is a line of the finite projective plane.
In the following, the relationship between some known projective planes and BIBD is shown in Table 1.
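For prime q, the incidence matrix of PG(2, q) can be generated directly from the definition: points are the normalized non-zero vectors of F_q^3, lines are indexed by the same representatives via duality, and a point lies on a line exactly when their dot product vanishes mod q. The helper below is our own illustrative sketch:

```python
import numpy as np
from itertools import product

def pg2_incidence(q):
    """Point-line incidence matrix of PG(2, q), q prime: a
    (q^2+q+1) x (q^2+q+1) 0/1 matrix with q+1 ones in every row."""
    pts = []
    for v in product(range(q), repeat=3):
        if v == (0, 0, 0):
            continue
        lead = next(c for c in v if c != 0)
        inv = pow(lead, -1, q)                     # inverse mod q (q prime)
        w = tuple(c * inv % q for c in v)          # scale: first non-zero entry = 1
        if w not in pts:
            pts.append(w)
    P = np.array(pts)
    # Point [a] lies on line [b] iff a . b = 0 (mod q).
    return (P @ P.T % q == 0).astype(int)

M = pg2_incidence(2)        # Fano plane, the (7, 3, 1)-SBIBD
print(M.sum(axis=1))        # q + 1 = 3 ones in every row
```

For a (v, k, 1)-SBIBD incidence matrix N, the identity N N^T = (k − 1)I + J (with J the all-ones matrix) confirms that any two distinct points lie on exactly one common line.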

Gram-Schmidt Orthonormalization
Let A = (a_1, a_2, · · · , a_{q+1}) be a random matrix, where a_i ∈ R^{q+1} denotes the i-th column of A, 1 ≤ i ≤ q + 1. To ensure that the random matrix A has small coherence, the columns of A are orthonormalized by the Gram-Schmidt process:

b_1 = a_1,  b_i = a_i − Σ_{j=1}^{i−1} (⟨a_i, b_j⟩ / ⟨b_j, b_j⟩) b_j  (2 ≤ i ≤ q + 1),
c_i = b_i / ||b_i||_2  (1 ≤ i ≤ q + 1).

In this way, we obtain a normalized orthogonal matrix C = (c_1, c_2, · · · , c_{q+1}) from the matrix A.

Remark 1. According to Definition 3, let Φ = M ⋄ C ∈ R^{(q^2+q+1)×(q^3+2q^2+2q+1)}; there are two cases:
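The Gram-Schmidt step can be sketched as follows (the function is our illustration; `np.linalg.qr` would give the same result up to column signs):

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt orthonormalization of the columns of A,
    which are assumed linearly independent (true with probability one
    for a Gaussian random matrix)."""
    C = np.zeros_like(A, dtype=float)
    for i in range(A.shape[1]):
        b = A[:, i].astype(float)
        for j in range(i):
            b -= (A[:, i] @ C[:, j]) * C[:, j]   # subtract projections on c_1..c_{i-1}
        C[:, i] = b / np.linalg.norm(b)
    return C

A = np.random.default_rng(3).standard_normal((9, 9))  # 9 = q + 1 for the (73, 9, 1)-SBIBD
C = gram_schmidt(A)
print(np.allclose(C.T @ C, np.eye(9)))                # True: C is orthonormal
```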

• If A is a deterministic matrix, then C is also deterministic, so Φ is a deterministic matrix;
• If A is a random matrix, then C is also random, so Φ is a structured random matrix.
There has been much research on deterministic matrices and on random matrices, but little on structured random matrices. Combining the advantages of random matrices and the incidence matrices of combinatorial designs, this paper constructs structured random measurement matrices and applies them in STP-CS.

Sampling Model
In the following, we consider Φ = M ⋄ C as a measurement matrix in STP-CS. Let p be a positive integer that is a multiple of q^3 + 2q^2 + 2q + 1. For a signal x ∈ R^p, a novel semi-tensor product compressed sensing model based on the embedding operation (STP-CS-EO) is given by

y = (M ⋄ C) ⋉ x,

so that y ∈ R^{p/(q+1)}, in accordance with Theorem 1, since q^3 + 2q^2 + 2q + 1 = (q + 1)(q^2 + q + 1).

Remark 2. Let x ∈ R^N be a discrete signal, where N is a positive integer. For y ∈ R^m, we compare CS, Kronecker product compressed sensing (KP-CS), block compressed sampling based on the embedding operation (BCS-EO), STP-CS, Kronecker product semi-tensor product compressed sensing (KP-STP-CS), and semi-tensor product compressed sensing based on the embedding operation (STP-CS-EO). Table 2 lists the storage space and sampling complexity of the measurement matrices corresponding to these six sampling models, where sampling complexity is defined as the number of multiplications between a matrix and a vector during sampling.
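Putting the pieces together, the following sketch (all names are ours, and the Kronecker form stands in for the semi-tensor product) instantiates the model for q = 2: M is the 7 × 7 incidence matrix of the Fano plane, C is a 3 × 3 orthonormalized Gaussian matrix, and the embedded matrix Φ is 7 × 21, so a signal of length p = 42 is compressed to length p/(q + 1) = 14:

```python
import numpy as np

# Fano plane incidence matrix: row i has ones at columns {i, i+1, i+3} mod 7
# ({0, 1, 3} is a planar difference set modulo 7).
M = np.array([[1 if j in {i % 7, (i + 1) % 7, (i + 3) % 7} else 0
               for j in range(7)] for i in range(7)])

# Orthonormal seed matrix C (Gram-Schmidt of a Gaussian matrix, via QR).
C, _ = np.linalg.qr(np.random.default_rng(4).standard_normal((3, 3)))

# Embedding: each 1 in a row of M takes the next unused row of C and each 0
# becomes a 1 x 3 zero block, so Phi is 7 x 21.
Phi = np.zeros((7, 21))
for i in range(7):
    r = 0
    for j in range(7):
        if M[i, j]:
            Phi[i, 3 * j:3 * j + 3] = C[r]
            r += 1

# STP-CS-EO sampling of x in R^p with 21 | p; here p = 42.
x = np.random.default_rng(5).standard_normal(42)
y = np.kron(Phi, np.eye(2)) @ x
print(y.shape)                     # (14,) = p / (q + 1)
```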
In the following, we calculate the coherence of the matrix M ⋄ C.

Experimental Simulation
In this section, our measurement matrices are compared with several well-known matrices. Simulation results show that our matrices provide an effective signal processing method.

Reconstruction of 1-Dimensional Signals
Let x be a signal. We select the orthogonal matching pursuit (OMP) [37] algorithm and the basis pursuit (BP) [38] algorithm to solve the l_1-minimization problem, whose solution is denoted by x̂. The reconstruction signal-to-noise ratio (SNR) of x is defined as

SNR(x) = 20 log_10 (||x||_2 / ||x − x̂||_2).

For noiseless recovery, if SNR(x) ≥ 100 dB, then the signal x is said to be perfectly recovered. For every sparsity order, we reconstruct 1000 noiseless signals to calculate the percentage of perfect recovery.

Example 1. Let M_1 be the incidence matrix of the (73, 9, 1)-SBIBD. We construct three structured random measurement matrices Φ_1 = M_1 ⋄ C_1, Φ_2 = M_1 ⋄ C_2 and Φ_3 = M_1 ⋄ C_3, where C_1, C_2 and C_3 are the normalized orthogonal matrices of a 9 × 9 Gaussian, Bernoulli, and Toeplitz matrix, respectively.
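A minimal OMP reconstruction together with the SNR criterion above can be sketched as follows (a plain textbook OMP, not the exact implementation used in the experiments; the sizes are arbitrary):

```python
import numpy as np

def omp(Phi, y, k):
    """Textbook OMP: greedily add the column most correlated with the
    residual, then solve least squares on the current support."""
    S, r = [], y.copy()
    for _ in range(k):
        S.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        r = y - Phi[:, S] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[S] = coef
    return x_hat

rng = np.random.default_rng(6)
n, m, k = 128, 48, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x

x_hat = omp(Phi, y, k)
snr = 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))
print(snr > 100)                   # expected True for this noiseless setup
```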

Reconstruction of 2-Dimensional Images
In this subsection, we select the orthogonal matching pursuit (OMP) algorithm, the basis pursuit (BP) algorithm, the iterative soft thresholding (IST) [39] algorithm, and the subspace pursuit (SP) [40] algorithm for testing. When CS reconstructs a gray image, it is hard to judge the distortion of the reconstructed image by the naked eye or other subjective means. Hence, an objective parameter is needed to evaluate the quality of the reconstructed image, namely the peak signal-to-noise ratio (PSNR), defined as

PSNR = 10 log_10 (255^2 / MSE),

where MSE is the normalized mean square error,

MSE = (1 / (M × N)) Σ_{x=1}^{M} Σ_{y=1}^{N} (Ψ(x, y) − Ψ̂(x, y))^2,

where M × N is the image size, and Ψ(x, y), Ψ̂(x, y) are the gray values of the original and reconstructed images at the point (x, y), respectively.
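The PSNR computation above is straightforward to implement; a small sketch follows (the constant image and the offset are arbitrary test data of ours):

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """PSNR in dB between two grayscale images of the same size."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(recon, float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

img = np.full((8, 8), 100.0)
noisy = img + 5.0                        # uniform error of 5 gray levels -> MSE = 25
print(round(psnr(img, noisy), 2))        # 34.15
```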
Example 4. Let M_2 be the incidence matrix of the (21, 5, 1)-SBIBD. We construct three structured random measurement matrices Φ_4 = M_2 ⋄ C_4, Φ_5 = M_2 ⋄ C_5 and Φ_6 = M_2 ⋄ C_6, where C_4, C_5 and C_6 are the normalized orthogonal matrices of a 5 × 5 Gaussian matrix, Bernoulli matrix, and Toeplitz matrix, respectively. Therefore, Φ_4, Φ_5 and Φ_6 are 21 × 105 matrices. The matrices Φ_4 ⊗ I_2, Φ_5 ⊗ I_2 and Φ_6 ⊗ I_2 are used to reconstruct four images of size 210 × 210; Φ_4 ⊗ I_3, Φ_5 ⊗ I_3 and Φ_6 ⊗ I_3 are used to reconstruct four images of size 315 × 315; and Φ_4 ⊗ I_4, Φ_5 ⊗ I_4 and Φ_6 ⊗ I_4 are used to reconstruct four images of size 420 × 420, shown in Figure 11. Tables 3-5 list the PSNRs and CPU times of the four images in the reconstruction process. They show that the PSNRs of our measurement matrices are not lower than those of the Gaussian, Bernoulli, and Toeplitz matrices under OMP, BP, IST, and SP, respectively, and that the CPU times of our measurement matrices are not longer than those of the Gaussian, Bernoulli, and Toeplitz matrices under the same algorithms.

Figure 11. Four test images randomly selected from the MSCOCO dataset.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: