Image Encryption Scheme Based on Mixed Chaotic Bernoulli Measurement Matrix Block Compressive Sensing

Many image encryption schemes based on compressive sensing suffer from poor reconstructed image quality at low compression ratios and are difficult to implement in hardware. To address these problems, we propose an image encryption algorithm based on block compressive sensing with a mixed chaotic Bernoulli measurement matrix. A new chaotic measurement matrix was designed using the Chebyshev map and logistic map, and the image was compressed in blocks to obtain the measurement values. The Chebyshev map and logistic map were then used again to generate encryption sequences, and the measurement value matrix was encrypted by non-repetitive scrambling together with a two-way diffusion algorithm over GF(257). The security of the encryption system was further improved by using the Secure Hash Algorithm-256 (SHA-256) of the original image to calculate the initial values of the chaotic maps for the encryption process. The scheme uses two one-dimensional maps and is easy to implement in hardware. Simulation and performance analysis showed that the proposed image compression-encryption scheme improves the peak signal-to-noise ratio of the reconstructed image at low compression ratios and offers good security against various attacks.


Introduction
With the rapid development of the internet and computer technology, more images, videos, and other multimedia information are being transmitted and stored through the internet. Image information security is receiving more attention. Image encryption is an effective method to protect digital images [1][2][3][4][5][6].
Image encryption methods are currently available using various techniques, such as chaotic systems [7][8][9][10][11][12][13][14][15][16][17], DNA coding [18][19][20][21], substitution boxes [22][23][24], and cellular automata [25][26][27][28]. Chaotic phenomena [29,30] are seemingly random, irregular motions that occur in a deterministic system, that is, a system described by a deterministic theory whose behavior is nonetheless uncertain, unrepeatable, and unpredictable. Chaotic systems are the most widely used and continuously improved systems in the field of image encryption. Chaotic systems for image encryption are classified into two categories: one-dimensional (1D) chaotic systems [7][8][9][10][11] and multidimensional (MD) chaotic systems. One-dimensional chaotic systems have advantages, such as simple structures, easy implementation, and low computational complexity [11], but they also have disadvantages, such as small key spaces and low security [8,9]. To solve this problem, some MD chaotic systems have been applied to image encryption [12][13][14][15][16][17]. Encryption algorithms based on MD systems have larger key spaces and higher security levels; however, MD chaotic systems are relatively difficult to implement in hardware or software, and the corresponding encryption systems have high computational complexity [11]. In this paper, we use Chebyshev mapping and logistic mapping to generate chaotic sequences for the encryption process and calculate the initial values of the one-dimensional chaotic systems with the Secure Hash Algorithm-256 (SHA-256) function. This is not only easy to implement and enlarges the key space, but also reduces the computational complexity of the cryptosystem and improves the security of the algorithm.
As a revolutionary data acquisition technique, compressive sensing (CS) [31][32][33] opens new perspectives for simultaneous compression and encryption of plaintexts, where the measurement matrix is shared between the sender and receiver as a key. To improve confidentiality, linear feedback shift registers, pseudo-random number generators, and chaotic mappings have been proposed to construct the measurement matrix [34][35][36][37][38]. The design of the measurement matrix is the key to ensuring the quality of signal sampling and determines the feasibility of a hardware implementation of compressed sampling.
The methods used for constructing the measurement matrix are divided into two categories. One is the random measurement matrix, which mainly includes the Gaussian measurement matrix, Bernoulli measurement matrix, partial Fourier measurement matrix, etc. [32,39]. Although these matrices can reconstruct the original signals well, they are difficult to implement in hardware, and in practical applications storing the matrix elements requires a large storage space. The other type is the deterministic measurement matrix: once the system parameters and construction parameters are determined, the elements of the matrix are also determined. A chaotic sequence generated by a chaotic system unifies determinism and randomness, and a measurement matrix constructed from a chaotic sequence is a deterministic matrix, which effectively avoids the instability and the need for repeated measurements associated with common random measurement matrices in experiments. Therefore, some scholars have proposed using chaotic sequences to construct compressive sensing measurement matrices [40][41][42][43][44]. L. Yu [40] proposed using logistic chaotic sequences to construct measurement matrices and proved that the matrices satisfy the RIP condition; Frunzete et al. [41] proved that measurement matrices constructed from tent chaotic sequences can also satisfy the RIP condition. However, all of the above construction methods use chaotic sequences generated by a single chaotic map. The performance of a sequence generated by a single chaotic map is slightly worse than that of a hybrid chaotic sequence, so the performance of the measurement matrix is in turn limited by the performance of the chaotic sequence [43].
To avoid degrading the reconstruction of the signal due to the inadequate performance of a single chaotic sequence, we propose a measurement matrix construction algorithm based on an XOR combination of chaotic sequences, which improves the reconstruction performance of the measurement matrix while remaining easy to implement in hardware.
To reduce the computational complexity and storage space, the block compressed sensing (BCS) framework was introduced to process medium to large images. R. Huang et al. [45] designed a block cipher structure consisting of linear measurement, scrambling, S-box, and XOR operations. This scheme can work efficiently in a parallel computing environment. Zhu L et al. [46] proposed a novel BCS-based encryption scheme: a nonuniform sampling strategy was adopted in the BCS stage to improve the compression efficiency, and a permutation-diffusion framework was utilized to enhance security. Experiments show that the algorithm has good reconstruction capability, but its low correlation with the plaintext image leaves it vulnerable to known-plaintext and chosen-plaintext attacks.
In this paper, we propose a simple and secure image encryption algorithm based on BCS and one-dimensional chaotic maps. The scheme consists of two primary stages. In the first stage, chaotic sequences are generated by two simple one-dimensional chaotic maps, a Chebyshev map and a logistic map, and combined by binary quantization and XOR operations to form a Bernoulli measurement matrix; the measurement value matrix is then obtained by applying BCS to the original image. In the second stage, the Chebyshev map and logistic map are used again to generate chaotic sequences, and the measurement value matrix is encrypted to obtain the ciphertext image. The simulation results verify the security, effectiveness, and good reconstruction performance of the scheme. The main contributions of this paper are as follows.

1.
A hybrid chaotic measurement matrix with the Bernoulli property, based on a Chebyshev map and a logistic map and easy to implement in hardware, is designed; the performance of the resulting measurement matrix is analyzed and it is applied to BCS.

2.
The SHA-256 value of the plaintext image is used to calculate the initial values of the chaotic maps for the encryption part of the algorithm, which increases the relevance of the algorithm to the plaintext image and improves its resistance to chosen-plaintext attacks.

3.
A "no repetition" scrambling algorithm and a Galois-field-based two-way diffusion algorithm are proposed to encrypt the measurement value matrix and improve the security of the BCS framework.

4.
In the reconstruction stage, the SPL reconstruction algorithm based on DDWT is used; combined with the hybrid chaotic sequence measurement matrix proposed in this paper, it achieves much higher reconstruction quality of the decrypted images at low compression ratios.
The rest of this paper is organized as follows. The preliminaries are discussed in Section 2. The proposed algorithm is described in Section 3. Simulation results and discussion are given in Section 4, and the statistical analysis of the scheme is discussed in Section 5. Conclusions are given in Section 6.

Preliminaries
In this section, BCS and the measurement matrix generation algorithm will be briefly introduced.

Block Compressive Sensing
In traditional compressive sensing (CS) image reconstruction methods, the observations are obtained by randomly projecting the image as a whole through the observation matrix. However, two-dimensional image signals often contain a large amount of information, which makes the overall projection very large, requires large storage space, and leads to high computational complexity in the image reconstruction process, causing many problems for practical implementation. Therefore, compressive sensing theory usually encounters two major obstacles when applied to video and images [47]: first, the observation matrix occupies a large amount of storage; second, the computational complexity of image reconstruction is too high. To solve these problems, the block compressive sensing (BCS) framework [48] was proposed, which is described in detail as follows.
The image x, with N = I_r × I_c pixels, is divided into several B × B image blocks. Denote the i-th image block by x_i, i = 1, 2, ..., N/B^2. The same measurement matrix Φ_B is applied to each sub-block x_i to obtain the measurement values:

y_i = Φ_B x_i,  (1)

where Φ_B ∈ R^{M_B × B^2} and M_B = CR × B^2 is the number of samples of each sub-block, with CR the compression ratio. Because the image is processed in blocks, the sampling matrix of the entire image is equivalently the block-diagonal matrix

Φ = diag(Φ_B, Φ_B, ..., Φ_B),  (2)

and the observed values y for the whole image are

y = Φ x.  (3)

Compared to the traditional image compressive sensing algorithm, the BCS algorithm solves the problem of the large memory required for the observation matrix in the image reconstruction process and greatly improves the speed of signal recovery. At the same time, the receiver does not have to wait for all of the sampled data to be transmitted, which is convenient for real-time processing of the signal.
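As a rough illustration, the block sampling above can be sketched as follows; the ±1 Bernoulli matrix here is a placeholder stand-in for the MCLBM introduced later, and the function and variable names are ours, not the paper's:

```python
import numpy as np

def bcs_sample(x, B, CR, phi_B=None, rng=None):
    """Block compressive sensing: sample every B x B block of image x with
    the same measurement matrix Phi_B (Equations (1)-(3))."""
    rng = np.random.default_rng(0) if rng is None else rng
    Ir, Ic = x.shape
    M_B = int(round(CR * B * B))          # samples per sub-block
    if phi_B is None:
        # placeholder +-1 Bernoulli matrix; the paper builds this from
        # the MCLB chaotic sequence instead
        phi_B = rng.choice([-1.0, 1.0], size=(M_B, B * B)) / np.sqrt(M_B)
    y = []
    for r in range(0, Ir, B):
        for c in range(0, Ic, B):
            xi = x[r:r + B, c:c + B].reshape(-1)  # vectorize one block
            y.append(phi_B @ xi)
    return np.stack(y, axis=1), phi_B             # M_B x (N / B^2) measurements

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
Y, phi = bcs_sample(img, B=32, CR=0.25)
print(Y.shape)   # (256, 4): 0.25 * 32 * 32 samples per block, 4 blocks
```

Because every block shares one small Φ_B, only an M_B × B² matrix needs to be stored rather than an M × N-sized operator for the whole image.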
In this paper, we use smoothed projected Landweber (SPL) reconstruction based on the dual-tree discrete wavelet transform (DDWT) [52] to reconstruct the image. In this approach, block-based random image sampling is coupled with a projection-driven compressed-sensing recovery that encourages sparsity in the domain of directional transforms simultaneously with a smooth reconstructed image. Both contourlets and complex-valued dual-tree wavelets were considered for their highly directional representations, while bivariate shrinkage was adapted to their multiscale decomposition structures to provide the requisite sparsity constraint. Smoothing is achieved via a Wiener filter incorporated into the iterative projected Landweber recovery, yielding fast reconstruction. This approach yields images whose quality matches or exceeds that produced by the popular, yet computationally expensive, technique of total variation minimization, and its reconstruction quality is substantially superior to that of several prominent pursuit-based algorithms that do not include smoothing.

Hybrid Chebyshev-Logistic Bernoulli Sequence
The Chebyshev chaotic map is defined as:

x_{n+1} = cos(ω · arccos(x_n)), x_n ∈ [-1, 1],  (4)

where ω is the order of the system. When the order ω > 2, the map enters the chaotic state and a chaotic sequence can be generated by Equation (4). When ω > 4, the Chebyshev map is in the full-rank mapping state, and the chaotic sequence generated by the system iteration traverses the interval [-1, 1]. The mean value of the chaotic sequence generated by the Chebyshev map is 0, and its probability density function is:

f(x) = 1 / (π sqrt(1 - x^2)), x ∈ (-1, 1).  (5)

The logistic chaotic map is defined as:

y_{n+1} = 1 - λ y_n^2,  (6)

where λ is the system parameter. When λ ∈ (1.40015, 2], the Lyapunov exponent of the system is greater than 0 and the map is in the chaotic state. When λ = 2, the logistic map is a full mapping, the chaotic sequence generated by iterating Equation (6) has good pseudo-randomness, and the generated values are distributed in the interval [-1, 1]. The probability density function of the logistic chaotic sequence is:

f(y) = 1 / (π sqrt(1 - y^2)), y ∈ (-1, 1).  (7)

The chaotic sequences generated by the above chaotic systems are real-valued, which is not suitable for transmission and processing in digital communication systems, so the real-valued chaotic sequences must be converted into binary chaotic sequences. The signum function can be used for this conversion, as shown in Equation (8):

x̂_n = 0, x_n < c;  x̂_n = 1, x_n ≥ c,  (8)
where c is the threshold value and x is the value of the real-valued sequence.
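A sketch of the two maps and the signum quantization follows. The logistic map is taken here in the [-1, 1] form y_{n+1} = 1 - λy_n², which is consistent with the stated parameter range λ ∈ (1.40015, 2]; this is our reading of the (garbled) source equation, not verbatim code from the paper:

```python
import numpy as np

def chebyshev_seq(x0, omega, n):
    """Chebyshev map, Equation (4): x_{k+1} = cos(omega * arccos(x_k))."""
    xs, x = np.empty(n), x0
    for k in range(n):
        x = np.cos(omega * np.arccos(x))
        xs[k] = x
    return xs

def logistic_seq(y0, lam, n):
    """Logistic map in the [-1, 1] form assumed here: y_{k+1} = 1 - lam * y_k^2."""
    ys, y = np.empty(n), y0
    for k in range(n):
        y = 1.0 - lam * y * y
        ys[k] = y
    return ys

def signum(seq, c=0.0):
    """Equation (8): threshold a real-valued sequence into a bit sequence."""
    return (seq >= c).astype(np.uint8)

x_bits = signum(chebyshev_seq(0.3, 5, 10000))   # omega > 4: full mapping
y_bits = signum(logistic_seq(0.4, 2.0, 10000))  # lambda = 2: full mapping
print(x_bits.mean(), y_bits.mean())             # both close to 0.5
```

Both invariant densities (5) and (7) are symmetric about 0, so thresholding at c = 0 yields balanced bit streams.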
Theorem 1. The chaotic sequence {x_n} generated by the Chebyshev chaotic map is transformed by the signum function into a spread spectrum sequence {x̂_n} satisfying the Bernoulli distribution, which is called the Chebyshev-Bernoulli (C-B) sequence.

Theorem 2.
The chaotic sequence {y_n} generated by iterating the logistic chaotic map is transformed by the signum function into the spread spectrum sequence {ŷ_n}, which is a Bernoulli random sequence; that is, the logistic-Bernoulli (L-B) sequence.
Theorem 3. The sequence generated by XOR operation of two Bernoulli random sequences is still the Bernoulli random sequence.
Equation (5) shows that the Chebyshev chaotic sequence {x_n} is symmetrically distributed on [-1, 1]. From Equations (5) and (8), with threshold c = 0, the distribution of x_n on [-1, 1] corresponding to a value of 0 or 1 for x̂_{n+1} is:

x̂_{n+1} = 0 for x_n ∈ [-1, 0), x̂_{n+1} = 1 for x_n ∈ [0, 1], each with probability 1/2.  (9)

Similarly, the logistic chaotic sequence {y_n} is symmetrically distributed on [-1, 1], and the distribution of y_n corresponding to ŷ_{n+1} taking the values 0 and 1 is:

ŷ_{n+1} = 0 for y_n ∈ [-1, 0), ŷ_{n+1} = 1 for y_n ∈ [0, 1], each with probability 1/2.  (10)

Finally, a hybrid chaotic spread spectrum sequence is generated by the XOR operation, as shown in Equation (11):

z_n = x̂_n ⊕ ŷ_n.  (11)

The XOR operation does not change the balance of the bit distribution, so the binary sequence {z_n} is also balanced. From Equations (9) and (10), the probability that z_{n+1} takes the value 0 or 1 is:

P(z_{n+1} = 0) = P(z_{n+1} = 1) = 1/2.  (12)

Treating {z_n} as a Markov chain, its one-step transition probability matrix is:

P = [1/2 1/2; 1/2 1/2].  (13)

The i-step transition probability matrix satisfies P^i = P (i ≥ 1), as follows from Equation (13). According to the properties of the Markov chain, the sequence {z_n} is random and statistically independent, so {z_n} is a Bernoulli sequence, called the mixed Chebyshev and logistic Bernoulli (MCLB) sequence.
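Theorem 3 can also be checked empirically; here two independent RNG bit streams stand in for the binarized chaotic sequences:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 100000)   # Bernoulli(1/2) sequence (stand-in for {x-hat})
b = rng.integers(0, 2, 100000)   # independent Bernoulli(1/2) sequence ({y-hat})
z = a ^ b                        # XOR combination, Equation (11)

# z should again be Bernoulli(1/2): balanced and serially uncorrelated
print(z.mean())                           # ~0.5
print(np.corrcoef(z[:-1], z[1:])[0, 1])   # ~0
```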

Construction of a Measurement Matrix Based on MCLB Sequence
The Bernoulli random observation matrix, as a common random measurement matrix, satisfies the RIP property with high probability [53]. The mixed chaotic sequence proposed in the previous subsection is a random and statistically independent Bernoulli sequence, so a measurement matrix constructed from this hybrid chaotic sequence also satisfies the RIP property. Therefore, this paper proposes an algorithm to construct the measurement matrix based on the mixed chaotic sequence, and its flowchart is shown in Figure 1. The specific construction steps are as follows:

1.

Given the parameters ω and λ in Equations (4) and (6) and the initial values x_0, y_0, iteratively generate a Chebyshev chaotic sequence {x_n} and a logistic chaotic sequence {y_n} of the desired length from Equations (4) and (6).

2.
Binary-quantize the chaotic sequences {x_n} and {y_n} using Equations (9) and (10), respectively, to obtain the chaotic spread spectrum sequences {x̂_n} and {ŷ_n}, and apply the XOR operation to generate the hybrid chaotic sequence {z_n}.

3.
The mixed chaotic sequence is cyclically shifted by columns to generate a measurement matrix of the desired size, which is called the mixed Chebyshev and logistic Bernoulli matrix (MCLBM). Finally, the MCLBM is orthogonalized and normalized to obtain the final measurement matrix.

Simulation Test Based on MCLBM
This section verifies the reliability and validity of the MCLBM through image simulation experiments, comparing it with the Gaussian matrix (GM), Bernoulli matrix (BM), logistic-Bernoulli matrix (LBM), and Chebyshev-Bernoulli matrix (CBM).
BCS was used to test the "Lena" image of size 512 × 512. First, the image was divided into 32 × 32 blocks, the compression ratio was set to 0.3, different measurement matrices were used for the compression measurements, and the DDWT-SPL algorithm was used in the reconstruction stage. MCLBM, GM, BM, LBM, and CBM were selected as measurement matrices for compressive reconstruction of "Lena" to compare their performance. To reduce the instability of GM and BM, their results were averaged over 100 experiments; given the parameters of the chaotic system, the constructed chaotic measurement matrices are deterministic, so those experiments did not need to be repeated. We used the peak signal-to-noise ratio (PSNR), the most widely used objective evaluation index for images, to judge the quality of the reconstructed images [54]. The larger the PSNR value, the smaller the difference between the two images and the higher the reconstruction accuracy.
PSNR is defined as follows:

MSE = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [X(i, j) - Y(i, j)]^2,
PSNR = 10 × log10(255^2 / MSE),

where X(i, j) is the value of the corresponding pixel in the original image and Y(i, j) is the value of the corresponding pixel in the reconstructed image. Figure 2 shows the reconstruction of the "Lena" image with the different measurement matrices at a compression ratio of 0.3, compared with the original image. It can be seen from the figure that MCLBM has the best reconstruction effect and is closest to the original image.
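A minimal implementation matching the PSNR definition above:

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit images:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 16.0)      # constant error of 16 gray levels
print(round(psnr(a, b), 2))    # 10 * log10(255^2 / 256) = 24.05 dB
```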
For a more intuitive view of the performance of MCLBM, Table 1 gives the PSNR and reconstruction time of the five measurement matrices when reconstructing images at a compression ratio of 0.3. The data show that MCLBM outperforms the other measurement matrices in terms of PSNR at this compression ratio. The reconstruction times of the different measurement matrices are approximately the same, with MCLBM slightly faster than the others. Figure 3 shows the PSNR of the reconstructed "Lena" images for the five measurement matrices GM, BM, LBM, CBM, and MCLBM at different compression ratios. As can be seen in Figure 3, the higher the compression ratio, the higher the PSNR of the reconstructed image for every measurement matrix. For compression ratios below 0.4, the PSNR of the image reconstructed with MCLBM exceeds that of the other measurement matrices by about 1 dB. When the compression ratio is greater than 0.4, the effect of the compression ratio on the PSNR exceeds that of the measurement matrix, so the advantage of MCLBM shrinks below 1 dB but remains positive. In summary, the peak signal-to-noise ratio of the image reconstructed with MCLBM is always the largest, and its reconstruction quality is better than the other four measurement matrices.


The Proposed Algorithm
The main objective of the image compression/encryption algorithm proposed in this paper is to improve the performance of image compression, reconstruction, and encryption at low compression ratios, that is, with a reduced amount of measured data. For this purpose, the plaintext image is blocked and measured using the MCLBM in the compressive measurement phase. To improve resistance to chosen-plaintext attacks, the SHA-256 function is applied to the plaintext image to generate a key that serves as the initial values of the chaotic maps used to generate the encryption sequences, and a new image encryption scheme is designed accordingly.
The proposed compression/encryption algorithm consists of two main processes, BCS compression sampling and quantization encryption of the measured values, as shown in Figure 4. The specific steps are discussed in detail in the following sections.


Block Compression Sampling and Quantification Process
Assuming the BCS compression ratio is CR, the plaintext image P of size M × N is divided into non-overlapping blocks of size B × B, giving n_B = (M × N)/(B × B) blocks. These blocks are expanded into one-dimensional vectors in scanning order from left to right and top to bottom, denoted S_i, i = 1, 2, ..., n_B. The BCS compression sampling process is as follows:

1.

Generate the hybrid chaotic sequence {z_j} from the Chebyshev and logistic maps, as described in the MCLB construction above.

2.

Generate a measurement matrix Φ_0 of size m × n by cyclically shifting z_j by columns.

3.

Orthogonalize and normalize Φ_0 to obtain the final measurement matrix Φ.

4.

Compress the blocked plaintext image P and obtain the measurement value C_i of each block.

Finally, the C_i are combined block by block to obtain the measurement value matrix

C = [C_1, C_2, ..., C_{n_B}].

At this point, the size of C is m × n_B, and the amount of data is obviously smaller than that of the original image. If the compression ratio is CR = 0.25, then the size of C is only 1/4 of the size of the plaintext image, which is convenient for the subsequent encryption processing.
The values output after CS measurement usually lie outside the grayscale range [0, 255]. Therefore, the measurement value matrix must be quantized before encryption. The quantization process is expressed as follows:

Q(i, j) = floor(255 × (C(i, j) - min) / (max - min)),

where floor(x) represents the largest integer not greater than x, and min and max are the minimum and maximum elements of the measurement value matrix C, respectively.
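A sketch of this quantization and its approximate inverse (the inverse simply mirrors the forward rule; the per-element error is at most one quantization step):

```python
import numpy as np

def quantize(Y):
    """Map measurement values into the grayscale range [0, 255]
    using the floor-based rule above."""
    lo, hi = Y.min(), Y.max()
    Q = np.floor(255.0 * (Y - lo) / (hi - lo)).astype(np.uint8)
    return Q, lo, hi           # lo/hi must accompany the ciphertext

def dequantize(Q, lo, hi):
    """Approximate inverse used before reconstruction."""
    return Q.astype(float) * (hi - lo) / 255.0 + lo

Y = np.array([[-3.2, 0.0], [1.7, 8.5]])
Q, lo, hi = quantize(Y)
print(Q)                       # uint8 values in 0..255
print(dequantize(Q, lo, hi))   # close to Y (within one quantization step)
```

Note that min and max must reach the receiver along with the key material, since the inverse mapping depends on them.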

Cipher Sequence Generation Algorithm
In order to make the proposed encryption algorithm resistant to known-plaintext and chosen-plaintext attacks, this paper uses the SHA-256 value of the plaintext image to calculate the parameters and initial values of the chaotic maps, so that the parameters and initial values are closely related to the plaintext image. The specific steps are as follows.
The 256-bit hash of the plaintext image is first obtained and written as 64 hexadecimal characters; each character is then converted to the corresponding decimal number, and the result is used as the key K, denoted:

K = {k_1, k_2, ..., k_64},

where k_i ∈ [0, 15], i = 1, 2, ..., 64. The encryption algorithm uses the Chebyshev chaotic map and the logistic chaotic map to generate two cipher sequences, so their initial values x_0, ω, y_0, and λ are generated from the key K.
Then the initial values x_0, ω, y_0, and λ generated from the SHA-256 hash are substituted into Equations (4) and (6) to generate a chaotic sequence S = {s_1, s_2, ..., s_{mn_B}} and a chaotic sequence Z = {z_1, z_2, ..., z_{2mn_B}}, respectively, after which the element values of S and Z are quantized to integer sequences. Sequence X is used for scrambling and Y for diffusion.
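The key derivation can be illustrated as follows. The folding of hex digits into seed values is a hypothetical stand-in for the paper's combining equations (which are not reproduced in the text), and in practice the results would still need rescaling into the valid parameter ranges of the two maps:

```python
import hashlib

def keys_from_image(img_bytes):
    """Illustrative only: derive chaotic seeds from the SHA-256 hash of the
    plaintext image. Hash -> 64 hex digits k_i in [0, 15] -> four values."""
    digest = hashlib.sha256(img_bytes).hexdigest()   # 64 hex characters
    # fold each group of sixteen hex digits into a 64-bit integer,
    # then map it into the unit interval (purely a sketch)
    chunks = [int(digest[i:i + 16], 16) for i in range(0, 64, 16)]
    return [(c % 10 ** 6) / 10 ** 6 for c in chunks]

seeds = keys_from_image(b"example plaintext image bytes")
print(seeds)   # four values in [0, 1): candidates for x0, y0 and map parameters
```

Because the seeds depend on every bit of the plaintext, changing a single pixel yields an entirely different cipher sequence, which is what defeats chosen-plaintext analysis.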

Image Encryption Algorithm
The encryption process begins with a scrambling of the matrix of measurement values, followed by a diffusion operation, as follows.

1.
First, keep only the first occurrence of each recurring pseudo-random number in the sequence X, then append the values in {1, 2, ..., mn_B} that do not appear in X to the end, in descending order; denote the result X_1. C is expanded column-wise into a one-dimensional vector, denoted C_1, and X_1 is then used to permute C_1 without repetition: C_1(X_1(i)) is swapped with C_1(X_1(mn_B - i + 1)) to give the scrambled result C_2.

2.
Select the first mn_B values of the sequence Y to form the sequence Y_1, and perform forward diffusion on C_2 using Y_1 to obtain C_3.
Select the values from mn_B + 1 to 2mn_B of the sequence Y to form the sequence Y_2, and use Y_2 to perform backward diffusion on C_3 to obtain C_4.
Here "×" represents the multiplication operation in the GF(257) field and C_4(0) = 0. Finally, C_4 is rearranged column-wise into an m × n_B matrix, and after the above operations the ciphertext image C_5 is obtained.
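The scrambling and diffusion steps above can be sketched as follows. The diffusion recurrence shown is one plausible invertible form over GF(257), not the paper's exact equations (which are not reproduced in the text); mapping a pixel value v ∈ [0, 255] to v + 1 keeps every factor nonzero modulo the prime 257, hence invertible:

```python
def build_permutation(X, n):
    """Step 1: keep only the first occurrence of each value in X, then append
    the values of {1..n} absent from X in descending order (sequence X1)."""
    seen, X1 = set(), []
    for v in X:
        if v not in seen:
            seen.add(v)
            X1.append(v)
    X1.extend(v for v in range(n, 0, -1) if v not in seen)
    return X1

def scramble(c1, X1):
    """Swap C1(X1(i)) with C1(X1(n - i + 1)), 1-based as in the text.
    The swaps are disjoint, so applying this twice restores the input."""
    c, n = list(c1), len(X1)
    for i in range(n // 2):
        a, b = X1[i] - 1, X1[n - 1 - i] - 1
        c[a], c[b] = c[b], c[a]
    return c

def gf_mul(a, b):
    return (a * b) % 257            # multiplication in GF(257)

def gf_inv(a):
    return pow(a, 255, 257)         # Fermat: a^(p-2) mod p, p = 257

def forward_diffuse(c2, y1):
    """Hypothetical forward pass: each output mixes the key-stream element
    and the previous ciphertext element (previous value taken as 0)."""
    c3, prev = [], 0
    for v, k in zip(c2, y1):
        out = gf_mul(gf_mul(v + 1, k + 1), prev + 1) - 1
        c3.append(out)
        prev = out
    return c3

def forward_undiffuse(c3, y1):
    c2, prev = [], 0
    for out, k in zip(c3, y1):
        c2.append(gf_mul(out + 1, gf_inv(gf_mul(k + 1, prev + 1))) - 1)
        prev = out
    return c2

X1 = build_permutation([3, 1, 3, 5, 1], 6)
print(X1)                                   # [3, 1, 5, 6, 4, 2]
data = scramble([10, 20, 30, 40, 50, 60], X1)
key = [5, 200, 33, 148, 77, 9]
enc = forward_diffuse(data, key)
print(forward_undiffuse(enc, key) == data)  # True
```

The backward pass of the paper's two-way diffusion would apply the same idea over the sequence in reverse order with the key stream Y_2.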

Image Decryption and Reconstruction Process
The decompression and decryption process is the opposite of the compression and encryption process. The steps are summarized as follows:

1.
Use the key K to generate the chaotic mapping initial values x 0 , ω , y 0 , and λ , and generate X and Y by Equation (23).

2.
The received ciphertext image C_5 is sequentially subjected to the inverses of the backward diffusion, forward diffusion, and scrambling algorithms to obtain the quantized measurement value matrix P_0.

3.
The measurement value matrix C is recovered from the quantized matrix P_0 by inverse quantization:

C(i, j) = P_0(i, j) × (max - min) / 255 + min.

4.

The parameters ω, λ and the initial values x_0, y_0 of the MCLB transmitted by the sender are used to iterate and generate the MCLBM, that is, the measurement matrix Φ.

5.
The DDWT-SPL algorithm is used to reconstruct the recovered measurement matrix C to obtain a reconstructed original image P of size M × N.

Simulation Results and Discussion
To verify the effectiveness of the proposed scheme, simulation results and performance analyses are given in this section. The experiments were performed on a computer with a 2.50 GHz CPU and 8 GB RAM using MATLAB R2017a. Different grayscale images of size 512 × 512, such as "Lenna", "Peppers", "Cell", and "Xray", were selected for the simulation tests. The initial values of the MCLBM were set to ω = 2, λ = 4, x_0 = 0.169877, y_0 = 0.2555. The compression ratio was CR = 0.25, and the plain-image block size was 32 × 32. In the decryption stage, DDWT-SPL was used as the reconstruction algorithm.
The simulation results are shown in Figure 5. The first to third columns represent the plaintext image, the encrypted image, and the decrypted reconstructed image, respectively. From the final encryption results, no meaningful information can be obtained from the ciphertext images in the second column. The compression and reconstruction results show that the ciphertext image is only 1/4 the size of the original image (CR = 0.25), which greatly reduces the image file size, storage space, and transmission bandwidth required. In addition, as shown in the third column, the decrypted reconstructed images are visually very similar to the corresponding plaintext images.

Impact of Important Parameters on Encryption and Decryption Performance
In this scheme, two important parameters, block size and compression ratio, both affect the encryption and decryption results. Next, we will evaluate their effects on the simulation results for better performance.
First, the influence of the block size on the PSNR of the reconstructed image is discussed for a fixed compression ratio. The "Lenna" image of size 512 × 512 was compressed, encrypted, decrypted, and reconstructed with the compression ratio set to 0.25. Table 2 lists the PSNR values and the time consumed for different block sizes. It can be seen from Table 2 that, at a fixed compression ratio, both the PSNR and the time consumed increase with the block size; however, the PSNR gain is insignificant while the time cost grows sharply, which becomes counterproductive. A block size of 32 × 32 therefore achieves a high PSNR value at a reasonable cost.
Next, we discuss the PSNR of the reconstructed images with a block size of 32 × 32 at different compression ratios, as shown in Figure 6. The PSNR values of the different reconstructed images still exceed 30 dB at a compression ratio of 0.2, which indicates that the compression-encryption scheme proposed in this paper can achieve a high PSNR value at a low compression ratio.
Finally, with the compressive sensing compression ratio set to 0.25 and the block size set to 32 × 32, Table 3 compares the scheme of this paper with other compression-encryption schemes. It can be seen that the algorithm proposed in this paper has higher PSNR values than the other algorithms under the same compression ratio, which indicates that it has good compression and reconstruction performance.

Impact of Important Parameters on Encryption and Decryption Performance
In this scheme, two important parameters, block size and compression ratio, bo fect the encryption and decryption results. Next, we will evaluate their effects on the ulation results for better performance.
Firstly, the influence of block size on the reconstructed image PSNR is discussed the compression ratio is the same. The "Lenna" image of 512 × 512 size was used for pressed encryption as well as decrypted reconstruction, with the compression ratio 0.25. Table 2 lists the PSNR values and the consumption time for different block siz can be seen from Table 2 that when the compression ratio is the same, the PSNR valu the time consumed increase as the block size increases, but the PSNR value increas significantly and the time consumed increases too much, which has a side effect ins so a high PSNR value can be achieved by choosing a block size of 32 × 32.   Finally, the compression-aware compression ratio in the algorithm of this paper is set to 0.25 and the block size is 32 × 32. Table 3 compares the scheme of this paper with other compression-encryption schemes, and it can be seen that the algorithm proposed in this paper has higher PSNR values than other algorithms under the same compression ratio, which indicates that it has good compression reconstruction performance.
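For reference, the PSNR values discussed above are computed in the standard way from the mean squared error between the original and reconstructed images. A minimal sketch (assuming NumPy and 8-bit grayscale images; `psnr` is an illustrative helper name, not code from the paper):

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally sized grayscale images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a distortion of at most +-1 per pixel gives MSE <= 1,
# so the PSNR is at least 10*log10(255^2) ~ 48 dB.
img = np.random.randint(0, 256, (32, 32))
noisy = np.clip(img + np.random.randint(-1, 2, img.shape), 0, 255)
print(round(psnr(img, noisy), 1))
```
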

Key Space
For a secure image encryption algorithm, the key space should be at least 2^100 to resist brute-force attacks. In this scheme, the key consists of two parts: (1) the parameters ω, λ and the initial values x0, y0 of the MCLB; and (2) the SHA-256 hash of the plaintext image. According to the IEEE floating-point standard, the precision of double-precision values is approximately 10^−15, and the SHA-256 key space can be up to 2^128. Therefore, the key space of the proposed scheme is 10^15 × 10^15 × 10^15 × 10^15 × 2^128 > 2^300, far greater than 2^100. In other words, the key space of the algorithm meets the security criterion and can resist exhaustive attacks. Table 4 compares the key space of this paper with that of other compression-encryption schemes, and the results in Table 4 show that the proposed image encryption algorithm is secure enough to resist brute-force attacks.
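The bound above can be checked with a few lines of arithmetic; the raw product 10^15 × 10^15 × 10^15 × 10^15 × 2^128 is in fact about 2^327, so quoting 2^300 is conservative:

```python
import math

# Four double-precision key components, each with roughly 10^15 distinguishable
# values, combined with the 2^128 effective key space contributed by SHA-256.
log2_keyspace = 4 * 15 * math.log2(10) + 128
print(round(log2_keyspace, 1))  # 327.3 -- comfortably above the 2^100 threshold
```
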

Key Sensitivity
Key sensitivity analysis examines the difference between two ciphertext images obtained by encrypting the same plaintext image when there is a small change in the key. If the algorithm is sensitive to the key, then a decryption key that differs even slightly from the correct one yields no valuable information about the plain image. This section uses the plain image "Lena" as an example to test the key sensitivity of the algorithm. The sensitivity of the chaotic maps to their initial conditions determines the sensitivity of the encryption system to the key. Therefore, the 256-bit hash value of the plaintext is changed from K to K0, K1, and K2 by bit modifications, which are shown below.
The "Lena" image is encrypted with K, K0, K1, and K2 to obtain different cipher images, as shown in Figure 7b-e. Figure 7f-h shows the differences between pairs of ciphertexts; as seen there, the "difference image" of two ciphertexts obtained by encrypting the same plaintext image shows a noise-like style, which intuitively reflects that the two ciphertexts differ significantly; that is, the image encryption system is key sensitive. Figure 7j-l shows the images decrypted with different "wrong" keys; it can be seen that the information of the original image cannot be obtained from the decrypted images, which indicates that the image decryption system is also key sensitive.
[Figure 7: (b-e) plaintext image encrypted using K, K0, K1, and K2, respectively; (f) difference image between the ciphertexts encrypted using K and K0; (g) between K and K1; (h) between K and K2; (i) decryption of (b) using K; (j-l) decryption of (b) using K0, K1, and K2, respectively.]
To quantify the differences between ciphertext images obtained from the same plaintext image, the number of pixels change rate (NPCR) and the unified average changing intensity (UACI) were calculated using the correct and incorrect keys for encrypting the images, and the results are shown in Table 5. NPCR and UACI are calculated as follows:

NPCR = (1 / (M × N)) × Σ_{i=1}^{M} Σ_{j=1}^{N} D(i, j) × 100%
UACI = (1 / (M × N)) × Σ_{i=1}^{M} Σ_{j=1}^{N} |C1(i, j) − C2(i, j)| / 255 × 100%

where D(i, j) = 0 when C1(i, j) = C2(i, j), and D(i, j) = 1 when C1(i, j) ≠ C2(i, j). The theoretical expectation of NPCR is 99.6094% and the theoretical expectation of UACI is 33.4635%. Table 5 shows that both NPCR and UACI are close to the theoretical values and more than 99.6% of the pixels are modified, which means that the encryption process is highly sensitive to the key. Table 6 lists the NPCR and UACI values for Figure 7i-l, where more than 99.5% of the pixels are changed when the key is slightly changed, which indicates that the decryption process is highly sensitive to the key.
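The two metrics can be sketched as follows (a minimal NumPy version for 8-bit grayscale images; `npcr_uaci` is an illustrative helper name):

```python
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray) -> tuple:
    """NPCR: percentage of differing pixels; UACI: mean absolute difference over 255."""
    npcr = (c1 != c2).mean() * 100.0
    uaci = (np.abs(c1.astype(np.int16) - c2.astype(np.int16)) / 255.0).mean() * 100.0
    return float(npcr), float(uaci)

# Two independent uniform random images should land near the theoretical
# expectations: NPCR ~ 99.61% and UACI ~ 33.46%.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (256, 256), dtype=np.uint8)
b = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(npcr_uaci(a, b))
```
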

Histogram
The histogram is a fundamental property of digital images. It reflects the distribution of pixel grayscale values in the image. The more uniform the histogram distribution of the ciphertext image, the better the encryption effect [9]. As can be seen from Figure 8, the pixel value distributions of different plaintext images have their own characteristics, and their ciphertext image histograms are all approximately uniformly distributed, so that the attacker cannot obtain any statistical information from the ciphertext image. Therefore, the image compression and encryption algorithm proposed in this paper is able to resist statistical analysis attacks.
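The flatness claimed above can be quantified with a chi-square statistic over the 256-bin histogram. This test is not part of the paper's analysis, so treat the following as a supplementary check only:

```python
import numpy as np

def chi_square_uniformity(img: np.ndarray) -> float:
    """Chi-square statistic of a uint8 image against a flat 256-bin histogram.

    Lower is flatter; for truly uniform pixel values the statistic fluctuates
    around its expected value of 255 (the degrees of freedom).
    """
    hist = np.bincount(img.ravel(), minlength=256)
    expected = img.size / 256.0
    return float(((hist - expected) ** 2 / expected).sum())

rng = np.random.default_rng(1)
cipher_like = rng.integers(0, 256, (512, 512), dtype=np.uint8)
print(round(chi_square_uniformity(cipher_like), 1))  # near 255 for uniform data
```
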

Information Entropy Analysis
The information entropy reflects the uncertainty of the image information; it is generally believed that the greater the entropy, the greater the uncertainty and the amount of information, and the less visible information remains. The information entropy is calculated as:

H = −Σ_{i=0}^{L−1} p(i) log2 p(i)

where L is the number of gray levels of the image and p(i) denotes the probability of occurrence of the gray value i.
For a grayscale random image with L = 256, the theoretical value of information entropy is 8. Table 7 presents the information entropy of different cipher images for different compression ratios. From Table 7, it can be seen that the information entropy of each cipher image is very close to the theoretical value, even under different compression ratios. Therefore, the compression-encryption scheme proposed in this paper can achieve both image compression and encryption, and the encrypted images have good randomness and sufficient security.
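The entropy calculation above can be sketched as follows (NumPy assumed; `shannon_entropy` is an illustrative helper name):

```python
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits per pixel) of a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]  # skip empty bins: 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A uniform random image approaches the theoretical maximum of 8 bits/pixel.
rng = np.random.default_rng(2)
cipher_like = rng.integers(0, 256, (512, 512), dtype=np.uint8)
print(round(shannon_entropy(cipher_like), 4))
```
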

Correlation Analysis
The correlation coefficient of adjacent pixels can be used to evaluate the image encryption effectiveness. Generally, a good digital image encryption scheme makes the correlation between adjacent pixels of the ciphertext image low [58,59]; the closer the correlation coefficient of adjacent pixels is to 0, the better the encryption effect. In this paper, for pairs of adjacent pixels selected from the image, the correlation coefficient is calculated as follows:

E(u) = (1 / N) Σ_{i=1}^{N} u_i
D(u) = (1 / N) Σ_{i=1}^{N} (u_i − E(u))^2
cov(u, v) = (1 / N) Σ_{i=1}^{N} (u_i − E(u))(v_i − E(v))
r_uv = cov(u, v) / (√D(u) × √D(v))    (29)

where N denotes the number of pairs of adjacent pixel points, (u_i, v_i), i = 1, 2, ..., N denote the grayscale values of the N pairs of adjacent pixels, E(u) and E(v) are the averages of all u and v, D(u) is the variance, cov(u, v) is the covariance, and r_uv is the correlation coefficient. Lenna, Peppers, Cell, and X-ray images of size 512 × 512 were used as plaintext images in this experiment. To calculate the correlation coefficients, 10,000 pairs of pixels were randomly selected from the plaintext and ciphertext images in the horizontal, vertical, and diagonal directions. The calculation results are shown in Table 8. As can be seen from Table 8, the correlation coefficients of neighboring pixels in the plain images are greater than 0.9 in all three directions, indicating that the correlation between neighboring pixels is high, whereas the correlation of adjacent pixels in the images encrypted with the proposed algorithm is close to 0. Using Lena as an example, 10,000 pairs of pixels were selected from the original and encrypted images in the horizontal, vertical, and diagonal directions; their adjacent-pixel correlations are plotted in Figure 9, which shows the pixel distributions before and after encryption of the Lena image. It can be seen that the pixel distribution of the plain image follows a clear pattern, while the grayscale values of the ciphertext image are randomly distributed between 0 and 255.
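The sampling-and-correlation procedure in Equation (29) can be sketched as below (NumPy assumed; the function name and sampling seed are illustrative, not from the paper):

```python
import numpy as np

def adjacent_correlation(img: np.ndarray, n_pairs: int = 10000,
                         direction: str = "horizontal", seed: int = 0) -> float:
    """Correlation coefficient r_uv of randomly sampled adjacent pixel pairs."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dy, dx = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    ys = rng.integers(0, h - dy, n_pairs)
    xs = rng.integers(0, w - dx, n_pairs)
    u = img[ys, xs].astype(np.float64)
    v = img[ys + dy, xs + dx].astype(np.float64)
    cov = np.mean((u - u.mean()) * (v - v.mean()))
    return float(cov / (u.std() * v.std()))

# A smooth gradient image is highly correlated; uniform noise is not.
gradient = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
noise = np.random.default_rng(3).integers(0, 256, (256, 256), dtype=np.uint8)
print(round(adjacent_correlation(gradient), 3), round(adjacent_correlation(noise), 3))
```
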

Chosen-Plaintext Attack
According to Kerckhoffs's principle, the security of an encryption system depends on the key and not on the secrecy of the encryption algorithm itself. In a chosen-plaintext attack [10,54,60], an attacker temporarily gains access to the encryption machine, so that he can encrypt any plaintext and obtain the corresponding ciphertext in an attempt to recover all (or part) of the plaintext and the key. The chosen-plaintext attack is the strongest attack model; if a cryptosystem can resist this attack, then it can resist the other attacks as well. In this paper, the SHA-256 hash of the original image is used to calculate the initial values of the chaotic system. Therefore, when different plaintext images are encrypted, the corresponding encryption sequences change, so that an attacker cannot obtain useful information by encrypting specially chosen images, and the scheme can therefore resist chosen-plaintext attacks well.
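The paper states that SHA-256 of the plaintext seeds the chaotic maps but does not spell out the exact mapping, so the following is only an illustrative sketch under that assumption: the 256-bit digest is split into four 64-bit words, each normalized into [0, 1) to serve as an initial value.

```python
import hashlib

import numpy as np

def initial_values_from_image(img: np.ndarray) -> tuple:
    """Hypothetical key derivation: split the SHA-256 digest of the plaintext
    image into four 64-bit words and map each into [0, 1).

    The real scheme's mapping from digest to chaotic initial values may differ.
    """
    digest = hashlib.sha256(img.tobytes()).digest()
    words = [int.from_bytes(digest[i:i + 8], "big") for i in range(0, 32, 8)]
    return tuple(w / 2**64 for w in words)

img_a = np.zeros((8, 8), dtype=np.uint8)
img_b = img_a.copy()
img_b[0, 0] = 1  # flip a single pixel value
# A one-pixel plaintext change yields completely different initial values,
# which is what thwarts chosen-plaintext attacks.
print(initial_values_from_image(img_a) != initial_values_from_image(img_b))  # → True
```
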

Ciphertext Sensitivity
The purpose of ciphertext sensitivity analysis is to analyze the difference between the restored image and the original plaintext image after a small change is made to the ciphertext image before decryption. If the restored image is very different from the original plaintext image, the image decryption system is said to have strong ciphertext sensitivity; if the restored image differs little from the original plaintext image, the cryptosystem is said to have weak ciphertext sensitivity, and such a cryptosystem is deficient in resisting chosen-ciphertext or known-ciphertext attacks. Here, a small change in the ciphertext image means a small change in the value of one or more pixels of a given ciphertext image.
In this paper, different plaintext images P were selected, and the corresponding ciphertext image C1 was obtained by encrypting P with this encryption system. A pixel was randomly selected from C1 and its value was changed by 1; the changed image is denoted C2, and the restored image P1 was obtained by decrypting C2. The NPCR and UACI values of P and P1 were calculated and are listed in Table 9. From Table 9, we can see that the calculated results of both NPCR and UACI are close to their respective theoretical values, indicating that the encryption system has strong ciphertext sensitivity.

Time Complexity Analysis
The algorithm differs from the traditional compressive sensing algorithm in the use of block compressive sensing, and in the image recovery process, the DDWT-SPL algorithm is used. The encryption and decryption times of Lena and Peppers images of size 512 × 512, with different compression ratios (CR), were analyzed; the results are shown in Table 10. Table 11 shows the comparison of the encryption time of the algorithm in this paper with other algorithms for the compression ratio CR = 0.25 (encrypt "Lena" image). It can be seen from Table 11 that the algorithm is faster than other BCS image encryption algorithms [61] when encrypting images of the same size. Compared to the image encryption algorithms based on the chaotic measurement matrix and SHA-256 [16], the image compression-encryption algorithm in this paper is slightly slower.
Next, the time complexity of the encryption scheme proposed in Section 3 was analyzed in detail, assuming that the size of the plaintext image is M × N, the size of the block compressive sensing measurement matrix is m × n, and the compression ratio is CR. Section 3.1 involved generating the measurement value matrix: the time complexity of generating the MCLB is Θ(2 × m × n), the time complexity of generating the measurement matrix is Θ(m × n), and the time complexity of the final quantized measurement value matrix is Θ(CR × M × N). Section 3.2 focused on generating the cipher sequences, where the time complexity of generating the key value is Θ(M × N) and the time complexity of generating the two cipher sequences is Θ(2 × CR × M × N). Section 3.3 involved the encryption of the measurement value matrix, where the time complexity of the permutation of the measurement values is Θ(CR × M × N), followed by diffusion with a time complexity of Θ(2 × CR × M × N). The total time complexity is therefore Θ(2 × M × N). Compared with the encryption algorithms of [55,61] listed in Table 12, our scheme has a smaller time complexity.

Conclusions
A block compressive sensing image encryption algorithm based on the mixed chaotic Bernoulli measurement matrix is proposed. The core of this encryption algorithm is the combination of the mixed chaotic sequence measurement matrix and BCS, which is practical and can improve the reconstruction quality of images with low compression ratios under the premise of ensuring system security and easy hardware implementation. The compression performance of the mixed chaotic sequence measurement matrix was analyzed, and the performance analysis shows that the measurement matrix has better measurement reconstruction performance than the ordinary measurement matrix. In the encryption process, the SHA-256 function of the plaintext image was used to generate the initial values of the desired chaotic sequence, which improved the ability of the scheme to resist the selective plaintext attack. Finally, the results of the encryption scheme were analyzed and evaluated. First, the effect of reconstructing the image was analyzed; the method has better reconstruction capabilities at a low compression ratio compared with other methods, which could greatly reduce the amount of data to be processed. Secondly, the security of the encryption algorithm was analyzed. The encryption algorithm could resist various attacks from key space, statistical analysis, and information entropy. Finally, the proposed encryption scheme has good encryption effects and image compression capabilities.
Author Contributions: C.Y. conceived and wrote the paper. P.P. and Q.D. analyzed the data. Q.D. provided theoretical guidance. P.P. provided guidance on the paper structure. All authors have read and agreed to the published version of the manuscript.