Efficient Entropic Security with Joint Compression and Encryption Approach Based on Compressed Sensing with Multiple Chaotic Systems

This paper puts forward a new algorithm that utilizes compressed sensing and two chaotic systems to perform image compression and encryption concurrently. First, a hash function is utilized to obtain the initial parameters of two chaotic maps, the 2D-SLIM and 2D-SCLMS maps. Second, a sparse coefficient matrix is obtained from the plain image through the discrete wavelet transform. One chaotic sequence, created by the 2D-SCLMS system, performs pixel transformation on the sparse coefficient matrix; the other, created by the 2D-SLIM system, is utilized to generate the measurement matrix and perform the compressed sensing operation. Subsequently, matrix rotation is combined with row scrambling and column scrambling, respectively. Finally, a bit-cycle operation and a double XOR of the matrix are implemented to acquire the ciphertext image. Simulation experiments show that the compressed encryption scheme has advantages in compression performance, key space, and sensitivity, and is resistant to statistical attacks, brute-force attacks, and noise attacks.


Introduction
In the wake of the development of the Internet and information technology, digital images are extensively used in numerous domains [1][2][3][4]. A great quantity of information is presented in digital image form, and it often contains private and important information. When important information is falsified or leaked, it can cause serious consequences [5,6], which makes the privacy and security issue very prominent. Hence, the information security protection of digital images has aroused widespread attention [7,8]. In this situation, multiple encryption schemes have emerged.
In the past few years, owing to the excellent characteristics of chaotic maps [9,10], various encryption schemes based on chaotic maps have been created [11][12][13][14]. Wang et al. utilized parameter-controlled scroll chaotic attractors for encryption [15]. Gao proposed a new 2D hyperchaotic system for image encryption [16]. In addition, chaotic maps can be combined with a variety of methods for encryption. Chen et al. combined chaotic maps and DNA coding for encryption, and the results indicated that the effect was better than using chaotic maps alone [17,18]. Yu et al. combined chaotic maps and the fractional Fourier transform for optical image encryption [19,20]. Choi et al. combined chaotic maps and cellular automata for encryption [21,22]. Sundarakrishnan et al. used chaotic maps and cellular automata to encrypt color images, increased the key space, and used a double permutation and replacement framework, which significantly reduced the correlation and improved the security of the algorithm [23]. Given the advantages of chaos theory for encryption, many scholars began to use multiple chaotic systems in the encryption framework. Ramasamy et al. achieved secure and efficient encryption using an enhanced logistic map, a chaotic map, and a general encryption framework of scrambling, diffusion, and key-stream generation [24]. Masood et al. used multiple chaotic systems, such as the two-dimensional Arnold cat map, the Newton-Leipnik dynamic system, and an improved Logistic-Gaussian chaotic system, to generate sequences for the various stages of color image encryption, which improved the security of the algorithm [25]. Such image encryption schemes, combining multiple chaotic systems with an encryption framework, make full use of the advantages of chaos: they are more secure and efficient, obtain good results under various experimental tests, and resist various attacks.
Although the above-mentioned algorithms have achieved good results, none of them are applicable under bandwidth constraints, as they encrypt images without compressing them.
To satisfy bandwidth-constrained demands, the theoretical concept of compressed sensing (CS) was established [26,27]. Soon afterward, multifarious compressed encryption schemes based on CS were put forward [28,29]. Lu et al. created an image encryption scheme [30] that compressed images by CS and encrypted them with double random phase coding. Although this algorithm reduced the amount of data, its use of the measurement matrix as the key takes up a large amount of storage space and strains the limited bandwidth.
To resolve these issues, a new class of image compression encryption schemes has attracted much attention [31][32][33][34][35][36]. These schemes combine compressed sensing with chaotic systems: compressed sensing compresses the image to meet the bandwidth demands in transmission, while the excellent properties of the chaotic system are exploited by using the initial parameters of the chaotic maps as the key and using the generated sequences to form the measurement matrix. This method resolves both the problem of the key occupying large storage space and the problem of limited bandwidth.
To further improve the security of these schemes, many scenarios have adopted scrambling methods [37][38][39][40][41]. According to the position of the scrambling in the algorithm, these can be divided into two categories. One is to perform the scrambling and confusion operation after the measurement value is obtained by compressed sensing [40,41]. The other is to first obtain a sparse coefficient matrix through a sparse transformation of the original image and then perform a scrambling operation on the sparse coefficient matrix [38,39]. Both types of methods can decrease the image correlation and heighten the security of the scheme, while the latter has the advantage of effectively enhancing the reconstruction quality of the decrypted image [42]. In general, there are two scrambling methods in the encryption process: scrambling using the Arnold map [38] and scrambling using the index values obtained by sorting chaotic sequences [37,38]. Both methods have drawbacks. The Arnold map scrambling method cannot be directly used for non-square images [43]. The second scrambling method is easy to operate, but its scrambling effect is determined by the randomness of the chaotic sequence [44], so it is not suitable for a chaotic system with poor randomness. Therefore, a scrambling method called pixel transformation is proposed in this paper.
To increase security and meet the demand of limited bandwidth, a compressed encryption scheme based on two chaotic systems and CS is put forward in this paper. First, the SHA-384 hash of the original image is used to calculate the initial parameters of the 2D-SLIM and 2D-SCLMS systems, which serve as the key; this greatly heightens the relevance between the scheme and the plaintext, and helps resist known-plaintext and chosen-plaintext attacks. Second, the plaintext image is converted into a sparse coefficient matrix. Third, to increase the reconstruction quality of the decrypted image, a new scrambling technique is created. In addition, a chaotic sequence is utilized to create the measurement matrix and implement the compressed sensing operation, which meets the transmission bandwidth requirements. To further heighten security, the encryption operation combines matrix rotation with row scrambling and column scrambling, respectively, followed by a bit-cycle operation. Finally, a double XOR of the matrix is implemented to acquire the ciphertext image.
The novelties of this paper are: (1) By combining two chaotic systems and compressed sensing, a new image encryption scheme is generated; (2) a new pixel transformation scrambling method is proposed; and (3) the combination of matrix rotation and scrambling improves the security of the algorithm.
The remaining sections are organized as follows. Section 2 presents the related work. Section 3 designs the new compression encryption scenario. Section 4 demonstrates the corresponding decryption algorithms. Section 5 presents the various performance analyses of the compression encryption scenario. Section 6 provides our conclusions.

Compressed Sensing
In 2006, Donoho et al. proposed a compressed sensing formulation and processing method for signals [26,27]. This concept breaks the restriction of Shannon's sampling theorem by exploiting the sparsity of the natural signal itself, or its sparsity in a certain transform domain, allowing the sampled signal to be recovered from a small number of samples taken below the Nyquist rate. Compressed sensing, also known as compressive sampling, allows sampling, compression, and encryption to be conducted concurrently [43,45].
The pivotal elements of compressed sensing comprise sparse representation, the measurement matrix, and the reconstruction scheme. In general, the signal is not sparse in the time domain, but in some transform domains, the signal may become sparse. Therefore, the classic sparse representation methods include the discrete wavelet transform (DWT), the fast Fourier transform (FFT), and the discrete cosine transform (DCT).
We take a 1D signal to explain the steps of compressed sensing. The sparse representation of a non-sparse signal x (N × 1) in the transform domain is

x = Ψs (1)

In Equation (1), Ψ (N × N) is an orthonormal matrix and s (N × 1) is a K-sparse vector.
According to Equation (1), the specific expression of compressed sensing is

y = Φx = ΦΨs = Θs (2)

In Equation (2), Φ (M × N) is the measurement matrix; Θ = ΦΨ (M × N) is the sensing matrix; and y (M × 1) is the measured value matrix. In particular, M < N.
Compressed sensing demands that Θ satisfies the restricted isometry property [46], that is to say, Φ and Ψ are uncorrelated. In addition, the length M of y ought to satisfy

M ≥ cK log(N/K) (3)

In Equation (3), c is a small constant.
To exactly recover s from the measured value matrix y, theoretically, the l0-norm minimization problem should be solved:

min ||s||₀ s.t. y = ΦΨs (4)

However, Equation (4) is an NP-hard problem. Therefore, in general, the l1-norm minimization problem is used to supersede Equation (4):

min ||s||₁ s.t. y = ΦΨs (5)

There are many reconstruction algorithms for compressed sensing; the most common are the orthogonal matching pursuit algorithm, the subspace pursuit algorithm, and the smooth l0 norm (Sl0) algorithm. We chose the Sl0 algorithm for the reconstruction in this paper.
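The sampling side of Equations (1)-(5) can be sketched in a few lines. The paper reconstructs with the Sl0 algorithm; here, orthogonal matching pursuit stands in because it is compact, and Ψ is taken as the identity (the signal is sparse in its own basis). All sizes and the Gaussian Φ are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5              # signal length, measurements (M < N), sparsity

# K-sparse signal; with Psi = I, the sensing matrix Theta equals Phi
s = np.zeros(N)
support = rng.choice(N, K, replace=False)
s[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian measurement matrix
y = Phi @ s                                       # y = Phi @ Psi @ s, Equation (2)

def omp(Phi, y, K):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then least-squares refit on the chosen support."""
    residual, idx = y.copy(), []
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    s_hat = np.zeros(Phi.shape[1])
    s_hat[idx] = coef
    return s_hat

s_hat = omp(Phi, y, K)            # recovers s from only M = 64 of N = 256 samples
```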

Sigmoid Function
A common S-shaped function, also known as an S-shaped growth curve, is the sigmoid function [39], whose expression is

y = a / (1 + e^(−b(x − c))) (6)

where the range of y is (0, a). We utilized the sigmoid function for quantization, so we set a = 255, b = 80/(15.518 × (Xmax − Xmin)), and c = (Xmax + Xmin)/2, where Xmax and Xmin are the maximum and minimum values of X, respectively. For different images, Xmax and Xmin differ (i.e., the values of b and c are taken differently).
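A minimal sketch of this quantization step, assuming the standard logistic form y = a/(1 + e^(−b(x − c))) with the parameter choices stated above:

```python
import numpy as np

def sigmoid_quantize(X):
    """Quantize measurement matrix X into [0, 255] with the sigmoid
    parameters stated in the text: a = 255, b = 80/(15.518*(Xmax - Xmin)),
    c = (Xmax + Xmin)/2."""
    a = 255.0
    xmax, xmin = float(X.max()), float(X.min())
    b = 80.0 / (15.518 * (xmax - xmin))
    c = (xmax + xmin) / 2.0
    return np.round(a / (1.0 + np.exp(-b * (X - c))))
```

Because the sigmoid is strictly increasing, the quantization preserves the ordering of the measurement values while mapping them into the 8-bit range.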

Image Encryption Process
A new encryption scenario was created and its flow chart is presented in Figure 1.

Key Generation
The hash algorithm was utilized to create the initial parameters of the 2D-SCLMS map and the 2D-SLIM, which enhanced the relevance between the ciphertext image and the original image. First, the SHA-384 hash function generates a 384-bit binary sequence, which is then split into 8-bit blocks (i.e., 48 decimal numbers h1, h2, . . . , h48). The 2D-SCLMS system is mainly used for pixel transformation, row scrambling, and column scrambling, and its initial values and parameters are calculated as

The 2D-SLIM is mainly utilized to establish the measurement matrix and perform the bit-cycle operation, where the initial values are calculated as
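The parameter formulas themselves are elided above; the digest-splitting part of this step can be sketched as follows (the mapping from h1, . . . , h48 to the chaotic initial values is given by the paper's equations and is omitted here):

```python
import hashlib
import numpy as np

def hash_blocks(image):
    """SHA-384 digest of the image bytes, split into 8-bit blocks:
    48 decimal numbers h1, ..., h48, each in [0, 255]."""
    data = np.asarray(image, dtype=np.uint8).tobytes()
    return list(hashlib.sha384(data).digest())   # 384 bits -> 48 bytes
```

Because the digest depends on every pixel, any change to the plaintext changes h1, . . . , h48 and hence the chaotic keys, which is what ties the key stream to the image.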

Encryption Process
The proposed encryption algorithm is depicted as follows.
Step 1: The initial parameters (x0, y0, u), obtained in Section 3.1, are entered into the 2D-SCLMS map for 500 + N² iterations. The first 500 values are removed to acquire the sequences X and Y. Sequence X1 is obtained from X. The sequence X is transformed into a matrix X2 (N × N), and X2 is divided into X21 and X22 by rows, so the matrices A and B are obtained, respectively.
The sequence Y is transformed into an N × N matrix and then divided into two parts, Y1 and Y2, according to the number of rows. The matrix Y1 is sorted in descending order by columns, and the matrix Y2 is sorted in ascending order by rows, to obtain the index matrices L1 and L2, respectively.
Step 2: The plaintext image P (N × N) is transformed by DWT into a sparse coefficient matrix P1, and then P1 is divided equally into four small matrices P11, P12, P13, and P14.
Similarly, pixel transformation is performed again based on the values of X12, X13, and X14, respectively. When the pixel transformation is over, the four matrices are combined to acquire P2.
Step 4: The initial values (z0, w0) and parameters created in Section 3.1 are entered into the 2D-SLIM, iterating 500 + d × M × N times to produce two chaotic sequences. The first 500 values of each sequence are removed to obtain the chaotic sequences Z and W. M = CR × N, where CR is the compression ratio and d is the sampling distance.
Sequence Z1 is acquired by sampling from sequence Z with sampling distance d, and the measurement matrix Φ (M × N) is generated from it. M × N values are taken from the sequence W and transformed into a matrix W1. According to Equation (17), W2 and C can be generated.
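As a hedged sketch of this matrix construction: the 2D-SLIM equations are not reproduced in this section, so a logistic map stands in for the chaotic iteration. The burn-in of 500 values, the sampling distance d, and the M × N reshape follow the text, while the map, its parameter r, and the normalization are illustrative assumptions:

```python
import numpy as np

def chaotic_measurement_matrix(x0, M, N, d=4, burn_in=500, r=3.99):
    """Build an M x N measurement matrix from a chaotic sequence: iterate
    burn_in + d*M*N times, drop the burn-in, sample every d-th value
    (sampling distance d), reshape to M x N, and rescale.
    A logistic map stands in for 2D-SLIM; r = 3.99 and the sqrt(2/M)
    normalization are illustrative choices."""
    n_iter = burn_in + d * M * N
    z = np.empty(n_iter)
    x = x0
    for i in range(n_iter):
        x = r * x * (1.0 - x)               # logistic map stand-in
        z[i] = x
    z1 = z[burn_in::d][: M * N]             # sample with distance d
    Phi = (2.0 * z1 - 1.0).reshape(M, N)    # map (0, 1) -> (-1, 1)
    return Phi * np.sqrt(2.0 / M)
```

Since Φ is fully determined by (x0, d, M, N), only these scalars need to be shared as part of the key, rather than the whole M × N matrix.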
Step 5: Compress P2 with the measurement matrix Φ to obtain the measurement result P3.
Step 6: Quantize P 3 according to the sigmoid function introduced in Section 2.2 and round the quantized result to obtain P 4 .
Step 7: Rotate P4 counterclockwise by 180° and then scramble the columns according to the index matrix L1 to obtain P5.
Step 8: Rotate P5 counterclockwise by 180° and then scramble the rows according to the index matrix L2 to obtain P6.
Step 9: Rotate P6 counterclockwise by 180° to obtain P61, and then perform the bit-cycle operation according to W2: if W2(i, j) = k (k = 1, 2, . . . , 7), then P61(i, j) is circularly shifted left by k bits. Finally, P7 is obtained.
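The bit-cycle is a per-pixel circular left shift; a minimal vectorized sketch (assuming 8-bit pixels, as in the text):

```python
import numpy as np

def bit_rotate_left(P, W2):
    """Circularly shift each 8-bit pixel P(i, j) left by W2(i, j) bits,
    as in the bit-cycle operation of Step 9."""
    P16 = P.astype(np.uint16)          # widen so shifted-out bits survive
    k = W2.astype(np.uint16) % 8
    return (((P16 << k) | (P16 >> (8 - k))) & 0xFF).astype(np.uint8)
```

For decryption, rotating right by the same W2 (or left by 8 − W2) inverts the operation exactly.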
Step 10: The final ciphertext image P 8 is obtained by double XOR of P 7 .
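The text does not spell out the double-XOR formula at this point; one plausible reading, sketched here under that assumption, XORs P7 with the two key matrices A and B obtained in Step 1:

```python
import numpy as np

def double_xor(P, A, B):
    """Diffuse P with two key matrices in sequence. The paper's exact
    double-XOR formula is not reproduced here; this assumes the plain
    form P8 = (P7 XOR A) XOR B, with A and B from Step 1."""
    return np.bitwise_xor(np.bitwise_xor(P, A), B)
```

Since XOR is an involution, calling `double_xor` on the ciphertext with the same A and B recovers P7, matching the "reverse operation of the double XOR" in Step 2 of the decryption process.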

Decryption Process
The specific decryption method is demonstrated below and its flow chart is presented in Figure 2.
Step 1: The initial parameters are brought into the two chaotic systems. The specific method is the same as Steps 1 and 4 in Section 3.2.
Step 2: Perform the reverse operation of the double XOR on the ciphertext image P8 to obtain P7, then perform the inverse operation of the bit-cycle and rotate 180° counterclockwise to obtain P6.
Step 3: Perform the inverse scrambling operation on the rows of P6 according to the index matrix L2 and then rotate 180° counterclockwise to obtain P5.

Step 4: Perform the inverse scrambling operation on the columns of P5 according to the index matrix L1 and then rotate 180° counterclockwise to obtain P4.
Step 5: Perform inverse quantization on P 4 according to the sigmoid function introduced in Section 2.2 to obtain P 3 .
Step 6: Use the smooth l 0 norm method to reconstruct P 2 .
Step 7: Divide P2 equally into four blocks and perform the inverse pixel transformation to obtain P1.
Step 8: Perform the reverse DWT on P 1 to acquire the decrypted image P.

Simulation Experiment and Performance Analysis
Multiple experiments were conducted to prove the performance of the newly presented compressed encryption scenario. The operating system used for all experiments was Windows 10 Ultimate with an AMD Ryzen 2.00 GHz CPU, 8 GB of RAM, and a 1 TB hard disk, and the software was MATLAB R2020a. The test selected six images with a size of 512 × 512 ("Lena", "Cameraman", "Cattle", "Einstein", "Boat", and "Couple") and three images with a size of 256 × 256 ("Barbara", "Lena", "Cameraman"). As shown in Figure 3, the ciphertext images were noise-like and smaller than the original images, which indicates that this scheme has a good compression and encryption effect. Furthermore, the decrypted images were of high quality and the same size as the plaintext images, which shows that the scenario had a good reconstruction and decryption effect.

Peak Signal-to-Noise Ratio (PSNR)
PSNR [49] was utilized for the assessment of the compression performance. It is expressed as

PSNR = 10 × log₁₀(255²/MSE), MSE = (1/(M × N)) Σᵢ Σⱼ (X(i, j) − Y(i, j))² (31)

In Equation (31), X and Y are the plaintext and the decrypted image, respectively. The larger the PSNR value, the better the compression performance. Figure 4 shows the simulation of "Lena" under different CRs and their corresponding PSNR values. Even at CR = 0.25, the PSNR exceeded 30 dB. Table 1 lists the PSNR of different images. The PSNR of all tested images exceeded 30 dB, which indicates that the compression characteristic of the scenario was excellent and stable. Table 2 compares the PSNR of different compression encryption algorithms for "Lena" (256 × 256). The PSNR of our algorithm was 32.6176 dB, higher than the other scenarios, which shows that the newly proposed scenario was better.
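Equation (31) can be checked with a few lines of code; this sketch assumes the standard MSE-based definition of PSNR over M × N 8-bit images:

```python
import numpy as np

def psnr(X, Y):
    """PSNR in dB between plaintext X and decrypted image Y, Equation (31)."""
    mse = np.mean((X.astype(np.float64) - Y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```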

Structural Similarity Index Measurement (SSIM)
A momentous indicator to survey the similarity of two images is the SSIM, whose range is [0, 1]. The larger the SSIM [51], the greater the similarity of the two images. The expression of the SSIM is

SSIM(X, Y) = ((2μXμY + C1)(2σXY + C2)) / ((μX² + μY² + C1)(σX² + σY² + C2)), C1 = (K1L)², C2 = (K2L)² (32)

In Equation (32), X and Y are the plaintext and reconstructed image. μX and σX² are the mean value and the variance of X, respectively; μY and σY² are the mean value and the variance of Y, respectively; and σXY is the covariance of X and Y. The overall score averages Equation (32) over the windows, where M is the total number of windows, L = 255, K1 = 0.01, and K2 = 0.03. We tested the SSIM values for multiple images, as shown in Table 3. The SSIM of the images was close to 1, which indicates that the plaintext and reconstructed images had high similarity (i.e., the reconstruction algorithm achieved good results). Table 4 compares the SSIM of different compression encryption algorithms for "Lena" (256 × 256). The SSIM calculated by the newly proposed scenario was larger, which shows that the image reconstructed by the new scenario was more similar to the plaintext image.
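A single-window version of Equation (32) can be sketched as follows (the paper averages this quantity over M windows; the window decomposition is omitted here):

```python
import numpy as np

def ssim_global(X, Y, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM of Equation (32) with C1 = (K1*L)^2, C2 = (K2*L)^2.
    The paper's score averages this over M windows (omitted here)."""
    X, Y = X.astype(np.float64), Y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = X.mean(), Y.mean()
    vx, vy = X.var(), Y.var()
    cxy = ((X - mx) * (Y - my)).mean()     # covariance of X and Y
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```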

Key Space Analysis
The key space of a scenario must be larger than 2¹⁰⁰ to ensure that the algorithm is secure enough against brute-force attacks [52].
The new algorithm has an internal key α and utilizes the SHA-384 algorithm. Assuming that the computer has a computational precision of 10⁻¹⁴, the entire key space is 10¹⁴ + 2³⁸⁴, which is much larger than 2¹⁰⁰. Thus, the scenario has a large key space and can resist brute-force attacks.

Key Sensitivity Analysis
A good encryption scenario is sensitive to the key, that is, even though the key changes very little, the encrypted image has a great difference.
The number of pixel change rate (NPCR) and the unified average change intensity (UACI) can be used to test the sensitivity of the scenario. They are expressed as

NPCR = (Σᵢ,ⱼ D(i, j))/(M × N) × 100%, D(i, j) = 1 if C1(i, j) ≠ C2(i, j), otherwise 0
UACI = (Σᵢ,ⱼ |C1(i, j) − C2(i, j)|/255)/(M × N) × 100% (33)

where C1 and C2 are two different cipher images. Table 5 lists the NPCR and UACI for multiple images. The NPCR and UACI were close to 99.6094% and 33.4635%, respectively, which indicates that the scenario is sensitive to the key.
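A sketch of the two sensitivity measures, following the definitions above:

```python
import numpy as np

def npcr_uaci(C1, C2):
    """NPCR and UACI (in %) between two cipher images, per the
    definitions in the text."""
    C1, C2 = C1.astype(np.float64), C2.astype(np.float64)
    npcr = np.mean(C1 != C2) * 100.0                 # fraction of changed pixels
    uaci = np.mean(np.abs(C1 - C2) / 255.0) * 100.0  # mean normalized change
    return npcr, uaci
```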

Histogram Analysis
A momentous index to appraise the performance of encryption scenarios is the histogram. Figure 5 displays the histograms of multiple plaintext and ciphertext images. The histograms of the plaintext images were uneven, but those of the cipher images were close to a uniform distribution, which illustrates that the scenario resisted statistical attacks.
In addition, we utilized the histogram variance to survey the effectiveness of this algorithm, expressed as

var(Z) = (1/n²) Σᵢ Σⱼ (zᵢ − zⱼ)²/2 (34)

In Equation (34), zᵢ and zⱼ represent the numbers of pixels with values i and j, and n = 256. In Table 6, the histogram variances of the plaintext images were very large, reaching up to 10⁶, while those of the ciphertext images were small, only around 10², with a minimum of 115.8203. This shows that the histograms of the ciphertext images were flatter. Table 7 compares the histogram variance of "Lena" (256 × 256) with different algorithms. The histogram variance of the new scenario was smaller, meaning a flatter histogram; that is, the newly proposed algorithm was more resistant to statistical attacks.

To further appraise the ability of the new scenario to resist statistical attacks, this paper utilized the chi-square test [53], whose expression is

χ² = Σᵢ (uᵢ − u₀)²/u₀, i = 0, 1, . . . , 255 (35)

In Equation (35), uᵢ is the frequency of value i and u₀ = MN/2⁸. Table 8 enumerates the chi-square results of multiple images. The values for seven images were less than 293.2478 (255 degrees of freedom, 5% significance level), which shows that this algorithm has good effects and can resist statistical attacks. Table 9 compares the results of several scenarios for "Lena" (256 × 256). The chi-square value of the newly proposed encryption scenario was the smallest, which shows that this scenario was more resistant to statistical attacks.
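Both histogram statistics are straightforward to compute; this sketch follows the definitions of Equations (34) and (35), with u0 = MN/256 for 8-bit images:

```python
import numpy as np

def hist_variance(img):
    """Histogram variance of Equation (34): mean of (z_i - z_j)^2 / 2
    over all pairs of grey-level counts, with n = 256."""
    z = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    n = z.size
    return np.sum((z[:, None] - z[None, :]) ** 2) / (2.0 * n * n)

def hist_chi_square(img):
    """Chi-square of Equation (35) with u0 = MN / 2^8 for 8-bit images."""
    u = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    u0 = img.size / 256.0
    return np.sum((u - u0) ** 2 / u0)
```

Both statistics are zero exactly when the histogram is perfectly flat, and grow as the grey-level distribution becomes more uneven.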

Correlation Analysis
The encryption scenario should break the correlation of the original image. The evaluation index used to assess the effectiveness of the scenario is the correlation coefficient, expressed as

r(x, y) = cov(x, y) / (√D(x) · √D(y)) (36)

In Equation (36), x and y are adjacent pixels of the image, cov(x, y) is their covariance, and D(x) and D(y) are their variances. Table 10 enumerates the correlation coefficients of different original images and ciphertext images. The comparison values of "Lena" (256 × 256) with several encryption scenarios are enumerated in Table 11. The correlation coefficients of the plaintext images were close to 1, but those of the ciphertext images were about 0 in Table 10, and the values of our scenario (0.0012, −0.0041, and 0.0032) were smaller in Table 11, which shows that the scenario had better resistance to statistical attacks.
The correlation of "Lena" is presented in Figure 6 for clear observation. The correlation of the plaintext image in three directions was concentrated along the diagonal, but that of the ciphertext image was dispersed over the whole range. This shows that the encryption scenario effectively abated the correlation.
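The correlation test of Equation (36) is commonly run on randomly sampled adjacent pixel pairs; a sketch under that assumption (the pair count n and the sampling scheme are illustrative):

```python
import numpy as np

def adjacent_correlation(img, direction="horizontal", n=5000, seed=0):
    """Correlation coefficient of Equation (36) over n randomly sampled
    adjacent pixel pairs (n and the sampling scheme are illustrative)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    if direction == "horizontal":
        i, j = rng.integers(0, h, n), rng.integers(0, w - 1, n)
        x, y = img[i, j], img[i, j + 1]
    elif direction == "vertical":
        i, j = rng.integers(0, h - 1, n), rng.integers(0, w, n)
        x, y = img[i, j], img[i + 1, j]
    else:                                  # diagonal
        i, j = rng.integers(0, h - 1, n), rng.integers(0, w - 1, n)
        x, y = img[i, j], img[i + 1, j + 1]
    x, y = x.astype(np.float64), y.astype(np.float64)
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())
```

For a natural image this returns values close to 1; for a well-encrypted image it should be close to 0 in all three directions.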

Information Entropy (IE)
The quota used to assess the overall randomness of images is the IE, expressed as

H(s) = −Σᵢ p(sᵢ) log₂ p(sᵢ), i = 0, 1, . . . , M (37)

In Equation (37), M = 255 and p(sᵢ) is the probability of sᵢ. The closer the IE is to 8, the better the algorithm [54]. Table 12 lists the IE of several original and ciphertext images. Table 13 lists the comparison values of "Lena" (256 × 256) using different algorithms.
The IE of the ciphertext images was extremely close to 8 in Table 12, and the IE of the newly proposed encryption algorithm was higher than that of the other algorithms in Table 13. Therefore, this algorithm had very good results: the encrypted image had stronger randomness and was resistant to statistical attacks.
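Equation (37) can be sketched directly from the grey-level histogram:

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy of the grey-level distribution, Equation (37)."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                           # 0 * log2(0) contributes nothing
    return float(-np.sum(p * np.log2(p)))
```

The maximum of 8 bits is attained exactly when all 256 grey levels are equally likely, which is the behavior expected from a good cipher image.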


Local Information Entropy (LIE)
A momentous metric for analyzing the randomness of a local image is the LIE. Some non-overlapping image blocks are randomly selected, and the LIE is obtained by calculating the IE of each block and then taking the average value. The expression is

H̄(S) = Σᵢ H(Sᵢ)/k, i = 1, . . . , k

where H(Sᵢ) is the IE of sub-block Sᵢ. Let k = 30 and TB = 1936 for calculation. When the significance level is 0.05, the acceptance range of the LIE is [7.901901305, 7.903037329] [32]. Table 14 lists the LIE of different images (512 × 512). All test images passed the experiment, which shows that the local image had good randomness.

Differential Attack Analysis
An excellent encryption scenario is sensitive to the plaintext image, in other words, even though the original image has very small changes, the encrypted image can be completely different.
The NPCR and UACI are indicators used to measure whether the algorithm can resist differential attacks. When NPCR > NPCR*_α, the NPCR passes the test; when the UACI lies in [UACI*−_α, UACI*+_α], the UACI passes the test [55], where NPCR*_α, UACI*−_α, and UACI*+_α are the critical values at significance level α. The NPCR and UACI statistical tests are shown in Tables 15 and 16.
The NPCR and UACI of all test images were very close to the ideal values, and all passed the NPCR and UACI tests. Therefore, the algorithm could effectively resist differential attacks.

NIST SP 800-22 Analysis
The NIST SP 800-22 statistical test suite is published by the National Institute of Standards and Technology for testing sequences for randomness [56]. We set the significance level to 0.01 to evaluate the randomness of the ciphertext image. The results are listed in Table 17. All data passed the test, indicating that the ciphertext image had good randomness.

Time Complexity
Time complexity is an important quantitative criterion for evaluating the feasibility of an encryption algorithm: the algorithm must be easy to execute, and if its running time is too long, it does not meet real-time requirements. This paper tested the encryption time of multiple images, as presented in Table 18. The time for all 256 × 256 images was less than 1 s, and the time for 512 × 512 images was less than 3 s, which shows that the algorithm meets real-time requirements.

Anti-Noise Attack Analysis
As the cipher image is subject to various kinds of noise interference during transmission, an excellent encryption scenario should resist noise attacks. Salt-and-pepper noise was tested at intensities of 0.005%, 0.05%, and 0.1% on "Lena", as shown in Figure 7.

Even though the added noise intensity was 0.1%, the cipher image could be decrypted and the information could be viewed. This shows that the scenario resisted noise attacks.
In order to measure the anti-noise ability of the encryption algorithm more accurately, this paper tested the PSNR. For the three noise intensities, the corresponding PSNR values are presented in Table 19. When the noise intensity was 0.005%, the PSNR was 33.2311 dB; even when the noise intensity increased to 0.1%, the PSNR remained greater than 29 dB, which shows that the algorithm had a strong resistance to noise.

Discussion
The encryption algorithm based on the chaotic system and compressed sensing proposed in this paper could resist various attacks and had both security and timeliness. However, it also has certain limitations: the measurement matrix is generated by the common method, in which a chaotic sequence generated by the chaotic system directly constitutes the measurement matrix. In future research, we should make better use of the characteristics of chaotic systems to construct a better measurement matrix, making compression and encryption more convenient and obtaining better compression and encryption effects.

Conclusions
This paper proposed a new image compression and encryption scenario based on CS and two chaotic maps. The pixel transformation operation was performed before compressed sensing, which is beneficial for increasing the image reconstruction quality. In the quantization process, we made full use of the sigmoid function to quantize the matrix to the interval [0, 255]. In the scrambling process, we combined rotation with row and column scrambling, which tremendously reduced the correlation. Finally, the cipher image was created by a double XOR after the bit-cycle operation.
After a series of tests and experimental analysis, the new scenario had a huge key space and was sensitive to keys. In addition, various experiments against statistical analysis attacks were carried out in this paper such as histograms and their statistical analysis, information entropy, correlation, and local information entropy. The information entropy was very close to 8, and the correlation coefficient was close to 0. Subsequently, the algorithm was also resistant to differential attacks, brute force attacks, and noise attacks. All of the test images were close to the standard values of the NPCR and UACI and passed the statistical analysis test, and their PSNR exceeded 29 for 0.1% intensity noise. The bit sequence of the ciphertext image passed the NIST randomness test.
The significance of this paper was to combine the two chaotic systems with compressed sensing, which can not only fully utilize the practicability of chaos theory for image encryption, but can also compress ciphertext images to meet the needs of the transmission bandwidth. The encryption algorithm proposed in this paper is not only resistant to various attacks, but also has real-time performance and is a secure encryption scheme.
In the future, we will focus on further combining chaotic systems with compressed sensing and on its application in medicine and other fields.