Article

Chaos-Based Color Image Encryption with JPEG Compression: Balancing Security and Compression Efficiency

College of Software, Northeastern University, Shenyang 110167, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 838; https://doi.org/10.3390/e27080838
Submission received: 7 May 2025 / Revised: 22 June 2025 / Accepted: 27 July 2025 / Published: 6 August 2025
(This article belongs to the Section Multidisciplinary Applications)

Abstract

In recent years, most proposed digital image encryption algorithms have primarily focused on encrypting raw pixel data, often neglecting the integration with image compression techniques. Image compression algorithms, such as JPEG, are widely utilized in internet applications, highlighting the need for encryption methods that are compatible with compression processes. This study introduces an innovative color image encryption algorithm integrated with JPEG compression, designed to enhance the security of images susceptible to attacks or tampering during prolonged transmission. The research addresses critical challenges in achieving an optimal balance between encryption security and compression efficiency. The proposed encryption algorithm is structured around three key compression phases: Discrete Cosine Transform (DCT), quantization, and entropy coding. At each stage, the algorithm incorporates advanced techniques such as block segmentation, block replacement, DC coefficient confusion, non-zero AC coefficient transformation, and RSV (Run/Size and Value) pair recombination. Extensive simulations and security analyses demonstrate that the proposed algorithm exhibits strong robustness against noise interference and data loss, effectively meeting stringent security performance requirements.

1. Introduction

In the era of highly computerized communication, information security has emerged as a critical area of research and public concern. While technological advancements have significantly enhanced convenience, they have also heightened the risks of personal information leakage. To mitigate the unauthorized access and exploitation of digital images, which often contain sensitive and private data, image encryption has garnered substantial attention as a robust protective measure [1]. Furthermore, the substantial storage requirements of image data during transmission necessitate efficient compression techniques to optimize space utilization and reduce transmission latency. Among the various compression algorithms available [2,3], the JPEG standard stands out for its exceptional performance. Consequently, to simultaneously address the dual challenges of image security and transmission efficiency, this study proposes an integrated approach that combines JPEG compression with advanced encryption techniques, thereby enhancing the overall security of digital images.
As an encryption method based on nonlinear dynamical systems, chaotic encryption technology has garnered significant attention in the field of information security in recent years. Chaotic systems exhibit properties such as sensitivity to initial conditions, pseudo-randomness, and ergodicity, which align closely with the requirements of encryption algorithms for randomness and unpredictability. These characteristics make chaotic systems an ideal tool for designing efficient encryption schemes. The chaos-based image encryption algorithm was first proposed by Fridrich in 1997 [4]. Since then, numerous related papers have been proposed [5,6,7,8,9,10,11,12,13,14]. Most of them encrypted the raw pixels of the image without considering the compression, which led to the larger data volume during transmission.
The JPEG international standard [15], established in 1993, has become a cornerstone technology widely adopted across various industries. In recent years, significant research efforts have been directed toward integrating JPEG compression with encryption algorithms [16,17,18,19,20]. However, encryption and compression are inherently contradictory. Compression algorithms leverage the high correlation within the image, while encryption algorithms aim to eliminate such correlations within the image data to achieve high security. Therefore, such algorithms often struggle to balance both security and compression efficiency. Based on the sequence of applying encryption and compression techniques, the combined processes can be systematically categorized into three distinct approaches: encryption before compression, encryption during compression, and encryption after compression. This classification provides a comprehensive framework for understanding and optimizing the interplay between data security and compression efficiency in digital image processing.
(1) Encryption before compression. Cryptographic schemes based on chaotic systems, deoxyribonucleic acid (DNA) encoding, or permutation mechanisms fall squarely within this category [21,22,23,24,25,26,27,28]. Nevertheless, such approaches exhibit limited applicability in lossy compression frameworks, primarily stemming from two technical constraints. First, the initial encryption stage substantially diminishes inter-pixel spatial correlations within digital images, thereby reducing the inherent redundancy required for efficient compression. Second, the non-reversible nature of lossy compression introduces irreversible quantization errors during decoding, preventing precise reconstruction of original pixel values from cipher images.
(2) Joint encryption with compression. Encryption within compression frameworks represents an integrated approach combining cryptographic operations with compression processes [18,29,30,31,32,33,34,35,36,37]. This methodology enables encryption implementation across multiple stages of JPEG compression, including Discrete Cosine Transform (DCT), quantization, and entropy coding phases. Among existing solutions, this architecture has attracted substantial research attention due to its demonstrated performance advantages. Previous studies have proposed various implementation strategies. Li et al. [31] developed an 8 × 8 block transformation mechanism that integrates compression and encryption through alternating orthogonal transformations. While achieving 12.5% higher coding efficiency compared to conventional methods, this approach exhibits vulnerability to statistical analysis attacks due to its fixed block size constraint. He et al. [34] systematically evaluated three JPEG encryption schemes employing identical cryptographic algorithms but differing in integration points within the compression pipeline. Their comparative analysis revealed a 15–20% reduction in compression efficiency despite maintaining computational efficiency. More recently, Liang et al. [37] proposed a novel encryption framework featuring JPEG compatibility through entropy coding segment modification using cryptographic keys. However, their implementation demonstrates an inherent trade-off between security enhancement and file size expansion, with experimental results indicating a 25% average file size increase for optimal security configurations. Additionally, He et al. [38] combined a reversible data hiding scheme with an encryption algorithm to ensure the security of JPEG images, such that data extraction and information recovery can be performed independently.
(3) Post-compression encryption represents a sequential processing architecture where compression precedes cryptographic operations [39,40,41,42]. This methodology specifically targets the encryption of entropy-encoded bitstreams, maintaining compression compatibility through isomorphic codeword mapping that preserves original code lengths. While this approach demonstrates superior compression efficiency, it introduces potential format compliance issues. Specifically, the decoding process may generate blocks containing more than 64 elements, violating standard format specifications and potentially compromising system interoperability.
All of the aforementioned methods share a common issue: the trade-off between compression ratio and security. Specifically, it is difficult to simultaneously achieve high compression ratios and robust security. To address this challenge, we conducted an in-depth investigation into the various stages and different syntactic elements of the JPEG compression standard. Our study also examined the impact of encrypting these syntactic elements on both the final compression ratio and the security of the system.
This paper presents a novel integrated framework for simultaneous data compression and encryption, employing three encryption steps across three critical stages of image compression: Discrete Cosine Transform (DCT) transformation, quantization, and entropy encoding. The proposed scheme can be summarized as follows:
1. A 16 × 16 DCT transformation is initially executed during the DCT transformation phase, followed by applying a block segmentation algorithm to generate 8 × 8 sub-blocks.
2. The diffusion mechanisms are implemented through a dual approach in the quantization stage, including DC coefficient permutation and non-zero AC coefficient transformation algorithms.
3. An innovative RSV (Run/Size/Value) pair recombination algorithm is designed in the entropy encoding phase to significantly reduce spatial correlation among adjacent pixels.
4. Comparative analysis with existing methodologies demonstrates that the proposed framework achieves superior optimization between cryptographic robustness and compression efficiency, offering enhanced performance in both security and data compression domains.
The remainder of this paper is organized as follows. Section 2 provides a comprehensive overview of the JPEG compression standard and the foundational principles of chaotic systems. Section 3 elaborates on the proposed joint compression and encryption scheme, detailing its algorithmic framework and implementation specifics. Section 4 presents a systematic evaluation of the simulation experiments, including performance metrics and comparative analyses. Finally, Section 5 concludes the paper by summarizing the key findings, discussing their implications, and outlining potential directions for future research.

2. Preliminaries

2.1. JPEG Compression Standard

The JPEG (Joint Photographic Experts Group) standard, established by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT) [43], represents the first international digital image compression standard for static images. As a widely adopted image compression standard, JPEG supports lossy compression, achieving significantly higher compression ratios compared to traditional compression algorithms. In practical implementations, the JPEG image encoding process employs a combination of Discrete Cosine Transform (DCT), Huffman encoding, and run-length encoding. For color images, the general architecture of the JPEG compression standard, as illustrated in Figure 1, consists of four primary stages: color space conversion, DCT transformation, quantization, and entropy coding. These stages collectively enable efficient compression while maintaining acceptable image quality.
In the execution process of the JPEG compression algorithm, different stages generate distinct syntactic elements. Encrypting these syntactic elements has varying impacts on data security and compression efficiency. The main four compression steps are as follows:
Color Space Conversion: While the JPEG standard is capable of compressing RGB components, it demonstrates superior performance in the luminance/chrominance (YUV) color space. For color images, an initial step involves the transformation of the RGB color space into the YUV color space. It is important to note that this color space conversion process is lossless, as it solely entails a mathematical mapping of pixel values from one color representation to another without any degradation in image quality.
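For illustration, this lossless mapping can be sketched as follows. The coefficients are the standard JFIF (BT.601 full-range) values; the function names are illustrative rather than part of the proposed scheme.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """RGB (H x W x 3) -> YCbCr using the JFIF / BT.601 full-range coefficients."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    """Inverse mapping; lossless up to floating-point rounding."""
    y = ycbcr[..., 0]
    cb = ycbcr[..., 1] - 128.0
    cr = ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)
```

A round trip through the two functions recovers the original pixel values up to floating-point rounding, which is why this stage introduces no compression loss.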
Discrete Cosine Transform (DCT): DCT is a widely utilized coding technique in rate-distortion optimization for image compression. It serves as a mathematical transformation that converts spatial domain image representations into their frequency domain counterparts. This process can be conceptualized as a mapping operation that transforms an array of pixel values into a new array of frequency coefficients, effectively converting spatial light intensity data into frequency-based information. The transformation is applied to each image block independently, resulting in the concentration of energy into a limited number of coefficients, predominantly located in the upper-left quadrant of the transformed matrix. This energy compaction property is a fundamental characteristic of DCT, enabling efficient compression by prioritizing significant frequency components.
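The block transform described above can be sketched with the orthonormal DCT-II; the helper names are ours. For a constant block, all energy collapses into the single DC coefficient, illustrating the energy compaction property.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix C, so that F = C @ X @ C.T maps a
    spatial block X to its frequency-domain coefficients F."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)   # DC row scaled so that C is orthonormal
    return C

def block_dct2(X):
    """2-D DCT of one square block (transform rows, then columns)."""
    C = dct_matrix(X.shape[0])
    return C @ X @ C.T
```

Because C is orthonormal, the inverse transform is simply `C.T @ F @ C`, so the DCT itself is lossless; the loss comes only from the quantization stage that follows.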
Quantization: The quantization process involves dividing each DCT coefficient by its corresponding value in a predefined quantization table, yielding the “quantized DCT coefficients.” During this stage, frequency coefficients are transformed from floating-point representations into integers, which simplifies subsequent encoding operations. It is evident that quantization introduces a loss of precision, as the data are reduced to integer approximations, thereby discarding certain information. In the JPEG algorithm, distinct quantization tables are applied to luminance (brightness) and chrominance components, reflecting their differing accuracy requirements. The design and selection of the quantization table play a critical role in determining the overall compression ratio, making it a pivotal factor in balancing image quality and compression efficiency.
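A minimal sketch of the quantization step, using the example luminance quantization table from Annex K of the JPEG standard (the function names are ours):

```python
import numpy as np

# Example luminance quantization table from Annex K of the JPEG standard.
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def quantize(F, Q=Q_LUMA):
    """Round each DCT coefficient to the nearest multiple of its
    quantization step; this is the lossy step of JPEG."""
    return np.round(F / Q).astype(np.int32)

def dequantize(Fq, Q=Q_LUMA):
    """Approximate reconstruction performed by the decoder."""
    return Fq.astype(np.float64) * Q
```

The reconstruction error of a dequantized coefficient is bounded by half its quantization step, which is exactly the precision discarded by this stage.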
Entropy coding: For the DC coefficient, Differential Pulse Code Modulation (DPCM) is employed. Specifically, the difference between each DC value in the same image component and the preceding DC value is computed and encoded. For the AC coefficients, Run Length Coding (RLC) is utilized to further reduce data transmission by encoding sequences of zero-valued coefficients. Subsequently, Huffman coding is applied to assign shorter binary codes to symbols with higher probabilities of occurrence and longer binary codes to symbols with lower probabilities, thereby minimizing the average code length. Different Huffman coding tables are utilized for the DC and AC coefficients. Additionally, distinct Huffman coding tables are employed for luminance and chrominance components. Consequently, four Huffman coding tables are required to complete the entropy coding process.
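The DPCM and run-length steps above can be sketched as follows; for brevity only the (Run, Value) structure of the RSV pairs is modeled, and the Size field and Huffman tables are omitted.

```python
def dpcm_encode(dc_values):
    """DPCM for DC coefficients: encode each value as the difference
    from the preceding DC value of the same component (predictor 0
    before the first block)."""
    out, prev = [], 0
    for dc in dc_values:
        out.append(dc - prev)
        prev = dc
    return out

def rlc_encode(ac):
    """Run-length code the zigzag-ordered AC coefficients into
    (run, value) pairs; (15, 0) is the ZRL code for sixteen zeros and
    (0, 0) is the end-of-block (EOB) marker."""
    last = -1
    for idx, v in enumerate(ac):
        if v != 0:
            last = idx
    pairs, run = [], 0
    for v in ac[:last + 1]:
        if v == 0:
            run += 1
            if run == 16:
                pairs.append((15, 0))
                run = 0
        else:
            pairs.append((run, v))
            run = 0
    pairs.append((0, 0))  # trailing zeros are absorbed by EOB
    return pairs
```

These (run, value) pairs are the RSV data that the entropy-stage encryption of Section 3.4 later recombines.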
Decompression is the inverse process of JPEG compression encoding. The primary steps involved in JPEG decompression are illustrated in Figure 1.

2.2. Chaotic System

A chaotic system is defined as a deterministic system that exhibits seemingly random and irregular motion, characterized by behaviors that are uncertain, non-repeatable, and unpredictable. This phenomenon, known as chaos, has garnered significant attention and is widely applied in the field of image encryption due to its inherent complexity and sensitivity to initial conditions. Among the foundational discoveries in chaos theory, the Lorenz system stands as the first identified chaotic attractor, marking a pivotal milestone in the study of chaotic dynamics [44]. This system represents the earliest dissipative system demonstrating chaotic motion, as revealed through numerical experiments. It is a three-dimensional system governed by three parameters, which collectively contribute to its rich and intricate dynamical behavior.
Low-dimensional chaotic systems are characterized by the presence of only one positive Lyapunov exponent, which results in a relatively limited key space. This limitation makes them vulnerable to contemporary brute-force attacks. To address this issue, this paper employs the four-dimensional hyperchaotic Lorenz system proposed in Ref. [45]. This hyperchaotic system features two positive Lyapunov exponents, thereby enhancing the complexity and robustness of the chaotic behavior. The system’s definition is as follows:
ẋ = a(y − x) + w,
ẏ = cx − y − 2xz,
ż = 2x² − bz,
ẇ = yz − dw.
where a, b, c, and d are system parameters. When a = 10, b = 8/3, c = 28, and d = 2, the Lyapunov exponents of Equation (1) are λ1 = 2.0438, λ2 = 1.9735, λ3 = −2.1918, and λ4 = −35.4927; two of these Lyapunov exponents are positive. Under these conditions, Equation (1) exhibits hyperchaotic motion.

2.3. Analysis of Syntactic Elements for Compression and Encryption

The JPEG compression process produces two principal syntax elements: DC coefficients and AC coefficients. Encrypting the DC and non-zero AC coefficients at different phases of the pipeline affects communication efficiency, bit expansion, and perceptual security to different degrees. To compare these effects, test keys exhibiting high stochasticity within the range [0, 31] were generated by combining the BLAKE-256 hash function with the Lorenz hyperchaotic system, and each syntax element was encrypted with a simple XOR (exclusive OR) operation so that the comparison remains fair across elements. The visual results are shown in Figure 2, with a detailed analysis provided in Table 1.
As shown in Figure 2, even when only part of the syntax elements is encrypted, the detailed information of the original image is concealed in the ciphertext image. Encrypting the DC coefficients after the DCT transformation hides most of the image content but produces visible block effects, whereas encrypting the AC coefficients after the DCT transformation enhances data security while object contours remain recognizable. It is noteworthy that other syntax elements, such as the DC and AC coefficients after quantization and the DC coefficients after DPCM coding, can also be selectively encrypted, further strengthening information privacy.

3. Methodology

In the process of simultaneously performing image compression and encryption, the compression operation and the encryption operation can interfere with each other. The reason lies in the fact that the goal of encryption is to eliminate the correlation between image pixels, while compression exploits these correlations to reduce the image size. A poorly designed encryption algorithm can significantly degrade the compression ratio. Through a comprehensive analysis of the effects of different syntactic elements in the image compression process on both encryption and compression performance, we propose a joint compression and encryption algorithm based on chaos theory. This algorithm consists of the following four steps, as shown in Figure 3.

3.1. Key Scheming

Secret keys play a crucial role in cryptographic systems, serving as the cornerstone of their security. According to the widely accepted Kerckhoffs’s principle, the robustness of a cryptosystem should remain intact even if all aspects of the system, excluding the secret key, are publicly disclosed. This principle underscores the importance of key secrecy in ensuring the overall security of encryption mechanisms. In the proposed scheme, the four initial values of the Lorenz hyperchaotic system are designated as the cryptographic key. Prior to iterating the chaotic system to generate pseudo-random sequences, the BLAKE-256 hash function is utilized to compute a 256-bit hash value, denoted as H. This hash value is subsequently employed to modify the four initial values of the Lorenz hyperchaotic system, resulting in the generation of four updated initial values. These modified values are then substituted into the Lorenz hyperchaotic system for iterative computation, producing four pseudo-random key sequences, X ,   Y ,   Z , and W . This approach enhances the security and randomness of the key generation process. The specific process is as follows:
Step 1: The 256-bit hash value H is partitioned into 32 segments of 8 bits each, expressed as H = h1, h2, …, h32, where each hi falls within the range [0, 255]. Subsequently, the initial values of the Lorenz hyperchaotic system, denoted as x0, y0, z0, and w0, are modified using Equation (2). This step ensures the initialization of the chaotic system with updated values derived from the hash segments.
x0′ = x0 + (h1 ⊕ h2 ⊕ ⋯ ⊕ h8)/256,
y0′ = y0 + (h9 ⊕ h10 ⊕ ⋯ ⊕ h16)/256,
z0′ = z0 + (h17 ⊕ h18 ⊕ ⋯ ⊕ h24)/256,
w0′ = w0 + (h25 ⊕ h26 ⊕ ⋯ ⊕ h32)/256.
Step 2: The Lorenz hyperchaotic system is iterated 10,000 times using the updated initial values x0′, y0′, z0′, and w0′. Each iteration yields four values xi, yi, zi, and wi, i = 1, 2, …, 10,000. Let M and N represent the height and width of the image, respectively. These generated values are subsequently preprocessed using Equation (3) to ensure their suitability for further cryptographic operations.
Xi = ((xi − floor(xi)) × 10^14) mod (M × N),
Yi = ((yi − floor(yi)) × 10^14) mod (M × N),
Zi = ((zi − floor(zi)) × 10^14) mod (M × N),
Wi = ((wi − floor(wi)) × 10^14) mod (M × N).
Step 3: All values within the pseudo-random sequences X, Y, Z, and W are converted into binary format to govern the subsequent encryption process. Consequently, the encryption key comprises two distinct components: (1) a 256-bit hash value H, and (2) the four initial values of the Lorenz hyperchaotic system, denoted as X0, Y0, Z0, and W0. This dual-component structure ensures a robust and secure foundation for the encryption mechanism.
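Steps 1-3 can be sketched as follows. This is only an illustrative reading of the scheme: hashlib's BLAKE2s stands in for BLAKE-256, a simple Euler step stands in for a proper integrator of the hyperchaotic system, and the function names are ours.

```python
import hashlib
import math

def derive_initial_values(image_bytes, x0, y0, z0, w0):
    """Step 1: perturb the four initial values with a 256-bit hash.
    BLAKE2s is used as a readily available stand-in for BLAKE-256."""
    h = hashlib.blake2s(image_bytes).digest()  # 32 bytes = h1..h32
    def xor_group(lo, hi):
        acc = 0
        for byte in h[lo:hi]:
            acc ^= byte
        return acc / 256.0
    return (x0 + xor_group(0, 8), y0 + xor_group(8, 16),
            z0 + xor_group(16, 24), w0 + xor_group(24, 32))

def key_sequences(x0, y0, z0, w0, M, N, n_iter=10000, h=1e-3):
    """Steps 2-3: iterate the hyperchaotic system (Euler step as a
    placeholder integrator) and post-process every sample with
    Eq. (3) into integers in [0, M*N)."""
    a, b, c, d = 10.0, 8 / 3, 28.0, 2.0
    x, y, z, w = x0, y0, z0, w0
    X, Y, Z, W = [], [], [], []
    for _ in range(n_iter):
        dx = a * (y - x) + w
        dy = c * x - y - 2 * x * z
        dz = 2 * x * x - b * z
        dw = y * z - d * w
        x, y, z, w = x + h * dx, y + h * dy, z + h * dz, w + h * dw
        for seq, v in ((X, x), (Y, y), (Z, z), (W, w)):
            seq.append(int((v - math.floor(v)) * 1e14) % (M * N))
    return X, Y, Z, W
```

In the actual scheme the resulting integers are converted to binary and consumed bit by bit by the later encryption stages.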

3.2. DCT Transformation Stage Encryption

The first-stage encryption process is implemented following the Discrete Cosine Transform (DCT), which begins by partitioning the image into 16   ×   16 blocks and subsequently performing the DCT transformation on these blocks. However, since the standard quantization table specified by the JPEG compression standard is designed for 8   ×   8 blocks, each 16   ×   16 block (denoted as B16) is further divided into four 8   ×   8 blocks (denoted as B8) after the DCT transformation. As a result, the 256 coefficients of each 16   ×   16 block are distributed across four 8   ×   8 blocks. This division ensures compatibility with the standard JPEG encoding process, which can be seamlessly applied in subsequent steps. Below is the fundamental procedure of the block-splitting algorithm:
Step 1: The 16   ×   16 block (B16) is transformed into a one-dimensional sequence using zigzag scanning. This sequence is then evenly divided into two parts, denoted as B16_1 and B16_2, each containing 128 coefficients.
Step 2: The coefficients in B16_1 are allocated to four 8   ×   8 blocks. For each allocation, two bits from the pseudo-random key stream X are used to determine the target 8 × 8 block, while three additional bits from X specify the number of coefficients to be assigned to the selected block. After each allocation, the corresponding five bits in X are removed. This process is repeated until all coefficients in B16_1 are distributed, ensuring that each 8 × 8 block contains 32 coefficients.
Step 3: The coefficients in B16_2 are allocated to the four 8   ×   8 blocks using the same procedure as described in Step 2. Upon completion, each 8   ×   8 block contains a total of 64 coefficients.
Step 4: Each one-dimensional 8   ×   8 block (B8) is transformed back into a two-dimensional 8 × 8 block through the inverse zigzag scanning process.
During the block-splitting process, the number of 8   ×   8 blocks to which the DC coefficient (the first coefficient of the original B16) is assigned is recorded. This information is utilized in the encryption algorithm during the quantization stage. The above block-splitting algorithm is applied to each 16   ×   16 block, thereby completing the encryption algorithm in the DCT transformation stage.

3.3. Quantization Stage Encryption

In the second encryption stage, block permutation is performed following the quantization process to achieve encryption scrambling and diffusion effects. Specifically, the DC coefficients undergo confusion, while the signs of the non-zero AC coefficients are transformed. These operations collectively enhance the security and robustness of the encryption scheme by introducing additional layers of complexity and unpredictability.

3.3.1. Blocks Permutation

To further enhance the chaotic nature of the encrypted image, a block permutation operation is introduced after the quantization process. This operation disrupts the original order of the 8 × 8 blocks based on the pseudo-random key sequence Y before advancing to the entropy encoding stage. The permutation is a key-driven swap shuffle: each block is exchanged with a partner whose index is derived from Y. In the context of our encryption scheme, B represents the original sequence of all 8 × 8 blocks, n denotes the total number of 8 × 8 blocks, and the random integers required for permutation are derived from the pseudo-random key sequence Y. The algorithm is described in Algorithm 1.
Algorithm 1 Blocks permutation algorithm
Input: All 8 × 8 blocks after quantization; Y.
Output: The result of block permutation.
1: r ← ⌈log2 n⌉;
2: for i = 1 to n do
3:   j ← pick r bits from Y, and convert to decimal;
4:   exchange B[i] and B[j];
5:   remove the first r bits from Y;
6: end for

3.3.2. DC Coefficients Confusion

After the permutation of blocks, the index sequence of the DC coefficients within the original 16   ×   16 blocks undergoes a transformation. Consequently, the index sequence of the 8   ×   8 blocks containing the DC coefficients from the original 16   ×   16 blocks must be updated in accordance with the permutation vector S 0 . To further enhance the diffusion and obfuscation characteristics of the encryption scheme, an XOR calculation is applied to the DC coefficients, following the order they appear after the block permutation. The calculation is formulated as follows:
dc′(1) = dc(1),
dc′(i) = dc(i) ⊕ dc(i − 1) ⊕ dc(1),
where dc(i) represents the DC coefficient of the i-th 8 × 8 block and dc′(i) its encrypted value, i = 2, 3, …, (⌊(M − 1)/16⌋ + 1) × (⌊(N − 1)/16⌋ + 1). In the decryption process, the original value of the DC coefficient can be restored by Equation (5).
dc(1) = dc′(1),
dc(j) = dc′(j) ⊕ dc(j − 1) ⊕ dc(1),
where j = 2, 3, …, (⌊(M − 1)/16⌋ + 1) × (⌊(N − 1)/16⌋ + 1).
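A sketch of the confusion and its inverse, following one plausible reading of the garbled Eqs. (4) and (5) above (function names are ours):

```python
def confuse_dc(dc):
    """DC coefficient confusion (Eq. (4) as read here): XOR each DC
    value with its plaintext predecessor and the first coefficient;
    the first coefficient is kept unchanged."""
    out = [dc[0]]
    for i in range(1, len(dc)):
        out.append(dc[i] ^ dc[i - 1] ^ dc[0])
    return out

def restore_dc(enc):
    """Inverse (Eq. (5) as read here): each plaintext value is
    recovered from the already-restored predecessor, in order."""
    dc = [enc[0]]
    for j in range(1, len(enc)):
        dc.append(enc[j] ^ dc[j - 1] ^ dc[0])
    return dc
```

Because the XOR chain only references already-recovered values, decryption proceeds strictly left to right with no look-ahead.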

3.3.3. Non-Zero AC Coefficients Sign Transformation

After quantization, the non-zero AC coefficients are identified and extracted. The encryption effect is achieved by altering the order of these non-zero AC coefficients using the pseudo-random key stream W , as well as modifying their values through sign transformation. This step preserves the distribution of zero-valued AC coefficients, ensuring that the length of the encrypted bitstream remains largely unaffected. The transformation is computed as follows, as defined in Equation (6):
ac′k = (−1) × ack,
where ack represents the value of the k-th non-zero AC coefficient and ac′k its transformed value. Once the sign transformation of the non-zero AC coefficients is completed, these coefficients are restored to their original positions. This ensures that the structural integrity of the data is maintained while achieving the desired encryption effect.
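The sign transformation can be sketched as follows. Eq. (6) negates unconditionally; the optional key-bit argument models the keyed selection by W mentioned above and is our assumption.

```python
def sign_transform(ac, keybits=None):
    """Flip the signs of non-zero AC coefficients (Eq. (6)); zeros are
    untouched, so the run-length structure is preserved. If key bits
    are given, the flip is conditioned on them (assumed keyed variant;
    Eq. (6) itself negates unconditionally)."""
    out, k = [], 0
    for v in ac:
        if v == 0:
            out.append(0)
        else:
            flip = 1 if keybits is None else keybits[k % len(keybits)]
            out.append(-v if flip else v)
            k += 1
    return out
```

Applying the transformation twice with the same key is the identity, so the same routine serves for decryption.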

3.4. Entropy Coding Stage Encryption

The primary objective of the encryption operation in the entropy encoding stage is to strengthen the correlation removal capability of the encryption scheme while preserving format consistency. Although the encryption algorithms in the first two stages contribute to security, they do not effectively eliminate the correlation among 8 × 8 blocks. To address this limitation, an RSV (Run/Size/Value) recombination strategy is implemented during the entropy coding stage. This strategy enhances the decorrelation of the data, further improving the security and robustness of the encryption scheme.
Prior to entering the entropy encoding stage, zigzag scanning is applied to each 8 × 8 block to separate the DC and AC coefficients. For the DC coefficients, Differential Pulse Code Modulation (DPCM) is employed. For the AC coefficients, Run Length Coding (RLC) is utilized, converting them into data pairs known as RSV (Run/Size/Value) pairs. These RSV pairs are then subjected to Huffman encoding to generate variable-length codes. To enhance security, we propose encrypting these RSV pairs through an RSV pair recombination algorithm, the specific realization of which is given in Algorithm 2. This recombination process is governed by the pseudo-random key sequence Z. The steps of the algorithm are outlined as follows:
Step 1: AC coefficients with a value of zero are referred to as zero-value blocks, meaning their RSV pairs consist solely of the end identifier (0, 0). Blocks containing non-zero AC coefficients are termed non-zero blocks, indicating the presence of non-zero RSV pairs. For non-zero blocks, all end identifiers are removed.
Step 2: All blocks are evenly divided into two parts, denoted as B1 and B2, with each block in B1 corresponding sequentially to a block in B2.
Step 3: A block from B1 and its corresponding block from B2 are sequentially processed, and the following conditions are evaluated:
1.  If both blocks are zero-valued blocks, their values remain unchanged.
2.  If one block is zero-valued and the other is non-zero-valued, the two blocks are swapped. Subsequently, all RSV pairs of the new non-zero block undergo cyclic shifting to disrupt their original distribution pattern. The shift amount is determined by the corresponding key sequence Z.
3.  If both blocks are non-zero blocks, the number of RSV pairs in the first block is denoted as RSVnum. All RSV pairs from both blocks are concatenated and cyclically shifted based on the key sequence Z. The difference between the number of RSV pairs newly assigned to the first block and RSVnum is recorded as T. Initially, the division is performed according to the original number of RSV pairs in each block, with T = 0. Since the number of AC coefficients in each block cannot exceed 63 (to maintain format compatibility and avoid entropy decoding issues), the number of AC coefficients in the newly assigned blocks must be verified. If the number of AC coefficients in either block exceeds this limit, the division point is adjusted as follows:
(1) If neither block exceeds the limit, the division is performed at the original cutting point, and T = 0 .
(2) If the first block exceeds the limit, the division point is shifted left until both blocks are within the acceptable range. The shift amount is recorded as a negative value.
(3) If the second block exceeds the limit, the division point is shifted right until both blocks are within the acceptable range. The shift amount is recorded as a positive value.
The value of T is converted into an RSV pair, and the corresponding relationship between the T value and the RSV pair is established using a variable-length integer coding table, as shown in Figure 4. The RSV pair is appended to the end of the first block. Because this extra RSV pair must also fit in the block, the out-of-range condition applied here is that the number of AC coefficients should not exceed 62.
To better illustrate the RSV pair recombination process, a simple example is provided in Figure 5. In this example, B1 and B2 are two non-zero AC blocks with the original end identifier removed. Loc1 and Loc2 represent the results of the RSV pair recombination.
Step 4: Repeat the steps outlined in Step 3 until all block pairs in B1 and B2 have been processed.
Step 5: Add end identifiers to all non-zero blocks after recombination. If the number of AC coefficients in the block reaches the upper limit, no additional end identifiers will be added.
Step 6: Rearrange the positions of all blocks in a circular manner, with the displacement controlled by the key sequence Z. This rearrangement only moves the RSV pairs within each block, while the position of the encoded DC coefficient remains unchanged. The RSV recombination strategy is shown in Algorithm 2.
Algorithm 2 RSV recombination strategy
Input: All RSV pairs; Z.
Output: The result of RSV recombination.
1: Remove the end identifier of each block, except blocks consisting merely of [0, 0];
2: The adjacent blocks are denoted B1 and B2;
3: if both B1 and B2 are zero-valued then
4:   Remain unchanged;
5: else if exactly one of B1 and B2 is zero-valued then
6:   Swap B1 and B2 and concatenate them in Lac;
7:   All RSV pairs of Lac are shifted cyclically according to Z;
8: else
9:   Calculate the number of RSV pairs in B1, denoted as RSVnum;
10:   Concatenate B1 and B2 in Lac;
11:   All RSV pairs of Lac are shifted cyclically according to Z;
12:   Fn ← the number of AC coefficients in the first block;
13:   Sn ← the number of AC coefficients in the second block;
14:   times ← the number of moves of the split point;
15:   if Fn ≤ 63 and Sn ≤ 63 then
16:     Split Lac at the original point;
17:     T ← 0;
18:     The RSV pair corresponding to T is appended to the rear of the first block;
19:   else if Fn > 63 then
20:     The split point is relocated left until Fn ≤ 63;
21:     T ← −times;
22:     The RSV pair corresponding to T is appended to the rear of the first block;
23:   else if Sn > 63 then
24:     The split point is relocated right until Sn ≤ 63;
25:     T ← times;
26:     The RSV pair corresponding to T is appended to the rear of the first block;
27: Insert the end identifier in each block;
28: All RSV pairs in each block are shifted cyclically according to Z.
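As a concrete illustration, the non-zero/non-zero branch of this strategy can be sketched in Python as follows. The function and variable names (e.g., `recombine`) are illustrative rather than the authors' implementation; an RSV pair is modeled as a `(run, value)` tuple, each representing `run + 1` AC coefficients.

```python
def ac_count(pairs):
    """Number of AC coefficients represented by a list of (run, value) pairs."""
    return sum(run + 1 for run, _ in pairs)

def recombine(b1, b2, shift):
    """Concatenate two non-zero blocks, cyclically shift the combined list
    by `shift` (taken from the key sequence Z), then split it so that
    neither resulting block exceeds 63 AC coefficients.
    Returns (new_b1, new_b2, T), where T records the split-point displacement."""
    rsv_num = len(b1)                    # RSV pairs originally in B1
    lac = b1 + b2
    shift %= len(lac)
    lac = lac[shift:] + lac[:shift]      # cyclic shift driven by the key
    cut = rsv_num                        # start from the original split point
    # Move the split point left/right until both halves fit in 63 coefficients
    # (a sketch: pathological inputs where both halves overflow are not handled).
    while ac_count(lac[:cut]) > 63:
        cut -= 1                         # first block too large: shift left
    while ac_count(lac[cut:]) > 63:
        cut += 1                         # second block too large: shift right
    T = cut - rsv_num                    # displacement of the split point
    return lac[:cut], lac[cut:], T
```

Here `T` is exactly the displacement described in the text, ready to be encoded as an extra RSV pair via a variable-length integer coding table and appended to the first block.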

4. Simulation Results

In this section, we assess the encryption security and compression performance of the proposed encryption scheme. For the experimental setup, the parameters of the Lorenz hyperchaotic system are defined as a = 10, b = 8/3, c = 28, and d = 2. The initial states of the system are set to x0 = 0, y0 = 11, z0 = 14, and w0 = 5. Experiments are conducted on the widely used USC-SIPI image database [46]. A total of twenty color images, depicted in Figure 6, are selected as experimental samples. The experiments are performed in a Python 3.9 environment on a 64-bit operating system with an Intel Core i7-10700K processor running at 2.90 GHz and 16 GB of RAM. The performance of the proposed scheme is compared with the scheme in Ref. [31] and the two schemes in Ref. [18], providing a comprehensive evaluation of its effectiveness.
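For reference, the generation of pseudo-random key material from a 4D hyperchaotic system can be sketched as below. The specific system equations, the Euler integration step, and the state-to-byte mapping shown here are common choices assumed for illustration, not necessarily the exact form used in the paper.

```python
def lorenz_hyper(n, x=0.0, y=11.0, z=14.0, w=5.0,
                 a=10.0, b=8 / 3, c=28.0, d=2.0, h=0.001, burn=100):
    """Iterate a 4D hyperchaotic Lorenz-type system (assumed form) with a
    simple Euler step and return n states after `burn` transient steps."""
    states = []
    for i in range(burn + n):
        dx = a * (y - x) + w
        dy = c * x - y - x * z
        dz = x * y - b * z
        dw = -y * z + d * w
        x, y, z, w = x + h * dx, y + h * dy, z + h * dz, w + h * dw
        if i >= burn:
            states.append((x, y, z, w))
    return states

def key_bytes(states):
    """One hypothetical mapping from chaotic states to byte-valued key material."""
    return [int(abs(s[0]) * 1e6) % 256 for s in states]
```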

4.1. Encryption Security

Image encryption algorithms necessitate a dual assurance of both perceptual security and cryptographic security. Perceptual security pertains to the degree of perceptual distortion introduced in the cipher image relative to the plain image. This distortion is quantitatively assessed using metrics such as the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). High PSNR values and SSIM values close to unity indicate less distortion, whereas low PSNR values and SSIM values far from unity signify greater distortion and hence stronger perceptual security of the encrypted image.
Furthermore, the mean square error (MSE) serves as an objective measure from which the PSNR is computed. Given a plain image $I$ and its corresponding cipher image $K$, both of size $M \times N$, the MSE and PSNR are defined as follows:
$$MSE = \frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ I(i,j) - K(i,j) \right]^2.$$
$$PSNR = 10 \times \log_{10} \left( \frac{MAX_I^2}{MSE} \right).$$
where $MAX_I$ is the maximum possible pixel value of the image. If each pixel is represented by an 8-bit binary value, $MAX_I$ is 255.
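The two metrics can be computed directly from their definitions; a minimal sketch for 8-bit images represented as nested lists (not the authors' evaluation code):

```python
import math

def mse(I, K):
    """Mean square error between two equal-size images."""
    m, n = len(I), len(I[0])
    return sum((I[i][j] - K[i][j]) ** 2
               for i in range(m) for j in range(n)) / (m * n)

def psnr(I, K, max_i=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(I, K)
    return float('inf') if e == 0 else 10 * math.log10(max_i ** 2 / e)
```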
The Structural Similarity Index (SSIM) is a metric used to evaluate the similarity between two images by assessing three key aspects: luminance (brightness), contrast, and structure. Let $x$ represent the plain image and $y$ represent the cipher image. The SSIM is defined as follows:
$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}.$$
where $\mu_x$ and $\mu_y$ represent the mean values of $x$ and $y$, respectively; $\sigma_x$ and $\sigma_y$ denote the standard deviations of $x$ and $y$; $\sigma_{xy}$ is the covariance of $x$ and $y$; and $c_1$ and $c_2$ are constants that prevent division by zero.
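A minimal single-window SSIM sketch over flattened pixel lists, following the formula above. The constants $c_1 = (0.01L)^2$ and $c_2 = (0.03L)^2$ for dynamic range $L$ are the standard choices, assumed here since the text does not specify them.

```python
def ssim(x, y, L=255):
    """Global (single-window) SSIM between two flattened pixel sequences."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

An image compared with itself yields SSIM = 1; a strongly distorted cipher image drives the value toward 0.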
In the proposed encryption method, we encrypted twenty images using different quality factor (QF) values. The average PSNR and SSIM values for these encrypted images are presented in Figure 7. In contrast, the first scheme described in Ref. [18] employs encryption only during the transformation and quantization stages. As evident from Figure 7, our proposed method yields lower PSNR and SSIM values than the two schemes in Ref. [18], indicating that our encryption method introduces greater perceptual distortion and thus stronger perceptual security.

4.1.1. Brute-Force Attack

A brute-force attack involves attempting all possible key values in order to recover the actual key and successfully execute the attack. To ensure image security, the key space of a cryptosystem must be sufficiently large. In our encryption scheme, the encryption keys consist of a 256-bit random hash value, denoted as $\sigma$, generated by the BLAKE-256 algorithm, and the four initial states ($x_0$, $y_0$, $z_0$, and $w_0$) of the Lorenz chaotic system. Given a computational precision of $10^{-14}$, each initial value contributes $10^{14}$ possibilities, so the four initial values together yield a key space of $10^{56}$. The total key space size is therefore $2^{256} \times 10^{56}$, which is computationally infeasible for attackers to search exhaustively. Additionally, for different plaintext images, BLAKE-256 produces distinct 256-bit random hash values and the initial values of the Lorenz system vary, further increasing the difficulty of a potential attack.
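The claimed key-space size is easy to verify numerically:

```python
# Back-of-envelope check of the key space: a 256-bit hash value
# plus four initial values with 10^14 possibilities each.
hash_space = 2 ** 256
initial_space = (10 ** 14) ** 4        # = 10^56
total = hash_space * initial_space
print(total.bit_length())              # → 443 (i.e., ≈ 443 bits of key space)
```

This comfortably exceeds the 128-bit threshold commonly regarded as secure against exhaustive search.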

4.1.2. Key Sensitivity Analysis

A slight change in the encryption key leads to significant variations in both the generated key sequence and the resultant cipher image during the encryption and decryption processes. This property ensures that minor alterations in the key yield substantially different encrypted and decrypted outputs. Key sensitivity can be evaluated based on two criteria:
(1) A small variation in the encryption key, when applied to the same plaintext image, should produce a drastically different ciphertext image.
(2) Decryption with a minimally altered key must not reveal the original content: the result should be unrecognizable, noise-like output statistically indistinguishable from random noise.
In the proposed scheme, the encryption keys consist of two components: (1) a 256-bit random hash value σ derived from the BLAKE-256 hash function, which is generated using both the plaintext image and a random value; and (2) the four initial states x0, y0, z0, and w0. During the key sensitivity analysis, the 256-bit hash value σ remains constant as there are no changes to the plaintext image. Therefore, to assess the impact of key sensitivity, we focus on modifying these four initial states, which allows us to observe the variations in the encryption and decryption results.
To evaluate the first case of key sensitivity, we introduce slight modifications to the four initial values x0 = 0, y0 = 11, z0 = 14, and w0 = 5 by changing each value in turn. Specifically, we generate four distinct keys as follows: key1 (x0 = 0.00000001, y0 = 11, z0 = 14, w0 = 5), key2 (x0 = 0, y0 = 11.00000001, z0 = 14, w0 = 5), key3 (x0 = 0, y0 = 11, z0 = 14.00000001, w0 = 5), and key4 (x0 = 0, y0 = 11, z0 = 14, w0 = 5.00000001). These keys are then used to encrypt various images in accordance with the proposed algorithm.
The correlation coefficients between the ciphertexts produced by the original key and each of the slightly altered keys are presented in Table 2. The observed low correlation coefficients demonstrate that even small changes in the initial key lead to significant variations in the encrypted images, thereby confirming that the proposed scheme satisfies the first condition of key sensitivity.
For the second case of key sensitivity, we perform decryption using both the correct key and a slightly altered key on the ciphered Mandrill image. The decrypted results are shown in Figure 8. As expected, the decryption using the slightly modified key does not produce the correct image, highlighting the sensitivity of the scheme to key changes. This demonstrates that even a minor variation in the key prevents the recovery of the original image, thereby satisfying the second condition of key sensitivity.

4.1.3. Differential Attack

Differential attack is a type of chosen-plaintext attack and serves as a crucial method for analyzing the plaintext sensitivity of encryption algorithms. In this type of attack, the adversary aims to identify statistical relationships by calculating the differences between a plaintext image and its corresponding ciphertext [47]. If the algorithm is vulnerable, small changes in the plaintext should result in noticeable patterns or predictable differences in the encrypted output.
The Number of Pixel Change Ratio (NPCR) and Unified Average Change in Intensity (UACI) are two important statistical parameters widely used to evaluate the robustness of image encryption schemes against differential attacks. NPCR measures the ratio of differing pixel values between two ciphertexts generated from the original plaintext image and the altered plaintext image, where a single pixel value is modified. Specifically, it quantifies the proportion of pixels in the ciphertext that have changed when the plaintext is slightly modified. A higher NPCR value indicates a greater degree of change between the ciphertexts, signifying a more secure encryption scheme that resists differential attacks effectively. UACI calculates the average change in intensity between the pixels of two ciphertexts derived from the original and the slightly altered plaintext images. This metric measures the average change density across the entire encrypted image. For higher security, the value of UACI should be close to 33%, which represents a balanced and significant change in the ciphertext corresponding to slight changes in the plaintext. The calculations of these two parameters are defined as follows:
$$D(i,j) = \begin{cases} 0, & \text{if } C_1(i,j) = C_2(i,j) \\ 1, & \text{if } C_1(i,j) \neq C_2(i,j) \end{cases}$$
$$NPCR: \quad N(C_1, C_2) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} D(i,j)}{M \times N} \times 100\%.$$
$$UACI: \quad U(C_1, C_2) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left| C_1(i,j) - C_2(i,j) \right|}{255 \times M \times N} \times 100\%.$$
where $C_1$ and $C_2$ are two cipher images corresponding to two plain images differing by a single pixel, with $1 \leq i \leq M$ and $1 \leq j \leq N$.
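Both metrics follow directly from the definitions; a minimal sketch over two equal-size cipher images stored as nested lists:

```python
def npcr_uaci(C1, C2):
    """Return (NPCR %, UACI %) for two equal-size 8-bit cipher images."""
    M, N = len(C1), len(C1[0])
    # NPCR: fraction of positions whose pixel values differ
    diff = sum(1 for i in range(M) for j in range(N) if C1[i][j] != C2[i][j])
    # UACI: average absolute intensity difference, normalized by 255
    inten = sum(abs(C1[i][j] - C2[i][j]) for i in range(M) for j in range(N))
    npcr = diff / (M * N) * 100
    uaci = inten / (255 * M * N) * 100
    return npcr, uaci
```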
The mean NPCR and UACI results for our encryption algorithm are listed in Table 3. For comparison, the NPCR and UACI values for the cipher images generated by the two encryption schemes in Ref. [18] and the scheme in Ref. [31] are included in Table 4. The encryption scheme in Ref. [18] uses key generation mechanisms that rely on the plain images. The experimental results show that all of the evaluated encryption schemes can resist differential attacks.

4.1.4. Statistical Attack

Correlation analysis involves examining the relationship between two or more variables to assess their degree of correlation. In the context of images, adjacent pixels typically exhibit high correlation, meaning that a pixel often contains information about its neighboring pixels. Attackers can exploit this by analyzing specific data segments between plaintext and ciphertext images, attempting to predict the plaintext without knowledge of the encryption key. As a result, image encryption schemes must effectively reduce the correlation between image pixels. In Table 5 and Table 6, we measure correlation in three directions: horizontal, vertical, and diagonal. The correlation coefficients for various images are provided, with the Quality Factor (QF) set to 50.
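The correlation coefficient for a given direction is the Pearson correlation over sampled adjacent-pixel pairs; a sketch for the horizontal direction (the vertical and diagonal variants differ only in how the pairs are collected):

```python
import math

def horizontal_correlation(img):
    """Pearson correlation of horizontally adjacent pixel pairs in a 2D image."""
    xs, ys = [], []
    for row in img:
        for j in range(len(row) - 1):
            xs.append(row[j])
            ys.append(row[j + 1])       # horizontal neighbor
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / math.sqrt(vx * vy)
```

A natural image yields values near 1, whereas a well-encrypted image should yield values near 0.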
As shown in Table 6, the correlation between adjacent pixels in the image encrypted with our algorithm is significantly lower than that of the other methods. This reduction is attributed to the additional entropy-coding encryption stage in our process. Compared with the three reference schemes, our algorithm effectively diminishes the correlation between adjacent pixels, particularly in the vertical direction. Table 7 presents the correlation values for the original ‘Mandrill’ image and the ciphertexts produced by various encryption schemes.

4.1.5. Information Entropy Analysis

Information entropy is an important metric used to quantify the randomness of the pixels in an image [48]. For an encrypted 8-bit grayscale channel, an information entropy close to eight indicates the effectiveness of the proposed encryption algorithm. The information entropies obtained with different algorithms on distinct images are listed in Table 8; the information entropy of our encryption scheme is higher than that of Ref. [31]. Moreover, the result of the proposed algorithm is close to, and slightly higher than, that of Ref. [18], which demonstrates, to some extent, the resistance of our algorithm to ciphertext-only attacks.
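Shannon entropy can be computed from the pixel histogram; a minimal sketch (an ideal 8-bit cipher channel approaches the maximum of 8 bits):

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy in bits of a flattened pixel sequence."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(pixels).values())
```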

4.2. Compression Performance

Our encryption scheme is integrated into the lossy JPEG compression process, with compression performance influenced by the Quality Factor (QF). A higher QF value results in less compression and thus a larger bitstream. To evaluate compression performance, we use the bits per pixel (BPP) metric, averaged over the reconstructed images; the experimental images are nine of those shown in Figure 6. In Figure 9, we present the QF-BPP curves for various encryption schemes, with QF values ranging from 10 to 90. From the average QF-BPP curves shown in Figure 9, the compression performance of our proposed scheme is better than that of Ref. [31] and the second scheme in Ref. [18], but worse than that of the first scheme in Ref. [18]. This is attributable to the additional RSV (Run/Size and Value) pairs introduced by our approach, which increase the bitstream size, whereas the first scheme in Ref. [18] performs no encryption during the entropy coding stage. The second scheme in Ref. [18] adds an additional RSV pair for each block during entropy coding encryption, along with an EOB (End-of-Block) embedding step, resulting in a larger bitstream. In contrast, our scheme embeds only half as many additional RSV pairs, leading to better compression performance than the second scheme in Ref. [18]. Moreover, the trade-off between compression (measured as BPP) and image quality (measured as PSNR and SSIM) is plotted in Figure 10. As BPP increases, the proposed strategy produces higher-quality compressed images than JPEG, Ref. [18], and Ref. [31].
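BPP is simply the compressed bitstream size in bits divided by the pixel count; for instance:

```python
def bpp(bitstream_bytes, width, height):
    """Bits per pixel of a compressed bitstream for a width x height image."""
    return bitstream_bytes * 8 / (width * height)

# e.g. a 32 KiB bitstream for a 512 x 512 image costs exactly 1 bit per pixel
print(bpp(32768, 512, 512))   # → 1.0
```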

4.3. Computational Complexity Analysis

A practical algorithm should have low time complexity; in this study, we combined JPEG compression with encryption to balance data security and compression efficiency. The time complexity of the algorithm comprises three parts: compression, key generation, and encryption. The JPEG compression algorithm, presented thoroughly in Section 2.1, consists of four steps with a computational complexity of approximately $O(MN)$, where $M$ and $N$ denote the image dimensions. The pseudo-random sequences are generated by the Lorenz hyperchaotic system with time complexity $O(n)$, where $n$ is the number of iterations of the chaotic system. The encryption algorithm mainly contains five stages: coefficient distribution, block permutation, DC coefficient encryption, non-zero AC coefficient sign transformation, and RSV pair separate–permute–restore–permute, whose combined time complexity is $O(MN + \frac{MN}{64} + m + k + \frac{r}{2}s^2)$, where $m$ is the number of DC coefficients, $k$ is the number of AC coefficients, $r$ is the number of non-zero $8 \times 8$ blocks, and $s$ is the number of RSV pairs in each $8 \times 8$ block. Therefore, the overall computational complexity is $O(MN + n + MN + \frac{MN}{64} + m + k + \frac{r}{2}s^2)$, which simplifies to approximately $O(2MN + n + \frac{r}{2}s^2)$.

4.4. The Effect of Encryption Strategy on Compression

This section presents a detailed analysis of the effect of different encryption methods on compression performance, including file size and execution time. The compression performance should be fully accounted for in the stage of designing the encryption algorithm.

4.4.1. The Effect of DCT Permutation on Compression

A coefficient permutation strategy based on the generated key is proposed in the DCT transformation phase. This subsection presents a comprehensive analysis of the effect of this encryption method on compression, using a common test image (Mandrill) to measure the variation in bit increase and time overhead across different QF values, as plotted in Figure 11 and Figure 12. As expected, the file size and computational overhead grow as QF increases. In these figures, the blue solid line is the compression baseline, and the green solid line shows the result of coefficient permutation. The experimental results demonstrate that the proposed scheme incurs only a slight increase in bit size and computation time.

4.4.2. The Effect of Shuffling on the DPCM

The block permutation and DC coefficient encryption performed in this study may affect the subsequent DPCM step. In this subsection, we verify the algorithm's feasibility using the Mandrill image. As depicted in Figure 11 and Figure 12, the red solid line represents this encryption result, and its values are essentially the same as those of the compression baseline (the blue solid line). The results indicate that the encryption algorithm is effective while having a negligible impact on storage space and communication costs.

5. Conclusions

In this study, we propose a novel scheme that combines compression and encryption. We begin by designing a block splitting method that divides each 16 × 16 block into four 8 × 8 blocks, based on the 16 × 16 Discrete Cosine Transform (DCT), to ensure compatibility with the standard quantization table of the JPEG compression standard. Next, block permutation is applied to disrupt the correlation between blocks. The DC coefficient and non-zero AC coefficients are encrypted using pseudo-random key sequences generated by the Lorenz hyperchaotic system. Additionally, we introduce RSV pair recombination during entropy coding to further break the correlation between adjacent pixels, thereby enhancing data security. All RSV pairs are split into two parts within the blocks. Each block in the first part is paired with a corresponding block in the second part, and the RSV pairs between these two blocks are permuted and recombined. Moreover, an extra RSV pair, representing the displacement of the segmentation point, is embedded within each block of the first part. Compared to existing schemes, only half of the additional RSV pairs are embedded, while data security and running efficiency are enhanced. Extensive experiments and various evaluation metrics show that our approach significantly improves compression efficiency and substantially reduces pixel correlation, providing immunity against various attacks. However, our algorithm does not fully disrupt intra-block pixel correlations, which limits the information entropy of the ciphertext images. In future work, we will address this limitation by adding, removing, or modifying a portion of the pixels to break the intra-block correlations.

Author Contributions

W.Z. conceived the research idea and designed the algorithm; X.Z. implemented the algorithm and analyzed the results; M.X. and J.Y. contributed to the literature research and manuscript revision; H.Y. provided overall guidance; and Z.Z. reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Natural Science Foundation of Liaoning Province of China (Grant No. 2022-MS-123) and the National Natural Science Foundation of China (Grant Nos. 61977014, 61902056, and 61603082).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We thank the reviewers for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ghadirli, H.; Nodehi, A.; Enayatifar, R. An overview of encryption algorithms in color images. Signal Process. 2019, 164, 163–185. [Google Scholar] [CrossRef]
  2. Khan, J.; Kayhan, S. Chaos and compressive sensing based novel image encryption scheme. J. Inf. Secur. Appl. 2021, 58, 102711. [Google Scholar] [CrossRef]
  3. Xu, Q.; Sun, K.; Cao, C.; Zhu, C. A fast image encryption algorithm based on compressive sensing and hyperchaotic map. Opt. Lasers Eng. 2019, 121, 203–214. [Google Scholar] [CrossRef]
  4. Fridrich, J. Image encryption based on chaotic maps. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 2, pp. 1105–1110. [Google Scholar]
  5. Kumar, S.; Sharma, D. Image scrambling encryption using chaotic map and genetic algorithm: A hybrid approach for enhanced security. Nonlinear Dyn. 2024, 112, 12537–12564. [Google Scholar] [CrossRef]
  6. Ustun, D.; Sahinkaya, S.; Atli, N. Developing a secure image encryption technique using a novel S-box constructed through real-coded genetic algorithm’s crossover and mutation operators. Expert Syst. Appl. 2024, 256, 124904. [Google Scholar] [CrossRef]
  7. Zhu, S.; Deng, X.; Zhang, W.; Zhu, C. Image encryption scheme based on newly designed chaotic map and parallel DNA coding. Mathematics 2023, 11, 231. [Google Scholar] [CrossRef]
  8. Ghazvini, M.; Mirzadi, M.; Parvar, N. A modified method for image encryption based on chaotic map and genetic algorithm. Multimed. Tools Appl. 2020, 79, 26927–26950. [Google Scholar] [CrossRef]
  9. Lu, J.; Zhang, J.; Hao, D.; Zhao, R.; Mou, J.; Zhang, Y. Text visualization encryption based on coordinate substitution and chaotic system. Chaos Solitons Fractals 2024, 184, 115001. [Google Scholar] [CrossRef]
  10. Pandey, K.; Sharma, D. Novel image encryption algorithm utilizing hybrid chaotic maps and Elliptic Curve Cryptography with genetic algorithm. J. Inf. Secur. Appl. 2025, 89, 103995. [Google Scholar] [CrossRef]
  11. Ullah, S.; Liu, X.; Waheed, A.; Zhang, S. S-box using fractional-order 4D hyperchaotic system and its application to RSA cryptosystem-based color image encryption. Comput. Stand. Interfaces 2025, 93, 103980. [Google Scholar] [CrossRef]
  12. Teng, L.; Wang, X.; Yang, F.; Xian, Y. Color image encryption based on cross 2D hyperchaotic map using combined cycle shift scrambling and selecting diffusion. Nonlinear Dyn. 2021, 105, 1859–1876. [Google Scholar] [CrossRef]
  13. Mansouri, A.; Sun, P.; Lv, C.; Zhu, Y.; Zhao, X.; Ge, H.; Sun, C. A secure medical image encryption algorithm for IoMT using a Quadratic-Sine chaotic map and pseudo-parallel confusion-diffusion mechanism. Expert Syst. Appl. 2025, 270, 126521. [Google Scholar] [CrossRef]
  14. Rohhila, S.; Singh, A.K. Using binary hash tree-based encryption to secure a deep learning model and generated images for social media applications. Future Gener. Comput. Syst. 2025, 166, 107722. [Google Scholar] [CrossRef]
  15. ISO/IEC 10918-1:1994; Information technology—Digital compression and coding of continuous-tone still images—Part 1: Requirements and guidelines. ISO: Geneva, Switzerland, 1994.
  16. Abdmouleh, M.; Khalfallah, A.; Bouhlel, M. A novel selective encryption scheme for medical images transmission based-on JPEG compression algorithm. Procedia Comput. Sci. 2017, 112, 369–376. [Google Scholar] [CrossRef]
  17. Li, Z.; Sun, X.; Du, C.; Ding, Q. JPEG algorithm analysis and application in image compression encryption of digital chaos. In Proceedings of the 2013 Third International Conference on Instrumentation, Measurement, Computer, Communication and Control, Shenyang, China, 21–23 September 2013; pp. 185–189. [Google Scholar]
  18. Li, P.; Lo, K.T. Joint image encryption and compression schemes based on 16 × 16 DCT. J. Vis. Commun. Image Represent. 2019, 58, 12–24. [Google Scholar] [CrossRef]
  19. Ahmed, F.; Rehman, M.; Ahmad, J.; Khan, M.; Boulila, W.; Srivastava, G.; Lin, J.; Buchanan, W. A DNA based colour image encryption scheme using a convolutional autoencoder. ACM Trans. Multimed. Comput. Commun. Appl. 2023, 19, 128. [Google Scholar] [CrossRef]
  20. Qin, C.; Hu, J.; Li, F.; Qian, Z.; Zhang, X. JPEG image encryption with adaptive DC coefficient prediction and RS pair permutation. IEEE Trans. Multimed. 2022, 25, 2528–2542. [Google Scholar] [CrossRef]
  21. Wang, X.; Li, Y. Chaotic image encryption algorithm based on hybrid multi-objective particle swarm optimization and DNA sequence. Opt. Lasers Eng. 2021, 137, 106393. [Google Scholar] [CrossRef]
  22. Wang, B.; Zhang, B.; Liu, X. An image encryption approach on the basis of a time delay chaotic system. Optik 2021, 225, 165737. [Google Scholar] [CrossRef]
  23. Wang, X.; Chen, S.; Zhang, Y. A chaotic image encryption algorithm based on random dynamic mixing. Opt. Laser Technol. 2021, 138, 106837. [Google Scholar] [CrossRef]
  24. Munoz-Guillermo, M. Image encryption using q-deformed logistic map. Inf. Sci. 2021, 552, 352–364. [Google Scholar] [CrossRef]
  25. Yildirim, M. A color image encryption scheme reducing the correlations between R, G, B components. Optik 2021, 237, 166728. [Google Scholar] [CrossRef]
  26. Kurihara, K.; Shiota, S.; Kiya, H. An encryption-then-compression system for jpeg standard. In Proceedings of the 2015 Picture Coding Symposium (PCS), Cairns, Australia, 31 May–3 June 2015; pp. 119–123. [Google Scholar]
  27. Kurihara, K.; Kikuchi, M.; Imaizumi, S.; Shiota, S.; Kiya, H. An encryption-then-compression system for jpeg/motion jpeg standard. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2015, 98, 2238–2245. [Google Scholar] [CrossRef]
  28. Nimbokar, K.; Sarode, M.; Ghonge, M. A survey based on designing an efficient image encryption-then-compression system. Int. J. Comput. Appl. 2014, 975, 8887. [Google Scholar]
  29. Chaudhary, P.; Gupta, R.; Singh, A.; Majumder, P.; Pandey, A. Joint image compression and encryption using a novel column-wise scanning and optimization algorithm. Procedia Comput. Sci. 2020, 167, 244–253. [Google Scholar] [CrossRef]
  30. Li, P.; Lo, K. Joint image compression and encryption based on alternating transforms with quality control. In Proceedings of the 2015 Visual Communications and Image Processing (VCIP), Singapore, 13–16 December 2015; pp. 1–4. [Google Scholar]
  31. Li, P.; Lo, K. Joint image compression and encryption based on order-8 alternating transforms. J. Vis. Commun. Image Represent. 2017, 44, 61–71. [Google Scholar] [CrossRef]
  32. Li, P.; Lo, K. A content-adaptive joint image compression and encryption scheme. IEEE Trans. Multimed. 2017, 20, 1960–1972. [Google Scholar] [CrossRef]
  33. Qian, Z.; Zhang, X.; Wang, S. Reversible data hiding in encrypted JPEG bitstream. IEEE Trans. Multimed. 2014, 16, 1486–1491. [Google Scholar] [CrossRef]
  34. He, K.; Bidan, C.; Le Guelvouit, G.; Feron, C. Robust and secure image encryption schemes during JPEG compression process. Electron. Imaging 2016, 28, art00014. [Google Scholar] [CrossRef]
  35. Wang, F.; Bai, S. JPEG image encryption by shuffling DCT coefficients in defined block. In Proceedings of the 2013 International Conference on Computational and Information Sciences, Shiyang, China, 21–23 June 2013; pp. 60–63. [Google Scholar]
  36. Ji, X.; Bai, S.; Guo, Y.; Guo, H. A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding. Commun. Nonlinear Sci. Numer. Simul. 2015, 22, 321–333. [Google Scholar] [CrossRef]
  37. Liang, H.; Zhang, X.; Cheng, H. Huffman-code based retrieval for encrypted JPEG images. J. Vis. Commun. Image Represent. 2019, 61, 149–156. [Google Scholar] [CrossRef]
  38. He, J.; Chen, J.; Luo, W.; Tang, S.; Huang, J. A novel high-capacity reversible data hiding scheme for encrypted JPEG bitstreams. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 3501–3515. [Google Scholar] [CrossRef]
  39. Chuman, T.; Sirichotedumrong, W.; Kiya, H. Encryption-then-compression systems using grayscale-based image encryption for JPEG images. IEEE Trans. Inf. Forensics Secur. 2018, 14, 1515–1525. [Google Scholar] [CrossRef]
  40. Xu, Y.; Xiong, L.; Xu, Z.; Pan, S. A content security protection scheme in JPEG compressed domain. J. Vis. Commun. Image Represent. 2014, 25, 805–813. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Cai, Z.; Xiong, G. A new image compression algorithm based on non-uniform partition and U-system. IEEE Trans. Multimed. 2020, 23, 1069–1082. [Google Scholar] [CrossRef]
  42. Zhou, J.; Liu, X.; Au, O.; Tang, Y. Designing an efficient image encryption-then-compression system via prediction error clustering and random permutation. IEEE Trans. Inf. Forensics Secur. 2013, 9, 39–50. [Google Scholar] [CrossRef]
  43. Leger, A.; Omachi, T.; Wallace, G. The JPEG Still Picture Compression Algorithm and Its Applications. SPIE J. Opt. Eng. 1991, 30, 947–954. [Google Scholar] [CrossRef]
  44. Wang, X.; Wang, M. Hyperchaotic Lorenz system. Acta Phys. Sin. 2007, 56, 5136–5141. [Google Scholar] [CrossRef]
  45. Xia, B. Digital Image Encryption Algorithm Based on 4D Hyperchaotic System. 2014. Available online: https://www.jiamisoft.com/blog/11493-siweishuzituxiangjiamisuanfa.html (accessed on 7 May 2025).
  46. Weber, A. The USC-SIPI Image Database, version 5; University of Southern California: Los Angeles, CA, USA, 2006; Available online: http://sipi.usc.edu/database/ (accessed on 7 May 2025).
  47. Taneja, N.; Raman, B.; Gupta, I. Combinational domain encryption for still visual data. Multimed. Tools Appl. 2012, 59, 775–793. [Google Scholar] [CrossRef]
  48. Ye, G.; Guo, L. A visual meaningful encryption and hiding algorithm for multiple images. Nonlinear Dyn. 2024, 112, 14593–14616. [Google Scholar] [CrossRef]
Figure 1. Basic process of JPEG compression and decompression.
Figure 2. The visualized results of encrypting different syntax elements in distinct steps. (a) Original image. (b) The first plain value (FPV) in each block with size 8 × 8 is encrypted. (c) The rest of the plain value excluding the first (RPV) in each block with size 8 × 8 is encrypted. (d) The DC coefficient after the DCT transformation (DCADCT) is encrypted. (e) The AC coefficient after the DCT transformation (ACADCT) is encrypted. (f) The DC coefficient after the quantification (DCAQ) is encrypted. (g) The AC coefficient after the quantification (ACAQ) is encrypted. (h) The DC coefficient after DPCM coding (DCADPCM) is encrypted.
Figure 3. Encryption framework of the proposed scheme.
Figure 4. RSV pairs corresponding to different T values.
Figure 5. A simple example of Step 3.
Figure 6. Test images. (a) Mandrill; (b) Airplane; (c) Car; (d) Earth; (e) House; (f) Jelly beans; (g) Peppers; (h) Sailboat; (i) Tree; (j) Splash; (k) San Diego; (l) San Diego2; (m) Woodland Hills, Ca.; (n) San Diego3; (o) San Diego4; (p) San Francisco; (q) Foster City, Ca.; (r) Female; (s) Female2; and (t) Jelly beans2.
Figure 7. Perceptual security results (Proposed Algorithm, Ref. [18], and Ref. [31]).
Figure 8. Key sensitivity analysis for the decryption process: (left) image decrypted with the correct key; (right) image decrypted with a slightly modified key.
Figure 9. The QF-BPP curves with different encryption schemes: nine test images (Proposed Algorithm, Ref. [18], and Ref. [31]).
Figure 10. The PSNR-BPP-SSIM curves with different compression schemes (Proposed Algorithm, Ref. [18], and Ref. [31]).
Figure 11. The variation of file size.
Figure 12. The variation of computational time.
Table 1. Impact of encrypting different syntax elements on the performance metrics.

| Syntax Element | Bit Expansion | Efficiency | Perceptual Security | Block Effect | Edge Information |
|---|---|---|---|---|---|
| FPV | 0.32 | High | Low | No | Clear |
| RPV | 2.43 | Low | Low | No | Clear |
| DCADCT | 0.01 | High | Low | Yes | Clear |
| ACADCT | 2.61 | Low | Low | No | Clear |
| DCAQ | 0.04 | High | Moderate | Yes | Vaguely observed |
| ACAQ | 1.28 | Low | High | Yes | Vaguely observed |
| DCADPCM | 0.04 | High | High | Yes | Invisible |
Table 2. Correlation coefficients between images encrypted with the original key and with slightly different keys.

| Image | Key1 | Key2 | Key3 | Key4 |
|---|---|---|---|---|
| Mandrill | 0.000288 | 0.000098 | −0.002164 | −0.000320 |
| Airplane | 0.008365 | 0.004452 | −0.002935 | −0.004463 |
| Car | −0.000011 | 0.001881 | 0.004511 | 0.000033 |
| Earth | −0.006809 | 0.006958 | −0.004686 | −0.003050 |
| House | −0.006800 | 0.002395 | −0.006090 | −0.006233 |
| Jelly | −0.004974 | −0.001895 | −0.009582 | 0.013820 |
| Peppers | 0.009514 | 0.008712 | 0.002132 | −0.000099 |
| Sailboat | 0.003398 | −0.000293 | −0.002472 | 0.003368 |
| Tree | −0.003853 | −0.002358 | 0.000253 | −0.002544 |
Table 3. NPCR and UACI of cipher images with a one-pixel change in the plain image (Proposed Scheme).

| Image | NPCR% (R/G/B) | UACI% (R/G/B) |
|---|---|---|
| Mandrill | 99.51/99.52/99.50 | 24.69/25.39/24.39 |
| Airplane | 99.50/99.54/99.51 | 26.49/26.96/25.99 |
| Car | 99.49/99.53/99.50 | 25.55/26.30/25.37 |
| Earth | 99.48/99.54/99.49 | 26.55/27.28/26.22 |
| House | 99.54/99.55/99.52 | 26.74/27.16/26.06 |
| Jelly | 99.49/99.56/99.50 | 26.50/28.45/27.28 |
| Peppers | 99.50/99.55/99.51 | 25.89/27.15/26.01 |
| Sailboat | 99.50/99.54/99.49 | 25.39/26.20/25.28 |
| Tree | 99.49/99.53/99.49 | 25.14/25.81/25.08 |
| Average value of 20 test images | 99.45/99.51/99.48 | 25.86/26.75/25.71 |
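The NPCR and UACI figures reported above follow the standard definitions: NPCR is the percentage of pixel positions that differ between two cipher images, and UACI is the mean absolute intensity difference normalized by 255. A minimal sketch (the helper name `npcr_uaci` is introduced here, not from the paper):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI (both in %) between two equal-sized 8-bit cipher images."""
    c1 = np.asarray(c1, dtype=np.int32)
    c2 = np.asarray(c2, dtype=np.int32)
    npcr = np.mean(c1 != c2) * 100.0                  # fraction of differing pixels
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0   # mean normalized difference
    return npcr, uaci
```

For an ideal cipher, NPCR approaches 99.61% and UACI approaches 33.46% for 8-bit images; the per-channel values in Table 3 are computed this way on each of R, G, and B separately.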
Table 4. NPCR and UACI with different algorithms.

| Image | Ref. [18]-1 NPCR% (R/G/B) | Ref. [18]-1 UACI% (R/G/B) | Ref. [18]-2 NPCR% (R/G/B) | Ref. [18]-2 UACI% (R/G/B) | Ref. [31] NPCR% (R/G/B) | Ref. [31] UACI% (R/G/B) |
|---|---|---|---|---|---|---|
| Mandrill | 99.07/99.16/98.86 | 22.92/23.79/22.01 | 99.44/99.43/99.36 | 23.77/24.60/23.20 | 99.50/99.49/99.56 | 24.72/23.26/25.82 |
| Airplane | 99.41/99.38/99.42 | 23.01/22.71/22.95 | 99.21/99.21/99.21 | 24.37/24.28/23.46 | 99.62/99.59/99.65 | 27.89/29.32/29.21 |
| Car | 99.37/99.42/99.36 | 22.38/23.06/21.65 | 99.42/99.36/99.39 | 23.58/24.51/23.16 | 99.54/99.56/99.55 | 25.51/27.08/25.73 |
| Earth | 99.26/99.34/99.26 | 20.22/20.81/20.16 | 99.43/99.44/99.42 | 22.44/23.17/22.14 | 99.42/99.54/99.39 | 20.66/26.34/19.90 |
| House | 98.50/98.53/98.49 | 23.42/23.69/22.81 | 98.83/98.85/98.80 | 24.27/24.14/23.61 | 99.44/99.56/99.46 | 20.26/25.45/25.84 |
| Jelly | 98.51/98.51/98.40 | 24.27/23.36/22.24 | 98.81/98.91/98.86 | 25.51/25.56/24.40 | 99.61/99.61/99.51 | 26.41/29.33/22.43 |
| Peppers | 99.32/99.33/99.33 | 21.93/22.75/21.03 | 99.43/99.45/99.43 | 23.34/24.18/22.44 | 99.50/99.60/99.66 | 24.40/31.38/31.83 |
| Sailboat | 99.42/99.39/99.34 | 20.14/20.93/19.44 | 99.46/99.45/99.46 | 22.25/23.13/21.83 | 99.40/99.59/99.63 | 21.07/30.24/30.07 |
| Tree | 98.59/98.63/98.56 | 22.24/22.07/21.92 | 99.07/99.17/99.10 | 23.20/23.74/22.77 | 99.51/99.64/99.49 | 24.77/30.60/26.55 |
| Average value of 20 test images | 99.15/99.02/99.05 | 22.23/22.53/21.52 | 99.28/99.15/99.28 | 23.64/24.24/23.02 | 99.53/99.55/99.58 | 23.93/28.21/26.47 |
Table 5. Correlation coefficients of adjacent pixels.

| Image | JPEG (H/V/D) | Proposed Scheme (H/V/D) |
|---|---|---|
| Mandrill | 0.8602/0.7651/0.7097 | 0.0797/0.0376/0.0165 |
| Airplane | 0.9710/0.9681/0.9437 | 0.4053/0.1568/0.0991 |
| Car | 0.9454/0.9592/0.9129 | 0.2871/0.0814/0.0766 |
| Earth | 0.9769/0.9815/0.9584 | 0.3573/0.1488/0.1190 |
| House | 0.9771/0.9621/0.9419 | 0.4581/0.1199/0.1013 |
| Jelly | 0.9791/0.9821/0.9606 | 0.5348/0.1846/0.1304 |
| Peppers | 0.9202/0.9345/0.9591 | 0.4539/0.1187/0.0814 |
| Sailboat | 0.9779/0.9766/0.0212 | 0.2466/0.0981/0.0741 |
| Tree | 0.9693/0.9517/0.9329 | 0.1334/0.0235/0.0337 |
| Average value of 20 test images | 0.9427/0.9353/0.8229 | 0.3181/0.1123/0.0765 |
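The H/V/D entries in Tables 5 and 6 are the Pearson correlation coefficients of horizontally, vertically, and diagonally adjacent pixel pairs. A minimal sketch of how such values can be computed over all adjacent pairs of a single channel (the helper name `adjacent_correlations` is introduced here; published results often sample a random subset of pairs instead):

```python
import numpy as np

def adjacent_correlations(img):
    """Pearson correlation of horizontally (H), vertically (V), and
    diagonally (D) adjacent pixel pairs in a single-channel image."""
    img = np.asarray(img, dtype=float)
    pairs = {
        "H": (img[:, :-1], img[:, 1:]),      # left pixel vs. right neighbor
        "V": (img[:-1, :], img[1:, :]),      # top pixel vs. bottom neighbor
        "D": (img[:-1, :-1], img[1:, 1:]),   # pixel vs. lower-right neighbor
    }
    return {k: np.corrcoef(a.ravel(), b.ravel())[0, 1]
            for k, (a, b) in pairs.items()}
```

Natural images yield values near 1 (strong neighbor correlation, as in the JPEG column), while a good cipher image should yield values near 0.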
Table 6. Correlation coefficients with different algorithms.

| Image | Algorithm (H/V/D) | Ref. [18]-1 (H/V/D) | Ref. [18]-2 (H/V/D) | Ref. [31] (H/V/D) |
|---|---|---|---|---|
| Mandrill | 0.0797/0.0376/0.0165 | 0.5407/0.5437/0.4608 | 0.1900/0.1708/0.1648 | 0.0917/0.1560/0.0703 |
| Airplane | 0.4053/0.1568/0.0991 | 0.6720/0.7573/0.5979 | 0.5289/0.4746/0.4096 | −0.0273/0.1242/0.0362 |
| Car | 0.2871/0.0814/0.0766 | 0.5558/0.6928/0.5093 | 0.4088/0.3776/0.3227 | 0.1056/0.1487/0.0405 |
| Earth | 0.3573/0.1488/0.1190 | 0.6810/0.7751/0.6174 | 0.5522/0.4747/0.4018 | 0.1759/0.1698/0.0665 |
| House | 0.4581/0.1199/0.1013 | 0.6586/0.7591/0.5828 | 0.4914/0.4895/0.4084 | 0.1467/0.2014/0.0621 |
| Jelly | 0.5348/0.1846/0.1304 | 0.7967/0.8559/0.7436 | 0.7380/0.6777/0.5962 | 0.0644/0.1693/0.0009 |
| Peppers | 0.4539/0.1187/0.0814 | 0.6183/0.7156/0.5399 | 0.4909/0.4462/0.3568 | 0.1084/0.1978/0.0019 |
| Sailboat | 0.2466/0.0981/0.0741 | 0.5758/0.6728/0.4959 | 0.3638/0.3211/0.2474 | 0.0451/0.0507/0.0021 |
| Tree | 0.1334/0.0235/0.0337 | 0.3761/0.5212/0.3050 | 0.1963/0.1645/0.1073 | −0.0056/0.0367/0.0186 |
| Average value of 20 test images | 0.3181/0.1123/0.0765 | 0.5925/0.6873/0.5239 | 0.4384/0.3875/0.3024 | 0.0678/0.1377/0.0351 |
Table 7. The correlation of the plain ‘Mandrill’ image with different algorithms.
[Table of adjacent-pixel correlation scatter plots (overall image plus R, G, and B channels) for the plain Mandrill image, its JPEG-compressed version, the proposed Algorithm, Ref. [18]-1, Ref. [18]-2, and Ref. [31]; plot images not reproduced here.]
Table 8. The results of information entropy with different methods.

| Images | R (Algorithm/Ref. [18]-1/Ref. [18]-2/Ref. [31]) | G (Algorithm/Ref. [18]-1/Ref. [18]-2/Ref. [31]) | B (Algorithm/Ref. [18]-1/Ref. [18]-2/Ref. [31]) |
|---|---|---|---|
| Mandrill | 7.770/7.864/7.832/7.721 | 7.784/7.905/7.815/7.766 | 7.753/7.867/7.772/7.65 |
| Airplane | 7.809/7.917/7.886/7.712 | 7.845/7.951/7.904/7.776 | 7.799/7.903/7.879/7.669 |
| Car | 7.777/7.897/7.857/7.717 | 7.808/7.924/7.857/7.766 | 7.765/7.879/7.843/7.663 |
| Earth | 7.783/7.903/7.871/7.708 | 7.846/7.936/7.900/7.721 | 7.774/7.904/7.851/7.633 |
| House | 7.812/7.861/7.800/7.717 | 7.828/7.937/7.866/7.781 | 7.779/7.892/7.826/7.654 |
| Jelly | 7.754/7.843/7.857/7.736 | 7.894/7.867/7.872/7.727 | 7.894/7.870/7.850/7.701 |
| Peppers | 7.795/7.907/7.847/7.722 | 7.842/7.934/7.876/7.816 | 7.792/7.903/7.845/7.688 |
| Sailboat | 7.772/7.881/7.832/7.745 | 7.811/7.928/7.858/7.779 | 7.756/7.877/7.821/7.719 |
| Tree | 7.773/7.876/7.844/7.743 | 7.796/7.898/7.852/7.771 | 7.776/7.879/7.837/7.716 |
| Average value of 20 test images | 7.782/7.883/7.847/7.724 | 7.828/7.920/7.866/7.767 | 7.787/7.886/7.836/7.677 |
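The per-channel values in Table 8 are Shannon entropies, which for an 8-bit channel have a maximum of 8 bits/pixel; values near 8 indicate a near-uniform intensity histogram. A minimal sketch (the helper name `channel_entropy` is introduced here):

```python
import numpy as np

def channel_entropy(channel):
    """Shannon entropy (bits/pixel) of an 8-bit image channel."""
    hist = np.bincount(np.asarray(channel, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()      # empirical probability of each gray level
    p = p[p > 0]               # avoid log2(0)
    return float(-np.sum(p * np.log2(p)))
```

A channel covering all 256 levels uniformly gives exactly 8.0, matching the ideal the cipher images in Table 8 approach.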
Zhang, W.; Zheng, X.; Xing, M.; Yang, J.; Yu, H.; Zhu, Z. Chaos-Based Color Image Encryption with JPEG Compression: Balancing Security and Compression Efficiency. Entropy 2025, 27, 838. https://doi.org/10.3390/e27080838