An Image Encryption Scheme Combining 2D Cascaded Logistic Map and Permutation-Substitution Operations

Abstract: Confusion, diffusion, and encryption keys determine the quality of image encryption. This research proposes combining bit- and pixel-level permutation and substitution methods based on three advanced chaotic logistic maps: the 2D Logistic-adjusted-Sine map (2D-LASM), the 2D Logistic-sine-coupling map (2D-LSCM), and the 2D Logistic ICMIC cascade map (2D-LICM). The encryption design consists of six stages, involving permutation operations based on chaotic ordering, substitution based on modulus and bitXOR operations, and hash functions. Hash functions are employed to enlarge the key space and improve key sensitivity. Several tools are utilized to assess encryption performance, including histogram and chi-square analysis, information entropy, correlation of adjacent pixels, differential analysis, key sensitivity and key space analysis, data loss and noise attacks, NIST randomness tests, and TestU01. Compared to using a single 2D logistic map, the combination of bit-level and pixel-level encryption with three 2D cascaded logistic maps improves encryption security. The method passes the NIST, TestU01, and chi-square tests, and it outperforms previous methods on the correlation, information entropy, NPCR, and UACI tests.


Introduction
Image encryption is the process of securing an image by converting the original image into an unreadable form. Without an appropriate decryption key, the image cannot be restored and understood [1,2]. Image encryption aims to protect the privacy and security of images, especially when images are sensitive and confidential, such as medical [3,4] and military [5] images. In the context of encryption methods, Shannon's theory emphasizes the importance of two main concepts, confusion and diffusion [3,6,7]. Combining these two concepts creates a strong dependency between encrypted data and encryption keys. Confusion makes the statistical relationship between the original and encrypted images complex and hard to predict, while diffusion spreads the information of the original image evenly across all parts of the encrypted image. Both confusion and diffusion can be achieved by performing intensive permutations and substitutions on each pixel and/or bit of the image. Permutation randomizes the order of bits or pixels in an image [8], aiming to hide the statistical relationships between pixels. Substitution changes bit or pixel values in the image to different values and can be carried out with various operations, such as substitution boxes (S-boxes) [9-12], XOR [13,14], and modulus [15,16] operations. The more chaotic the patterns of permutation and substitution, the more complex and difficult to predict the cipher becomes. Therefore, permutations and substitutions should be combined at both the bit and pixel levels.
Another important element of an encryption method is the key. Keys play a critical role in the security and confidentiality of encrypted data, so a key must satisfy several requirements, such as uniqueness, randomness, length or space, and complexity [17-19]. For this reason, the key supplied by the user to encrypt the image needs to be processed first to generate a more complex key, for example with a pseudorandom number generator (PRNG) [5,20] or a hash function [21]. However, such keys can still be compromised by chosen-plaintext attacks if only standard operations are performed, such as stream image shuffling (permutation), image blocking, and sub-image encryption [22]. More complex, combined operations can improve security. In modern image encryption, keys can be made very complex, including through chaotic methods. In key terminology, a chaotic method is used to produce chaotic sequences. Chaotic sequences are highly sensitive to initial conditions; different initial parameters produce entirely different sequences, providing a very high level of security [18,19,23,24]. Chaotic methods have many variations, some of which are well known in image encryption, such as the logistic map, Henon map, Tent map, Arnold map, and Lorenz system [6]. Others include memristive hyperchaotic maps [25] and their three-dimensional extensions [26]. This study focuses on the logistic map. The logistic map has several derivatives, such as the Improved Logistic Map (ILM) [7], the 2D Logistic-adjusted-Sine map (2D-LASM) [27], the 2D Logistic-sine-coupling map (2D-LSCM) [28], and the 2D Logistic ICMIC cascade map (2D-LICM) [29]. These derivatives increase the Lyapunov exponent (LE), which enhances randomness and sensitivity, resulting in more chaotic, unique, and secure sequences. Of these four derivatives, 2D-LASM, 2D-LICM, and 2D-LSCM have the advantage of more dimensions and higher complexity. Based on the literature, this study has the following objectives and contributions:
1. Proposing a combination of 2D-LASM, 2D-LICM, and 2D-LSCM to improve image encryption security based on various assessments.
2. Proposing a combination of substitution and permutation techniques based on chaotic sequences at the bit and pixel levels in six stages to improve confusion and diffusion quality.
3. Using a hash function on the private key to produce a larger key space before using it to generate chaotic sequences.
The rest of this paper is organized as follows: the related work section discusses the hypotheses and work related to this research; the proposed approach section explains the stages of the proposed method; the results and discussion section presents research results, discussions, and comparisons with related methods; and the conclusion presents conclusions and suggestions for future research.

Related Work
The previous section discussed the importance of confusion and diffusion in image encryption, which are implemented through permutation and substitution processes. Another aspect to note is the quality of the encryption keys. Literature on the development of permutation and substitution techniques and the improvement of key quality has motivated this research. Initially, permutation and/or substitution methods were applied only to pixels. Later, permutations and substitutions at the bit level became a trend, as seen in studies [27,29-33]. Bit-level encryption can be carried out on bit planes or by directly converting the image into bits for the encryption process. A simple method is proposed by [33], which converts an RGB image into bit form. First, the image channels are split, then each channel is converted into bit form and integrated. After integration, permutation-diffusion is carried out using the chaotic tent map. Study [31] also employs a fairly simple bit-level technique, namely bit swapping and a modulus operation with the piecewise linear chaotic map (PWLCM).
Study [30] employs a more complex algorithm on grayscale images. The image matrix is converted using bit-plane decomposition and then transformed into vector form. Next, diffusion is carried out with cyclic shift permutation, and modulus substitution functions provide confusion. The diffusion and confusion stages are conducted over several rounds. The results are transformed back into bit-plane form and finally restored into an encrypted image matrix. Further research [29] proposed an improved method, with the main contribution being the use of 2D-LICM, which possesses a much higher Lyapunov exponent (LE) than the standard logistic map. Encryption is accomplished by converting the image into bit planes, followed by cyclic shift and XOR operations on rows and columns based on 2D-LICM; the encrypted bits are then reassembled into an encrypted image matrix. Apart from 2D-LICM, other methods have also been developed, such as 2D-LASM [27], whose LE is also better than that of the standard logistic map. Study [32] introduces a new combined chaotic system (NCCS) based on three chaotic models, namely the Logistic-Sine map, Logistic-Tent map, and Sine-Tent map. These models are combined with a hash function to generate the keystream. During the encryption stage, two levels of bit confusion and one level of bit diffusion are carried out.
Pixel-level image encryption is proposed once again by [13]. Although it appears simple, the advantage of this method lies in its potential implementation in future quantum computing technology. Hilbert scrambling and XOR operations are performed to encrypt the image. The combination of bit- and pixel-level encryption is proposed in studies [28,34-36]. Study [34] proposes four levels of encryption: the first is pixel-level permutation, the second is a new permutation, the third is column permutation, and the fourth is a bitXOR-based diffusion block. All four levels utilize the same chaotic method, namely PWLCM. Another study [28] suggests 2D-LSCM for performing two levels of image encryption. The first level scrambles rows and columns of image pixels using 2D-LSCM; the second level applies 2D-LSCM to the rows and columns of image bits. Study [35] introduces the Logistic-Chebyshev map (LCM) and employs the SHA-512 hash operation to enlarge the key space. The encryption operations in this study include row and column scrambling of pixels, circular shifts on bit planes, and XOR diffusion. Study [36] proposes image encryption by combining the Hilbert curve, cyclic shift, and the 2D Henon map: the Hilbert curve is utilized for pixel permutations, cyclic shifts for bit permutations, and the 2D Henon map for diffusion.
Based on the research reviewed above, it is apparent that the proposed encryption methods are generally developed using bit-level encryption techniques with reliable chaotic methods [27,29-33]. Furthermore, combinations of bit- and pixel-level encryption have been developed [28,34-36]. We conclude that chaotic maps such as 2D-LASM, 2D-LICM, and 2D-LSCM have the potential to achieve better encryption quality if combined at both the bit and pixel levels. Additionally, the use of a hash function increases the key space. Thus, 2D-LASM, 2D-LICM, 2D-LSCM, and the hash operation are proposed in this study. A more detailed description of the proposed method is presented in Section 3.

Proposed Approach
The approach suggested in this research integrates three distinct chaotic maps, namely 2D-LASM, 2D-LSCM, and 2D-LICM. These chaotic maps are utilized in the permutation and substitution processes, operating at both the bit and pixel levels across six encryption stages. Figure 1 illustrates the proposed method, while the specific details are outlined as follows:
Read the user key and the plain image as input to the SHA-512 hash function; the output is 64 characters each, i.e., hashA and hashB.
In the second stage of encryption, first transform the second 2D-LSCM sequence with Equation (10). Next, perform bitXOR substitution between the first-stage encrypted image and the transformed second 2D-LSCM sequence.
In the third stage, encryption is conducted using bit-level permutation, so the second-stage encrypted image (enc2) needs to be converted to binary form. At this stage, sort the first 2D-LASM sequence, then permute enc2 based on the sorting index.
The image, which is still in binary form, is restored to decimal form (enc3) to perform pixel substitution in the fourth stage. At this stage, the second 2D-LASM sequence is converted with Equation (11), and the modulus operation is then carried out with Equation (12).
where the second 2D-LASM sequence s2 ∈ (β1, . . . , βn) and enc3 ∈ (e31, . . . , e3n). In the fifth stage, bit-level permutation is performed based on the first 2D-LICM sequence: the image is converted back into binary form, the first 2D-LICM sequence is sorted, and the fourth-stage encrypted image (enc4) is permuted according to the sorting index. Finally, the fifth-stage encrypted image is restored to decimal form and the second 2D-LICM sequence is converted with Equation (13); a bitXOR operation between the fifth-stage encrypted image (enc5) and the converted sequence then yields the final encrypted image (fe), see Equation (14).
In addition, the initial values (x0, y0) for 2D-LSCM, (x1, y1) for 2D-LASM, and (x2, y2) for 2D-LICM are generated using Equations (1)-(6). The encryption method proposed above has six stages, consisting of one pixel-level permutation stage, two bit-level permutation stages, one pixel-substitution stage, and two bitXOR substitution stages. Image decryption can be performed by reversing these steps.
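To make the per-stage operations concrete, the sketch below illustrates the three core primitives in Python (the paper's experiments use Matlab): SHA-512 hashing of the user key and plain image, permutation by the sorting index of a chaotic sequence, and bitXOR substitution with a byte keystream. The scaling constant in `chaotic_to_bytes` is a hypothetical stand-in for the paper's sequence-transformation equations (Equations (10)-(14)), which are not reproduced here.

```python
import hashlib

import numpy as np

def derive_hashes(user_key: str, plain_image: np.ndarray):
    """Stage 1: SHA-512 digests of the user key and the plain image (64 bytes each)."""
    hashA = hashlib.sha512(user_key.encode("utf-8")).digest()
    hashB = hashlib.sha512(plain_image.tobytes()).digest()
    return hashA, hashB

def permute_by_chaotic_order(data: np.ndarray, seq: np.ndarray) -> np.ndarray:
    """Permute a flat bit or pixel vector by the index that sorts a chaotic sequence."""
    return data[np.argsort(seq)]

def inverse_permute(data: np.ndarray, seq: np.ndarray) -> np.ndarray:
    """Undo permute_by_chaotic_order by scattering values back to their slots."""
    out = np.empty_like(data)
    out[np.argsort(seq)] = data
    return out

def chaotic_to_bytes(seq: np.ndarray) -> np.ndarray:
    # Hypothetical transform standing in for Equations (10)/(13): scale the
    # chaotic values in (0, 1) and reduce modulo 256 to get a byte keystream.
    return (np.floor(seq * 1e14) % 256).astype(np.uint8)

def xor_substitute(img: np.ndarray, seq: np.ndarray) -> np.ndarray:
    """bitXOR substitution; XOR is self-inverse, so decryption reuses this call."""
    return np.bitwise_xor(img, chaotic_to_bytes(seq).reshape(img.shape))
```

Because sorting-index permutation and XOR are both exactly invertible, the reverse steps mentioned above follow directly: apply `inverse_permute` and `xor_substitute` with the same chaotic sequences in the opposite stage order.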

Results and Discussion
The experiments in this research were performed using Matlab R2021a on an 11th-generation Intel Core i7 processor with 16 GB of memory. Two types of images were tested, namely 8-bit grayscale and 24-bit color (red, green, and blue) images. The images used were standard test images and random samples from the BossBase dataset. The image dimensions were 512 × 512 pixels, and sample images are presented in Figure 2. It is important to note that the proposed method is primarily designed for 8-bit images. For color images, the channels are separated first, the proposed method is applied to each channel, and after encryption all channels are combined into one encrypted color image. Sample encryption results for grayscale images are shown in Figure 3, while Figure 4 displays the results for color images. To assess encryption security, several tests are conducted, such as histogram and chi-square (χ2) analysis, entropy, correlation coefficient, differential, key space, and key sensitivity analysis. The NIST and TestU01 randomness suites, noise and data loss attacks, and ablation studies are also conducted, as presented in Sections 4.1-4.10.

Histogram and Chi-Square (χ2) Test
The image histogram is a visual representation of the pixel intensity distribution in the image. Histograms can provide important information about image characteristics, including contrast, brightness, intensity distribution, and color variation. In the context of image encryption, the image histogram needs to change significantly, and the histogram distribution must be nearly uniform. The histograms of the encrypted images presented in Figures 3 and 4, row 2, appear visually uniform. However, histogram uniformity was validated using χ2 analysis. If the calculated χ2 value is less than or equal to χ2(δ, fd) = 293.2478, with a significance level (δ) of 0.05 and 255 degrees of freedom (fd), the histogram is considered uniformly distributed. Equation (15) is used to calculate the chi-square value.
In Matlab, the index range for i is from 1 to 256, as Matlab indexing starts from 1. The grey recurrence value (ri) represents the number of occurrences of the ith grey value.
Based on the results presented in Table 1, all image histograms are confirmed to be uniform, as evidenced by χ2 values smaller than 293.2478. This also shows that the performance of the proposed encryption method is excellent based on histogram and chi-square analysis. Note that, for RGB images, the chi-square value is the mean of the R, G, and B channels.
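As a reference for Equation (15), a minimal chi-square computation over a 256-bin histogram might look as follows (a sketch, not the paper's code; the expected count per bin is MN/256):

```python
import numpy as np

def chi_square(img: np.ndarray) -> float:
    """Chi-square statistic of an 8-bit image histogram against the uniform
    distribution: sum over 256 grey levels of (r_i - E)^2 / E with E = MN/256."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    expected = img.size / 256.0
    return float(((hist - expected) ** 2 / expected).sum())
```

A value at or below the critical value 293.2478 (δ = 0.05, 255 degrees of freedom) indicates a uniform histogram.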

Correlation Coefficient of Adjacent Pixel Test
Analyzing the correlation coefficient (cor) between adjacent pixels is a technique employed to assess the interdependence of neighboring pixels in an image after encryption. This analysis evaluates the randomness of the arrangement of encrypted pixels and can reveal patterns or structures that remain in encrypted images. The range of cor is −1 to 1. A value close to −1 or 1 indicates an inverse or strong correlation, while the optimal value is close to zero, signifying minimal correlation and encryption at the highest level of randomness. Equation (16) is used to calculate cor.
Equation (16) uses the symbol N to represent the total number of pixels in the image. The variables a and b refer to two neighboring pixels positioned diagonally, horizontally, or vertically. E(a) and E(b) represent the expected (average) values of a and b, respectively. Figure 5 presents a sample plot of the correlation of adjacent pixels.
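A sketch of this test, assuming Equation (16) is the standard Pearson correlation computed over randomly sampled neighbor pairs:

```python
import numpy as np

def adjacent_correlation(img: np.ndarray, direction: str = "horizontal",
                         n: int = 5000, seed: int = 0) -> float:
    """Pearson correlation between n randomly sampled pairs of adjacent pixels."""
    rng = np.random.default_rng(seed)
    dy, dx = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    h, w = img.shape
    ys = rng.integers(0, h - dy, size=n)
    xs = rng.integers(0, w - dx, size=n)
    a = img[ys, xs].astype(float)
    b = img[ys + dy, xs + dx].astype(float)  # the neighbor in the chosen direction
    return float(np.corrcoef(a, b)[0, 1])
```

On a smooth gradient image this returns a value near 1, while on a well-encrypted image it should be near 0 in all three directions.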

The results presented in Table 2 show that the correlations of all images are close to zero, indicating that the proposed method performs well in this analysis. Table 3 compares correlations with related work on the Lena image: the proposed method is superior in diagonal correlation (marked in bold), while its horizontal and vertical correlations are the second best (underlined). No related method excels in all correlation directions, while the proposed method holds a relative advantage across all of them. Note that, for RGB images, the correlation value is the average of the R, G, and B channels.


Information Entropy Test
Entropy measures the level of randomness or uncertainty in a data distribution. In image encryption terminology, entropy is used to evaluate the effectiveness of encryption methods by measuring the degree of randomness in the distribution of pixel values. The entropy of an 8-bit image ranges from 0 to 8, where high entropy means an increasingly random distribution of pixel values, making it difficult to guess actual pixel values. Conversely, low entropy indicates a clear pattern or dependency between pixel values, making the image vulnerable to statistical attacks. Entropy is measured with Equation (17).
where H is entropy, n is the total number of symbols, e_i is the information of the source (the encrypted image), and p(e_i) is the probability of occurrence of e_i.
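A minimal implementation of Equation (17) for an 8-bit image (a sketch, with probabilities estimated from the histogram):

```python
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """H = -sum_i p(e_i) * log2 p(e_i) over the 256 grey levels (Equation (17))."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / img.size  # drop empty bins so log2 is well defined
    return float(-(p * np.log2(p)).sum())
```

A constant image yields 0, and a perfectly uniform 8-bit image yields the maximum of 8.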
Based on the results presented in Table 4, all images have very high entropy values: the lowest is 7.9993 and the highest is 7.9994. The entropy value is thus very stable for both grayscale and RGB images. For RGB images, the entropy presented is the mean of all channels. The results presented in Table 5 also confirm that the proposed method has an advantage in entropy compared to related methods.

Key Sensitivity Test
The key sensitivity test is essential in image encryption; its purpose is to evaluate the sensitivity of the encryption to small changes in the encryption key. During image encryption, the original image is transformed under the control of the encryption key. By introducing variations to the encryption key, key sensitivity tests examine the impact of these changes on the resulting encrypted image. To test key sensitivity, at least two encryptions of an image are performed with different keys. The difference is generally a single bit, which can be at the start, end, or middle of the key. The sample test results presented in Figure 6 confirm that the proposed method satisfies this test: even a single-bit difference results in significant variation, making correct decryption impossible.


Differential Test
The differential test in image encryption is a crucial evaluation process, commonly employing the Normalized Pixel Change Rate (NPCR) and Unified Average Changing Intensity (UACI) metrics. NPCR assesses the impact of changes in the encryption key on the resulting encrypted image by calculating the percentage of differing pixels between two encrypted images generated with slightly different encryption keys. A higher NPCR value indicates that even slight changes in the encryption key lead to significant variations in the encrypted image. UACI, on the other hand, measures the average intensity change in encrypted images caused by modifications of the encryption key, quantifying the difference in pixel intensity between the two encrypted images. A higher UACI value signifies that alterations in the encryption key result in notable intensity changes in the encrypted image. The optimal NPCR value is approximately 99.6094%, while the ideal UACI value is around 33.4635%. Equations (18) and (19) are used to calculate NPCR and UACI, respectively.
where C1 and C2 represent the original cipher and the altered cipher, respectively; N and M correspond to the width and height dimensions; and i and j indicate the coordinates of individual pixels.
Based on the NPCR and UACI tests, the proposed method produces average NPCR and UACI values that are very close to the ideal values; see Table 6. The average NPCR value is 99.6090, a difference of only 0.0004 from the ideal value, while the average UACI value is 33.4611, a difference of only 0.0024. This indicates very satisfying performance on NPCR and UACI. Based on Table 7, the NPCR value of the proposed method outperforms the related methods, while its UACI is the second best.
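Equations (18) and (19) can be sketched directly over two cipher images (an illustration, not the paper's code):

```python
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray):
    """NPCR (Eq. 18): percentage of positions where the two ciphers differ.
    UACI (Eq. 19): mean |C1 - C2| / 255, expressed as a percentage."""
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)
    return npcr, uaci
```

For two independent uniformly random 8-bit ciphers, these metrics concentrate around the ideal values of 99.6094% and 33.4635% cited above.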

NIST Randomness Test
The National Institute of Standards and Technology (NIST) developed a series of tests to assess the randomness of a data or bit stream. This randomness test suite serves as a standard for measuring the security of cryptographic algorithms. The NIST randomness suite comprises 15 statistical tests, including the frequency test, runs test, longest-run test, and others. For each test, a p-value between 0 and 1 is generated. Each test requires a sequence of at least 10^6 bits and must produce a p-value greater than 0.01 to pass. For image encryption, the encryption results can be tested directly: the encrypted image is first converted into a binary file and saved with the .dat extension, and this .dat file is the input for the NIST test suite. The test results, including the average p-values over all encrypted images, are documented in Table 8 and confirm that the proposed method passes all tests, as evidenced by the average p-value of every test being greater than 0.49.
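The conversion step described above (encrypted image to binary .dat file for the NIST tool) can be sketched as follows; the file name is illustrative:

```python
import numpy as np

def save_for_nist(cipher: np.ndarray, path: str = "cipher.dat") -> None:
    """Write the encrypted image as a raw byte stream for input to the NIST STS."""
    cipher.astype(np.uint8).tofile(path)
```

A 512 × 512 8-bit cipher yields 512 × 512 × 8 = 2,097,152 bits, comfortably above the 10^6-bit minimum per sequence.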

TestU01
TestU01 is a statistical testing software suite used to assess the quality and randomness of random number generators [37]. In this case, the second chaotic sequences are tested, because TestU01 requires integer input and all three are converted as explained in Section 3. To test randomness, we use two test batteries, namely Rabbit and Alphabit. Rabbit is designed to evaluate the correlation between the bits produced by a random number generator and consists of 39 statistical sub-tests that help identify imbalances or patterns in the bit distribution. Alphabit consists of 17 more general sub-tests, involving basic statistical and distribution tests, to identify abnormalities in the distribution of random numbers. The Rabbit and Alphabit batteries are generally applied to bitstreams of length 2^24 bits [38-40]. The test results presented in Table 9 show that all chaotic sequences pass all tests and exhibit randomness that meets the TestU01 criteria.

Data Loss and Noise Attack
Data loss and noise attack testing in image encryption is an important process to evaluate the robustness of image encryption methods against attacks that cause data loss. It helps determine how well the method maintains confidentiality, tolerates damage, and ensures data integrity despite data loss or added noise, and it can reveal potential weaknesses against such attacks. In this section, the proposed method is tested as presented in Figure 7.
While the data loss attack is clearly visible, the added noise may not be obvious. The decryption results show that the proposed method is able to restore the image in all cases, with the damage appearing as scattered noise in the decrypted image. This test complements the statistical measurements already reported, such as chi-square, correlation of adjacent pixels, entropy, differential analysis, and NIST, as evidenced by the even distribution of noise after decryption.


Key Space Analysis
The analysis of key space plays a crucial role in image encryption, as the key space encompasses the entire range of potential encryption keys used within a given encryption system. A larger key space makes the correct encryption key harder to guess, so an effective encryption system must have a key space large enough to prevent successful brute-force attacks; it should contain 2^100 or more possibilities [41,42]. The proposed method has several parameters, a dynamic initial value, and a hash operation, from which its key space can be calculated. The total key space of the proposed method is ≈1.34 × 10^154, as detailed in Table 10. This shows that the proposed method is highly resistant to brute-force attacks.
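The quoted total is numerically consistent with an effective 512-bit key (for instance, a 512-bit hash digest combined with the map parameters; the exact breakdown is in Table 10, not reproduced here). A quick sanity check:

```python
import math

# The paper reports a key space of about 1.34 x 10^154 (Table 10).
# That figure matches 2^512, i.e., an effective 512-bit key (assumption
# about the breakdown; the per-parameter contributions are in Table 10).
key_space = 2 ** 512
print(f"2^512 = {float(key_space):.3g}")          # ~1.34e+154
print(f"10^154.13 is about {154.13 / math.log10(2):.0f} bits")

# The recommended minimum of 2^100 [41,42] is exceeded by a wide margin.
margin_bits = 512 - 100
```

This also makes the brute-force margin concrete: the key space exceeds the 2^100 threshold by more than 400 binary orders of magnitude.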

Ablation Study
Ablation studies constitute the final part of this section. They are conducted to analyze the method in more depth by removing certain components of the proposed encryption method and observing how these changes affect security quality and encryption performance. As explained earlier, the proposed encryption method consists of six stages, with each pair of stages based on one 2D logistic map. Reviewing the results presented in Table 11, one weakness of the proposed method is its slightly slower encryption time. However, this is a reasonable cost of combining three 2D logistic maps; the total encryption time remains under two seconds, and the difference is no more than half a second. Combining the three 2D logistic maps also enhances all security aspects, as demonstrated by the chi-square, IE, CC, NPCR, and UACI assessments. Furthermore, the two stages utilizing 2D-LICM have the greatest impact on improving resistance to differential attacks, the two stages employing 2D-LSCM tend to improve IE and CC, and the two stages using 2D-LASM strongly influence the chi-square value. In short, despite the increase in computational time, the proposed combination provides notable gains in encryption security.
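The NPCR and UACI metrics used throughout the ablation comparison follow their standard definitions (percentage of differing pixel positions, and mean absolute intensity difference scaled by 255). A minimal reference implementation for flat 8-bit images:

```python
def npcr_uaci(c1, c2):
    """NPCR and UACI between two equal-sized 8-bit cipher images.

    NPCR: percentage of pixel positions where the two images differ.
    UACI: mean absolute intensity difference, normalized by 255,
    expressed as a percentage.  Both follow the standard definitions.
    """
    assert len(c1) == len(c2), "images must have the same size"
    n = len(c1)
    npcr = 100.0 * sum(a != b for a, b in zip(c1, c2)) / n
    uaci = 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (255 * n)
    return npcr, uaci

# For two independent uniformly random ciphertexts, the theoretical
# expectations are NPCR ~ 99.61% and UACI ~ 33.46%.
```

In a differential-attack test, `c1` and `c2` would be the ciphertexts of two plaintexts differing in a single pixel; values near the theoretical expectations indicate strong plaintext sensitivity.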

Conclusions
In this study, a proposal has been made to combine the 2D-LASM, 2D-LICM, and 2D-LSCM methods for image encryption. Substitution and permutation techniques are employed based on chaotic sequences at both the bit and pixel levels. A hash function is also applied to the private key to enhance key space quality before generating the chaotic sequences. The test results, encompassing histogram and chi-square analysis, information entropy, adjacent pixel correlation, differential analysis, key sensitivity analysis, key space analysis, data loss, noise attacks, the NIST randomness test, and the TestU01 test, affirm that the

Figure 5.
Figure 5. Sample results of the plot of correlation coefficients of adjacent pixels of the 1013.pgm image: (a) diagonal of the original image; (b) diagonal of the encrypted image; (c) horizontal of the original image; (d) horizontal of the encrypted image; (e) vertical of the original image; (f) vertical of the encrypted image.

Figure 6.
Figure 6. Sample results of the key sensitivity decryption test: (a) original image; (b) encrypted image; (c) decrypted image with correct key; (d) decrypted image with single-bit key modification.

Figure 7.
Figure 7. Sample results of data loss and noise attack: (a) data loss attack of gray image; (b) salt and pepper noise attack (0.05) of gray image; (c) decrypted gray image of data loss attack; (d) decrypted gray image of noise attack; (e) data loss attack of color image; (f) salt and pepper noise attack (0.05) of color image; (g) decrypted color image of data loss attack; (h) decrypted color image of noise attack.

Table 2.
The correlation coefficient of adjacent pixel results of the encrypted image.

Table 3.
Comparison of the correlation coefficient of adjacent pixels of the encrypted Lena image (grayscale).


Table 5.
Comparison of the information entropy of the encrypted Lena image.

Table 7.
Comparison of NPCR and UACI of the Lena image.

Table 8.
Average NIST randomness test results over all images.