1. Introduction
In recent years, chaotic systems have become crucial in fields such as image encryption [1,2,3], neural networks [4,5,6], secure communications [7,8,9], and the generation of pseudo-random numbers [10,11,12]. This is largely due to their extreme sensitivity to initial conditions and parameters, coupled with their inherent unpredictability [13,14,15]. Given that a multitude of natural phenomena can be represented by dissipative chaotic systems (DCSs), numerous such systems have been introduced by researchers over the years. Notable examples include the Chen system [16], the Chua system [17], the Lu system [18], the Sprott system [19], and the Lorenz system [20]. However, the study of conservative chaotic systems (CCSs) remains less explored. Unlike their dissipative counterparts, CCSs typically lack attractors [21]. They are characterized by pseudo-randomness and exhibit properties akin to white noise. These features suggest that CCSs hold significant potential for broader applications, particularly in information encryption.
Because CCSs possess many advantages that DCSs lack, numerous scholars have shifted their focus to conservative chaos [22,23] and have proposed a multitude of methods for constructing CCSs. These methods have produced many new CCSs with complex dynamics, filling significant gaps in the field. One approach is the use of fractional-order systems to generate conservative chaos. In 2017, Cang et al. [24] introduced two 4D CCSs with conservative flows. In 2021, Leng et al. [25] proposed a 4D fractional-order system with both dissipative and conservative properties, based on the Case A system previously outlined by Cang; the new system demonstrated exceptional stability and transient characteristics. The following year, Leng et al. [26] constructed a novel fractional-order CCS and showed that, by altering the order or parameters of the system, a conservative system could exhibit dissipative phenomena; the system's hardware implementation on a DSP experimental platform confirmed the accuracy of the theoretical analysis. Tian et al. [27] proposed a novel 5D conservative hyperchaotic system, utilizing the Adomian decomposition method (ADM) [28,29] to transform an integer-order system into a fractional-order system.
Another method for constructing CCSs involves the application of Euler's equations, which describe three-dimensional (3D) rigid bodies [30]. This technique couples low-dimensional rigid-body sub-equations through a Hamiltonian vector field to obtain high-dimensional generalized Euler equations, in which both the Hamiltonian and the Casimir energy are conserved. To induce chaos, this method typically breaks the conservation of Casimir energy by introducing constants while maintaining the conservation of Hamiltonian energy, at which point chaos arises. Numerous articles have constructed Hamiltonian CCSs by using this method [31]. Qi et al. [32] proposed a 4D CCS with co-existing chaotic orbits by breaking the conservation of Casimir energy. Based on the rigid-body Euler equation of binary product, Pan et al. [33] proposed an effective method for constructing high-dimensional CCSs from skew-symmetric matrices and designed 4D, 5D, and 6D Hamiltonian CCSs; it was proved mathematically that systems constructed by this method satisfy the conservation of both Casimir and Hamiltonian energy.
Additionally, there is a method for constructing CCSs based on the special structure of cyclic symmetric systems [34], which has garnered much attention from scholars because all state equations share the same structure [35]. Among these scholars, Zhang has made significant contributions. Zhang et al. [36] proposed a new method for constructing cyclic symmetric conservative chaos that is applicable to 3D and higher-dimensional systems. In their paper, the chaotic properties of the constructed systems were established through divergence, equilibrium points, and Lyapunov exponents; Zhang also improved the offset-enhanced control of three of the constructed CCSs, and the systems were ultimately implemented on a DSP hardware platform. In 2024, Zhang et al. [37] used a 4D system and a 5D system as examples: the 4D system exhibited homogeneous and heterogeneous extreme multistability with the introduction of a sinusoidal function term, while the 5D system improved offset-enhancement control through the introduction of an absolute-value term, thereby exhibiting extreme multistability in multiple directions and achieving hyperchaotic roaming. Ultimately, the 5D system was used as an example of applying cyclic symmetric conservative systems to data-encrypted transmission. However, most of the aforementioned studies proposed low-dimensional chaotic systems with a relatively narrow window for the system control parameters. In practical applications, if the system parameters can vary over a wide range while the system remains chaotic, this is highly advantageous in areas such as secure communication and chaos control [38,39].
To obtain chaotic systems with a broad parameter range, it is often necessary to introduce nonlinear terms or extend the system to higher dimensions. In 2022, Zhang et al. [40] proposed a 5D Hamiltonian conservative hyperchaotic system with four center-type equilibrium points, a wide parameter range, and co-existing hyperchaotic orbits. In 2024, Huang et al. [41] proposed a 5D symmetric Hamiltonian system with hidden multistability. In 2023, Wu et al. [42] proposed a modeling method for a class of 7D CCSs, building on previous studies, that maintains stable chaotic properties under a wide range of parameter variations. In the same year, Zhang et al. [43] constructed a 5D conservative hyperchaotic system by deriving four 5D sub-Euler equations and combining them with existing sub-Euler equations to obtain ten 5D sub-Euler equations; the resulting systems have a wide range of initial values and are multistable. Also in 2023, Zhou et al. [44] proposed a new n-dimensional conservative chaotic model with an extensive parameter range and a high maximum Lyapunov exponent, and applied it to image encryption.
In this paper, by incorporating a memristor into a 4D CCS, a 5D conservative memristive hyperchaotic system (CMHS) is constructed. This system exhibits a rich array of phenomena, such as hyperchaos and transient quasi-periodicity, across a broad range of parameters and initial values. The principal contributions of this paper are as follows:
(1) A new 5D CMHS is proposed, and its conservation is demonstrated through the analysis of divergence and the Kaplan–Yorke dimension.
(2) The dynamics of the 5D CMHS are analyzed from multiple perspectives. The analyses indicate that the proposed system remains hyperchaotic over a wide range of parameters, characterized by an infinite number of equilibrium points and transient quasi-periodicity.
(3) The system’s chaotic sequence is tested by NIST, and the outcomes show that the sequence passes all 15 NIST tests, signifying that the system possesses hyperchaotic properties suitable for a pseudo-random number generator.
(4) Based on the proposed 5D CMHS, an image-encryption algorithm is presented. From a security perspective, its feasibility is demonstrated, and its superiority is proven by comparison with other encryption algorithms.
In Section 2, the construction of the 5D CMHS is presented, along with proof of its conservativity. In Section 3, the fundamental dynamical behaviors of the system are analyzed. Section 4 investigates the hyperchaotic behavior and transient quasi-periodicity of the system over a broad range of parameters. In Section 5 and Section 6, the 5D CMHS is tested with the NIST suite and its application to image encryption is explored. Section 7 summarizes the work presented in this paper.
5. NIST Test
The NIST SP800-22 statistical test suite [
56] is a statistical test suite published by the National Institute of Standards and Technology for evaluating and verifying the statistical performance of binary pseudo-random sequences generated by random number generators or pseudo-random number generators. The NIST test suite consists of 15 statistical tests that consider a variety of statistical characteristics from many different perspectives, to detect whether a sequence of random numbers is highly random. Depending on whether the test results pass their criteria, the feasibility and safety of the random number generator can be determined.
To obtain the binary sequence required for the tests, the chaotic orbit must first be sampled to obtain the real-valued sequence generated by the system. With the sampling time and the initial values of system (3) fixed, the chaotic orbit is obtained and subsequently truncated. Sampling is then performed at the specified sampling interval, yielding a 5D real-valued sequence. Finally, a simple encoding algorithm is applied to transform the five-dimensional chaotic sequence into a binary sequence.
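The paper applies "a simple encoding algorithm" without specifying it, so the sketch below is only an illustrative assumption: each state variable is thresholded at its own median to produce a roughly balanced bit stream, and the simulated orbit is a hypothetical stand-in for the sampled output of system (3).

```python
import numpy as np

def binarize(samples: np.ndarray) -> np.ndarray:
    """Convert a real-valued chaotic sequence into a binary sequence.

    Each state variable (row) is compared with its own median, which is one
    simple way to obtain a roughly balanced bit stream.
    """
    bits = []
    for channel in np.atleast_2d(samples):        # one row per state variable
        threshold = np.median(channel)
        bits.append((channel > threshold).astype(np.uint8))
    return np.concatenate(bits)

# Example: a simulated 5-D "orbit" standing in for the sampled output of system (3)
rng = np.random.default_rng(0)
orbit = rng.standard_normal((5, 1000))
bit_sequence = binarize(orbit)
print(bit_sequence[:16], bit_sequence.mean())
```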
For this section, the chaotic random sequence generated based on system (3) was tested for randomness and uniformity through NIST tests.
A binary sequence generated from system (3) was divided equally into groups, each containing a binary sequence of $10^6$ bits. The test yielded a p-Value and the pass ratio for each group of sequences, and the corresponding p-Value$_T$ was then calculated by Equation (11):
$$p\text{-Value}_T = \mathrm{igamc}\!\left(\frac{9}{2}, \frac{\chi^2}{2}\right), \qquad \chi^2 = \sum_{i=1}^{10} \frac{\left(F_i - s/10\right)^2}{s/10}, \tag{11}$$
where igamc is the incomplete gamma function, $F_i$ is the number of p-Values falling in subinterval $i$, and $s$ is the sample size.
If the test results satisfied the following three requirements simultaneously, the sequence was considered to pass the test in the statistical sense and to have good randomness:
(a) Each p-Value should be greater than the predefined significance level.
(b) Each corresponding pass rate should lie within the confidence interval $\hat{p} \pm 3\sqrt{\hat{p}(1-\hat{p})/m}$, where $\hat{p} = 1 - \alpha$ and $m$ is the number of sequence groups. The confidence interval was calculated to be (0.9601, 1.0298).
(c) Each p-Value$_T$ should be greater than 0.0001.
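For reference, the sketch below computes the uniformity statistic of Equation (11) with `scipy.special.gammaincc` (the regularized upper incomplete gamma function, i.e., igamc) and the pass-rate confidence interval from the standard NIST SP800-22 definitions. The significance level and group count in the example are placeholders, not necessarily the values used in the experiments reported here.

```python
import numpy as np
from scipy.special import gammaincc

def p_value_T(p_values: np.ndarray) -> float:
    """Uniformity test of the per-sequence p-Values (cf. Equation (11))."""
    s = len(p_values)
    counts, _ = np.histogram(p_values, bins=10, range=(0.0, 1.0))  # F_i per subinterval
    chi2 = np.sum((counts - s / 10) ** 2 / (s / 10))
    return float(gammaincc(9 / 2, chi2 / 2))      # igamc(9/2, chi^2/2)

def pass_rate_interval(alpha: float, m: int) -> tuple[float, float]:
    """Confidence interval for the proportion of sequences that pass."""
    p_hat = 1 - alpha
    half_width = 3 * np.sqrt(p_hat * (1 - p_hat) / m)
    return p_hat - half_width, p_hat + half_width

# Example with 100 hypothetical per-sequence p-Values
rng = np.random.default_rng(1)
p_vals = rng.uniform(size=100)
print(p_value_T(p_vals), pass_rate_interval(0.01, 100))
```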
The full test results are shown in Table 3. For items comprising multiple sub-tests, there were multiple p-Values in the test results, and only the smallest p-Value among all the sub-tests is listed in Table 3; if the smallest p-Value passed the test, the item passed every sub-test. As can be seen in Table 3, the p-Values, the passing percentages, and the p-Value$_T$ values all satisfied the above three requirements, so the pseudo-random number sequences generated in this section passed all the NIST tests.
In addition, in order to observe the distribution of the p-Values more intuitively and to assess the uniformity of the sequence, Figure 9 shows a histogram of the distribution of the p-Values in the non-overlapping template matching test. As can be seen from Figure 9, the distribution of the p-Values is very uniform. Therefore, the pseudo-random number sequence generated based on system (3) successfully passed all the NIST tests, with good randomness and uniformity.
6. Image-Encryption Application
In recent years, with the advancement of chaos theory, chaos-based image-encryption algorithms have excelled in the field of image encryption, owing to their rich and complex dynamical behaviors and high sensitivity to initial values.
In nonlinear dynamical systems, the fundamental difference between CCSs and DCSs lies in the fact that the former have no attractors, which provides unique advantages for image encryption [44,57]. CCSs preserve the phase-space volume, unlike dissipative systems, which tend towards one or more attractors; this reduces the predictability and reconstructability of the system's state. Since image encryption requires algorithms with a high degree of unpredictability and sensitivity, the characteristics of CCSs can effectively resist cryptographic attacks based on attractor analysis, enhancing the security of the encryption process. Therefore, when designing image-encryption algorithms, CCSs are considered a more suitable choice, due to their unique dynamical characteristics.
6.1. Image-Encryption Process
In a 256-level grayscale image, each pixel's grayscale value is represented by 8 binary bits. By extracting the bits of all pixels position by position, 8 bit planes can be formed. Specifically, the 1st plane contains the least significant bit of every pixel in the image, while the 8th plane contains the most significant bit. After layering the image in this way, a chaotic sequence generated by the 5D CMHS is used to encrypt each bit plane of the image individually. The specific encryption process is as follows:
Step 1: Select a grayscale image, represented in binary form, as the raw image to be processed, and decompose it into eight bit planes. Then, perform a scrambling operation on the higher-order bit planes, namely, the 5th, 6th, 7th, and 8th bit planes (an illustrative code sketch of Steps 1-4 is given after Step 4).
Step 2: Utilize the fourth-order classical Runge–Kutta method to iterate the 5D CMHS and generate chaotic sequences. From the state variables x and y, generate five sets of chaotic sequences, and from the state variables z and w, generate two sets of chaotic sequences. Then, discard the first set of chaotic sequences from each group, to reduce the influence of the initial conditions. Of the remaining sets, four are used for the permutation operations and one is used for the diffusion operation.
Step 3: Using Equation (13), perform Arnold scrambling on bit planes 5, 6, 7, and 8. In Equation (13), the original pixel position of the image is mapped to the scrambled pixel position after the operation, M and N represent the height and width of the image, respectively, and mod denotes the modulo operation, which keeps the pixel positions within the boundaries of the image. Since pixel positions must be integers, the floating-point chaotic sequences generated by the chaotic system need to be converted to integers; Equation (12) is therefore used to extract the values of a and b from the chaotic sequences, where N is the size of the image and round is the rounding function, ensuring that a and b are integers. After the scrambling operation, the image must be reshaped to ensure that its dimensions and structure remain unchanged.
Step 4: To enhance the encryption effect, diffusion operations are performed on the scrambled image. Equation (14) is used to apply modular arithmetic and circular left-shift diffusion to the scrambled image, effectively improving the encryption effect. Here, P represents the scrambled image, C and S are the key sequences extracted from the chaotic sequences, the hash value of the first i pixels ensures that local changes affect all subsequent pixels, and a circular left-shift operation is applied to the lowest 3 bits of the data.
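To make Steps 1-4 concrete, the sketch below works through a toy example in Python: bit-plane decomposition, a generalized Arnold (cat-map) permutation of the four higher planes with parameters a and b derived from chaotic values, and a chained modular diffusion with a circular left shift of the three lowest bits. The cat-map form, the way a and b are extracted, and the running-sum chain standing in for the per-pixel hash are illustrative assumptions rather than reproductions of Equations (12)-(14).

```python
import numpy as np

def bit_planes(img: np.ndarray) -> np.ndarray:
    """Split an 8-bit grayscale image into its 8 bit planes (index 0 = least significant bit)."""
    return np.stack([(img >> k) & 1 for k in range(8)])

def arnold_scramble(plane: np.ndarray, a: int, b: int) -> np.ndarray:
    """One round of a generalized Arnold (cat-map) permutation on a square N x N plane."""
    n = plane.shape[0]
    y, x = np.indices((n, n))
    new_x = (x + a * y) % n
    new_y = (b * x + (a * b + 1) * y) % n
    scrambled = np.zeros_like(plane)
    scrambled[new_y, new_x] = plane[y, x]
    return scrambled

def diffuse(pixels: np.ndarray, key_stream: np.ndarray) -> np.ndarray:
    """Chained modular addition followed by a circular left shift of the 3 lowest bits."""
    out = np.empty_like(pixels)
    prev = 0                                 # running value chaining earlier pixels forward
    for i, (p, k) in enumerate(zip(pixels.flat, key_stream.flat)):
        c = (int(p) + int(k) + prev) % 256
        low = c & 0b111
        c = (c & ~0b111) | (((low << 1) | (low >> 2)) & 0b111)   # rotate the 3 LSBs left by 1
        out.flat[i] = c
        prev = c
    return out

# Toy example: an 8 x 8 "image" with chaos-derived parameters a, b and a key stream
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (8, 8), dtype=np.uint8)     # stand-in for a plaintext image
planes = bit_planes(img)
chaos = rng.random(2)                                   # stand-in for chaotic sequence values
a, b = int(round(chaos[0] * 100)) % 7 + 1, int(round(chaos[1] * 100)) % 7 + 1
for k in range(4, 8):                                   # scramble the higher planes (5th-8th)
    planes[k] = arnold_scramble(planes[k], a, b)
scrambled = sum(planes[k].astype(np.uint8) << k for k in range(8))
key_stream = rng.integers(0, 256, img.size)             # stand-in diffusion key sequence
cipher = diffuse(scrambled, key_stream)
print(cipher.shape, cipher.dtype)
```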
The decryption process is the reverse of the encryption process, as shown in Figure 10.
6.2. Performance and Security Analysis
6.2.1. Histogram Analysis
The histogram represents the distribution of pixel values in an image. Ideally, the histogram of an encrypted image is uniform, so that the image can effectively resist statistical attacks. We selected three sets of images for testing, and the results are shown in Figure 11. Figure 11a,f,k are the original images, Figure 11b,g,l are the corresponding encrypted images, and Figure 11c,h,m are the decrypted images. Figure 11d,i,n are the histograms of the three original images, and Figure 11e,j,o are the histograms of the encrypted images.
According to Figure 11, the ciphertext images visually have flat histograms, while the histograms of the plaintext images fluctuate strongly; the $\chi^2$ statistic (one-tailed hypothesis test) is often used to quantitatively measure the difference between the two. For a grayscale image with 256 gray levels, let the size of the image be $M \times N$, and assume that the frequency $f_i$ of each gray level $i$ in its histogram follows a uniform distribution, i.e., $f_i = g = MN/256$, $i = 0, 1, 2, \ldots, 255$; then the statistic
$$\chi^2 = \sum_{i=0}^{255} \frac{\left(f_i - g\right)^2}{g}$$
follows a $\chi^2$ distribution with 255 degrees of freedom. Given a significance level $\alpha$, the critical value $\chi^2_\alpha(255)$ satisfies $P\!\left(\chi^2 > \chi^2_\alpha(255)\right) = \alpha$. The commonly used significance level is $\alpha = 0.05$; thus, $\chi^2_{0.05}(255) \approx 293.25$. From Table 4, it can be seen that the test statistic values of the original images are significantly greater than $\chi^2_{0.05}(255)$, while the test statistic values of the ciphertext images are all less than $\chi^2_{0.05}(255)$. It can be considered that, at a significance level of 0.05, the histogram distributions of the ciphertext images are approximately uniform.
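The criterion above can be checked with a few lines of Python; the sketch below computes the statistic from an image histogram and obtains the critical value $\chi^2_{0.05}(255)$ with SciPy, using a uniformly random array as a stand-in for a ciphertext image.

```python
import numpy as np
from scipy.stats import chi2

def histogram_chi2(img: np.ndarray) -> float:
    """Chi-square statistic of an 8-bit image histogram against the uniform distribution."""
    freq, _ = np.histogram(img, bins=256, range=(0, 256))
    expected = img.size / 256                       # g = MN / 256
    return float(np.sum((freq - expected) ** 2 / expected))

critical = chi2.ppf(0.95, df=255)                   # chi^2_{0.05}(255), about 293.25

rng = np.random.default_rng(3)
cipher_like = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(histogram_chi2(cipher_like), critical)
```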
6.2.2. Key Space Analysis
In cryptography, the key space refers to the set of all possible keys, and its size directly determines the robustness of a cryptographic system against attack: the larger the key space, the more effectively exhaustive attacks can be resisted. The algorithm proposed in this paper generates a 32-bit binary chaotic sequence as the key, based on the initial conditions of the 5D CMHS, yielding a key space significantly larger than the commonly accepted security threshold of $2^{100}$. With such a key space, the algorithm can effectively withstand exhaustive attacks.
6.2.3. Correlation Analysis of Adjacent Pixels
In addition to the image histogram, it is also necessary to compare the correlation characteristics between the plaintext image and its ciphertext image. Generally, plaintext images have strong correlations between adjacent pixel points in horizontal, vertical, and diagonal directions, while there should be no correlation between adjacent pixel points in the ciphertext image.
Let $N$ pairs of adjacent pixels be randomly selected from the image, and let their grayscale values be denoted as $(u_i, v_i)$, where $i = 1, 2, 3, \ldots, N$. The correlation coefficient between the vectors $u = (u_1, u_2, \ldots, u_N)$ and $v = (v_1, v_2, \ldots, v_N)$ is calculated using the following formula:
$$r_{uv} = \frac{\operatorname{cov}(u, v)}{\sqrt{D(u)}\sqrt{D(v)}}, \qquad \operatorname{cov}(u, v) = \frac{1}{N}\sum_{i=1}^{N}\left(u_i - \bar{u}\right)\left(v_i - \bar{v}\right), \qquad D(u) = \frac{1}{N}\sum_{i=1}^{N}\left(u_i - \bar{u}\right)^2,$$
where $\bar{u}$ and $\bar{v}$ denote the mean values of $u$ and $v$, respectively.
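A possible implementation of this measurement is sketched below; the sample size of 5000 pixel pairs and the random test array are illustrative choices, not the exact settings used for Table 5.

```python
import numpy as np

def adjacent_correlation(img: np.ndarray, direction: str = "horizontal", n: int = 5000) -> float:
    """Pearson correlation of n randomly chosen pairs of adjacent pixels."""
    rng = np.random.default_rng(0)
    h, w = img.shape
    if direction == "horizontal":
        rows, cols = rng.integers(0, h, n), rng.integers(0, w - 1, n)
        u, v = img[rows, cols], img[rows, cols + 1]
    elif direction == "vertical":
        rows, cols = rng.integers(0, h - 1, n), rng.integers(0, w, n)
        u, v = img[rows, cols], img[rows + 1, cols]
    else:  # diagonal
        rows, cols = rng.integers(0, h - 1, n), rng.integers(0, w - 1, n)
        u, v = img[rows, cols], img[rows + 1, cols + 1]
    return float(np.corrcoef(u.astype(float), v.astype(float))[0, 1])

rng = np.random.default_rng(4)
cipher_like = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(adjacent_correlation(cipher_like, "horizontal"))   # close to 0 for a cipher-like image
```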
Taking the House image as an example, the calculation results are shown in Table 5. From Table 5, it can be seen that the correlation between adjacent pixels in the plaintext image is quite strong, while the correlation between adjacent pixels in the ciphertext image is close to 0, indicating almost no correlation. Figure 12 illustrates the correlation between adjacent pixels in the horizontal, vertical, and diagonal directions for both the plaintext and ciphertext images of House. It can be observed from the figure that the pairs of adjacent pixels in the plaintext image are concentrated along the diagonal in all directions, whereas the pairs of pixels in the ciphertext image are uniformly distributed in a rectangular pattern. This corroborates the results shown in Table 5, indicating that the algorithm has a good encryption effect.
6.2.4. Information Entropy
Information entropy reflects the uncertainty of image information. It is generally believed that the higher the information entropy, the greater the uncertainty, the richer the information contained, and the less readable the information in the image. The information entropy is calculated as follows:
$$H = -\sum_{i=0}^{L-1} p(i) \log_2 p(i),$$
where $L$ represents the number of gray levels in the image and $p(i)$ denotes the probability of occurrence of the gray value $i$.
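The sketch below computes this quantity directly from an image histogram; the uniformly random array stands in for a ciphertext image and should give a value close to the ideal 8 bits per pixel.

```python
import numpy as np

def information_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy of an image, in bits per pixel."""
    freq, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = freq / img.size
    p = p[p > 0]                       # skip empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(5)
cipher_like = rng.integers(0, 256, (512, 512), dtype=np.uint8)
print(information_entropy(cipher_like))   # close to the theoretical maximum of 8
```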
As shown in
Table 6, the experimental results of information entropy for plaintext and encrypted images were compared, and the encryption algorithm proposed in this paper was contrasted with the existing literature. The results indicate that the information entropy of the encryption algorithm proposed in this paper is closer to the theoretical maximum value of 8, thereby confirming its superior encryption effectiveness.
6.2.5. Key Sensitivity Analysis
In the field of image encryption, key sensitivity analysis is an important metric. Key sensitivity means that a slight change in the key results in a significant difference between the two encrypted images obtained; if so, the algorithm has strong key sensitivity, whereas if the difference is small, the key sensitivity is weak. The number of pixels change rate (NPCR) and the unified average changing intensity (UACI) are two common indicators used to measure key sensitivity. The NPCR measures the proportion of differing pixels when the key changes slightly and is calculated by the following formula:
$$\mathrm{NPCR} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} D(i, j) \times 100\%, \qquad D(i, j) = \begin{cases} 0, & C_1(i, j) = C_2(i, j), \\ 1, & C_1(i, j) \neq C_2(i, j), \end{cases}$$
where $C_1$ and $C_2$ are the two cipher images of size $M \times N$.
The UACI assesses key sensitivity by calculating the average intensity difference at each pixel position between the two images. The calculation formula is as follows:
$$\mathrm{UACI} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \frac{\left| C_1(i, j) - C_2(i, j) \right|}{255} \times 100\%.$$
The key for this image-encryption algorithm is the initial value of the 5D CMHS. By slightly changing the initial values and encrypting the same image again, we obtained two cipher images and calculated their NPCR and UACI values. The calculation results are shown in Table 7. As the table shows, the results are close to the theoretical values (approximately 99.6094% for the NPCR and 33.4635% for the UACI).
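Both indicators are straightforward to compute; the sketch below follows the definitions above, with two independent random arrays standing in for the pair of cipher images (for such images the expected values are roughly 99.61% and 33.46%).

```python
import numpy as np

def npcr(c1: np.ndarray, c2: np.ndarray) -> float:
    """Number of pixels change rate between two cipher images, in percent."""
    return float(np.mean(c1 != c2) * 100)

def uaci(c1: np.ndarray, c2: np.ndarray) -> float:
    """Unified average changing intensity between two cipher images, in percent."""
    diff = np.abs(c1.astype(np.int16) - c2.astype(np.int16))
    return float(np.mean(diff / 255) * 100)

rng = np.random.default_rng(6)
c1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(npcr(c1, c2), uaci(c1, c2))
```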
6.2.6. Plain Image Sensitivity Analysis
Plaintext sensitivity refers to the degree of change in the ciphertext image output by the encryption algorithm when minor changes occur in the plaintext image. An excellent encryption algorithm should be highly sensitive to its input, meaning that even minor changes in the plaintext image result in significant differences in the output ciphertext image. Such minor differences can be produced by altering a few pixels in the plaintext image; for example, a pixel is randomly selected from the plaintext image and its value is increased by 1 modulo 256, producing an almost imperceptible change in the plaintext image. The results of the plaintext sensitivity analysis are shown in Table 8. The results indicate that both the NPCR and UACI values are close to their ideal values; hence, the image-encryption system based on the 5D CMHS can effectively resist plaintext attacks.
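A minimal sketch of the plaintext perturbation described above is given below; the encryption call itself is omitted (it would be whatever routine implements Steps 1-4), and the resulting pair of cipher images would then be compared with the npcr/uaci functions from the previous sketch.

```python
import numpy as np

def flip_one_pixel(img: np.ndarray, i: int, j: int) -> np.ndarray:
    """Return a copy of the image with pixel (i, j) increased by 1 modulo 256."""
    modified = img.copy()
    modified[i, j] = (int(modified[i, j]) + 1) % 256
    return modified

# Encrypt both the original image and flip_one_pixel(img, i, j) with the same key,
# then evaluate the NPCR and UACI of the two cipher images.
```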
6.2.7. Noise Attacks and Data-Loss Attacks Analysis
In the field of image encryption, noise attacks and data-loss attacks are two common threats. They aim to compromise the integrity and confidentiality of encrypted images by introducing noise or deleting parts of the data. Although some algorithms may exhibit remarkable statistical properties, they may still fall short in terms of cryptographic security. Therefore, taking the Baboon image as an example, this study comprehensively evaluated the proposed encryption algorithm by introducing salt-and-pepper noise attacks, corner-cropping attacks, and center-cropping attacks.
As depicted in Figure 13b,e,h, we subjected the encrypted images to salt-and-pepper noise attacks, center-cropping attacks, and corner-cropping attacks, respectively. The results indicate that, despite these attacks, the algorithm was still able to decrypt the images fairly intact, as shown in Figure 13c,f,i. This demonstrates that the algorithm has superior performance in terms of noise resistance and data-loss resistance and that it can effectively withstand noise attacks and data-loss attacks, thereby providing more reliable security guarantees in practical applications.
6.2.8. Time Complexity Analysis
Time complexity analysis is a crucial metric for evaluating the practicality of encryption algorithms, especially in scenarios with high real-time requirements, where algorithms need to complete complex computations within a reasonable timeframe. In this study, standard test images at two resolutions were used, and the results are shown in Table 9. The encryption time for the smaller image was 1.41 s, while that for the larger image was 5.36 s. The increase in time consumption was primarily due to the expansion of the key space through the high-dimensional characteristics and broad parameter range of the five-dimensional chaotic system, which increased the computational load for generating chaotic sequences and involved multiple iterations in the scrambling and diffusion phases. Although the time cost increased, the algorithm still maintained high efficiency. The experimental results demonstrate that the proposed algorithm achieves a balanced trade-off between encryption speed and attack resistance, meeting real-time requirements.