Single Neuronal Dynamical System in Self-Feedbacked Hopfield Networks and Its Application in Image Encryption

Image encryption is a confidential strategy to keep the information in digital images from being leaked. Owing to their excellent chaotic dynamic behavior, self-feedbacked Hopfield networks have been used to design image ciphers. However, self-feedbacked Hopfield networks have complex structures, a large computational cost and fixed parameters, which limit their application. In this paper, a single neuronal dynamical system in self-feedbacked Hopfield networks is unveiled. The discrete form of the single neuronal dynamical system is derived from a self-feedbacked Hopfield network. Chaotic performance evaluation indicates that the system has good complexity, high sensitivity, and a large chaotic parameter range. The system is also incorporated into a framework to improve its chaotic performance. The result shows that the system is well adapted to this type of framework, which means there is considerable room for further improvement. To investigate its applications in image encryption, an image encryption scheme is then designed. Simulation results and security analysis indicate that the proposed scheme is highly resistant to various attacks and competitive with some existing schemes.


Introduction
Neural networks and neuro-dynamics have expanded into diverse application areas, including signal processing, information security, encryption and associative memory [1][2][3][4][5][6]. The Hopfield network is a typical dynamic neural network with abundant dynamic characteristics. Since Hopfield proposed the model, it has been applied to solving multifarious optimization problems [7][8][9]. However, the conventional Hopfield network often yields a solution that is far from optimal [10].
Since this obstacle was reported, numerous improved methods have been applied to the Hopfield network [11][12][13][14][15]. Among these modifications, the self-feedbacked Hopfield network has properties similar to the conventional Hopfield network but a higher convergence speed [15]. It was also proved to have good chaotic dynamic behavior [16]. Therefore, the self-feedbacked Hopfield network has been widely used in optimization problems and image encryption [17][18][19][20][21][22][23][24]. However, the self-feedbacked Hopfield network still has some interesting properties to be discovered. We found that a single neuron of the self-feedbacked Hopfield network also shows complex dynamic behavior. Self-feedbacked Hopfield networks used to generate chaotic phenomena have complex structures, a large computational cost and fixed parameters [16,[18][19][20][21][22]. Due to these properties, self-feedbacked Hopfield networks need to be combined with other chaotic maps [18][19][20][21], which has limited their application. By contrast, the structure and calculation of a single neuron are simple, and the single neuron can present chaotic phenomena as its parameters vary over a continuous range. Therefore, the single neuron has a broad application scope.

The Hopfield Networks
The Hopfield network [8] is defined as Equations (1) and (2):

C_i du_i(t)/dt = Σ_{j=1..n} T_ij v_j(t) − u_i(t)/R_i + I_i, (1)

v_i(t) = f(u_i(t)), (2)

where C_i represents the capacitance of neuron i; u_i(t) represents the input of neuron i at time instance t; v_j(t) represents the output of neuron j at time instance t; R_i is the resistance of neuron i; T_ij^(−1) is the finite impedance between the output v_j and neuron i; I_i is any other fixed input current to neuron i; and f(·) is the activation function of the neurons. The structure of the conventional Hopfield network is shown in Figure 1. After Euler discretization, Equations (1) and (2) are transformed to Equations (3) and (4):

u_i(t + Δt) = u_i(t) + (Δt/C_i)(Σ_{j=1..n} T_ij v_j(t) − u_i(t)/R_i + I_i), (3)

v_i(t) = f(u_i(t)). (4)


Single Neuronal Dynamical System in Self-Feedbacked Hopfield Networks
The structure of the positively self-feedbacked Hopfield network [15] is shown in Figure 2. According to Figure 2, we have w_ii ≠ 0. For a single neuron, we do not add the outputs of the other neurons, so we set w_ij = 0, j ∈ {1, 2, . . . , n} (j ≠ i). Equations (3) and (4) can then be converted into single-neuron form, as Equations (5) and (6):

u_i(t + Δt) = u_i(t) + (Δt/C_i)(w_ii v_i(t) − u_i(t)/R_i + I_i), (5)

v_i(t) = f(u_i(t)), (6)

where Δt is the unit interval; we set Δt = 1. Equation (7) can then be obtained:

u_i(t + 1) = (1 − 1/(R_i C_i)) u_i(t) + (w_ii/C_i) v_i(t) + I_i/C_i. (7)

We assume k = 1 − 1/(R_i C_i), z = w_ii/C_i and h = I_i/C_i, so Equation (7) is transformed to Equation (8):

u_i(t + 1) = k u_i(t) + z v_i(t) + h. (8)

For the conventional Hopfield network, the activation function is the sigmoid. Therefore, this study uses the sigmoid function as the activation function, and Equation (6) is transformed to Equation (9):

v_i(t) = 1/(1 + e^(−γ u_i(t))). (9)

Thus, the single neuronal dynamical system in self-feedbacked Hopfield networks is obtained.
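As an illustrative sketch, the derivation above can be iterated numerically. The code below assumes the discrete form u(t+1) = k·u(t) + z·v(t) + h with sigmoid output v(t) = 1/(1 + e^(−γu(t))) reconstructed above; the function names and the transient-discarding length are our own choices, not part of the original scheme.

```python
import math

def snds_step(u, gamma, k, z, h):
    """One iteration of the single neuronal dynamical system (SNDS),
    using the assumed discrete form of Equations (8) and (9):
        v(t)   = 1 / (1 + exp(-gamma * u(t)))   # sigmoid output
        u(t+1) = k * u(t) + z * v(t) + h
    """
    x = max(min(gamma * u, 700.0), -700.0)  # guard exp() overflow
    v = 1.0 / (1.0 + math.exp(-x))
    return k * u + z * v + h, v

def snds_sequence(u0, gamma, k, z, h, n, discard=500):
    """Generate n outputs of the map, discarding the transient."""
    u, seq = u0, []
    for i in range(discard + n):
        u, v = snds_step(u, gamma, k, z, h)
        if i >= discard:
            seq.append(v)
    return seq
```

With the parameter values used later in the paper (γ = 250, k = 0.6, z = −0.1, h = 0.01, u_0 = 0.1), the state stays bounded because |k| < 1 and the sigmoid output lies in (0, 1).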

Dynamical Behavior in Single Neuronal Dynamical System
On the basis of Equations (8) and (9), it should be noted that the single neuronal dynamical system (SNDS) has four parameters, which can be varied to show complex dynamic behaviors. When the other parameters hold specific values, a sequence of bifurcations leading to chaos can be observed by changing one parameter. To unmask the dynamical behavior of the SNDS, the single-parameter bifurcation diagrams and the corresponding evolution diagrams of the Lyapunov exponent are drawn, as shown in Figures 3-6. In the figures, there is a distinct correspondence between the bifurcation diagrams and the evolution diagrams of the Lyapunov exponent. For parameter γ, Figure 3 shows multiple instances of entering and exiting chaos, which are associated with multiple bifurcations. The exits from chaos are sudden, corresponding to sudden decreases of the Lyapunov exponent in the evolution diagram. For parameter k, as shown in Figure 4, the system first gradually enters chaos and then gradually exits after a period of evolution; within this evolution, chaos is not continuous. Furthermore, the Lyapunov exponent diagram of k is symmetric over its domain of definition. Parameter z also exhibits discontinuous chaos over a large range, while parameter h exhibits chaos only within a very small range.
In addition, the double-parameter evolution diagrams of the Lyapunov exponent are used for a clearer understanding of the dynamical behavior of SNDS, as shown in Figure 7. The figure includes six parameter combinations, each presented by two two-dimensional evolution diagrams of the Lyapunov exponent. The second diagram of each pair is formed by setting Lyapunov exponents that are less than zero to zero. In Figure 7, some interesting phenomena can be observed. The combinations γ−z, γ−k, and k−z exhibit wide, band-shaped regions of chaos in the diagrams, which corresponds to the single-parameter evolution diagrams of the Lyapunov exponent. On the contrary, the chaotic range of the combinations involving parameter h is narrow.
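The Lyapunov-exponent evolution diagrams above are computed from map trajectories. A minimal sketch of the standard estimator for a one-dimensional map — the trajectory average of log|f′(x)| — is given below. The logistic map is used as a stand-in example because its exponent at r = 4 is known to be ln 2; all names here are illustrative and not from the paper.

```python
import math

def lyapunov_1d(f, df, x0, n=100_000, discard=1_000):
    """Estimate the largest Lyapunov exponent of a 1-D map x -> f(x)
    as the average of log|f'(x)| along a trajectory."""
    x = x0
    for _ in range(discard):                    # drop the transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log(max(abs(df(x)), 1e-300))  # guard log(0)
        x = f(x)
    return total / n

# Stand-in example: logistic map x -> r*x*(1-x); the exponent is ln 2 at r = 4.
r = 4.0
le = lyapunov_1d(lambda x: r * x * (1 - x), lambda x: r * (1 - 2 * x), 0.3)
```

The same estimator, applied to the SNDS map with one parameter swept, would produce the single-parameter evolution diagrams described in this section.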

Efficiency Analysis
High efficiency of the chaotic map is necessary, as practical applications always involve the generation of a large number of pseudorandom sequences. Compared with self-feedbacked Hopfield networks, SNDS has a low implementation cost. Table 1 shows the time elapsed by SNDS and self-feedbacked Hopfield networks when generating pseudorandom sequences. The experimental environment is as follows: Matlab R2017a, Intel (R) Core (TM) i5-9400F CPU @ 2.90 GHz with 24 GB memory, Windows 10 operating system. In the experiment, each sequence is generated 100 times, and the average running time is taken as the result. The results indicate that SNDS has higher efficiency than self-feedbacked Hopfield networks.
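The averaging protocol behind Table 1 can be reproduced with a small timing harness. This is not the authors' benchmark code: the generator below is a placeholder chaotic map standing in for SNDS, and the names are our own.

```python
import time

def average_runtime(fn, repeats=100):
    """Average wall-clock time of fn() over `repeats` runs,
    mirroring the paper's protocol (each sequence generated
    100 times, mean time reported)."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - t0) / repeats

def gen(n=10_000, x=0.3):
    """Hypothetical stand-in generator: iterate a 1-D chaotic map."""
    out = []
    for _ in range(n):
        x = 3.99 * x * (1 - x)   # placeholder map, not SNDS itself
        out.append(x)
    return out

avg = average_runtime(gen)
```

Swapping `gen` for an SNDS or full-network generator of the same sequence length gives a like-for-like comparison of implementation cost.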



Enhanced Single Neuronal Dynamic System
By incorporating SNDS into the framework proposed in [54], the enhanced single neuronal dynamic system (ESNDS) is obtained. It is described by Equation (10), where the parameter v_i'(t) is the value of v_i(t) after an intermediate calculation. The Lyapunov exponent evolution diagram of n is shown in Figure 8. In this paper, n is set to a fixed value of 14. For ESNDS, the bifurcation diagrams and Lyapunov exponent evolution diagrams of the single parameters are shown in Figures 9-12. It can be seen that the chaotic range of all parameters tends to be continuous. The Lyapunov exponent of parameter γ falls first and rises later, and a Lyapunov exponent > 0 occurs around γ = 150. The other three parameters are also chaotic over a wide range. Note that for h, the chaotic property has been greatly improved. This means that SNDS can achieve better performance by using frameworks suitable for a simple chaotic system, which greatly increases the application potential of SNDS.
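Since Equation (10) and the framework of [54] are not reproduced in this excerpt, the sketch below is only a structural illustration of an enhancement step: the raw sigmoid output is post-processed (here, scaled by 2^n with the fractional part kept, a common enhancement style) before being fed back. The post-processing rule is a hypothetical stand-in, not the published framework.

```python
import math

def esnds_step(u, gamma, k, z, h, n=14):
    """One illustrative ESNDS-style iteration (sketch only).

    The SNDS update is assumed to be u' = k*u + z*v + h with sigmoid
    output v. The intermediate calculation producing v_i'(t) in
    Equation (10) is NOT specified in this excerpt; the scale-and-wrap
    below is a hypothetical placeholder for it, with n = 14 as in the paper.
    """
    x = max(min(gamma * u, 700.0), -700.0)    # guard exp() overflow
    v = 1.0 / (1.0 + math.exp(-x))
    v_enh = (v * 2 ** n) % 1.0                # hypothetical enhancement step
    u_next = k * u + z * v_enh + h
    return u_next, v_enh
```

Whatever the exact rule, the intent matches the text: the post-processing stretches fine-grained digits of the state, widening the chaotic parameter range of the underlying simple map.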


NIST SP800-22 Test
To demonstrate the robustness of ESNDS and the potential of its application in image encryption, the NIST SP800-22 test standard is applied to ESNDS. It was designed by the National Institute of Standards and Technology (NIST) to validate the randomness of binary sequences [59]. NIST SP800-22 is the most complete statistical test suite for the randomness test of binary sequences [60]. The binary numbers are generated from the value of v_i in the iterative process of ESNDS. For each value of v_i, we discard the former 10 decimal digits and compare the result with 0.5, as shown in Equations (11) and (12):

w_i = (v_i × 10^10) mod 1, (11)

b_i = 1 if w_i ≥ 0.5, and b_i = 0 otherwise. (12)

The NIST test standard includes 15 subsets. In the experiment, all subsets were considered, and each subset outputs a p-value. If the p-value is greater than 0.01, the sequence is considered to pass that subset. The length of each binary sequence is 1,000,000 bits, and we test 100 binary sequences for each subset. During the process, the initial values of the parameters for ESNDS are set as follows: γ = 250, k = 0.6, z = −0.1, h = 0.01, and u_0 = 0.1. The result is shown in Table 2, and the p-value of the last round is put into the table. According to [59], the minimum pass rate of each subset is 96 percent. Therefore, a dynamical system is chaotic enough if the minimum pass rate is achieved in all subsets. To further investigate the pseudo-random sequences generated by ESNDS, two binary sequences are used in TestU01. As an empirical statistical test suite, TestU01 can evaluate the randomness of sequences through a collection of utilities [61]. The lengths of the two binary sequences are 30,000,000 bits and 1,000,000,000 bits, respectively. In standard test suites, a sequence size of nearly 30,000,000 is commonly used [62,63]. In the experiment, three predefined batteries, Rabbit, Alphabit, and Block Alphabit, are used to evaluate the randomness of bits generated by ESNDS.
The initial values of parameters for ESNDS are set as follows: γ = 250, k = 0.6, z = −0.1, h = 0.01, and u 0 = 0.1. The result is shown in Table 3. It can be seen that the sequences have strong randomness and ESNDS is effective.
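The bit-generation rule described for Equations (11) and (12) can be sketched as follows. The exact published equations are not shown in this excerpt, so the modular step below is our reading of "discard the former 10 decimal digits and compare the result with 0.5"; the helper names are our own.

```python
def bits_from_states(vs, digits=10):
    """Turn chaotic outputs into bits in the spirit of Eqs. (11)-(12):
    drop the first `digits` decimal digits of each value, then
    threshold the remainder against 0.5 (assumed rule)."""
    bits = []
    for v in vs:
        frac = (abs(v) * 10 ** digits) % 1.0   # discard leading digits
        bits.append(1 if frac >= 0.5 else 0)
    return bits

def monobit_balance(bits):
    """Fraction of ones; near 0.5 for random-looking bit streams
    (a quick sanity check in the spirit of the NIST monobit test)."""
    return sum(bits) / len(bits)
```

Feeding such a bit stream, 1,000,000 bits at a time, into the NIST SP800-22 suite (or packing it into bytes for TestU01's Rabbit/Alphabit batteries) reproduces the experimental pipeline described above.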

The Sensitivity to Initial Condition
The sensitivity to initial conditions describes how a slight change in a parameter or initial value generates a different sequence. In this section, the four parameters and the initial value of ESNDS are studied. The result is shown in Figure 13. It can be seen that the sequences diverge within about ten iterations for all parameters and the initial value. Therefore, ESNDS is sufficiently sensitive to initial conditions and can fully ensure encryption security.
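A divergence experiment of this kind can be sketched as below. The map used is the SNDS-style sigmoid update assumed earlier in this document, purely for illustration; the tolerance and iteration cap are our own choices.

```python
import math

def sigmoid_map(u, gamma=250.0, k=0.6, z=-0.1, h=0.01):
    """Assumed SNDS-style update, used here only for illustration."""
    x = max(min(gamma * u, 700.0), -700.0)
    return k * u + z / (1.0 + math.exp(-x)) + h

def divergence_step(u0, delta=1e-16, tol=1e-3, max_iter=1000):
    """First iteration at which two trajectories started `delta` apart
    differ by more than `tol`; None if they never separate."""
    a, b = u0, u0 + delta
    for t in range(1, max_iter + 1):
        a, b = sigmoid_map(a), sigmoid_map(b)
        if abs(a - b) > tol:
            return t
    return None
```

Plotting the two output sequences against the iteration index, as in Figure 13, makes the separation point visible directly.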


Sample Entropy
Sample Entropy (SE) is used to describe the complexity of a time series quantitatively [64]. The computing method of SE is defined in [65]. Time series with a lower degree of regularity always have a larger SE; therefore, a larger SE indicates that the time series has higher complexity. In order to clearly reflect the complexity of the sequences generated by ESNDS, we introduce two simple chaotic maps (i.e., the Sine map and the Logistic map) and the two coupled chaotic maps proposed in [48,55]. The coupled chaotic map in [48] is defined as Equation (13), and that in [55] is defined as Equations (14) and (15). For intuitive comparison, the parameters k ∈ [0, 1] and z ∈ [−1, 0] are selected to depict the SE of ESNDS, as shown in Figure 14. Furthermore, Figure 14a includes the SE of the coupled chaotic map in [55] along parameters α and r, and Figure 14b includes the SE of the coupled chaotic map in [48] and the simple chaotic maps along parameter r. It can be seen that ESNDS has a relatively wider chaotic range and larger SE than the simple chaotic maps and the coupled chaotic maps. This indicates that ESNDS can generate sequences with more complex properties, which is of significance for chaotic maps applied in data security.
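The SE computation referenced from [65] follows the standard Sample Entropy definition, which can be sketched directly: count template matches of length m and m+1 under the Chebyshev distance, excluding self-matches, and take −ln of their ratio. The tolerance default r = 0.2·std is the common convention, assumed here.

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample Entropy of sequence x: -ln(A/B), where B counts pairs of
    matching length-m templates and A pairs of length-(m+1) templates
    (Chebyshev distance < r, self-matches excluded)."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) < r:
                    c += 1
        return c

    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A strictly periodic sequence scores near zero, while an irregular sequence scores higher, which is exactly the ordering the SE comparison in Figure 14 relies on.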

Efficiency Analysis
When considering the complexity of sequences generated by chaotic maps, high efficiency of the chaotic maps is also necessary. The implementation cost of ESNDS is calculated for different sequence lengths and compared with the coupled chaotic maps proposed in [48,55], as shown in Table 4. In the experiment, each sequence is generated 100 times, and the average running time is taken as the result. It can be seen that the implementation cost of ESNDS is in the middle of the three chaotic maps. Therefore, ESNDS is suitable for data security.

Encryption Process
Step 1: The original grayscale image is read as an M × N matrix X for further processing. Each element of the matrix is an integer from 0 to 255.
Step 2: The chaotic sequence is obtained from the ESNDS for encryption. u_0, γ, k, z and h are the initial values of ESNDS, so they are used as the security keys. Iterate the ESNDS (M × N + M + N + U_0) times, and discard the former U_0 elements. A new sequence of length (M × N + M + N) is thereby obtained.
Step 3: Take the former M elements as sequence a, the next N elements as sequence b, and the remaining elements as sequence L. Sequences a and b are then modified as in Equation (16). Step 4: Obtain the column permutation matrix. The process is shown in Figure 15.

Step 5: Obtain the row permutation matrix. The process is shown in Figure 16.
Step 6: The permutated matrix is converted into the 1D matrix P = {p_1, p_2, . . . , p_{M×N}}, and the sequence L is sorted in ascending order. According to the sorting result, the matrix P is obtained. The process is shown in Figure 17.
Step 7: Obtain the diffused matrix H from the sequence L and the matrix P by Equations (17) and (18).
Step 8: Convert H into the encrypted image with the size of M × N.
The decryption is the inverse process of encryption.
In the experiment, the initial value of ESNDS is u_0 = 0.1, the parameters are γ = 250, k = 0.6, z = −0.1, h = 0.01, and four images are used to verify the encryption effect of the encryption method. The original images and the results of encryption are shown in Figure 18.
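The pipeline above can be sketched end-to-end. The chaotic generator below uses the SNDS-style map assumed earlier in this document, and the diffusion rule is a hypothetical additive form, since Equations (16)-(18) are not reproduced in this excerpt; treat this as a structural illustration of a permutation-diffusion cipher, not the authors' exact scheme.

```python
import math

def chaotic_sequence(n, u=0.1, gamma=250.0, k=0.6, z=-0.1, h=0.01, discard=500):
    """Keystream from the assumed SNDS-style map (illustrative only)."""
    out = []
    for i in range(discard + n):
        x = max(min(gamma * u, 700.0), -700.0)
        v = 1.0 / (1.0 + math.exp(-x))
        u = k * u + z * v + h
        if i >= discard:
            out.append(v)
    return out

def encrypt(img):
    """img: list of M rows of N ints in 0..255. Returns cipher matrix."""
    M, N = len(img), len(img[0])
    s = chaotic_sequence(M * N + M + N)
    a, b, L = s[:M], s[M:M + N], s[M + N:]
    # Permutation: reorder rows/columns by the ranking of a and b
    # (stands in for the Figure 15/16 permutation matrices).
    rows = sorted(range(M), key=lambda i: a[i])
    cols = sorted(range(N), key=lambda j: b[j])
    perm = [[img[rows[i]][cols[j]] for j in range(N)] for i in range(M)]
    # Diffusion: hypothetical additive chaining (Eqs. (17)-(18) not shown).
    flat = [perm[i][j] for i in range(M) for j in range(N)]
    out, prev = [], 0
    for p, l in zip(flat, L):
        c = (p + int(l * 256) + prev) % 256
        out.append(c)
        prev = c
    return [out[i * N:(i + 1) * N] for i in range(M)]
```

Because every step is keyed only by the initial values (u_0, γ, k, z, h) and each step is invertible, decryption simply runs the chain in reverse, matching the text's statement that decryption is the inverse process.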


Security Key Space
Key space refers to the totality of different keys that can be used for encryption. Due to the multiple parameters of ESNDS, it is very complicated to determine the ranges of all keys that generate chaotic sequences simultaneously. Therefore, we confirm the ranges of some parameters by the two-dimensional diagram of the Lyapunov exponent to determine the minimum key space. The two-dimensional evolution diagram of the Lyapunov exponent of k − z is shown in Figure 19. Figure 19 and Section 4.2.2 show that the spaces of both k and z are about 0.9 × 10^16, and the space of u_0 is 1 × 10^16. The minimum key space is therefore 0.9 × 10^16 × 0.9 × 10^16 × 10^16 ≈ 2^162. The minimum key space is larger than 2^128, which is enough to resist brute-force attacks [66,67].


Information Entropy
The information entropy is a measurement standard of the degree of information ordering in digital images [68]. It is defined as Equation (19):

H(X) = − Σ_{i=0..n−1} p(X_i) log_2 p(X_i), (19)

where n represents the grayscale level of an image, and p(X_i) represents the probability of the grayscale value X_i. For a completely random image, the theoretical value of the information entropy is 8 [69]. As shown in Table 5, the information entropy of the encrypted images is close to the theoretical value. This shows that the degree of information ordering tends to disorder after the encryption scheme is applied.
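The entropy of Equation (19) can be computed directly from the pixel histogram; a minimal sketch (our own helper names) follows.

```python
import math

def image_entropy(pixels):
    """Shannon entropy of 8-bit pixel values, as in Eq. (19):
    H = -sum p(X_i) * log2 p(X_i). The maximum, 8 bits,
    is attained by a uniform distribution over all 256 levels."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```

A good cipher image should score very close to 8, while a constant image scores 0, which is the contrast Table 5 quantifies.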

Correlation Analysis
In plaintext images, adjacent pixels tend to have high correlations, which relates to the discernibility of the information in the images. Therefore, it is necessary to reduce the correlation between adjacent pixels in the encrypted images [70]. The correlation coefficient is computed as Equation (20):

ρ_xy = cov(x, y) / (√D(x) √D(y)), (20)

where x and y are the gray values of adjacent pixels, and ρ_xy represents the correlation coefficient between adjacent pixels. The horizontal, vertical and diagonal correlations of the original image Lena and the encrypted image Lena are shown in Figure 20. As shown in Table 6, compared with the original images, the correlation coefficients of the encrypted images are greatly reduced. This means that the encrypted images effectively conceal the information of the original images. In addition, Table 7 demonstrates the correlation coefficients of encrypted Lena using various encryption schemes. It can be seen that our scheme achieves relatively favorable performance among these methods.
Table 7. Correlation coefficient of various schemes.
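The correlation test of Equation (20) can be sketched as follows; `horizontal_pairs` illustrates how the adjacent-pixel pairs are collected (vertical and diagonal pairs are gathered analogously), and the helper names are our own.

```python
import math

def correlation(x, y):
    """Pearson correlation of paired gray values, as in Eq. (20):
    rho = cov(x, y) / (sqrt(D(x)) * sqrt(D(y)))."""
    n = len(x)
    ex, ey = sum(x) / n, sum(y) / n
    cov = sum((a - ex) * (b - ey) for a, b in zip(x, y)) / n
    dx = sum((a - ex) ** 2 for a in x) / n
    dy = sum((b - ey) ** 2 for b in y) / n
    return cov / math.sqrt(dx * dy)

def horizontal_pairs(img):
    """Adjacent-pixel pairs (p(i,j), p(i,j+1)) from a 2-D image."""
    M, N = len(img), len(img[0])
    x = [img[i][j] for i in range(M) for j in range(N - 1)]
    y = [img[i][j + 1] for i in range(M) for j in range(N - 1)]
    return x, y
```

Natural images give coefficients near 1 on such pairs; a well-encrypted image should give values near 0, as Table 6 reports.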


Sensitivity Analysis
A good encryption scheme should be sensitive to tiny changes in the key and in the plaintext image. To test the sensitivity of the proposed scheme, two values of u_0 differing by only 1 × 10^−16 are used to encrypt the original images, respectively. The difference between two encrypted images can be measured through the Number of Pixel Change Rate (NPCR) and the Unified Average Changing Intensity (UACI). The NPCR and UACI are calculated by Equations (21) and (22) [72]:

NPCR = (Σ_{i,j} B(i, j) / (M × N)) × 100%, (21)

UACI = (Σ_{i,j} |P_1(i, j) − P_2(i, j)| / (255 × M × N)) × 100%, (22)

where P_1 and P_2 are two images with the size of M × N. If P_1(i, j) ≠ P_2(i, j), then B(i, j) = 1; otherwise, B(i, j) = 0. According to [77], the expected values of NPCR and UACI are 99.6094% and 33.4635% for 8-bit grayscale images. Table 8 shows the values of NPCR and UACI for the four images. It can be seen that the proposed encryption scheme is sensitive to tiny changes in the key.
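Both metrics are one-pass computations over the two cipher images; a minimal sketch (images given as flat pixel lists, helper name our own) follows.

```python
def npcr_uaci(p1, p2):
    """NPCR and UACI, as in Eqs. (21)-(22), for two equal-size images
    given as flat lists of 8-bit pixels. Returns percentages."""
    n = len(p1)
    diff = sum(1 for a, b in zip(p1, p2) if a != b)       # B(i, j) sum
    npcr = 100.0 * diff / n
    uaci = 100.0 * sum(abs(a - b) for a, b in zip(p1, p2)) / (255.0 * n)
    return npcr, uaci
```

For two unrelated random 8-bit images, the metrics approach the expected values quoted above (99.6094% and 33.4635%).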

Histogram Analysis
Histogram analysis counts the number of times each pixel value appears, so as to reflect the distribution of pixel values in an image [21]. The ideal histogram should be flat and smooth to resist statistical attacks. Figure 21 shows the histograms of the four original images and the histograms of the corresponding encrypted images. The pixel value distribution of the four encrypted images is uniform, so the scheme can resist statistical attacks.

Noise Robustness
Due to noise attacks or noise jamming in the transmission channel, the pixel values of cipher images may be modified [78,79]. Noise makes the information in cipher images difficult to recover. However, receivers would like to recover the original images as much as possible in this situation. Thus, an encryption scheme should have the ability to resist noise.
To test the ability to resist noise, an experiment on noise attack is performed. Four different proportions of salt-and-pepper noise are added to the encrypted Lena. The decryption process is then applied to the images with salt-and-pepper noise. The results are shown in Figure 22. It can be seen that the decrypted images recover most information of the original images.
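The noise model used in such experiments can be sketched as below: a chosen fraction of pixels is overwritten with 0 or 255 before decryption. The helper and its seeding are our own choices for reproducibility, not part of the paper.

```python
import random

def salt_and_pepper(pixels, ratio, seed=0):
    """Overwrite a `ratio` fraction of pixels with 0 or 255
    (salt-and-pepper noise), modelling channel corruption
    of a cipher image given as a flat list of 8-bit values."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    out = list(pixels)
    k = int(ratio * len(out))
    for idx in rng.sample(range(len(out)), k):
        out[idx] = rng.choice((0, 255))
    return out
```

Applying the decryption routine to `salt_and_pepper(cipher, p)` for several proportions p reproduces the experiment behind Figure 22.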

Robustness to Data Loss
In practical applications, digital images are vulnerable to data loss in the process of communication for all kinds of reasons. This may be caused by various interceptions, and some parts of digital images may be missing. In this case, the receiver can easily fail to receive intact data. To cope with this, the encrypted images should have good anti-cutting performance.
Our proposed encryption scheme has sufficient robustness to data loss. Data loss is performed at rates of 25% and 50% in different positions, and the processed images are used for decryption. The results are shown in Figure 23. It can be seen that the decrypted images visually restore most of the original details. This shows that the encryption scheme has sufficient robustness to data loss.

Speed Analysis
Since the proposed encryption scheme is a symmetric encryption scheme, the decryption is the inverse process of encryption. We therefore only analyze the encryption speed in this section.
For the time complexity analysis of the scheme, the time-consuming parts include the floating-point operations for the construction of chaotic sequences in ESNDS and the permutation-diffusion process. Table 9 lists the computational complexity of the proposed encryption scheme as well as some other chaos-based image encryption schemes. The efficiency of the proposed scheme is comparable with existing chaos-based ciphers.
Table 9. Time complexity of different schemes.


Scheme      Time Complexity
Proposed    O(3MN + 2M + 2N)
[51]        O(8MN)
[71]        O(18MN + 2MN log2(MN))
[75]        O(9MN)
[76]        O(M log(8N) + 8N log M + M + 8N)

Furthermore, the speed of the encryption scheme is tested. The experimental environment is the same as that in Section 3.2. Images of different sizes are encrypted, and the running time is shown in Table 10. In the experiment, each encryption is repeated 100 times, and the average running time is taken as the result. It can be seen that the average encryption/decryption speed of the proposed scheme is sufficient for image encryption applications.

Conclusions
In this paper, the single neuronal dynamical system in self-feedbacked Hopfield networks is proposed, and the derivation of its discrete form is given. The chaotic dynamic behavior of the system is described from single-parameter and double-parameter perspectives. The implementation cost of the system is also lower than that of self-feedbacked Hopfield networks. Furthermore, we apply a framework for improving the chaotic properties of simple chaotic systems to our system and achieve good performance. It is important to note that this applicability allows the system to be considered in more fields. In addition, an image encryption scheme based on the enhanced system is designed. The simulation results and security analysis prove that the scheme has excellent performance.
The single neuronal dynamical system in self-feedbacked Hopfield Networks still has a large scope for exploration. In future work, we will continue improving the system, such as changing the activation function.