Article

Eigenfaces-Based Steganography

Tomasz Hachaj, Katarzyna Koptyra and Marek R. Ogiela
1 Institute of Computer Science, Pedagogical University of Krakow, 30-084 Krakow, Poland
2 Cryptography and Cognitive Informatics Laboratory, AGH University of Science and Technology, 30-059 Krakow, Poland
* Author to whom correspondence should be addressed.
Entropy 2021, 23(3), 273; https://doi.org/10.3390/e23030273
Submission received: 30 January 2021 / Revised: 17 February 2021 / Accepted: 22 February 2021 / Published: 25 February 2021
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

In this paper we propose a novel transform domain steganography technique—hiding a message in the components of a linear combination of high-order eigenface vectors. By high order we mean eigenvectors responsible for dimensions with a low amount of overall image variance, which are usually related to the high-frequency parameters of an image (details). The study found that when the method was trained on large enough data sets, image quality was nearly unaffected by the modification of some linear combination coefficients used as PCA-based features. The proposed method is limited to facial images, but in the era of the overwhelming influence of social media, the hundreds of thousands of selfies uploaded every day to social networks do not arouse any suspicion as a potential steganography communication channel. To the best of our knowledge, there is no description of any popular steganography method that utilizes the eigenfaces image domain. Due to this fact, we have performed an extensive evaluation of our method using at least 200,000 facial images for training and for robustness evaluation of the proposed approach. The obtained results are very promising. What is more, our numerical comparison with other state-of-the-art algorithms proved that eigenfaces-based steganography is among the most robust methods against compression attacks. The proposed research can be reproduced because we use a publicly accessible data set and our implementation can be downloaded.

1. Introduction

Steganography is a technique of hidden communication. The fact of passing messages between sender and recipient is kept secret by embedding messages in inconspicuous containers. These may be either common files, for example images and videos, or unexpected media, like geospatial data [1], network packets [2] and others [3]. In the digital era nearly any type of file may carry additional hidden data, but images are among the most popular because the human visual system is unable to perceive the subtle changes introduced by the embedding process. There are many possible approaches to image-based steganography; we discuss selected methods in the following subsection.

1.1. State-of-the-Art in Steganography

In the spatial domain, the best known steganography technique is least significant bit (LSB) embedding, in which some bits of pixels are replaced with bits of a message. It may utilize various numbers of bits and mapping strategies; for example, in [4,5] four least significant bits are substituted. The algorithm described in [6] uses a plane bit substitution method in which message bits are embedded into the pixel values of an image; a steganography transformation machine is proposed to solve the binary operations that manipulate the original image with the help of LSB operator based matching. Ref. [7] uses a five-pixel-pair differencing technique that combines LSB and pixel value differencing (PVD) steganography. In [8] LSB message encoding is preceded by a CRC-32 checksum, then the codeword is compressed by gzip just before being encrypted by AES, and is finally added to encrypted header information for further processing. During embedding of the encrypted data, the Fisher–Yates shuffle algorithm is used to select the next pixel location.
Least significant bit replacement is among the most used spatial domain techniques, whereas discrete cosine transform (DCT) [9] and discrete wavelet transform (DWT) [10] based methods are major choices in the frequency domain. Additionally, many steganography techniques are based on singular value decomposition (SVD) for embedding a secret message. These techniques conceal data in right singular vectors, left singular vectors, singular values or combinations of these approaches, in the spatial or transform domain, with satisfactory performance against various attacks [11,12,13,14]. Paper [15] describes image and [16] video steganography for face recognition for trusted and secured authentication by applying principal component analysis (PCA), namely the eigenfaces method. In those approaches signcryption is included as an additional security measure.
An opposite approach to image steganography is presented in paper [17], where the authors propose a new generating technique that does not require any additional image to cover the secret text. Instead, the image is created from scratch, on the basis of the text that is to be sent securely.
Other steganographic techniques are based on the key generating process. In paper [18] the authors introduce a new method of data hiding using Catalan numbers and Dyck words. The hidden message is generated from the data carrier and an adequately complex stego key. An important characteristic of that method is that the data carrier retains its original shape, without supplements or modifications. Paper [19] presents an effective method of encryption and decryption of images in multi-party communications. Encryption is based on the weighted Moore–Penrose inverse over a constant matrix.
There are many documents describing state-of-the-art steganography methods. A comprehensive discussion of combining individuals' biometric characteristics with steganography may be found in paper [20]. More general surveys are presented in [21,22,23,24].

1.2. Motivation of This Paper

In this paper we propose a novel transform domain technique in which the message is hidden in the components of a linear combination of high-order eigenface vectors. By high order we mean eigenvectors responsible for dimensions with a low amount of overall image variance, which are usually related to the high-frequency parameters of an image (details). They seem to have marginal influence on image quality, especially if the eigenfaces method is trained on large enough data sets.
The choice of facial images as a medium is motivated by the overwhelming influence of social media, to which hundreds of thousands of photos are uploaded every day. A high percentage of them are selfies, which do not arouse any suspicion as a potential steganography communication channel.
The main difference between the proposed algorithm and existing solutions is that our method utilizes only certain fragments of the image, namely the faces present in the picture. Due to this fact, after hiding a secret, the rest of the image besides the faces remains unmodified. Thanks to this, there are no global histogram shifts in the image, which are a main indicator of potential image modification. What is more, we wanted our method to be highly robust to various common image transformations. We wanted to obtain bit-level accuracy of the encoded data rather than pixel-level accuracy as in [14], so our priority was robustness rather than carrier capacity. Therefore the proposed method can easily be used not only for watermarking but also for communication through a channel that does not preserve the integrity of the message; for example, when an image is uploaded to social media, its quality may be altered.
To the best of our knowledge, there is no description of any popular steganography method that utilizes the eigenfaces image domain. Due to this fact, we have performed an extensive evaluation of our method using at least 200,000 facial images for training and for robustness evaluation of the proposed approach. The proposed research can be reproduced because we use a publicly accessible data set and our implementation can be downloaded from a GitHub repository (https://github.com/browarsoftware/EigenfacesSteganography, accessed on 30 January 2021).
The paper is composed of five sections. In Section 2 we present our steganography method, the data set we used for tests, and the evaluation metrics. We performed an extensive validation of the proposed method, the results of which are presented in Section 3. In Section 4 and Section 5 we discuss the obtained results, presenting advantages and disadvantages of eigenfaces-based steganography and potential solutions that can be used to overcome its drawbacks.

2. Materials and Methods

In this section we propose our novel method, present the data set we used for training and validation, and explain the robustness tests we performed.

2.1. Eigenfaces and Eigenfaces-Based Steganography

An eigenface is a k-dimensional vector of real values that represents features of a pre-aligned facial image. Eigenfaces are based on principal component analysis (PCA). Let us suppose that we have a set of l images $[I_1, I_2, \ldots, I_l]$ with uniform dimensionality $m \times n$. Each face image, being initially a two-dimensional matrix, is "flattened" by placing its columns one below another, thereby becoming a single column vector. Then we use PCA to perform variance analysis and to recalculate the current coordinates of those vectors into a PCA coordinate system whose axes are ordered from those that represent the highest variance to those that represent the lowest variance. Let us assume that we have a matrix D with dimensionality $(n \cdot m) \times l$ in which each column is a flattened image:
$D_{[n \cdot m, l]} = [I_1, I_2, \ldots, I_l]$ (1)
Next we calculate a column vector M (the so-called mean face) in which each row is the mean value of the corresponding row in matrix D:
$M = \left[ \frac{\sum_{i=1}^{l} I_{i,1}}{l}, \frac{\sum_{i=1}^{l} I_{i,2}}{l}, \ldots, \frac{\sum_{i=1}^{l} I_{i,n \cdot m}}{l} \right]^T$ (2)
where $I_{i,j}$ is the j-th pixel of the i-th image. This mean face is then subtracted from each column of matrix D to calculate a new matrix called $D'$:
$D'_{[n \cdot m, l]} = [I_1 - M, I_2 - M, \ldots, I_l - M]$ (3)
Then a covariance matrix is created:
$C_{[l, l]} = \frac{1}{l} \cdot D'^{T}_{[n \cdot m, l]} \cdot D'_{[n \cdot m, l]}$ (4)
Because C is symmetric and positive definite, all its eigenvalues are real and positive. Eigenfaces E are calculated as:
$E_{[n \cdot m, l]} = D'_{[n \cdot m, l]} \cdot EC_{[l, l]}$ (5)
where $EC$ are the eigenvectors of C ordered from the highest to the lowest eigenvalue.
In order to generate the eigenfaces-based features $e_i$ of a face image $I_i$, one has to perform the following operation:
$e_i = E^{T}_{[n \cdot m, l]} \cdot (I_i - M)$ (6)
The inverse procedure, which recalculates the image vector coordinates back to the original coordinate system, is:
$I'_i = (E_{[n \cdot m, l]} \cdot e_i) + M$ (7)
We can use the k first eigenvectors, where $k < l$. In this case $I'_i \approx I_i$, and $I'_i$ keeps at least a percentage of variance equal to the scaled cumulative sum of the eigenvalues corresponding to the k first eigenvectors. In other words, when $k < l$, vector $I'_i$ represents a "compressed" facial image $I_i$ with respect to overall variance. The $e_i$ features are the coefficients of a linear combination of the columns of E. Coefficients with lower indices correspond to dimensions with higher variance.
Eigenface-based feature calculation is performed after the designation of $E_{[n \cdot m, l]}$ in (5). We have to subtract the mean vector M from the face image $I_i$ in the same manner as was done in (3). Image $I_i$ may be replaced by any face image, also one not included in D; however, it has to have the same resolution as the images in D, namely $m \times n$. When $I_i$ is not from data set D, then we can safely assume that $I'_i \neq I_i$.
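The pipeline of Equations (1)–(7) can be written down in a few lines of NumPy. The following is a minimal sketch, not the authors' released implementation (their code is available from the GitHub repository linked in Section 1.2); the function names are ours, and the unit normalization of the columns of E is an added assumption so that the projection (6) is inverted by the reconstruction (7):

```python
import numpy as np

def train_eigenfaces(faces):
    """faces: array of shape (l, n*m), one flattened grayscale image per row."""
    D = faces.T.astype(np.float64)            # Equation (1): one image per column
    M = D.mean(axis=1, keepdims=True)         # Equation (2): mean face
    Dp = D - M                                # Equation (3): centered matrix D'
    l = Dp.shape[1]
    C = (Dp.T @ Dp) / l                       # Equation (4): l x l covariance
    eigval, EC = np.linalg.eigh(C)            # eigh suits the symmetric matrix C
    order = np.argsort(eigval)[::-1]          # order by decreasing eigenvalue
    E = Dp @ EC[:, order]                     # Equation (5): eigenfaces
    E /= np.linalg.norm(E, axis=0) + 1e-12    # unit columns so that (7) inverts (6)
    return M, E

def to_features(I, M, E):
    """Equation (6): project a flattened face (column vector) onto E."""
    return E.T @ (I.reshape(-1, 1) - M)

def from_features(e, M, E):
    """Equation (7): reconstruct the face image from its coefficients."""
    return E @ e + M
```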
The idea of eigenface-based steganography is to replace a range of coefficients $e_{[j, j+o]}$ with a binary encoded message s of length o, where j is an offset from the beginning of the eigenface-based feature vector. The message is stored by changing the original values of the linear combination to binary-encoded values $\{-\frac{1}{div}, \frac{1}{div}\}$, where $-\frac{1}{div}$ represents 0, $\frac{1}{div}$ represents 1 and $div$ is a scaling parameter. The transformation of the original binary message s into the values of message $s'$, which is directly inserted into the eigenface coefficients, goes as follows:
$s'_i = \frac{2 \cdot s_i - 1}{div}$ (8)
where $s_i$ is the i-th binary coefficient of vector s that contains the message to hide. The inverse procedure is:
$s_i = \mathrm{round}\left( \frac{s'_i \cdot div + 1}{2} \right)$ (9)
where round denotes rounding to the closest integer.
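The scaling (8) and its inverse (9) are two one-liners; a sketch with div = 20 (the value used in Section 3) as the default follows. The clipping step is our own safeguard for coefficients that drift far from the nominal range and is not part of the method's definition:

```python
import numpy as np

def scale_bits(s, div=20.0):
    """Equation (8): map message bits {0, 1} to {-1/div, +1/div}."""
    return (2.0 * np.asarray(s, dtype=np.float64) - 1.0) / div

def unscale_bits(s_prime, div=20.0):
    """Equation (9): round (possibly disturbed) coefficients back to bits."""
    bits = np.round((np.asarray(s_prime, dtype=np.float64) * div + 1.0) / 2.0)
    return np.clip(bits, 0, 1).astype(int)   # clipping: our safeguard
```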
The offset value j and the maximum value of the secret length o need to be determined; this is a subject of discussion in the following sections. The message hiding algorithm is presented in Algorithm 1; the message recovering algorithm is presented in Algorithm 2.
Algorithm 1: Message encoding algorithm.
  Data: J—image containing a face,
  s—message to hide with length o bits,
  j—offset from which the coefficients will be replaced by the message,
  div—divider parameter.
  Result: Image $J'$ with hidden data.
  1. Extract the face image from J (Figure 1A);
  2. Perform aligning of the extracted face and store it in matrix $I_i$;
  3. Generate $e_i$ from $I_i$ using (6) (Figure 1B,D);
  4. Starting from index j, replace the original o values in $e_i$ with $s'_i$, the scaled coefficients that represent the binary message data (8) (Figure 1E,F);
  5. By application of (7), generate $I'_i$ using the modified $e_i$ (Figure 1G);
  6. Insert $I'_i$ into J, replacing the original face; image $J'$ is created (Figure 1H).
Figure 1. This figure presents an outline of the eigenface-based steganography secret encoding framework. (A) A face image is extracted from an image; (B) a mean face is subtracted from the face image; (C) presents selected eigenfaces visualized as 2D images; (D) by applying Equation (6) we can generate the eigenfaces-based features of a face, which are the coefficients of a linear combination; the values in the sum (D) are the actual coefficients of the face in the turquoise frame. The high-order coefficients (E) of that linear combination have fractional influence on the reconstructed image's visual quality and can be used to hide the secret. Hiding can be done simply by replacing those coefficients with the rescaled secret using Equation (8). (F) is the linear combination with high-order coefficients replaced by the secret; it is used to reconstruct the face image with Equation (7), represented as the face inside the green frame. The last two steps are: (G) adding the mean face to the face with hidden data and (H) inserting the face image into the original hosting media.
Figure 1 presents the procedure of encoding a secret. All numerical values presented in this figure will be justified in Section 3. The person in the image has been anonymized. The number above each eigenface is its index. Eigenfaces are ordered by the decreasing eigenvalues that correspond to them. As can be seen, the first several eigenfaces represent global properties of the image (i.e., lighting), while the following eigenfaces are responsible for high-frequency details.
Algorithm 2: Message decoding algorithm.
  Data: $J'$—image containing a face,
  j—offset from which the coefficients have been replaced by the message,
  o—length of the hidden message in bits,
  div—divider parameter.
  Result: s—decoded message with length o bits.
  1. Extract the face image from $J'$;
  2. Perform aligning of the extracted face and store it in matrix $I'_i$;
  3. Generate $e_i$ from $I'_i$ using (6);
  4. Starting from index j, extract o values from $e_i$, which are the rescaled message values ($s'_i$), then restore the binary message $s_i$ by rescaling with the div parameter (9).
The procedure of decoding a secret, which is presented in Figure 2, is very similar to the encoding process.
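Putting the eigenface machinery and the bit scaling together, Algorithms 1 and 2 reduce to a few lines when applied to an already aligned face vector. This sketch reuses the helpers defined above (our names, not the authors' API); for the configuration evaluated in Section 3, encode_message would be called with j = 1499 and at most o = 2974 bits:

```python
import numpy as np

def encode_message(I, M, E, bits, j, div=20.0):
    """Algorithm 1, steps 3-5: hide bits in coefficients j .. j+len(bits)-1."""
    e = to_features(I, M, E)                        # Equation (6)
    e[j:j + len(bits), 0] = scale_bits(bits, div)   # Equation (8)
    return from_features(e, M, E)                   # Equation (7): stego face

def decode_message(I_stego, M, E, j, o, div=20.0):
    """Algorithm 2, steps 3-4: recover o bits starting at offset j."""
    e = to_features(I_stego, M, E)
    return unscale_bits(e[j:j + o, 0], div)         # Equation (9)
```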

2.2. The Data Set

In order to successfully generate eigenfaces with strong descriptive power and to validate our approach, we needed a sufficiently large collection of faces. Among the largest publicly available open face repositories, we chose the Large-scale CelebFaces Attributes (CelebA) Dataset [25], which may be downloaded from the Internet (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, accessed on 1 January 2021). It consists of 202,599 images of celebrities, actors, sportspersons, etc. In this research we used the aligned and cropped version of those images. Each of the aligned and cropped photos has the same resolution, and the face is centered so that the eyes of each person are located in nearly the same position. There is no information on how the alignment was done; however, it can be very closely reproduced with the Histogram of Oriented Gradients (HOG) method [26]. The face position estimator can be created using the Python dlib library's implementation of the work of Kazemi et al. [27] with a face landmark data set (https://github.com/davisking/dlib-models, accessed on 1 January 2021) trained on [28]. In order to reduce the complexity of further computations, we limited the size of the images to a resolution of 70 × 109, cropping all regions of the image besides the face itself. For the same reason we also converted the images from RGB to grayscale. We have to remember, however, that the proposed steganography approach may also be applied to RGB data, because eigenfaces are calculated in the same way on multiple colour channels as on a single channel. Nonetheless, this discussion is out of the scope of this paper.
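A minimal preprocessing sketch with OpenCV follows. The exact crop box used for the CelebA images is not given above, so this version only converts to grayscale and resizes; note that cv2.resize expects the target size as (width, height):

```python
import cv2
import numpy as np

def load_face(path, size=(70, 109)):
    """Load one aligned face crop as a flattened grayscale vector."""
    img = cv2.imread(path)                        # BGR image from disk
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # drop colour channels
    face = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    return face.astype(np.float64).flatten()

# faces = np.stack([load_face(p) for p in image_paths])  # feeds train_eigenfaces
```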

2.3. Comparison of Covariance Matrices

The very heart of the eigenfaces approach is the preparation of a representative data set that is used to calculate the linear transform from the original coordinates to the space obtained by principal component analysis. Theoretically, the more face samples we take (provided that the data set covers a representative sample of images for future encoding), the better the variance analysis we get. However, given that we operate on personal computers with limited RAM, we have to keep in mind that solving the eigenvalue problem is a complex and time-demanding task, especially for relatively large matrices (i.e., $10^4 \times 10^4$ or larger). Due to this, we wanted to estimate the size of the training set above which the covariance matrix does not change much. In order to do so, we compared the geodesic distance between the covariance matrix generated from the whole training data set, $C^2_{ref}$, and covariance matrices generated from subsets of the validation data set of different sizes, $[C^2_{k_1}, \ldots, C^2_{k_p}]$, where $[k_1, \ldots, k_p]$ are subsets of the validation data set.
In order to calculate the covariance matrix $C^2$ we used the formula:
$C^2_{[n \cdot m, n \cdot m]} = \frac{1}{l} \cdot D'_{[n \cdot m, l]} \cdot D'^{T}_{[n \cdot m, l]}$ (10)
where $C^2$ has the same dimension $[n \cdot m, n \cdot m]$ no matter how many images l were used in the calculation.
Due to the fact that covariance matrices are symmetric positive definite, we can measure the distance between them using the Log-Euclidean Distance (LED) [29]. Let us assume that A and B are symmetric positive definite. The geodesic distance between A and B can be expressed in the domain of matrix logarithms [30] as:
$LED(A, B) = \| \log(A) - \log(B) \|_F$ (11)
In the above equation, $\| \cdot \|_F$ is the Frobenius norm, which is calculated for an n by m matrix A as [31]:
$\| A \|_F = \sqrt{ \sum_{i=1}^{n} \sum_{j=1}^{m} |a_{ij}|^2 }$ (12)
Let us assume that C and D are square matrices. A matrix C is a logarithm of D when:
$e^C = D$ (13)
where the matrix exponential is defined as:
$e^A = \sum_{i=0}^{\infty} \frac{A^i}{i!}$ (14)
Any non-singular matrix has infinitely many logarithms. As the covariance matrix does not have negative eigenvalues, we can use the method described in [32] to calculate the logarithm.
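In practice the whole distance (11) amounts to a couple of library calls. The following sketch assumes SciPy is available; scipy.linalg.logm implements an inverse scaling-and-squaring algorithm of the kind described in [32], and the small ridge eps is our own addition to guard against numerically singular covariance estimates:

```python
import numpy as np
from scipy.linalg import logm

def led(A, B, eps=1e-10):
    """Equation (11): Log-Euclidean Distance between SPD matrices A and B."""
    n = A.shape[0]
    LA = logm(A + eps * np.eye(n))   # matrix logarithm of regularized A
    LB = logm(B + eps * np.eye(n))
    return np.linalg.norm(LA - LB, ord='fro')   # Frobenius norm, Equation (12)
```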
Another approach to assessing the influence of the correlation matrix size on the quality of reconstruction from a limited number of PCA coefficients is based on two measures: the averaged mean square error (MSE) (15) between the actual ($Ac$) and reconstructed ($Re$) image, and the averaged Pearson correlation coefficient (CC) (16) between the actual and PCA-compressed image. The mean square error between the actual and reconstructed image is defined as:
$MSE(Ac, Re) = \frac{1}{n} \sum_{i=1}^{n} (Ac_i - Re_i)^2$ (15)
The Pearson correlation coefficient (CC) between the actual image $Ac$ and the reconstructed image $Re$ is defined as:
$CC(Ac, Re) = \frac{Cov(Ac, Re)}{\sigma_{Ac} \cdot \sigma_{Re}}$ (16)
where $Cov(Ac, Re)$ is the covariance between $Ac$ and $Re$, and $\sigma_{Ac}$, $\sigma_{Re}$ are the standard deviations of $Ac$ and $Re$.
To better visualize the relationship between LED and the size of the covariance matrix, we can also use relative values of the LED coefficient:
$\mathrm{Relative}\ LED_i = \frac{LED_{i-1} - LED_i}{LED_i}$ (17)
Relative MSE and Relative CC may be calculated similarly to (17).
The LED geodesic distances were compared with the averaged mean square error (MSE) and the averaged Pearson correlation coefficient (CC) between the actual images from the validation data set and their reconstructions generated by PCA. The goal of this test was to check whether the geodesic distance corresponds to the averaged MSE and CC of the PCA-compressed data.

2.4. Robustness Tests

We tested the robustness of the proposed steganography method against common transformations. Each test was applied to an image that contained hidden data.
  • Rotation of the image about its centre using third-order spline interpolation.
  • Salt and pepper noise—replacing a given number of randomly chosen pixels with either the value 0 (black) or 255 (white).
  • JPEG compression with various quality settings [33].
  • Linear scaling with bicubic pixel interpolation.
  • Image cropping—for obvious reasons the eigenfaces method is very sensitive to image cropping; however, it seems to be resistant to mixing the encoded signal with the original image. We can do this with the following steps (a code sketch follows after this list). At first we create a matrix $Cir$, whose elements $cir_{ij}$ are defined using the following formulas:
    $cir_{ij} = \sqrt{ \left( \frac{2 \cdot i}{n} - 1 \right)^2 + \left( \frac{2 \cdot j}{m} - 1 \right)^2 }; \quad i \in \{0, \ldots, n\},\ j \in \{0, \ldots, m\}$ (18)
    $Cir = 1 - \frac{1}{\sqrt{2}} \cdot \begin{bmatrix} cir_{1,1} & \cdots & cir_{1,m} \\ \vdots & \ddots & \vdots \\ cir_{n,1} & \cdots & cir_{n,m} \end{bmatrix}$ (19)
    We can use this matrix to mix the original image I with the image with encoded data E by linearly scaling the amount of E using a threshold parameter t:
    $Cir_t = \begin{cases} a_{i,j} & \text{when } a_{i,j} < 1 - t \\ 1 & \text{when } a_{i,j} \geq 1 - t \end{cases}$ (20)
    where $a_{i,j}$ are the elements of $Cir$, and
    $Mix = Cir_t \cdot E + (1 - Cir_t) \cdot I$ (21)
    where multiplication is performed element-wise.
Figure 3 presents how the shape of the circular elements changes as the value of the threshold t increases.
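A short NumPy sketch of the mask (18)–(20) and the mixing (21) follows. The index normalization uses linspace so that the mask matches the image shape exactly; this detail is our own reading of Equation (18):

```python
import numpy as np

def circular_mask(shape, t):
    """Equations (18)-(20): radial mask, 1 in the centre, fading towards corners."""
    n, m = shape
    i = np.linspace(-1.0, 1.0, n).reshape(-1, 1)
    j = np.linspace(-1.0, 1.0, m).reshape(1, -1)
    cir = np.sqrt(i ** 2 + j ** 2)                # Equation (18)
    Cir = 1.0 - cir / np.sqrt(2.0)                # Equation (19)
    return np.where(Cir >= 1.0 - t, 1.0, Cir)     # Equation (20)

def mix_images(stego, original, t):
    """Equation (21): keep the stego face in the centre, blend back to the
    original towards the borders (element-wise multiplication)."""
    C = circular_mask(stego.shape, t)
    return C * stego + (1.0 - C) * original
```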
Messages retrieved from the disturbed images were compared with the original message using the following similarity coefficients and measures (a sketch of all three follows this list):
  • Binary coefficient (BC) that equals 0 when both messages are identical and 1 otherwise.
  • Levenshtein distance (LD) [34], which is a measure of the similarity between two strings. The distance is the number of deletions, insertions, or substitutions required to transform one string into the other. The greater the Levenshtein distance, the more divergent the strings are [35].
  • Sørensen–Dice coefficient [36]:
    $DSC(v_1, v_2) = \frac{2 \cdot \| \mathbf{1}(v_1 \neq v_2) \|}{\| v_1 \| + \| v_2 \|}$ (22)
    where $v_1$, $v_2$ are binary vectors, $\mathbf{1}(v_1 \neq v_2)$ is the binary vector marking the positions at which they differ, and $\| v \|$ is the cardinality of binary vector v; the coefficient therefore equals 0 for identical messages.
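The three measures can be sketched as follows; the Levenshtein routine is a plain dynamic-programming implementation rather than any particular library, and the DSC function follows the reconstruction of (22) given above:

```python
import numpy as np

def binary_coefficient(v1, v2):
    """BC: 0 when both messages are identical, 1 otherwise."""
    return 0 if np.array_equal(v1, v2) else 1

def dice(v1, v2):
    """Equation (22): 0 for identical binary messages."""
    v1, v2 = np.asarray(v1), np.asarray(v2)
    return 2.0 * np.sum(v1 != v2) / (np.sum(v1) + np.sum(v2))

def levenshtein(a, b):
    """LD: edit distance counting deletions, insertions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```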
We also used the two following metrics to compare the results obtained by our solution with state-of-the-art methods: the Pearson correlation (CC) and the peak signal-to-noise ratio (PSNR). CC is used to evaluate the similarity between the original secret and the recovered secret message, and PSNR in dB is used to evaluate the similarity of the original image and the stego image:
$PSNR = 10 \cdot \log_{10} \left( \frac{I_{max}^2}{MSE} \right)$ (23)
where $I_{max}^2$ is the square of the maximum pixel value of the original image (in our case 255) and MSE is the mean square error between the stego image and the original image.
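A compact sketch of (23) for 8-bit images; returning infinity for identical inputs is our own convention for the zero-MSE case:

```python
import numpy as np

def psnr(original, stego, max_val=255.0):
    """Equation (23): peak signal-to-noise ratio in dB."""
    diff = original.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(diff ** 2)                 # Equation (15) over all pixels
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```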

2.5. Hiding Data in Larger Images

When the facial image is part of a larger photo (which is true in most cases), it should first be extracted, then encoded and decoded using eigenfaces, and finally inserted back into the photo. As the modified face is not identical to the original, it may be possible to spot visual artifacts in the rectangular region containing the face. In order to blur the manipulation, we can use the clipping procedure described in Equations (18)–(21). Then, step 5 of the message encoding algorithm (Algorithm 1) is extended in the following manner:
 
5. By application of (7), generate $I'_i$ using the coefficients modified in (8); apply (18)–(21) to blur the borders of the inserted data.
 
The algorithm has one additional parameter, t (the threshold from (20)). The message decoding algorithm (Algorithm 2) remains unchanged.

3. Results

The proposed solution was implemented in Python 3.8 (though there seem to be no obstacles to running it on lower versions of the language). Among the most important packages we used was OpenCV-Python 4.1.2.30 for general-purpose image processing. For algorithm training and evaluation we used a PC equipped with an Intel i7-9700F 3.00 GHz CPU, 64 GB RAM and an NVIDIA GeForce RTX 2060 GPU running Windows 10, and a second PC with a similar hardware architecture, however with 128 GB RAM, running Linux.
We used the data set described in Section 2.2 to evaluate this research. The faces data set was divided into halves: the first half (101,299 faces) was used as the training data set; the second half (101,300 faces) was the validation/test data set.
At first we estimated how many images we should use to create the eigenfaces. In order to do so, we compared covariance matrices using the methodology described in Section 2.3. To create the $C^2_{ref}$ matrix (10), we used the whole validation dataset. In order to create the $C^2_{k_a}$ matrices, the training dataset was divided into subsets. The subset with index a contained $\#k_a = 2025 \cdot a$ faces (2025 is about 2% of the training dataset), where $a \in [1, 2, \ldots, p]$.
$C^2_{ref}$ and $C^2_{k_a}$ were compared using the Log-Euclidean Distance (11). This was possible because both matrices had identical dimensions. Then, for each $C^2_{k_a}$, every image $I_b$ in the validation data set was encoded and then decoded using eigenfaces that explained at least 0.999 of the variance; as a result, an image $I'_b$ was created. MSE (15) and CC (16) were calculated for $I_b$ and the corresponding $I'_b$ (they were calculated for images from the validation/test data set because the $C^2_{k_a}$ were generated from the training data set). The averaged values of MSE and CC showed how well the eigenfaces of the various $C^2_{k_a}$ described the data set. We also calculated relative values of LED and MSE according to Equation (17). The CC coefficient had a high value from the beginning and did not change much; due to this, we skipped the calculation of relative CC. When we used 30,375 images to create the correlation matrix for PCA, both relative LED and relative MSE dropped below 0.03. Based on this fact, we decided that this might be a sufficient size of the training data set to evaluate our methodology. What is more, the eigendecomposition of a $3 \times 10^4$ matrix could be done in a reasonable time, and a further increase of the data set did not change LED and averaged MSE much. Our choice of dataset size corresponds to the row with 30,375 faces in Table 1.
The results from Table 1 (besides CC, which did not change much during the experiment) are visualized in Figure 4.
There is a limited amount of data that can be hidden within eigenfaces; it is constrained most by the face image resolution that was used to produce matrix D (1) and then $E_{[n \cdot m, l]}$ (5). The second constraint is the distribution of variance among the eigenfaces, the influence of which on the capacity of the medium is discussed later on.
Typically, steganography algorithms are evaluated on a set of benchmark images like Lena, Peppers, Airplane, etc. In our case, however, eigenfaces-based steganography operates only on face data; because of this, we needed a different validation dataset. The evaluation of the robustness of the proposed steganography algorithm was performed with the methodology described in Section 2.4. We used the training data set containing 30,375 faces and the validation data set with 101,300 faces. The scaling parameter div in the binary data encoding was arbitrarily set to 20.
After applying PCA, we calculated the number of dimensions that describe the variance in the training data set (consisting of 30,375 faces): 0.75 of the variance was explained by 15 dimensions, 0.9 by 90 dimensions, 0.95 by 254 dimensions, 0.99 by 1499 dimensions and finally 0.999 by 4473 dimensions. A larger number of dimensions used for image encoding may introduce some high-frequency noise caused by statistically insignificant data fluctuations in the training data set. We can encode data in any eigenface coefficients between the first and the 4473rd value; however, changes in the linear combination coefficients that are more important for variance explanation will be clearly visible in the encoded image. Because of that, we decided to modify the data between the 1499th and 4473rd coefficients. In this configuration, the potential maximal capacity of the image is $(4473 - 1499)/8 \approx 371$ bytes. In further tests we considered the following message lengths: 18 bytes (~5% of capacity), 37 bytes (~10% of capacity), 87 bytes (~23% of capacity), 174 bytes (~47% of capacity) and 370 bytes (~100% of capacity). We chose those values for convenience of calculation; this distribution of lengths also allows us to nicely plot the performance of the proposed steganography method as a function of the robustness test parameters. Using five message lengths unevenly distributed over the capacity range allows visualizing the general characteristics of the proposed method. Of course, it is possible to evaluate the method using more sample points; however, that would not introduce much new information. Additionally, the evaluation of such a large validation data set (approximately 100,000 images) lasted 24 to 36+ hours for each robustness test on the hardware we used.
In the robustness tests, besides the various parameters described in Section 2.4, we used the five mentioned lengths of encoded messages (18, 37, 87, 174 and 370 bytes). The obtained results are presented in Table 2, Table 3, Table 4, Table 5 and Table 6 and Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, using values of the binary coefficient (BC), Levenshtein distance (LD) and Sørensen–Dice coefficient (DSC) averaged over all images from the validation data set.
Next, we tested the performance of the algorithms described in Section 2.1 and Section 2.5 using various lengths of encoded messages and values of the t parameter. We calculated the following statistics between the original image and the image with hidden data: averaged MSE, averaged maximal difference of pixels, and averaged Pearson correlation coefficient of pixels. The results are presented in Table 7 and Figure 10. Figure 11 visualizes example differences between an original image and images with hidden data. In some tables we have skipped certain zero-filled ranges in order to make the results shorter and more comprehensible for the reader; the larger ranges of values are presented in the figures.
The proposed method was compared with the following contemporary algorithms [11,12,13,14] in terms of CC and PSNR—see Table 8. We evaluated the robustness of the proposed steganography method (PM) against a compression attack, which seems to be the most common scenario in the case of publishing stego images on social media.

4. Discussion

For the reasons described in Section 3, we selected a training data set containing 30,375 faces to generate the eigenfaces. This value might have been different if the calculation had been done on images with a different resolution or on a different data set; however, the reasoning remains unchanged. In order to make eigenfaces descriptive enough for a particular resolution, we need to take a data set for which the Relative LED (and/or Relative MSE) is below a certain threshold. In our case 0.03 seems to be an adequate value—as can be seen in Figure 4, it introduces a plateau on the Relative MSE and Relative LED plots. Taking more faces to generate the covariance matrix would not change the obtained results much. The rest of the calculations and discussion is conducted on eigenfaces generated from the selected data set.
The proposed steganography method exhibits differing resistance across the robustness tests. Although it is quite vulnerable to salt and pepper disturbance (see Table 3), it deals very well with changes applied to the whole image domain. Both clipping and JPEG compression do not damage the message if the whole possible capacity is not exhausted. The study found that when the message length is set to 87 bytes, we can safely use a quality coefficient equal to 89, which only affects 0.041 (4.1%) of messages (see Table 5).
When the image quality equals 92 or more, no changes are observed over the whole validation data set (containing more than 100,000 various face images!). The results for the BC measure correspond to the DSC and LD coefficients. The longer the message becomes, the more errors are introduced by compression. The situation is very similar if we take into account clipping (see Table 2). When the message length is set to 37 bytes and $t = 0.6$, the BC equals 0.066, which means that over 93% of messages were successfully recovered. When t increases to 0.9, the successful recovery rate rises to 100%. However, the larger the t we take, the more visible the rectangular region in which the message is hidden becomes. Due to this fact, when the message length is set to 37 bytes (~10% of the image capacity), we suggest using t in the range [0.6, 0.7] in order to increase the difficulty of message detection. More details are provided together with the performance tests.
The limitation of the proposed method is basically the same as in the original eigenfaces approach [37]. The descriptive power of the method is determined by the variance of the images in dataset D that was used to generate the eigenfaces. There might be a face with certain facial features that are not represented in D, and the reconstructed face $I'_i$ might differ from the original face $I_i$. Those differences will be visible as high-frequency noise, and they might be easily spotted. The most straightforward solution to this issue is to use a large dataset D with diversified face features. For the same reason, the face of a person wearing heavy, untypical makeup might be an inappropriate carrier of hidden data. Additionally, it is recommended that the person in the photo faces the camera, which is an additional limitation.
Because rotation affects face alignment, eigenfaces are not robust to rotation (or the robustness is at a very low level—see Table 4). However, this weakness may be compensated by face image aligning. HOG-based face feature detection, which is often used for this task, is a robust and repeatable technique that performs translation, rotation and scaling of the original image. Due to this fact, those three linear transformations, which might be used to change images with encoded data, may then be compensated by face image aligning. This, however, depends on the face aligning procedure that we apply, and it is not within the scope of this paper.
The eigenfaces technique requires that a facial image has the same resolution as the faces in the training dataset that was used to generate the eigenfaces. We had to check whether scaling the image with hidden data and then returning it to the old resolution affects the payload, provided that we use bicubic pixel interpolation. The results in Table 6 show that our method has limited robustness to downscaling and very good robustness to upscaling. For upscaling by 5%, more than 98% of messages were unaffected for all tested lengths, and the more we upscale, the better the results we get. This means that we can use eigenfaces generated from images with a lower resolution to hide data in facial images with a higher resolution: the facial image has to be downscaled and then upscaled back to the old resolution. This is very important information, because thanks to it our method can be used within a certain range of facial image resolutions without the necessity of generating eigenfaces for each possible resolution. Of course, if we upscale an image too much, a person observing the face might spot artifacts caused by interpolation.
The next experiment we performed was an evaluation of the performance of the proposed algorithm with various encoded message lengths and values of the t parameter (cropping size). The obtained results are presented in Table 7 and Figure 10. We compared over 100,000 original images from the validation data set with images with encoded data using the averaged MSE, the averaged maximal difference between corresponding pixels and the averaged CC. Those results confirmed the observations we made in the robustness evaluation. Parameters in the range [0.6, 0.7] resulted in a relatively small distance between the original image and the one with encoded data. As can be seen in the second row of Figure 11, when the message size is set to 37 bytes (~10% of the source image capacity) and t = 0.6, the differences are virtually impossible to notice. Those parameters are our recommendation for the particular image resolution we evaluated. After message encoding we need to check whether the message can be recovered (according to Table 2, in 6.6% of faces there might be a problem with it). To overcome this situation, we have to increase the t value or use another facial image. When the message length is close to 50% of the image capacity, we can even visually spot that some changes have been applied to the region of the face. We do not recommend exploiting the full possible image capacity: it is better to split the message between several faces and then put it together again.
The peak signal-to-noise ratio (PSNR) is the most common metric used to evaluate stego image quality. The PSNR measures the similarity between two images (how close two images are to each other—a higher value means better results) [38]. As can be seen in Table 8, our algorithm obtained very similar results to the best state-of-the-art approaches. CC and PSNR get worse with decreasing compression quality and increasing stego message length. In terms of CC, our method outperformed [11,12] and has slightly worse results than [13,14]. In terms of PSNR, our method outperformed all but [12]. We can conclude that our method is among the most robust algorithms against compression attacks.

5. Conclusions

Based on the discussion presented in the previous section, we can conclude that the proposed steganography method for hiding data in face images is usable and may be an interesting alternative to other state-of-the-art approaches. The algorithm, which uses the parameters of a linear combination of eigenvectors, turned out to be resistant to JPEG compression, clipping and scaling. These features are especially important for practical use, when the facial image is only a rectangular subset of a larger photo. In such cases we first need to detect the face and extract it from the original image, which usually requires applying some transformations. Of course, we should use the same face extraction method that was used to generate the training data set from which the eigenfaces are computed. What is more, our numerical comparison with other state-of-the-art algorithms proved that eigenface-based steganography is among the most robust methods against compression attacks. Future work on further advancements may include improving robustness against downscaling attacks and taking advantage of the fact that some images might contain pictures of several faces. Additional data hidden in several faces might be used as a checksum and for data correction of the original secret.

Author Contributions

T.H. was responsible for conceptualization, proposed methodology, software, implementation and writing the original draft; K.K. was responsible for software, data curation and validation; M.R.O. was responsible for software, data curation and validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Pedagogical University of Krakow.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Kurtuldu, O.; Demirci, M. StegoGIS: A new steganography method using the geospatial domain. Turk. J. Electr. Eng. Comput. Sci. 2019, 27, 532–546. [Google Scholar] [CrossRef]
  2. Jankowski, B.; Mazurczyk, W.; Szczypiorski, K. PadSteg: Introducing inter-protocol steganography. Telecommun. Syst. 2013, 52, 1101–1111. [Google Scholar] [CrossRef] [Green Version]
  3. Koptyra, K.; Ogiela, M.R. Multiply information coding and hiding using fuzzy vault. Soft Comput. 2019, 23, 4357–4366. [Google Scholar] [CrossRef]
  4. Zakaria, A.; Hussain, M.; Wahid, A.; Idris, M.; Abdullah, N.; Jung, K.H. High-Capacity Image Steganography with Minimum Modified Bits Based on Data Mapping and LSB Substitution. Appl. Sci. 2018, 8, 2199. [Google Scholar] [CrossRef] [Green Version]
  5. Mohamed, M.; Mohamed, L. High Capacity Image Steganography Technique based on LSB Substitution Method. Appl. Math. Inf. Sci. 2016, 10, 259–266. [Google Scholar] [CrossRef]
  6. Saghir, B.; Ahmed, E.; Zen Alabdeen Salh, G.; Mansour, A. A Spatial Domain Image Steganography Technique Based on Pseudorandom Permutation Substitution Method using Tree and Linked List. Int. J. Eng. Trends Technol. 2015, 23, 209–217. [Google Scholar] [CrossRef]
  7. Gulve, A.; Joshi, M. A High Capacity Secured Image Steganography Method with Five Pixel Pair Differencing and LSB Substitution. Int. J. Image Graph. Signal Process. 2015, 7, 66–74. [Google Scholar] [CrossRef] [Green Version]
  8. Kasapbaşı, M.C.; Elmasry, W. New LSB-based colour image steganography method to enhance the efficiency in payload capacity, security and integrity check. Sādhanā 2018, 43. [Google Scholar] [CrossRef] [Green Version]
  9. Das, A.; Das, P.; Chakraborty, K.; Sinha, S. A New Image Steganography Method using Message Bits Shuffling. J. Mech. Contin. Math. Sci. 2018, 13, 1–15. [Google Scholar] [CrossRef]
  10. Shete, K.; Patil, M.; Chitode, J. Least Significant Bit and Discrete Wavelet Transform Algorithm Realization for Image Steganography Employing FPGA. Int. J. Image Graph. Signal Process. 2016, 8, 48–56. [Google Scholar] [CrossRef] [Green Version]
  11. Bergman, C.; Davidson, J. Unitary embedding for data hiding with the SVD. In Security, Steganography, and Watermarking of Multimedia Contents VII; International Society for Optics and Photonics: Bellingham, WA, USA, 2005; Volume 5681, pp. 619–630. [Google Scholar] [CrossRef] [Green Version]
  12. Chung, K.L.; Yang, W.N.; Huang, Y.H.; Wu, S.T.; Hsu, Y.C. On SVD-based watermarking algorithm. Appl. Math. Comput. 2007, 188, 54–57. [Google Scholar] [CrossRef]
  13. Chang, C.C.; Lin, C.C.; Hu, Y.S. An SVD oriented watermark embedding scheme with high qualities for the restored images. Int. J. Innov. Comput. Inf. Control. 2007, 3, 609–620. [Google Scholar]
  14. Chanu, Y.J.; Singh, K.M.; Tuithung, T. A Robust Steganographic Method based on Singular Value Decomposition. Int. J. Inf. Comput. Technol. 2014, 4, 717–726. [Google Scholar]
  15. Hingorani, C.; Bhatia, R.; Pathai, O.; Mirani, T. Face Detection and Steganography Algorithms for Passport Issuing System. Int. J. Eng. Res. Technol. (IJERT) 2014, 3, 1438–1441. [Google Scholar]
  16. Raju, K.; Srivatsa, S. Video Steganography for Face Recognition with Signcryption for Trusted and Secured Authentication by using PCASA. Int. J. Comput. Appl. 2012, 56, 1–5. [Google Scholar] [CrossRef]
  17. Kadry, S.; Nasr, S. New Generating Technique for Image Steganography. Innova Cienc. 2012, 4, 46. [Google Scholar] [CrossRef] [Green Version]
  18. Saračević, M.; Adamović, S.; Miškovic, V.; Maček, N.; Šarac, M. A novel approach to steganography based on the properties of Catalan numbers and Dyck words. Future Gener. Comput. Syst. 2019, 186–197. [Google Scholar] [CrossRef]
  19. Hassan, R.; Pepíć, S.; Saračević, M.; Ahmad, K.; Tasic, M. A Novel Approach to Data Encryption Based on Matrix Computations. Comput. Mater. Contin. 2020, 66, 1139–1153. [Google Scholar] [CrossRef]
  20. McAteer, I.; Ibrahim, A.; Guanglou, Z.; Yang, W.; Valli, C. Integration of Biometrics and Steganography: A Comprehensive Review. Technologies 2019, 7, 34. [Google Scholar] [CrossRef] [Green Version]
  21. Hamid, N.; Yahya, A.; Ahmad, R.B.; Al-Qershi, O.M. Image Steganography Techniques: An Overview. Int. J. Comput. Sci. Secur. (IJCSS) 2012, 6, 168–187. [Google Scholar]
  22. Surana, J.; Sonsale, A.; Joshi, B.; Sharma, D.; Choudhary, N. Steganography Techniques. IJEDR 2017, 5, 989–992. [Google Scholar]
  23. Shelke, F.M.; Dongre, A.A.; Soni, P.D. Comparison of different techniques for Steganography in images. Int. J. Appl. Innov. Eng. Manag. (IJAIEM) 2014, 3, 171–176. [Google Scholar]
  24. Rejani, R.; Murugan, D.; Krishnan, D.V. Comparative Study of Spatial Domain Image Steganography Techniques. Int. J. Adv. Netw. Appl. 2015, 7, 2650–2657. [Google Scholar]
  25. Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep Learning Face Attributes in the Wild. In Proceedings of the International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
  26. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 1, pp. 886–893. [Google Scholar] [CrossRef] [Green Version]
  27. Kazemi, V.; Sullivan, J. One Millisecond Face Alignment with an Ensemble of Regression Trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014. [Google Scholar] [CrossRef]
  28. Sagonas, C.; Antonakos, E.; Tzimiropoulos, G.; Zafeiriou, S.; Pantic, M. 300 Faces In-The-Wild Challenge: Database and results. Image Vis. Comput. 2016, 47, 3–18. [Google Scholar] [CrossRef] [Green Version]
  29. Arsigny, V.; Fillard, P.; Pennec, X.; Ayache, N. Geometric Means in a Novel Vector Space Structure on Sysmetric Positive-Definite Matrices. SIAM J. Matrix Anal. Appl. 2006, 29, 328–347. [Google Scholar] [CrossRef] [Green Version]
  30. Huang, Z.; Wang, R.; Shan, S.; Chen, X. Learning Euclidean-to-Riemannian Metric for Point-to-Set Classification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1677–1684. [Google Scholar] [CrossRef]
  31. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2013. [Google Scholar]
  32. Al-Mohy, A.; Higham, N. Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm. SIAM J. Sci. Comput. 2012, 34, 153–169. [Google Scholar] [CrossRef] [Green Version]
  33. Lau, W.L.; Li, Z.L.; Lam, K. Effects of JPEG compression on image classification. Int. J. Remote Sens. 2003, 24, 1535–1544. [Google Scholar] [CrossRef]
  34. Levenshtein, V. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Sov. Phys. Dokl. 1966, 10, 707. [Google Scholar]
  35. Haldar, R.; Mukhopadhyay, D. Levenshtein Distance Technique in Dictionary Lookup Methods: An Improved Approach. arXiv 2011, arXiv:1101.1232. [Google Scholar]
  36. Sørensen, T. A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. K. Dan. Vidensk. Selsk. 1948, 5, 1–34. [Google Scholar]
  37. Sirovich, L.; Kirby, M. Low-Dimensional Procedure for the Characterization of Human Faces. J. Opt. Soc. Am. A Opt. Image Sci. 1987, 4, 519–524. [Google Scholar] [CrossRef] [PubMed]
  38. Almohammad, A.; Ghinea, G. Stego image quality and the reliability of PSNR. In Proceedings of the 2010 2nd International Conference on Image Processing Theory, Tools and Applications, Paris, France, 7–10 July 2010; pp. 215–220. [Google Scholar] [CrossRef]
Figure 2. This figure presents an outline of the eigenfaces-based steganography secret decoding framework. The person in the image has been anonymized. (A) A face image is extracted from an image with an embedded secret; (B) a mean face is subtracted from the face image; (C) presents selected eigenfaces visualized as 2D images; the number above each eigenface is its index (those are the same eigenfaces as in Figure 1). (D) By applying Equation (6) we can generate the eigenface-based features of a face, which are the coefficients of a linear combination; the values in the sum (D) are the actual coefficients of the face in the orange frame. The high-order coefficients (E) of that linear combination are the secret data scaled by Equation (8). In order to recover the original data we have to apply Equation (9) to those coefficients.
Figure 3. This figure presents grayscale-coded shapes of the circular elements defined in Equations (18)–(20), depending on the value of the threshold t.
Figure 4. This plot visualizes the data from Table 1. The red line indicates our choice of the number of faces in the training data set that was later used to generate eigenfaces in the second part of the experiment. It is justified in Section 3.
Figure 5. This figure presents a visualization of the data from Table 2 (clipping).
Figure 6. This figure presents a visualization of the data from Table 3 (salt and pepper).
Figure 7. This figure presents a visualization of the data from Table 4 (rotation).
Figure 8. This figure presents a visualization of the data from Table 5 (JPEG compression).
Figure 9. This figure presents a visualization of the data from Table 6 (scaling).
Figure 10. This figure presents a visualization of the data from Table 7 (prototype evaluation).
Figure 11. This figure shows an exemplary original image and images with hidden data of various lengths (first row), and the differences between the original image and the images with hidden data of various lengths (bottom row). The threshold parameter t for the mixing (20)–(21) was set to t = 0.6.
Table 1. This table presents values of the Log-Euclidean Distance (LED), mean square error (MSE) and Pearson correlation (CC) coefficients calculated from the test data set. In order to calculate MSE and CC, for each $C^2_{k_a}$ each image $I_b$ in the validation data set is encoded and then decoded using eigenfaces that explain at least 0.999 of the variance.
Number of Faces | LED | Relative LED | MSE | Relative MSE | CC
2025 | 3452.182 | — | 39.092 | — | 0.993
4050 | 2463.732 | 0.401 | 29.995 | 0.233 | 0.995
6075 | 1337.645 | 0.842 | 24.564 | 0.181 | 0.996
8100 | 213.584 | 5.263 | 21.008 | 0.145 | 0.997
10,125 | 162.328 | 0.316 | 18.481 | 0.12 | 0.997
12,150 | 138.089 | 0.176 | 16.66 | 0.099 | 0.998
14,175 | 122.348 | 0.129 | 15.288 | 0.082 | 0.998
16,200 | 111.543 | 0.097 | 14.242 | 0.068 | 0.998
18,225 | 103.786 | 0.075 | 13.402 | 0.059 | 0.998
20,250 | 97.548 | 0.064 | 12.67 | 0.055 | 0.998
22,275 | 92.35 | 0.056 | 12.072 | 0.047 | 0.998
24,300 | 88.045 | 0.049 | 11.572 | 0.041 | 0.999
26,325 | 84.498 | 0.042 | 11.156 | 0.036 | 0.999
28,350 | 81.39 | 0.038 | 10.794 | 0.032 | 0.999
30,375 | 78.841 | 0.032 | 10.468 | 0.03 | 0.999
32,400 | 76.543 | 0.03 | 10.194 | 0.026 | 0.999
34,425 | 74.497 | 0.027 | 9.933 | 0.026 | 0.999
36,450 | 72.656 | 0.025 | 9.711 | 0.022 | 0.999
38,475 | 71.063 | 0.022 | 9.511 | 0.021 | 0.999
40,500 | 69.575 | 0.021 | 9.315 | 0.021 | 0.999
42,525 | 68.187 | 0.02 | 9.153 | 0.017 | 0.999
44,550 | 67.004 | 0.018 | 8.979 | 0.019 | 0.999
46,575 | 65.919 | 0.016 | 8.825 | 0.017 | 0.999
48,600 | 64.958 | 0.015 | 8.681 | 0.016 | 0.999
50,625 | 63.967 | 0.015 | 8.561 | 0.014 | 0.999
Table 2. This table presents the evaluation results of the robustness test of clipping (18)–(21). Values of DSC, BC and LD are averaged over the whole validation data set. We used various values of the threshold parameter t and various lengths of hidden data.
Parameter | Length | DSC | BC | LD
0.5 | 18 | 0.001 | 0.079 | 0.167
0.6 | 18 | 0 | 0.012 | 0.022
0.7 | 18 | 0 | 0.001 | 0.001
0.8 | 18 | 0 | 0 | 0
0.9 | 18 | 0 | 0 | 0
1 | 18 | 0 | 0 | 0
0.5 | 37 | 0.003 | 0.295 | 0.896
0.6 | 37 | 0.001 | 0.066 | 0.167
0.7 | 37 | 0 | 0.01 | 0.021
0.8 | 37 | 0 | 0.001 | 0.001
0.9 | 37 | 0 | 0 | 0
1 | 37 | 0 | 0 | 0
0.5 | 87 | 0.011 | 0.937 | 7.056
0.6 | 87 | 0.004 | 0.616 | 2.622
0.7 | 87 | 0.001 | 0.25 | 0.636
0.8 | 87 | 0 | 0.023 | 0.047
0.9 | 87 | 0 | 0 | 0.001
1 | 87 | 0 | 0 | 0
0.5 | 174 | 0.02 | 1 | 23.459
0.6 | 174 | 0.008 | 0.985 | 9.991
0.7 | 174 | 0.003 | 0.785 | 3.764
0.8 | 174 | 0 | 0.204 | 0.555
0.9 | 174 | 0 | 0.01 | 0.018
1 | 174 | 0 | 0 | 0
0.5 | 370 | 0.041 | 1 | 95.991
0.6 | 370 | 0.023 | 1 | 59.254
0.7 | 370 | 0.013 | 1 | 35.393
0.8 | 370 | 0.006 | 0.975 | 15.468
0.9 | 370 | 0.001 | 0.424 | 1.834
1 | 370 | 0 | 0.017 | 0.018
Table 3. This table presents the evaluation results of the robustness test of salt and pepper noise. Values of DSC, BC and LD are averaged over the whole validation data set. We used various noise levels and various lengths of encoded data.
Parameter | Length | DSC | BC | LD
1 | 18 | 0.002 | 0.171 | 0.012
2 | 18 | 0.014 | 0.736 | 0.101
3 | 18 | 0.032 | 0.935 | 0.218
4 | 18 | 0.054 | 0.986 | 0.346
1 | 37 | 0.002 | 0.243 | 0.013
2 | 37 | 0.015 | 0.835 | 0.107
3 | 37 | 0.033 | 0.969 | 0.228
4 | 37 | 0.056 | 0.994 | 0.357
1 | 87 | 0.002 | 0.413 | 0.016
2 | 87 | 0.017 | 0.941 | 0.124
3 | 87 | 0.037 | 0.993 | 0.252
4 | 87 | 0.062 | 0.998 | 0.383
1 | 174 | 0.003 | 0.621 | 0.024
2 | 174 | 0.022 | 0.983 | 0.158
3 | 174 | 0.046 | 0.999 | 0.3
4 | 174 | 0.073 | 1 | 0.435
1 | 370 | 0.009 | 0.891 | 0.065
2 | 370 | 0.043 | 1 | 0.278
3 | 370 | 0.075 | 1 | 0.44
4 | 370 | 0.107 | 1 | 0.571
Table 4. This table presents the evaluation results of the robustness test of rotation. Values of the Sørensen–Dice coefficient (DSC), binary coefficient (BC) and Levenshtein distance (LD) are averaged over the whole validation data set. We used various rotation angles and various lengths of encoded data.
Parameter | Length | DSC | BC | LD
−1.25 | 18 | 0.182 | 1 | 0.775
−1 | 18 | 0.104 | 0.999 | 0.542
−0.75 | 18 | 0.036 | 0.859 | 0.23
−0.5 | 18 | 0.004 | 0.263 | 0.032
−0.25 | 18 | 0 | 0.003 | 0
0 | 18 | 0 | 0 | 0
0.25 | 18 | 0 | 0.003 | 0
0.5 | 18 | 0.004 | 0.248 | 0.031
0.75 | 18 | 0.035 | 0.819 | 0.218
1 | 18 | 0.1 | 0.992 | 0.521
1.25 | 18 | 0.178 | 1 | 0.761
−1.25 | 37 | 0.194 | 1 | 0.801
−1 | 37 | 0.11 | 1 | 0.572
−0.75 | 37 | 0.038 | 0.939 | 0.239
−0.5 | 37 | 0.004 | 0.315 | 0.029
−0.25 | 37 | 0 | 0.004 | 0
0 | 37 | 0 | 0 | 0
0.25 | 37 | 0 | 0.003 | 0
0.5 | 37 | 0.004 | 0.303 | 0.028
0.75 | 37 | 0.036 | 0.917 | 0.228
1 | 37 | 0.108 | 1 | 0.55
1.25 | 37 | 0.191 | 1 | 0.779
−1.25 | 87 | 0.231 | 1 | 0.864
−1 | 87 | 0.137 | 1 | 0.672
−0.75 | 87 | 0.048 | 1 | 0.306
−0.5 | 87 | 0.004 | 0.497 | 0.027
−0.25 | 87 | 0 | 0.003 | 0
0 | 87 | 0 | 0 | 0
0.25 | 87 | 0 | 0.003 | 0
0.5 | 87 | 0.004 | 0.531 | 0.027
0.75 | 87 | 0.048 | 1 | 0.299
1 | 87 | 0.143 | 1 | 0.679
1.25 | 87 | 0.24 | 1 | 0.87
−1.25 | 174 | 0.266 | 1 | 0.909
−1 | 174 | 0.169 | 1 | 0.764
−0.75 | 174 | 0.067 | 1 | 0.419
−0.5 | 174 | 0.006 | 0.999 | 0.046
−0.25 | 174 | 0 | 0.003 | 0
0 | 174 | 0 | 0 | 0
0.25 | 174 | 0 | 0.002 | 0
0.5 | 174 | 0.005 | 0.983 | 0.037
0.75 | 174 | 0.066 | 1 | 0.415
1 | 174 | 0.174 | 1 | 0.777
1.25 | 174 | 0.273 | 1 | 0.916
−1.25 | 370 | 0.341 | 1 | 0.955
−1 | 370 | 0.243 | 1 | 0.881
−0.75 | 370 | 0.121 | 1 | 0.632
−0.5 | 370 | 0.017 | 1 | 0.121
−0.25 | 370 | 0 | 0.001 | 0
0 | 370 | 0 | 0 | 0
0.25 | 370 | 0 | 0.001 | 0
0.5 | 370 | 0.015 | 1 | 0.118
0.75 | 370 | 0.122 | 1 | 0.643
1 | 370 | 0.252 | 1 | 0.892
1.25 | 370 | 0.346 | 1 | 0.961
Table 5. This table presents the evaluation results of the robustness test of JPEG compression. Values of DSC, BC and LD are averaged over the whole validation data set. We used various values of the JPEG quality parameter and various lengths of encoded data.
Parameter | Length | DSC | BC | LD
95 | 18 | 0 | 1 | 0
92 | 18 | 0 | 1 | 0
89 | 18 | 0 | 0.999 | 0
86 | 18 | 0 | 0.973 | 0.002
83 | 18 | 0.002 | 0.828 | 0.012
95 | 37 | 0 | 1 | 0
92 | 37 | 0 | 1 | 0
89 | 37 | 0 | 0.997 | 0
86 | 37 | 0 | 0.921 | 0.002
83 | 37 | 0.002 | 0.616 | 0.016
95 | 87 | 0 | 1 | 0
92 | 87 | 0 | 1 | 0
89 | 87 | 0 | 0.959 | 0.001
86 | 87 | 0.001 | 0.642 | 0.007
83 | 87 | 0.004 | 0.22 | 0.032
95 | 174 | 0 | 1 | 0
92 | 174 | 0 | 0.985 | 0
89 | 174 | 0.001 | 0.572 | 0.005
86 | 174 | 0.004 | 0.11 | 0.031
83 | 174 | 0.012 | 0.013 | 0.09
95 | 370 | 0 | 0.996 | 0
92 | 370 | 0.001 | 0.439 | 0.006
89 | 370 | 0.007 | 0.02 | 0.055
86 | 370 | 0.023 | 0 | 0.161
83 | 370 | 0.045 | 0 | 0.29
Table 6. This table presents the evaluation results of the robustness test of scaling. Values of DSC, BC and LD are averaged over the whole validation data set. We used various scaling factors and various lengths of encoded data.
Parameter | Length | DSC | BC | LD
0.8 | 18 | 0.01 | 0.597 | 0.069
0.85 | 18 | 0.005 | 0.73 | 0.038
0.95 | 18 | 0.001 | 0.946 | 0.005
1 | 18 | 0 | 1 | 0
1.05 | 18 | 0 | 0.996 | 0
1.15 | 18 | 0 | 0.999 | 0
1.2 | 18 | 0 | 1 | 0
0.8 | 37 | 0.008 | 0.533 | 0.058
0.85 | 37 | 0.004 | 0.676 | 0.032
0.95 | 37 | 0 | 0.941 | 0.003
1 | 37 | 0 | 1 | 0
1.05 | 37 | 0 | 0.996 | 0
1.15 | 37 | 0 | 0.999 | 0
1.2 | 37 | 0 | 0.999 | 0
0.8 | 87 | 0.01 | 0.246 | 0.069
0.85 | 87 | 0.005 | 0.395 | 0.038
0.95 | 87 | 0 | 0.907 | 0.003
1 | 87 | 0 | 1 | 0
1.05 | 87 | 0 | 0.994 | 0
1.15 | 87 | 0 | 0.999 | 0
1.2 | 87 | 0 | 1 | 0
0.8 | 174 | 0.017 | 0 | 0.121
0.85 | 174 | 0.007 | 0.006 | 0.056
0.95 | 174 | 0 | 0.812 | 0.003
1 | 174 | 0 | 1 | 0
1.05 | 174 | 0 | 0.992 | 0
1.15 | 174 | 0 | 0.999 | 0
1.2 | 174 | 0 | 1 | 0
0.8 | 370 | 0.05 | 0 | 0.324
0.85 | 370 | 0.024 | 0 | 0.174
0.95 | 370 | 0.003 | 0 | 0.027
1 | 370 | 0 | 1 | 0
1.05 | 370 | 0 | 0.985 | 0
1.15 | 370 | 0 | 1 | 0
1.2 | 370 | 0 | 1 | 0
Table 7. This table presents the evaluation of the performance of the algorithms described in Section 2.1 and Section 2.5 using various lengths of encoded messages and values of parameter t. We calculated the following statistics between the original image and the image with hidden data: averaged MSE, averaged maximal difference of pixels (MAX) and averaged Pearson correlation coefficient of pixels (CC).
Parameter | Length | MSE | MAX | CC
0.6 | 18 | 373.435 | 36.186 | 0.994
0.7 | 18 | 389.078 | 36.502 | 0.995
0.8 | 18 | 397.774 | 36.686 | 0.996
0.9 | 18 | 402.559 | 36.807 | 0.997
0.6 | 37 | 391.539 | 38.525 | 0.993
0.7 | 37 | 408.595 | 38.829 | 0.994
0.8 | 37 | 418.297 | 39.064 | 0.995
0.9 | 37 | 423.637 | 39.178 | 0.995
0.6 | 87 | 464.677 | 44.739 | 0.989
0.7 | 87 | 488.082 | 45.112 | 0.99
0.8 | 87 | 502.184 | 45.462 | 0.991
0.9 | 87 | 510.295 | 45.649 | 0.992
0.6 | 174 | 644.5 | 55.986 | 0.981
0.7 | 174 | 684.52 | 56.48 | 0.982
0.8 | 174 | 710.131 | 56.76 | 0.984
0.9 | 174 | 725.691 | 57.077 | 0.985
Table 8. This table presents a comparison of the Pearson correlation and peak signal-to-noise ratio (PSNR) for the proposed and state-of-the-art methods. The number beside the proposed method (PM) is the length of the secret in bytes.
Quality of Compression | [11] | [12] | [13] | [14] | PM18 | PM37 | PM87 | PM174
Pearson correlation (CC)
50 | 49.73 | 32.86 | 75.26 | 76.12 | 72.85 | 72.27 | 68.20 | 61.63
60 | 50.58 | 33.34 | 85.76 | 85.88 | 81.98 | 80.92 | 76.98 | 70.33
70 | 51.14 | 35.91 | 97.29 | 97.24 | 91.96 | 91.04 | 87.40 | 80.90
80 | 53.68 | 43.21 | 99.80 | 99.92 | 99.09 | 98.75 | 97.34 | 93.98
90 | 54.51 | 64.20 | 99.95 | 99.92 | 1.00 | 1.00 | 99.99 | 99.89
Peak signal-to-noise ratio (PSNR)
50 | 31.96 | 34.05 | 32.02 | 32.02 | 31.50 | 31.44 | 31.38 | 31.35
60 | 32.21 | 34.53 | 31.80 | 31.80 | 32.23 | 32.16 | 32.10 | 32.07
70 | 32.54 | 35.14 | 31.48 | 31.46 | 33.23 | 33.15 | 33.10 | 33.05
80 | 32.95 | 35.98 | 31.65 | 31.65 | 35.04 | 34.98 | 34.95 | 34.90
90 | 33.58 | 37.70 | 32.66 | 32.67 | 39.05 | 39.01 | 38.95 | 38.94
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

