Article

An Image Hashing Algorithm for Authentication with Multi-Attack Reference Generation and Adaptive Thresholding

1 School of Computer Science and Technology, Tiangong University, Tianjin 300387, China
2 School of Mathematical Sciences, Tiangong University, Tianjin 300387, China
3 Department of Computer Science, University of Surrey, Guildford GU2 7XH, UK
4 School of Computer Science and Information Engineering, Tianjin University of Science and Technology, Tianjin 300457, China
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(9), 227; https://doi.org/10.3390/a13090227
Submission received: 18 August 2020 / Revised: 31 August 2020 / Accepted: 3 September 2020 / Published: 8 September 2020
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

Abstract:
Image hashing-based authentication methods have been widely studied and continuously improved owing to their speed and memory efficiency. However, reference hash generation and threshold setting, which underlie the similarity measure between an original image and its distorted versions, are important steps that most existing models leave under-examined. In this paper, we propose an image hashing method based on multi-attack reference generation and adaptive thresholding for image authentication. We build a prior information set with the help of multiple virtual prior attacks and present a multi-attack reference generation method based on hashing clusters. A perceptual hashing algorithm is applied to the reference/queried image to obtain the hashing codes for authentication. Furthermore, we introduce the concept of adaptive thresholding to account for variations in hashing distance. Extensive experiments on benchmark datasets validate the effectiveness of the proposed method.

1. Introduction

With the spread of sophisticated photo-editing software, multimedia content authentication has become an increasingly prominent concern. Images edited with tools such as Photoshop may mislead viewers and trigger crises of public confidence. In recent years, image manipulation has drawn widespread criticism for altering the appearance of image content to the point of misrepresenting reality. Hence, tampering detection, which verifies the integrity and authenticity of digital multimedia data, has emerged as an important research topic. Perceptual image hashing [1,2,3,4] supports image content authentication by representing the semantic content in a compact signature, which should be sensitive to content-altering modifications but robust against content-preserving manipulations such as blurring, noise and illumination correction [5,6,7].
A perceptual image hashing system generally consists of three pipeline stages: pre-processing, hash generation and decision making. The major purpose of pre-processing is to enhance the robustness of the features by suppressing the effects of common distortions. After that, the reference hashes are generated and transmitted through a secure channel. For a test image, the same perceptual hashing process is applied to the queried image to be authenticated. Once the image hash is generated, the decision-making stage carries out the authentication task: the reference hash is compared with the image hashes in the test database using a selected distance metric, such as the Hamming distance. Currently, the majority of perceptual hashing algorithms for authentication can be roughly divided into five categories: invariant feature transform-based methods [8,9,10,11,12,13], local feature point-based schemes [14,15,16,17,18,19,20,21,22,23], dimension reduction-based hashing [24,25,26,27,28,29], statistical feature-based hashing [30,31,32,33,34,35] and learning-based hashing [36,37,38,39].
For the decision-making stage of the perceptual hashing-based image authentication framework, only a few studies have been devoted to reference generation and threshold selection. For reference hash generation, Lv et al. [36] proposed obtaining an optimal estimate of the hash centroid using kernel density estimation (KDE), taking the centroid to be the value that maximizes the estimated distribution. Its major drawback is that the binary codes are obtained by a data-independent method; since hash generation is decoupled from the data distribution, such methods cannot exploit the characteristics of the data when generating hashes. Currently, more researchers are turning to data-dependent, learning-based methods for image tamper detection. Data-dependent methods with learning [40,41,42,43] can be trained to fit data distributions and specific objective functions, producing hashing codes that better preserve local similarity. In our previous work [44], we proposed a reference hashing method based on clustering, built on the observation that the hash of the original image may not actually be the centroid of its cluster. How to learn reference hashing codes for multimedia security problems therefore remains an important research topic. As for authentication decision making, the simple approach is a threshold-based classifier. In practice, perceptual differences under image manipulations vary with the textural content of different images. Traditional authentication tasks identify tampered results from the distance values between image codes, treating the threshold as a fixed value. However, in many real-world cases, no single fixed threshold can correctly classify every image.
In this paper, we extend our previous work [44] and propose an image hashing framework for authentication with multi-attack reference generation and adaptive thresholding. Motivated by the requirements of the authentication application, we build a prior information set with the help of multiple virtual prior attacks, produced by applying virtual prior distortions and attacks to the original images. Unlike the traditional image authentication task, we address threshold uncertainty by introducing adaptive thresholding to account for variations in hashing distance: a different threshold value is computed for each image, which provides more robustness to changes in image manipulations. We propose a data-dependent, semi-supervised image authentication scheme that uses an attack-specific, adaptive threshold together with the generated hashing code. This threshold tag is embedded in the transmitted hashing code and can be reliably extracted at the receiver. The framework of our algorithm is shown in Figure 1. We first introduce the proposed multi-attack reference hashing algorithm and describe how the reference images are generated. The perceptual hashing process is then applied to the reference/queried image to obtain the hashing codes. Finally, the reference hashes are compared with the queried image hashes in the test database for content authentication.

2. Problem Statement and Contributions

Authentication is an important issue in multimedia data protection; it makes it possible to trace the author of multimedia data and to determine whether the original content has been altered in any way since its recording. The hash value is a compact abstract of the content: we can re-generate a hash value from the received content and compare it with the original hash value, and if they match, the content is considered authentic. In the proposed algorithm, we aim to compute a common hashing function $h_k(\cdot)$ for image authentication. Let $D(\cdot,\cdot)$ denote a decision-making function for comparing two hash values. For a given threshold $\tau$, perceptual image hashing for tamper detection should satisfy the following criteria: if two images $x$ and $y$ are perceptually similar, their hashes should be highly correlated, i.e., $D(h_k(x), h_k(y)) < \tau$; otherwise, if $z$ is a tampered version of $x$, we should have $D(h_k(x), h_k(z)) > \tau$.
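As a minimal illustration of this criterion (function and variable names are hypothetical; the concrete distance metric used by our method is defined in Section 3.3), the decision rule can be sketched as:

```python
import numpy as np

def authenticate(hash_ref: np.ndarray, hash_query: np.ndarray, tau: float) -> bool:
    """Toy decision rule: accept the queried image as authentic when
    D(h_k(x), h_k(y)) falls below the threshold tau. A plain Euclidean
    distance stands in for D here; Section 3.3 defines the actual metric."""
    d = float(np.linalg.norm(hash_ref - hash_query))
    return d < tau
```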
The main contributions can be summarized as follows:
(1)
We propose building the prior information set with the help of multiple virtual prior attacks, applying virtual prior distortions and attacks to the original images. On the basis of this prior image set, we infer the clustering centroids for reference hashing generation, which are used for the similarity measure.
(2)
We effectively incorporated semi-supervised information into perceptual image hashing learning. Instead of determining the metric distance from training results alone, we explored the hashing distance for thresholding by considering its effect on different images.
(3)
To account for variations in the extracted features of different images, we took into account the pairwise variations among different original-received image pairs. This adaptive thresholding maximally discriminates malicious tampering from content-preserving operations, leading to an excellent tamper detection rate.

3. Proposed Method

3.1. Multi-Attack Reference Hashing

Currently, most image hashing methods take the original image as the reference. However, the hash of the original image may not be the centroid of the hashes of its distorted copies. As shown in Figure 2a, we applied 15 classes of attacks to five original images and plotted the hashes of the originals and their distorted copies in a 2-dimensional space; five clusters can be observed in the hashing space. Zooming into one hash cluster in Figure 2b, we note that the hash of the original image may indeed not be the centroid of its cluster.
For the $l$ original images in the dataset, we apply $V$ types of content-preserving attacks with different parameter settings to generate simulated distorted copies. Let us denote the feature matrix of the attacked instances in set $\Psi_v$ as $X_v \in \mathbb{R}^{m \times t}$, where $v = 1, 2, \ldots, V$, $m$ is the dimensionality of the data features and $t$ is the number of instances for attack $v$. Finally, we obtain the feature matrices for all $n$ instances as $X = \{X_1, \ldots, X_V\}$, where $n = tV$. Note that the feature matrices are normalized to be zero-centered.
By considering the total reconstruction errors of all the training objects, we have the following minimization problem in a matrix form, which jointly exploits the information from various content preserving multi-attack data.
$$J_1(\tilde{U}, \tilde{X}) = \alpha \left( \| \tilde{X} - \tilde{U} X \|_F^2 + \beta \| \tilde{U} \|_F^2 \right), \qquad (1)$$
where $\tilde{X}$ is the shared latent multi-attack feature representation and $\tilde{U}$ can be viewed as the basis matrix, which maps the input multi-attack features onto the corresponding latent features. The parameters $\alpha$ and $\beta$ are nonnegative weights that balance the two terms.
From an information-theoretic point of view, the variance over all the data is measured and taken as a regularization term:
$$J_2(\tilde{U}) = \gamma \| \tilde{U} X \|_F^2, \qquad (2)$$
where γ is a nonnegative constant parameter.
Generating the image reference for authentication is in essence a clustering problem, and the reference is usually generated based on the cluster centroid image. Therefore, we also consider preserving the cluster structures, and formulate this objective as:
$$J_3(C, G) = \lambda \| \tilde{X} - C G \|_F^2, \qquad (3)$$
where $C \in \mathbb{R}^{k \times l}$ and $G \in \{0, 1\}^{l \times n}$ are the clustering centroid matrix and the indicator matrix, respectively.
Finally, the formulation can be written as:
$$\min_{\tilde{U}, \tilde{X}, C, G} \; \alpha \| \tilde{X} - \tilde{U} X \|_F^2 + \beta \| \tilde{U} \|_F^2 - \gamma \| \tilde{U} X \|_F^2 + \lambda \| \tilde{X} - C G \|_F^2. \qquad (4)$$
Our objective function simultaneously learns the feature representation $\tilde{X}$ and finds the mapping matrix $\tilde{U}$, the cluster centroids $C$ and the indicator $G$. The iterative optimization algorithm is as follows.
Fixing all variables but optimizing $\tilde{U}$: the optimization problem (Equation (4)) reduces to:
$$\min J(\tilde{U}) = \alpha \| \tilde{X} - \tilde{U} X \|_F^2 + \beta \| \tilde{U} \|_F^2 - \gamma \, \mathrm{tr}(\tilde{U} X X^T \tilde{U}^T). \qquad (5)$$
By setting the derivative $\partial J(\tilde{U}) / \partial \tilde{U} = 0$, we have:
$$\tilde{U} = \tilde{X} X^T \left( (\alpha - \gamma) X X^T + \beta I \right)^{-1}. \qquad (6)$$
Fixing all variables but optimizing $\tilde{X}$: similarly, we solve the following optimization problem:
$$\min F(\tilde{X}) = \alpha \| \tilde{X} - \tilde{U} X \|_F^2 + \lambda \| \tilde{X} - C G \|_F^2, \qquad (7)$$
which has a closed-form optimal solution:
$$\tilde{X} = \frac{\alpha \tilde{U} X + \lambda C G}{\alpha + \lambda}. \qquad (8)$$
Fixing all variables but $C$ and $G$: for the cluster centroids $C$ and indicator $G$, we obtain the following problem:
$$\min_{C, G} \| \tilde{X} - C G \|_F^2. \qquad (9)$$
Inspired by the adaptive discrete proximal linear method (ADPLM) [45], we initialize $C = \tilde{X} G^T$ and update $C$ as follows:
$$C^{p+1} = C^p - \frac{1}{\mu} \nabla \Gamma(C^p), \qquad (10)$$
where $\Gamma(C^p) = \| B - C G \|_F^2 + \rho \| C^T \mathbf{1} \|$, $\rho = 0.001$, and $p = 1, 2, \ldots, 5$ denotes the $p$-th iteration.
The entry of the indicator matrix $G$ at index $(i, j)$ is obtained by:
$$g_{i,j}^{p+1} = \begin{cases} 1, & j = \arg\min_{s} H(b_i, c_s^{p+1}) \\ 0, & \text{otherwise}, \end{cases} \qquad (11)$$
where $H(b_i, c_s)$ is the distance between the $i$-th feature code $b_i$ and the $s$-th cluster centroid $c_s$.
After we infer the cluster centroids $C$ and the multi-attack feature representations $\tilde{X}$, the corresponding $l$ reference images are generated. The basic idea is to compare the hashing distances among the nearest content-preserving attacked neighbors of each original image and the corresponding cluster centroid.
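The following sketch (NumPy-based, with illustrative rather than tuned hyperparameters, and a plain gradient step standing in for the ADPLM update of [45]) shows one way the alternating updates of Equations (6), (8), (10) and (11) could be organized:

```python
import numpy as np

def multi_attack_reference(X, k, l, alpha=1.0, beta=0.1, gamma=0.1,
                           lam=1.0, mu=10.0, iters=20):
    """Sketch of the alternating optimization of Section 3.1.
    X: m x n zero-centered feature matrix of all attacked instances;
    k: latent dimension; l: number of clusters (original images)."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    Xt = rng.standard_normal((k, n))             # latent representation X~
    C = Xt[:, rng.choice(n, l, replace=False)]   # initial cluster centroids
    G = np.zeros((l, n))                         # cluster indicator

    for _ in range(iters):
        # Eq. (6): U = X~ X^T ((alpha - gamma) X X^T + beta I)^(-1)
        U = Xt @ X.T @ np.linalg.inv((alpha - gamma) * (X @ X.T)
                                     + beta * np.eye(m))
        # Eq. (8): closed-form update of the latent representation
        Xt = (alpha * (U @ X) + lam * (C @ G)) / (alpha + lam)
        # Eq. (11): assign each instance to its nearest centroid
        d = ((Xt[:, :, None] - C[:, None, :]) ** 2).sum(axis=0)  # n x l
        G = np.zeros((l, n))
        G[np.argmin(d, axis=1), np.arange(n)] = 1
        # Eq. (10): gradient-style centroid update (a plain gradient
        # step on ||X~ - CG||_F^2 stands in for the ADPLM step)
        grad = -2.0 * (Xt - C @ G) @ G.T
        C = C - grad / mu
    return U, Xt, C, G
```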

3.2. Semi-Supervised Hashing Code Learning

For the reference and received images, we use a semi-supervised learning algorithm for hashing code generation and image authentication. First, every input image is converted to a normalized size of 256 × 256 using bi-linear interpolation; this resizing makes our hashing robust against image rescaling. Then, a Gaussian low-pass filter is used to blur the resized image, which reduces the influence of high-frequency components such as noise contamination. Let $F(i, j)$ be the element in the $i$-th row and $j$-th column of the convolution mask. It is calculated by
$$F(i, j) = \frac{F^{(1)}(i, j)}{\sum_{i} \sum_{j} F^{(1)}(i, j)}, \qquad (12)$$
in which $F^{(1)}(i, j)$ is defined as
$$F^{(1)}(i, j) = e^{-\frac{i^2 + j^2}{2 \sigma^2}}, \qquad (13)$$
where $\sigma$ is the standard deviation of the Gaussian kernel.
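As a small sketch of Equations (12) and (13) (mask size and $\sigma$ here are illustrative defaults, not the paper's settings), the normalized convolution mask can be built as:

```python
import numpy as np

def gaussian_mask(size: int = 3, sigma: float = 1.0) -> np.ndarray:
    """Build the unnormalized Gaussian kernel of Eq. (13), then
    normalize it per Eq. (12) so its coefficients sum to one."""
    r = size // 2
    i, j = np.mgrid[-r:r + 1, -r:r + 1]
    f1 = np.exp(-(i ** 2 + j ** 2) / (2.0 * sigma ** 2))
    return f1 / f1.sum()
```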
Next, the RGB color image is converted into the CIE LAB space and represented by the L component, which closely matches human perception of lightness. The RGB image is first converted into the XYZ color space by the following formula:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4125 & 0.3576 & 0.1804 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9502 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}, \qquad (14)$$
where R, G and B are the red, green and blue components of a color pixel. We then convert to the CIE LAB space by the following equation:
$$\begin{aligned} L &= 116 f(Y/Y_w) - 16 \\ A &= 500 \left[ f(X/X_w) - f(Y/Y_w) \right] \\ B &= 200 \left[ f(Y/Y_w) - f(Z/Z_w) \right], \end{aligned} \qquad (15)$$
where $X_w = 0.950456$, $Y_w = 1.0$ and $Z_w = 1.088754$ are the CIE XYZ tristimulus values of the reference white point, and $f(t)$ is determined by:
$$f(t) = \begin{cases} t^{1/3}, & \text{if } t > 0.008856 \\ 7.787 t + 16/116, & \text{otherwise}. \end{cases} \qquad (16)$$
Figure 3 illustrates an example of the preprocessing.
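A sketch of the color-space conversion of Equations (14)–(16) (input assumed to be an H × W × 3 float array in [0, 1]; function name hypothetical):

```python
import numpy as np

# RGB-to-XYZ conversion matrix from Eq. (14)
M_RGB2XYZ = np.array([[0.4125, 0.3576, 0.1804],
                      [0.2127, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9502]])

def luminance_channel(rgb: np.ndarray) -> np.ndarray:
    """Convert an RGB image to CIE LAB and keep only the L component,
    following Eqs. (14)-(16) with the reference white Y_w = 1.0."""
    xyz = rgb @ M_RGB2XYZ.T                  # per-pixel linear map, Eq. (14)
    y = xyz[..., 1]                          # Y / Y_w with Y_w = 1.0
    f = np.where(y > 0.008856, np.cbrt(y), 7.787 * y + 16.0 / 116.0)  # Eq. (16)
    return 116.0 * f - 16.0                  # L component, Eq. (15)
```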
Let us say we have $N$ images in our training set, of which $L$ are selected as labeled images, $L \ll N$. The features of a single image are expressed as $x \in \mathbb{R}^M$, where $M$ is the extracted feature length. The features of all images are represented as $X = \{x_1, x_2, \ldots, x_N\}$ with $X \in \mathbb{R}^{M \times N}$, and the features of the labeled images as $X_l \in \mathbb{R}^{M \times L}$. Note that these feature matrices are normalized to be zero-centered. The goal of our algorithm is to learn hash functions that map $X \in \mathbb{R}^{M \times N}$ to a compact representation $H \in \mathbb{R}^{K \times N}$ in a low-dimensional Hamming space, where $K$ is the code length. Our hash function is defined as:
$$H = W^T X. \qquad (17)$$
The hash function of a single image is defined as:
$$h_i = W^T x_i. \qquad (18)$$
In order to learn a $W$ that simultaneously maximizes the empirical accuracy on the labeled images and the variance of the hash bits over all images, the empirical accuracy on the labeled images is defined as:
$$P_1(W) = \sum_{(x_i, x_j) \in S} E_{ij} \, h_i^T h_j + \sum_{(x_i, x_j) \in D} E_{ij} \, h_i^T h_j, \qquad (19)$$
where the matrix $E$ encodes the labels of the marked image pairs, as follows:
$$E(i, j) = \begin{cases} 1, & (x_i, x_j) \in S \\ -1, & (x_i, x_j) \in D \\ 0, & \text{otherwise}. \end{cases} \qquad (20)$$
Specifically, a pair $(x_i, x_j) \in S$ is a perceptually similar pair when the two images are the same image or attacked versions of the same image, and a pair $(x_i, x_j) \in D$ is a perceptually different pair when the two images are different images or when one has suffered malicious manipulation or a perceptually significant attack. A sketch of constructing $E$ is given below.
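The following minimal sketch assembles $E$ of Equation (20) from labeled pair lists (hypothetical helper; pairs are index tuples into the labeled subset):

```python
import numpy as np

def pairwise_label_matrix(similar_pairs, different_pairs, n_labeled):
    """Build the matrix E of Eq. (20): +1 for perceptually similar
    pairs, -1 for perceptually different pairs, 0 elsewhere."""
    E = np.zeros((n_labeled, n_labeled))
    for i, j in similar_pairs:
        E[i, j] = E[j, i] = 1.0
    for i, j in different_pairs:
        E[i, j] = E[j, i] = -1.0
    return E
```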
Equation (19) can also be represented as:
$$P_1(W) = \frac{1}{2} \mathrm{tr} \left\{ (W^T X_l) \, E \, (W^T X_l)^T \right\}. \qquad (21)$$
This relaxation is quite intuitive: the projections of similar images should not only share the same sign but also have large magnitudes, while the projections of dissimilar images should not only have different signs but also be as different as possible.
Moreover, to maximize the amount of information carried per hash bit, we compute the variance of all hash bits over all images and use it as a regularization term for the hash function:
$$V(W) = \sum_{i=1}^{N} \mathrm{var}(h_i) = \sum_{i=1}^{N} \mathrm{var}(W^T x_i). \qquad (22)$$
Because the above function is non-differentiable, its extremum is difficult to compute directly. However, the maximum variance of a hash function is lower-bounded by the scaled variance of the projected data, so the information-theoretic regularization is represented as:
$$P_2(W) = \frac{1}{2} \mathrm{tr} \left\{ (W^T X)(W^T X)^T \right\}. \qquad (23)$$
Finally, the overall semi-supervised objective function combines the relaxed empirical fitness term from Equation (21) and the regularization term from Equation (23).
$$P(W) = P_1(W) + \eta P_2(W) = \frac{1}{2} \mathrm{tr} \left\{ W^T \left( X_l E X_l^T + \eta X X^T \right) W \right\}, \qquad (24)$$
where η = 0.25 is a tradeoff parameter. The optimization problem is as follows:
$$\max_{W} P(W) \quad \text{s.t.} \quad W W^T = I, \qquad (25)$$
where the constraint $W W^T = I$ makes the projection directions orthogonal. We learn the optimal projection $W$ by means of the eigenvalue decomposition of the matrix $\mathbf{M} = X_l E X_l^T + \eta X X^T$ from Equation (24).
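In code, solving Equation (25) amounts to an eigendecomposition; a sketch (NumPy, names hypothetical) follows:

```python
import numpy as np

def learn_projection(X, X_l, E, K, eta=0.25):
    """Solve Eq. (25): take the K leading eigenvectors of
    M = X_l E X_l^T + eta * X X^T as the projection W.
    X: M x N zero-centered features of all images;
    X_l: M x L features of the labeled subset; E: L x L pair matrix."""
    M_mat = X_l @ E @ X_l.T + eta * (X @ X.T)
    vals, vecs = np.linalg.eigh(M_mat)           # M_mat is symmetric
    W = vecs[:, np.argsort(vals)[::-1][:K]]      # K largest eigenvalues
    return W                                     # hash codes: H = W.T @ X
```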

3.3. Adaptive Thresholds-Based Decision Making

To measure the similarity between the hashes of the original and the attacked/tampered images, the metric distance between two hashing codes is calculated by:
$$d(h_1, h_2) = \frac{\| h_1 - h_2 \|_2}{\| h_1 \| \, \| h_2 \|}, \qquad (26)$$
where $h_1$ and $h_2$ are two image hashes. In general, the more similar the images, the smaller the distance; the greater the difference, the greater the distance.
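A one-line implementation of this distance, as reconstructed in Equation (26), for two real-valued hash vectors:

```python
import numpy as np

def hash_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """Eq. (26): Euclidean distance normalized by the hash magnitudes."""
    return float(np.linalg.norm(h1 - h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))
```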
Then, the threshold T is defined to judge whether the image is a similar image or a tampered image.
$$\begin{cases} \text{Similar image pair}, & \text{if } d \le T \\ \text{Tampered image pair}, & \text{if } d > T. \end{cases} \qquad (27)$$
If the distance is less than the given threshold, the two images are judged to be visually identical; otherwise, they are judged to be distinct.
Traditional image tamper detection algorithms take a fixed value as the threshold for judging similar/tampered images. However, owing to the differing characteristics of images, some images cannot be correctly judged by a fixed threshold. In our adaptive thresholding algorithm, we first find the maximum distance among the similar images and the minimum distance among the tampered images. To prevent these two values from being too extreme, we impose the following limits:
$$\begin{aligned} dist_{min} &= \max(dist_1) \quad \text{s.t.} \quad dist_{min} - \mathrm{median}(dist_1) > \psi \\ dist_{max} &= \min(dist_2) \quad \text{s.t.} \quad dist_{max} > \xi, \end{aligned} \qquad (28)$$
where $dist_1$ contains the distances between the similar images and the original image, $dist_2$ the distances between the tampered images and the original image, and $\psi$ and $\xi$ are two constants set experimentally. The resulting maximum and minimum values are then compared with the fixed threshold:
$$\tilde{\tau} = \begin{cases} \tau, & \text{if } dist_{min} < \tau < dist_{max} \\ dist_{max} - (dist_{max} - dist_{min})/3, & \text{if } dist_{min} < dist_{max} \le \tau \\ dist_{min} + (dist_{max} - dist_{min})/3, & \text{if } \tau \le dist_{min} < dist_{max} \\ (dist_{min} + dist_{max})/2, & \text{otherwise}, \end{cases} \qquad (29)$$
where $\tau$ is a fixed threshold obtained experimentally and $\tilde{\tau}$ is the adaptive threshold suited to this image. Every image then has its own threshold, represented as:
$$\tilde{T} = [\tilde{\tau}_1, \tilde{\tau}_2, \ldots, \tilde{\tau}_n]. \qquad (30)$$
Finally, we place the adaptive threshold at the head of the hash code and transmit it along with the hash code, so the final hash code is represented as:
$$\tilde{h}_i = [\tilde{\tau}_i, h_i]. \qquad (31)$$
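The per-image threshold rule of Equation (29) and the tagged code of Equation (31) can be sketched as follows (the outlier constraints of Equation (28) are omitted here for brevity):

```python
import numpy as np

def adaptive_threshold(dist1, dist2, tau):
    """Blend the fixed threshold tau with one image's boundary
    distances (Eq. (29)). dist1: distances of similar copies to the
    original; dist2: distances of tampered copies to the original."""
    d_min, d_max = float(np.max(dist1)), float(np.min(dist2))
    if d_min < tau < d_max:
        return tau
    if d_min < d_max <= tau:
        return d_max - (d_max - d_min) / 3.0
    if tau <= d_min < d_max:
        return d_min + (d_max - d_min) / 3.0
    return (d_min + d_max) / 2.0

def tagged_hash(tau_i, h_i):
    """Eq. (31): prepend the per-image threshold to the hash code."""
    return np.concatenate(([tau_i], h_i))
```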

4. Experiments

4.1. Data

Our experiments were carried out on two real-world datasets. The first is CASIA [46], which contains 918 image pairs: 384 × 256 real images and corresponding distorted images with different texture characteristics. The other is RTD [47,48], which contains 220 real images and corresponding distorted images at a resolution of 1920 × 1080.
To ensure that the images of the training set differed from those of the testing set, we selected 301 non-repeated original images and their corresponding tampered images to generate 66,231 images as training data, of which 10,000 randomly selected images served as the labeled subset. We adopted 226 repeated original images and their corresponding sets of tampered images to determine the threshold value of each image. The remaining images in the CASIA and RTD datasets were used to test performance.

4.2. Baselines

We compared our proposed algorithm with the following baselines:
Wavelet-based image hashing [49]: an invariant feature transform-based method, which develops an image hash from the various sub-bands of a wavelet decomposition of the image, making it convenient to move from the spatial domain to the frequency domain.
SVD-based image hashing [24]: a dimension reduction-based method that uses spectral matrix invariants as embodied by the singular value decomposition. Invariant features based on matrix decomposition show good robustness against noise addition, blurring and compression attacks.
RPIVD-based image hashing [30]: incorporates ring partition and invariant vector distance into image hashing by computing image statistics, including the mean, variance, standard deviation and kurtosis.
Quaternion-based image hashing [12]: considers multiple features and constructs a quaternion image on which a quaternion Fourier transform (QFT) is applied for hash generation.

4.3. Perceptual Robustness

To validate the perceptual robustness of the proposed algorithm, we applied twelve types of content-preserving operations: (a) Gaussian noise addition with variance 0.005; (b) salt-and-pepper noise addition with density 0.005; (c) Gaussian blurring with filter standard deviation 10; (d) circular blurring with radius 2; (e) motion blurring with linear motion 3 and filter angle 45; (f) average filtering with filter size 5; (g) median filtering with filter size 5; (h) Wiener filtering with filter size 5; (i) image sharpening with alpha 0.49; (j) image scaling by a factor of 1.2; (k) illumination correction with gamma 1.18; (l) JPEG compression with quality factor 20. A sketch of a few of these operations is shown below.
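The sketch below (SciPy-based; grayscale float image in [0, 1] assumed; gamma adjustment stands in for the illumination correction, and only a subset of the twelve operations is shown) illustrates how such distorted copies could be generated:

```python
import numpy as np
from scipy import ndimage

def content_preserving_copies(img, seed=0):
    """Produce a handful of content-preserving distorted copies of a
    grayscale image, with parameters as listed in the text."""
    rng = np.random.default_rng(seed)
    return {
        "gaussian_noise": np.clip(img + rng.normal(0.0, np.sqrt(0.005), img.shape), 0, 1),
        "gaussian_blur":  ndimage.gaussian_filter(img, sigma=10),
        "average_filter": ndimage.uniform_filter(img, size=5),
        "median_filter":  ndimage.median_filter(img, size=5),
        "scaling_1.2":    ndimage.zoom(img, 1.2),
        "gamma_1.18":     np.power(img, 1.18),
    }
```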
We extracted reference hashing codes based on the original image (ORH) and our proposed multi-attack reference hashing (MRH). For the content-preserving distorted images, we calculated the distances between the reference hashing codes and the hashing codes of the distorted images. The statistical results under different attacks are presented in Table 1. As shown, the hashing distances for the four baseline methods were small enough to fall below the authentication thresholds. In our experiments, we set the threshold τ = 0.12 to distinguish similar images from forged images in the CASIA dataset for the RPIVD method; for the other three methods, we set the thresholds to 1.2, 0.0012 and 0.008, respectively, for their best results.

4.4. Discriminative Capability

The discriminative capability of an image hashing algorithm means that visually distinct images should have significantly different hashes; in other words, two visually distinct images should have a very low probability of generating similar hashes. Here, the RTD dataset, consisting of 220 different uncompressed color images, was adopted to validate the discriminative capability of our multi-attack reference hashing algorithm. We first extracted reference hashing codes for all 220 images in RTD and then calculated the hashing distance between each image and the other 219 images, finally obtaining 220 × (220 − 1)/2 = 24,090 hashing distances. Figure 4 shows the distribution of these 24,090 hashing distances, where the abscissa is the hashing distance and the ordinate the frequency. The histogram shows clearly that the proposed method has good discriminative capability. For instance, with τ = 0.12 as the threshold on the CASIA dataset when extracting the reference hashing by the RPIVD method, the minimum hashing distance was 0.1389, which is above the threshold. The results show that multi-attack reference hashing can replace original image-based reference hashing while retaining good discrimination.

4.5. Authentication Results

To evaluate reference hashing performance for authentication, we compared the proposed multi-attack reference hashing (MRH) with original image-based reference hashing (ORH) on the four baseline image hashing methods, i.e., wavelet-based, SVD-based, RPIVD-based and QFT-based image hashing, under the twelve content-preserving operations. The results are shown in Table 2 and Table 3; note that higher values indicate better performance for all metrics. The proposed MRH algorithm outperformed the ORH algorithm by a clear margin, irrespective of the content-preserving operation and image dataset (RTD and CASIA). This is particularly evident for illumination correction: compared with original image-based reference hashing, multi-attack reference hashing increased the AUC for illumination correction by 21.98% on the RTD dataset when generating the reference hashing with the wavelet method, as shown in Table 2. For the QFT approach, our multi-attack reference hashing was more stable and outstanding than the other reference hashings. Because the QFT-based technique processes the three channels of the color image jointly, the chrominance information is not lost and the image features are more salient; the multi-attack reference hashing is therefore better able to resist geometric attacks and content-preserving operations. For instance, multi-attack reference hashing increased the precision under Gaussian noise by 3.28% on the RTD dataset.
For performance analysis, we used wavelet-based and SVD-based image hashing to extract features and applied the semi-supervised method to train W for each content-preserving manipulation. The experimental results are summarized in Table 4, which reports the probability of true authentication of the proposed method against the wavelet-based and SVD-based features and the corresponding semi-supervised methods. Here, for the wavelet-based method, ψ = 0.02 and ξ = 0; for the SVD-based method, ψ = 0.005 and ξ = 0. The similar-images column gives the true authentication rate for similar images, indicating the robustness of the algorithm; the tampered-image column gives the true authentication rate for tampered images, indicating its discrimination. Higher values mean better robustness and discrimination. Only our approach selects adaptive thresholds, whereas the other approaches choose a fixed threshold that balances robustness against discrimination.

5. Domains of Application

With the spread of sophisticated photo-editing software, multimedia content security has become an increasingly prominent concern. Using image editing tools such as Photoshop, counterfeiters can easily tamper with color attributes to distort the actual meaning of images. Figure 5 shows a real example of image tampering. Such edited images spread over social networks, where they not only disturb daily life but also seriously threaten social harmony and stability. If tampered images were used extensively in official media, scientific discovery, or even forensic evidence, trustworthiness would undoubtedly be reduced, with a serious impact on many aspects of society.
Image hashing algorithms are widely used in image authentication, image copy detection, digital watermarking, image quality assessment and other fields, as shown in Figure 6. Perceptual image hashing aims to be smoothly invariant to small changes in the image (rotation, cropping, gamma correction, noise addition, adding a border). This is in contrast to cryptographic hash functions, which are designed for non-smoothness and change entirely if a single bit changes. Our proposed perceptual image hashing algorithm is aimed mainly at image authentication applications; it is suitable for processing large image datasets, making it a valuable tool for such applications.

6. Conclusions

In this paper, we have proposed a hashing algorithm based on multi-attack reference generation and adaptive thresholding for image authentication. We simultaneously exploited supervised content-preserving images and multiple attacks for feature generation and hashing learning. In particular, we took into account the pairwise variations among different original-received image pairs, which makes the threshold more adaptive and its value more reasonable. We performed extensive experiments on two image datasets and compared our results with state-of-the-art hashing baselines; the results demonstrate that the proposed method yields superior performance. For image hashing-based authentication, a scheme with both high computational efficiency and sound authentication performance is desirable. Compared with other original image-based reference generation approaches, the limitation of our work is the time cost of the clustering operation. In future work, we will design co-regularized hashing for multiple features, which is expected to deliver even better performance.

Author Contributions

Conceptualization, L.D.; methodology, L.D., Z.H. and Y.W.; software, L.D., Z.H. and X.W.; validation, Z.H. and Y.W.; formal analysis, L.D., X.W. and A.T.S.H.; data curation, Z.H. and Y.W.; writing-original draft preparation, L.D., Z.H. and Y.W.; writing-review and editing, X.W., and A.T.S.H.; funding acquisition, L.D. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant number 61602344), and the Science and Technology Development Fund of Tianjin Education Commission for Higher Education, China (grant number 2017KJ091, 2018KJ222).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this article.

References

  1. Kalker, T.; Haitsma, J.; Oostveen, J.C. Issues with Digital Watermarking and Perceptual Hashing. Proc. SPIE 2001, 4518, 189–197. [Google Scholar]
  2. Tang, Z.; Chen, L.; Zhang, X.; Zhang, S. Robust Image Hashing with Tensor Decomposition. IEEE Trans. Knowl. Data Eng. 2018, 31, 549–560. [Google Scholar] [CrossRef]
  3. Yang, H.; Yin, J.; Yang, Y. Robust Image Hashing Scheme Based on Low-Rank Decomposition and Path Integral LBP. IEEE Access 2019, 7, 51656–51664. [Google Scholar] [CrossRef]
  4. Tang, Z.; Yu, M.; Yao, H.; Zhang, H.; Yu, C.; Zhang, X. Robust Image Hashing with Singular Values of Quaternion SVD. Comput. J. 2020. [Google Scholar] [CrossRef]
  5. Tang, Z.; Ling, M.; Yao, H.; Qian, Z.; Zhang, X.; Zhang, J.; Xu, S. Robust Image Hashing via Random Gabor Filtering and DWT. CMC Comput. Mater. Contin. 2018, 55, 331–344. [Google Scholar]
  6. Karsh, R.K.; Saikia, A.; Laskar, R.H. Image Authentication Based on Robust Image Hashing with Geometric Correction. Multimed. Tools Appl. 2018, 77, 25409–25429. [Google Scholar] [CrossRef]
  7. Gharde, N.D.; Thounaojam, D.M.; Soni, B.; Biswas, S.K. Robust Perceptual Image Hashing Using Fuzzy Color Histogram. Multimed. Tools Appl. 2018, 77, 30815–30840. [Google Scholar] [CrossRef]
  8. Tang, Z.; Yang, F.; Huang, L.; Zhang, X. Robust image hashing with dominant dct coefficients. Optik Int. J. Light Electron Opt. 2014, 125, 5102–5107. [Google Scholar] [CrossRef]
  9. Lei, Y.; Wang, Y.-G.; Huang, J. Robust image hash in radon transform domain for authentication. Signal Process. Image Commun. 2011, 26, 280–288. [Google Scholar] [CrossRef]
  10. Tang, Z.; Dai, Y.; Zhang, X.; Huang, L.; Yang, F. Robust image hashing via colour vector angles and discrete wavelet transform. IET Image Process. 2014, 8, 142–149. [Google Scholar] [CrossRef]
  11. Ouyang, J.; Coatrieux, G.; Shu, H. Robust hashing for image authentication using quaternion discrete fourier transform and log-polar transform. Digit. Signal Process. 2015, 41, 98–109. [Google Scholar] [CrossRef]
  12. Yan, C.-P.; Pun, C.-M.; Yuan, X. Quaternion-based image hashing for adaptive tampering localization. IEEE Trans. Inf. Forensics Secur. 2016, 11, 2664–2677. [Google Scholar] [CrossRef]
  13. Yan, C.-P.; Pun, C.-M. Multi-scale difference map fusion for tamper localization using binary ranking hashing. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2144–2158. [Google Scholar] [CrossRef]
  14. Wang, P.; Jiang, A.; Cao, Y.; Gao, Y.; Tan, R.; He, H.; Zhou, M. Robust image hashing based on hybrid approach of scale-invariant feature transform and local binary patterns. In Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5. [Google Scholar]
  15. Qin, C.; Hu, Y.; Yao, H.; Duan, X.; Gao, L. Perceptual image hashing based on weber local binary pattern and color angle representation. IEEE Access 2019, 7, 45460–45471. [Google Scholar] [CrossRef]
  16. Yan, C.-P.; Pun, C.-M.; Yuan, X.-C. Adaptive local feature based multi-scale image hashing for robust tampering detection. In Proceedings of the TENCON 2015—2015 IEEE Region 10 Conference, Macao, China, 1–4 November 2015; pp. 1–4. [Google Scholar]
  17. Yan, C.P.; Pun, C.M.; Yuan, X.C. Multi-scale image hashing using adaptive local feature extraction for robust tampering detection. Signal Process. 2016, 121, 1–16. [Google Scholar] [CrossRef]
  18. Pun, C.-M.; Yan, C.-P.; Yuan, X. Robust image hashing using progressive feature selection for tampering detection. Multimed. Tools Appl. 2017, 77, 11609–11633. [Google Scholar] [CrossRef]
  19. Qin, C.; Chen, X.; Luo, X.; Xinpeng, Z.; Sun, X. Perceptual image hashing via dual-cross pattern encoding and salient structure detection. Inf. Sci. 2018, 423, 284–302. [Google Scholar] [CrossRef]
  20. Monga, V.; Evans, B. Robust perceptual image hashing using feature points. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; pp. 677–680. [Google Scholar]
  21. Qin, C.; Chen, X.; Dong, J.; Zhang, X. Perceptual image hashing with selective sampling for salient structure features. Displays 2016, 45, 26–37. [Google Scholar] [CrossRef]
  22. Qin, C.; Sun, M.; Chang, C.-C. Perceptual hashing for color images based on hybrid extraction of structural features. Signal Process. 2018, 142, 194–205. [Google Scholar] [CrossRef]
  23. Anitha, K.; Leveenbose, P. Edge detection based salient region detection for accurate image forgery detection. In Proceedings of the IEEE International Conference on Computational Intelligence and Computing Research, Coimbatore, India, 18–20 December 2015; pp. 1–4. [Google Scholar]
  24. Kozat, S.S.; Mihcak, K.; Venkatesan, R. Robust perceptual image hashing via matrix invariances. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; pp. 3443–3446. [Google Scholar]
  25. Ghouti, L. Robust perceptual color image hashing using quaternion singular value decomposition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Florence, Italy, 4–9 May 2014; pp. 3794–3798. [Google Scholar]
  26. Abbas, S.Q.; Ahmed, F.; Zivic, N.; Ur-Rehman, O. Perceptual image hashing using svd based noise resistant local binary pattern. In Proceedings of the International Congress on Ultra Modern Telecommunications and Control Systems and Workshops, Lisbon, Portugal, 18–20 October 2016; pp. 401–407. [Google Scholar]
  27. Tang, Z.; Ruan, L.; Qin, C.; Zhang, X.; Yu, C. Robust image hashing with embedding vector variance of lle. Digit. Signal Process. 2015, 43, 17–27. [Google Scholar] [CrossRef]
  28. Sun, R.; Zeng, W. Secure and robust image hashing via compressive sensing. Multimed. Tools Appl. 2014, 70, 1651–1665. [Google Scholar] [CrossRef]
  29. Liu, H.; Xiao, D.; Xiao, Y.; Zhang, Y. Robust image hashing with tampering recovery capability via low-rank and sparse representation. Multimed. Tools Appl. 2016, 75, 7681–7696. [Google Scholar] [CrossRef]
  30. Tang, Z.; Zhang, X.; Li, X.; Zhang, S. Robust image hashing with ring partition and invariant vector distance. IEEE Trans. Inf. Forensics Secur. 2016, 11, 200–214. [Google Scholar] [CrossRef]
  31. Srivastava, M.; Siddiqui, J.; Ali, M.A. Robust image hashing based on statistical features for copy detection. In Proceedings of the IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics Engineering, Varanasi, India, 9–11 December 2017; pp. 490–495. [Google Scholar]
  32. Huang, Z.; Liu, S. Robustness and discrimination oriented hashing combining texture and invariant vector distance. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Korea, 22–26 October 2018; pp. 1389–1397. [Google Scholar]
  33. Zhang, D.; Chen, J.; Shao, B. Perceptual image hashing based on zernike moment and entropy. Electron. Sci. Technol. 2015, 10, 12. [Google Scholar]
  34. Chen, Y.; Yu, W.; Feng, J. Robust image hashing using invariants of tchebichef moments. Optik Int. J. Light Electron Opt. 2014, 125, 5582–5587. [Google Scholar] [CrossRef]
  35. Hosny, K.M.; Khedr, Y.M.; Khedr, W.I.; Mohamed, E.R. Robust image hashing using exact gaussian-hermite moments. IET Image Process. 2018, 12, 2178–2185. [Google Scholar] [CrossRef]
  36. Lv, X.; Wang, A. Compressed binary image hashes based on semisupervised spectral embedding. IEEE Trans. Inf. Forensics Secur. 2013, 8, 1838–1849. [Google Scholar] [CrossRef]
  37. Bondi, L.; Lameri, S.; Guera, D.; Bestagini, P.; Delp, E.J.; Tubaro, S. Tampering detection and localization through clustering of camera-based cnn features. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1855–1864. [Google Scholar]
  38. Yarlagadda, S.K.; Güera, D.; Bestagini, P.; Zhu, F.M.; Tubaro, S.; Delp, E.J. Satellite image forgery detection and localization using gan and one-class classifier. arXiv 2018, arXiv:1802.04881. [Google Scholar] [CrossRef] [Green Version]
  39. Du, L.; Chen, Z.; Ke, Y. Image hashing for tamper detection with multi-view embedding and perceptual saliency. Adv. Multimed. 2018, 2018, 4235268. [Google Scholar] [CrossRef]
  40. Wang, Y.; Zhang, L.; Nie, F.; Li, X.; Chen, Z.; Wang, F. WeGAN: Deep Image Hashing with Weighted Generative Adversarial Networks. IEEE Trans. Multimed. 2020, 22, 1458–1469. [Google Scholar] [CrossRef]
  41. Wang, Y.; Ward, R.; Wang, Z.J. Coarse-to-Fine Image DeHashing Using Deep Pyramidal Residual Learning. IEEE Signal Process. Lett. 2020, 22, 1295–1299. [Google Scholar] [CrossRef]
  42. Jiang, C.; Pang, Y. Perceptual image hashing based on a deep convolution neural network for content authentication. J. Electron. Imaging 2018, 27, 043055. [Google Scholar] [CrossRef]
  43. Peng, Y.; Zhang, J.; Ye, Z. Deep Reinforcement Learning for Image Hashing. IEEE Trans. Multimed. 2020, 22, 2061–2073. [Google Scholar] [CrossRef] [Green Version]
  44. Du, L.; Wang, Y.; Ho, A.T.S. Multi-attack Reference Hashing Generation for Image Authentication. In Proceedings of the Digital Forensics and Watermarking—18th International Workshop (IWDW 2019), Chengdu, China, 2–4 November 2019; pp. 407–420. [Google Scholar]
  45. Zheng, Z.; Li, L. Binary Multi-View Clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1774–1782. [Google Scholar] [CrossRef] [PubMed]
  46. Dong, J.; Wang, W. Casia image tampering detection evaluation database. In Proceedings of the 2013 IEEE China Summit and International Conference on Signal and Information Processing, Beijing, China, 6–10 July 2013; pp. 422–426. [Google Scholar]
  47. Korus, P.; Huang, J. Evaluation of random field models in multi-modal unsupervised tampering localization. In Proceedings of the 2016 IEEE International Workshop on Information Forensics and Security, Abu Dhabi, UAE, 4–7 December 2016; pp. 1–6. [Google Scholar]
  48. Korus, P.; Huang, J. Multi-scale analysis strategies in prnu-based tampering localization. IEEE Trans. Inf. Forensics Secur. 2017, 12, 809–824. [Google Scholar] [CrossRef]
  49. Venkatesan, R.; Koon, S.M.; Jakubowski, M.H.; Moulin, P. Robust image hashing. In Proceedings of the IEEE International Conference on Image Processing, Thessaloniki, Greece, 7–10 October 2001; pp. 664–666. [Google Scholar]
Figure 1. Block diagram of the proposed algorithm.
Figure 2. The examples of hash clusters.
Figure 3. An example of preprocessing.
Figure 4. Distribution of hashing distances between hashing pairs with varying thresholds.
Figure 5. The German-language daily tabloid Blick altered the color of flooding water to blood red and distributed the falsified image to news channels.
Figure 6. A generic framework of image hashing and an application perspective.
Table 1. Hashing distances under different content-preserving manipulations.

| Method | Manipulation | ORH Max | ORH Min | ORH Mean | MRH Max | MRH Min | MRH Mean |
|---|---|---|---|---|---|---|---|
| Wavelet | Gaussian noise | 0.02828 | 0.00015 | 0.00197 | 0.02847 | 0.00014 | 0.00196 |
| Wavelet | Salt&Pepper | 0.01918 | 0.00021 | 0.00252 | 0.01918 | 0.00024 | 0.00251 |
| Wavelet | Gaussian blurring | 0.00038 | 0.00005 | 0.00017 | 0.00067 | 0.00006 | 0.00019 |
| Wavelet | Circular blurring | 0.00048 | 0.00006 | 0.00022 | 0.00069 | 0.00006 | 0.00021 |
| Wavelet | Motion blurring | 0.00034 | 0.00006 | 0.00015 | 0.00065 | 0.00005 | 0.00016 |
| Wavelet | Average filtering | 0.00071 | 0.00007 | 0.00033 | 0.00071 | 0.00009 | 0.00030 |
| Wavelet | Median filtering | 0.00704 | 0.00006 | 0.00099 | 0.00753 | 0.00007 | 0.00099 |
| Wavelet | Wiener filtering | 0.00101 | 0.00008 | 0.00028 | 0.00087 | 0.00008 | 0.00028 |
| Wavelet | Image sharpening | 0.00906 | 0.00009 | 0.00115 | 0.00906 | 0.00010 | 0.00114 |
| Wavelet | Image scaling | 0.00039 | 0.00005 | 0.00013 | 0.00064 | 0.00006 | 0.00018 |
| Wavelet | Illumination correction | 0.08458 | 0.00447 | 0.02759 | 0.08458 | 0.00443 | 0.02757 |
| Wavelet | JPEG compression | 0.00143 | 0.00009 | 0.00026 | 0.00275 | 0.00013 | 0.00051 |
| SVD | Gaussian noise | 0.00616 | 0.00007 | 0.00031 | 0.00616 | 0.00007 | 0.00030 |
| SVD | Salt&Pepper | 0.00339 | 0.00008 | 0.00034 | 0.00338 | 0.00007 | 0.00033 |
| SVD | Gaussian blurring | 0.00017 | 0.00007 | 0.00010 | 0.00113 | 0.00007 | 0.00011 |
| SVD | Circular blurring | 0.00018 | 0.00006 | 0.00010 | 0.00114 | 0.00006 | 0.00011 |
| SVD | Motion blurring | 0.00017 | 0.00007 | 0.00010 | 0.00113 | 0.00006 | 0.00011 |
| SVD | Average filtering | 0.00025 | 0.00007 | 0.00011 | 0.00111 | 0.00006 | 0.00012 |
| SVD | Median filtering | 0.00166 | 0.00007 | 0.00015 | 0.00190 | 0.00007 | 0.00016 |
| SVD | Wiener filtering | 0.00035 | 0.00005 | 0.00011 | 0.00113 | 0.00007 | 0.00012 |
| SVD | Image sharpening | 0.00104 | 0.00007 | 0.00018 | 0.00099 | 0.00007 | 0.00018 |
| SVD | Image scaling | 0.00016 | 0.00007 | 0.00010 | 0.00114 | 0.00007 | 0.00011 |
| SVD | Illumination correction | 0.00662 | 0.00014 | 0.00149 | 0.00674 | 0.00014 | 0.00150 |
| SVD | JPEG compression | 0.00031 | 0.00007 | 0.00010 | 0.00053 | 0.00008 | 0.00012 |
| RPIVD | Gaussian noise | 0.25827 | 0.00864 | 0.03086 | 0.29081 | 0.01115 | 0.03234 |
| RPIVD | Salt&Pepper | 0.22855 | 0.01131 | 0.02993 | 0.25789 | 0.01191 | 0.03033 |
| RPIVD | Gaussian blurring | 0.03560 | 0.00411 | 0.01471 | 0.14023 | 0.00545 | 0.01786 |
| RPIVD | Circular blurring | 0.06126 | 0.00447 | 0.01713 | 0.13469 | 0.00565 | 0.01924 |
| RPIVD | Motion blurring | 0.03570 | 0.00362 | 0.01432 | 0.18510 | 0.00473 | 0.01825 |
| RPIVD | Average filtering | 0.07037 | 0.00543 | 0.02109 | 0.20190 | 0.00591 | 0.02237 |
| RPIVD | Median filtering | 0.06126 | 0.00512 | 0.02234 | 0.18360 | 0.00625 | 0.02465 |
| RPIVD | Wiener filtering | 0.07156 | 0.00421 | 0.01803 | 0.20421 | 0.00581 | 0.02041 |
| RPIVD | Image sharpening | 0.06324 | 0.00609 | 0.02442 | 0.18283 | 0.00706 | 0.02765 |
| RPIVD | Image scaling | 0.03311 | 0.00275 | 0.01154 | 0.18233 | 0.00381 | 0.01761 |
| RPIVD | Illumination correction | 0.11616 | 0.00769 | 0.02864 | 0.20944 | 0.01047 | 0.02920 |
| RPIVD | JPEG compression | 0.07037 | 0.00543 | 0.02109 | 0.06180 | 0.00707 | 0.02155 |
| QFT | Gaussian noise | 6.97151 | 0.13508 | 0.73563 | 6.30302 | 0.11636 | 0.60460 |
| QFT | Salt&Pepper | 7.63719 | 0.16998 | 0.66200 | 7.50644 | 0.15073 | 0.63441 |
| QFT | Gaussian blurring | 0.26237 | 0.00513 | 0.02519 | 0.10820 | 0.00318 | 0.01449 |
| QFT | Circular blurring | 0.26529 | 0.00712 | 0.03163 | 0.17937 | 0.00460 | 0.02075 |
| QFT | Motion blurring | 0.26408 | 0.00465 | 0.02286 | 0.10729 | 0.00300 | 0.01318 |
| QFT | Average filtering | 0.30154 | 0.00976 | 0.04403 | 0.30719 | 0.00760 | 0.03263 |
| QFT | Median filtering | 0.95120 | 0.03084 | 0.19822 | 0.87149 | 0.02706 | 0.19345 |
| QFT | Wiener filtering | 0.64373 | 0.01746 | 0.08046 | 0.68851 | 0.01551 | 0.07616 |
| QFT | Image sharpening | 6.55606 | 0.05188 | 1.52398 | 6.55596 | 0.05189 | 1.52398 |
| QFT | Image scaling | 0.51083 | 0.04031 | 0.10067 | 0.52404 | 0.02800 | 0.09827 |
| QFT | Illumination correction | 4.37001 | 0.27357 | 0.84280 | 4.36692 | 0.27348 | 0.84170 |
| QFT | JPEG compression | 7.55523 | 0.13752 | 1.29158 | 13.1816 | 0.13585 | 1.46682 |
Table 2. Comparisons between the original image-based reference hashing and the proposed multi-attack reference hashing (RTD dataset). Each cell gives Precision/Recall/F1/AUC.

Original image-based reference hashing:

| Manipulation | Wavelet (P/R/F1/AUC) | SVD (P/R/F1/AUC) | RPIVD (P/R/F1/AUC) | QFT (P/R/F1/AUC) |
|---|---|---|---|---|
| Gaussian noise | 0.6257/0.9500/0.7545/0.8442 | 0.8537/0.4773/0.6122/0.8501 | 0.8326/0.8211/0.8268/0.8991 | 0.8978/0.7591/0.8227/0.9241 |
| Salt&Pepper | 0.5485/0.9773/0.7026/0.8043 | 0.8537/0.4773/0.6122/0.8507 | 0.8806/0.8119/0.7449/0.9088 | 0.8851/0.7727/0.8252/0.9184 |
| Gaussian blurring | 1.0000/0.8727/0.9320/0.9866 | 1.0000/0.4409/0.6120/0.9874 | 1.0000/0.7465/0.8549/0.9557 | 1.0000/0.7227/0.8391/0.9948 |
| Circular blurring | 1.0000/0.8733/0.9346/0.9787 | 1.0000/0.4364/0.6076/0.9852 | 0.9821/0.7604/0.8571/0.9447 | 1.0000/0.7227/0.8391/0.9948 |
| Motion blurring | 1.0000/0.8727/0.9346/0.9787 | 1.0000/0.4273/0.5987/0.9868 | 1.0000/0.7477/0.8556/0.9572 | 1.0000/0.7227/0.8391/0.9949 |
| Average filtering | 1.0000/0.8864/0.9398/0.9665 | 1.0000/0.4318/0.6032/0.9790 | 0.9598/0.7661/0.8520/0.9351 | 1.0000/0.7227/0.8391/0.9948 |
| Median filtering | 0.6967/0.9500/0.8038/0.9012 | 0.9898/0.4409/0.6101/0.9544 | 0.9399/0.7890/0.8579/0.9212 | 1.0000/0.7409/0.8512/0.9721 |
| Wiener filtering | 0.9847/0.8773/0.9279/0.9713 | 1.0000/0.4318/0.6032/0.9822 | 0.9880/0.7569/0.8571/0.9427 | 1.0000/0.7227/0.8391/0.9950 |
| Image sharpening | 0.7178/0.9364/0.8126/0.8872 | 0.9709/0.4545/0.6192/0.9368 | 0.8980/0.8073/0.8502/0.9155 | 0.8851/0.8537/0.8537/0.8537 |
| Image scaling | 1.0000/0.8773/0.9346/0.9892 | 1.0000/0.4318/0.6032/0.9873 | 1.0000/0.7385/0.8496/0.9677 | 0.8851/0.7727/0.8252/0.9184 |
| Illumination correction | 0.5000/1.0000/0.6667/0.5593 | 0.5479/0.8318/0.6606/0.6754 | 0.9021/0.8028/0.8495/0.9073 | 0.6429/0.9000/0.7500/0.8498 |
| JPEG compression | 1.0000/0.4909/0.6585/0.9271 | 1.0000/0.4318/0.6032/0.9846 | 1.0000/0.3073/0.4702/0.9408 | 0.9273/0.6955/0.7948/0.9015 |

Multi-attack reference hashing:

| Manipulation | Wavelet (P/R/F1/AUC) | SVD (P/R/F1/AUC) | RPIVD (P/R/F1/AUC) | QFT (P/R/F1/AUC) |
|---|---|---|---|---|
| Gaussian noise | 0.8345/0.5273/0.6462/0.8465 | 0.8462/0.3000/0.4430/0.8846 | 0.9600/0.3303/0.4915/0.8948 | 0.9279/0.8773/0.9019/0.9588 |
| Salt&Pepper | 0.7619/0.5818/0.6598/0.8046 | 0.9507/0.6136/0.7459/0.9263 | 0.9706/0.3028/0.4615/0.9057 | 0.9500/0.6909/0.8000/0.9355 |
| Gaussian blurring | 1.0000/0.6000/0.7500/0.9955 | 1.0000/0.6045/0.7535/0.9904 | 0.9927/0.6415/0.7794/0.9880 | 1.0000/0.6818/0.8108/0.9952 |
| Circular blurring | 1.0000/0.4955/0.6626/0.9811 | 1.0000/0.6045/0.7535/0.9904 | 0.9926/0.6368/0.7759/0.9870 | 1.0000/0.6818/0.8108/0.9953 |
| Motion blurring | 1.0000/0.4909/0.6585/0.9849 | 1.0000/0.6000/0.7500/0.9955 | 0.9855/0.6415/0.7771/0.9857 | 1.0000/0.6818/0.8108/0.9952 |
| Average filtering | 1.0000/0.4955/0.6626/0.9709 | 1.0000/0.6091/0.7571/0.9955 | 0.9714/0.3119/0.4722/0.9270 | 1.0000/0.6818/0.8108/0.9952 |
| Median filtering | 0.9590/0.5318/0.6842/0.9013 | 0.9926/0.6091/0.7549/0.9803 | 1.0000/0.3211/0.4861/0.9258 | 1.0000/0.6818/0.8108/0.9809 |
| Wiener filtering | 1.0000/0.4909/0.6585/0.9703 | 1.0000/0.6045/0.7535/0.9901 | 0.9854/0.6368/0.7736/0.9858 | 1.0000/0.6864/0.8140/0.9950 |
| Image sharpening | 0.8986/0.5636/0.6927/0.8884 | 1.0000/0.2864/0.4452/0.9313 | 0.9722/0.3211/0.4828/0.9071 | 0.9167/0.7000/0.7938/0.9011 |
| Image scaling | 1.0000/0.4955/0.6626/0.9828 | 1.0000/0.6000/0.7500/0.9955 | 0.9855/0.6415/0.6415/0.9868 | 0.9494/0.6818/0.7937/0.9607 |
| Illumination correction | 0.5046/1.0000/0.6707/0.7791 | 0.6376/0.8318/0.7219/0.7848 | 0.9714/0.3119/0.4722/0.9062 | 0.7500/0.7909/0.7699/0.8405 |
| JPEG compression | 1.0000/0.4909/0.6585/0.9256 | 1.0000/0.6045/0.7535/0.9900 | 1.0000/0.3073/0.4702/0.9264 | 0.9403/0.8591/0.8979/0.9598 |
Table 3. Comparisons between original image-based reference hashing and the proposed multi-attack reference hashing (CASIA dataset). Each cell gives Precision/Recall/F1/AUC.

Original image-based reference hashing:

| Manipulation | Wavelet (P/R/F1/AUC) | SVD (P/R/F1/AUC) | RPIVD (P/R/F1/AUC) | QFT (P/R/F1/AUC) |
|---|---|---|---|---|
| Gaussian noise | 0.7451/0.6623/0.8010/0.7909 | 0.8385/0.7015/0.7639/0.8825 | 0.9782/0.6830/0.8044/0.9021 | 0.8802/0.8965/0.8883/0.9520 |
| Salt&Pepper | 0.8128/0.6481/0.7212/0.8307 | 0.8978/0.6983/0.7855/0.9164 | 0.9699/0.6329/0.7660/0.9282 | 0.8837/0.8856/0.8847/0.9572 |
| Gaussian blurring | 0.9694/0.5861/0.7305/0.9434 | 0.9937/0.6852/0.8811/0.9512 | 0.9502/0.6452/0.7685/0.8981 | 1.0000/0.8638/0.9269/0.9989 |
| Circular blurring | 0.9399/0.5959/0.7293/0.9526 | 0.9696/0.6939/0.8089/0.8274 | 0.8124/0.6856/0.7436/0.8467 | 1.0000/0.8638/0.9269/0.9989 |
| Motion blurring | 0.9745/0.5817/0.7285/0.9526 | 0.9952/0.6797/0.8078/0.9642 | 0.9827/0.6201/0.7604/0.9161 | 1.0000/0.8638/0.9269/0.9989 |
| Average filtering | 0.8786/0.6231/0.7291/0.8917 | 0.8728/0.7179/0.7878/0.8835 | 0.6562/0.7738/0.7101/0.7739 | 1.0000/0.8638/0.9269/0.9989 |
| Median filtering | 0.8838/0.6503/0.7307/0.8457 | 0.9269/0.7048/0.8007/0.9047 | 0.7296/0.7216/0.7256/0.8080 | 1.0000/0.8649/0.9276/0.9939 |
| Wiener filtering | 0.8997/0.6155/0.7309/0.9055 | 0.9485/0.7015/0.8065/0.9212 | 0.8227/0.6921/0.7539/0.8506 | 1.0000/0.8638/0.9269/0.9980 |
| Image sharpening | 0.7194/0.7197/0.7186/0.7878 | 0.8089/0.7702/0.7891/0.8656 | 0.6526/0.8268/0.7295/0.8014 | 0.6565/0.9390/0.7727/0.8653 |
| Image scaling | 0.9868/0.5719/0.7241/0.9640 | 0.9952/0.6808/0.8085/0.9672 | 0.9581/0.6234/0.7553/0.9180 | 1.0000/0.8627/0.9263/0.9986 |
| Illumination correction | 0.5008/0.9978/0.6669/0.6063 | 0.6256/0.8573/0.7233/0.7541 | 0.9941/0.5579/0.7147/0.9810 | 0.8854/0.9085/0.8968/0.9616 |
| JPEG compression | 1.0000/0.4909/0.6585/0.9271 | 0.9676/0.6830/0.8008/0.9580 | 0.9565/0.6495/0.7736/0.9076 | 0.7148/0.9281/0.8076/0.8861 |

Multi-attack reference hashing:

| Manipulation | Wavelet (P/R/F1/AUC) | SVD (P/R/F1/AUC) | RPIVD (P/R/F1/AUC) | QFT (P/R/F1/AUC) |
|---|---|---|---|---|
| Gaussian noise | 0.7604/0.6569/0.7049/0.7993 | 0.8647/0.6961/0.7713/0.8902 | 0.9429/0.8638/0.9016/0.9578 | 0.9130/0.8922/0.9025/0.9646 |
| Salt&Pepper | 0.8415/0.6362/0.7246/0.8407 | 0.9261/0.6961/0.7948/0.9202 | 0.9738/0.8497/0.9075/0.9693 | 0.8906/0.8954/0.8930/0.9614 |
| Gaussian blurring | 1.0000/0.5664/0.7232/0.9797 | 1.0000/0.6634/0.7976/0.9807 | 0.9584/0.8046/0.8748/0.9481 | 1.0000/0.8758/0.9338/0.9989 |
| Circular blurring | 0.9943/0.5708/0.7253/0.9624 | 0.9951/0.6645/0.7969/0.9644 | 0.8596/0.8155/0.8370/0.9081 | 1.0000/0.8758/0.9338/0.9989 |
| Motion blurring | 1.0000/0.5654/0.7223/0.9800 | 1.0000/0.6656/0.7992/0.9857 | 0.9867/0.8079/0.8884/0.9618 | 1.0000/0.8758/0.9338/0.9989 |
| Average filtering | 0.9451/0.6002/0.7342/0.9201 | 0.9574/0.6852/0.7987/0.9203 | 0.6915/0.8328/0.7556/0.8349 | 1.0000/0.8758/0.9338/0.9989 |
| Median filtering | 0.7954/0.6438/0.7116/0.8366 | 0.9077/0.6961/0.7879/0.9038 | 0.7795/0.8297/0.8038/0.8851 | 1.0000/0.8769/0.9344/0.9958 |
| Wiener filtering | 0.9818/0.5871/0.7348/0.9369 | 0.9842/0.6776/0.8026/0.9542 | 0.8659/0.8177/0.8411/0.9195 | 1.0000/0.8769/0.9344/0.9984 |
| Image sharpening | 0.7271/0.7081/0.7174/0.7958 | 0.7901/0.7789/0.7844/0.8581 | 0.6722/0.9292/0.7801/0.8982 | 0.6579/0.9434/0.7749/0.8685 |
| Image scaling | 0.9923/0.5599/0.7159/0.9521 | 0.9952/0.6754/0.8047/0.9657 | 0.9716/0.8210/0.8899/0.9640 | 1.0000/0.8780/0.9350/0.9988 |
| Illumination correction | 0.5008/0.9978/0.6669/0.6043 | 0.6003/0.8638/0.7084/0.7389 | 0.9973/0.8111/0.8946/0.9915 | 0.8843/0.9161/0.8999/0.9649 |
| JPEG compression | 0.9925/0.5763/0.7292/0.9420 | 0.9779/0.6754/0.7990/0.9537 | 0.9720/0.8368/0.8994/0.9627 | 0.7145/0.9270/0.8070/0.8859 |
Table 4. Result for the probability of true authentication capability.

| Method | Similar Images | Tampered Images |
|---|---|---|
| DWT | 95.64% | 95.81% |
| Semi-Supervised (DWT) | 95.65% | 95.78% |
| OUR (DWT) | 96.19% | 97.14% |
| SVD | 84.97% | 84.92% |
| Semi-Supervised (SVD) | 85.12% | 85.08% |
| OUR (SVD) | 86.06% | 85.46% |
