# Copy-Move Forgery Detection Using Scale Invariant Feature and Reduced Local Binary Pattern Histogram


## Abstract


## 1. Introduction

## 2. SIFT-Based Copy-Move Forgery Detection

#### 2.1. SIFT Descriptor

For a given image **I**, SIFT features are extracted at different scales using a scale-space representation built from an image pyramid. Potential keypoints are selected as scale-space extrema. All the potential keypoints are further refined according to contrast and edge thresholds; this process eliminates unstable keypoints in the SIFT algorithm. In the next step, an orientation is assigned to each keypoint to achieve invariance to image rotation. The gradient magnitude and direction are calculated in a local window centered at the SIFT keypoint, and an orientation histogram with 36 bins covering 360 degrees is created. The highest peak in the histogram is taken as the dominant orientation. Furthermore, any peak above 80% of the highest peak is also considered when calculating the main orientation.
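The orientation-assignment step described above can be sketched in a few lines of NumPy. The 36-bin histogram and the 80% secondary-peak rule follow the text; the magnitude weighting and the toy window are illustrative assumptions.

```python
import numpy as np

def dominant_orientations(magnitude, direction_deg):
    """Build a 36-bin orientation histogram (10 degrees per bin) weighted by
    gradient magnitude, then return the dominant orientation together with
    any secondary peak above 80% of the highest peak."""
    bins = (direction_deg.astype(int) % 360) // 10
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(), minlength=36)
    peak = hist.max()
    keep = np.flatnonzero(hist >= 0.8 * peak)   # 80% rule for extra peaks
    return [int(b) * 10 + 5 for b in keep]      # bin centers in degrees

# toy window: all gradients point near 45 degrees
mag = np.ones((16, 16))
ang = np.full((16, 16), 45.0)
print(dominant_orientations(mag, ang))          # -> [45]
```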

Let {**k**_{1}, **k**_{2}, …, **k**_{m}} be the m keypoints extracted using SIFT for the given image **I**. Keypoint descriptors {**f**_{1}, **f**_{2}, …, **f**_{m}} corresponding to {**k**_{1}, **k**_{2}, …, **k**_{m}} are generated using the above procedures. Figure 2 shows the generation method of a descriptor **f**_{i} for a keypoint **k**_{i} at the image location (x_{i}, y_{i}). As shown in Figure 2, the SIFT descriptor is represented as a list of gradients for the main direction in 16 4 × 4 sub-blocks around the keypoint. However, simply listing local gradients in this manner can degrade the matching performance when the pixel values around the keypoints change.

#### 2.2. Matching

Each descriptor **f**_{i} needs to be compared with all the descriptors other than **f**_{i}, denoted **f**_{j≠i}. This process is called matching. Let {d_{1,i}, d_{2,i}, …, d_{m−1,i}} be the set of distances between **f**_{i} and the m − 1 descriptors other than **f**_{i}. At this point, the distances satisfy the following relationship:

d_{1,i} ≤ d_{2,i} ≤ … ≤ d_{m−1,i}. (1)

The keypoint **k**_{i} is determined to be matched with the keypoint at distance d_{1,i} if the following relation is satisfied:

d_{1,i} / d_{2,i} < t, (2)

where t is a predefined threshold.

Figure 3 shows an example in which the region around keypoint **k**_{A} has been moved to **k**_{B}. For a correct match, **k**_{A} and **k**_{B} should be matched by Equation (2). The keypoint **k**_{B} is detected as the potentially matched keypoint with **k**_{A} because **k**_{B} has the closest distance (d_{1,A} = 0.2684) to **k**_{A}. However, the distance between **k**_{A} and **k**_{C} is calculated as d_{2,A} = 0.2721, and no matching occurs because d_{1,A}/d_{2,A} exceeds the threshold t. This is because the background (the 16 × 16 window centered at **k**_{A}) of **k**_{A} differs from that of **k**_{B}. This situation may also occur if the image is compressed after CMF.
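A minimal sketch of this matching rule, assuming Euclidean distances between descriptors and an illustrative threshold t = 0.6 (the actual threshold value is not given in this excerpt):

```python
import numpy as np

def ratio_test_matches(descriptors, t=0.6):
    """For each descriptor f_i, sort its distances to the other m - 1
    descriptors so that d_1i <= d_2i <= ... and accept the nearest keypoint
    as a match only when d_1i / d_2i < t (Equation (2))."""
    m = len(descriptors)
    matches = []
    for i in range(m):
        d = np.linalg.norm(descriptors - descriptors[i], axis=1)
        d[i] = np.inf                      # exclude f_i itself
        order = np.argsort(d)
        d1, d2 = d[order[0]], d[order[1]]
        if d2 > 0 and d1 / d2 < t:
            matches.append((i, int(order[0])))
    return matches

# toy descriptors: index 0 and 1 form a near-duplicate (copy-moved) pair
feats = np.array([[0.0, 0.0], [0.01, 0.0], [5.0, 5.0], [8.0, -6.0]])
print(ratio_test_matches(feats))           # -> [(0, 1), (1, 0)]
```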

## 3. Proposed Method

#### 3.1. Reduced LBP Feature

The LBP value at a pixel location (p,q) is computed as

LBP(p,q) = Σ_{n=0}^{N−1} s(I_{n} − I(p,q)) 2^{n}, (3)

where I_{n} is the intensity of the n-th neighboring pixel, I(p,q) is the intensity of the pixel at (p,q), and N is the number of neighboring pixels chosen at a given radius. In this paper, we use N = 8 (a 3 × 3 local window centered at (p,q)), which generates an 8-bit LBP value. In Equation (3), s(x) is defined as

s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise. (4)

Let Ω(x_{i}, y_{i}) be the set of pixels that exist in a 16 × 16 local window centered at the keypoint **k**_{i} location, (x_{i}, y_{i}). Using all the pixels at (p,q) ∈ Ω(x_{i}, y_{i}), LBP values are calculated as depicted in Figure 4.

Let L_{c}(p,q) (c = 0, 1, 2, …, 8) be the LBP with c consecutive 1's. Obviously, L_{0}(p,q) = 00000000 and L_{8}(p,q) = 11111111. For c = 1, 2, …, 7, L_{c}(p,q) has 8 binary patterns that can all be viewed as rotationally shifted versions of a single pattern. Figure 5 shows various uniform patterns. Let L_{non}(p,q) be any pattern, other than L_{0}(p,q), whose 1's are not all consecutive. In conclusion, the 256-level LBP values can be divided into 10 groups, that is, nine types of L_{c}(p,q) and one L_{non}(p,q).
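The 256-to-10 reduction above can be implemented by counting circular bit transitions: a uniform pattern has at most two 0/1 transitions around the circle, and its group index is then simply its number of 1's. A small sketch (the bit ordering of the 8 neighbors is an assumption):

```python
def lbp_group(code):
    """Map an 8-bit LBP code to one of the 10 reduced groups: group c in
    {0, ..., 8} when the 1-bits form a single circular run of length c
    (a uniform pattern L_c), and group 9 for non-uniform patterns L_non."""
    bits = [(code >> n) & 1 for n in range(8)]
    # a uniform pattern has at most two 0/1 transitions on the circle
    transitions = sum(bits[n] != bits[(n + 1) % 8] for n in range(8))
    return sum(bits) if transitions <= 2 else 9

print(lbp_group(0b00000000))  # -> 0  (L_0)
print(lbp_group(0b11111111))  # -> 8  (L_8)
print(lbp_group(0b00011100))  # -> 3  (three consecutive 1s, rotated)
print(lbp_group(0b01000101))  # -> 9  (scattered 1s: non-uniform)
```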

#### 3.2. Proposed Descriptor

We use the histogram of L_{c}(p,q) and L_{non}(p,q) as a new descriptor for CMFD. To maintain the rotation-invariant characteristic of the new descriptor, we check the occurrence of L_{c}(p,q) alone, regardless of its rotational shift. L_{non}(p,q) can reflect a frequent variation in a small window, which may occur because of noise, quantization errors, or small background changes. To reduce the effect of these variations, all non-uniform patterns are counted together as they occur. The proposed descriptor **r**_{i} corresponding to keypoint **k**_{i} is obtained as

**r**_{i} = [R_{0}(x_{i},y_{i}), R_{1}(x_{i},y_{i}), …, R_{8}(x_{i},y_{i}), R_{non}(x_{i},y_{i})], (5)

where R_{c}(x_{i},y_{i}) and R_{non}(x_{i},y_{i}) are the normalized numbers of occurrences of L_{c}(p,q) and L_{non}(p,q), respectively, in Ω(x_{i},y_{i}). R_{c}(x_{i},y_{i}) is calculated by

R_{c}(x_{i},y_{i}) = #[L_{c}(p,q)] / |Ω(x_{i},y_{i})|, (6)

where #[L_{c}(p,q)] is the number of occurrences of the L_{c}(p,q) pattern in Ω(x_{i},y_{i}), and |Ω(x_{i},y_{i})| is the cardinality of Ω(x_{i},y_{i}). R_{non}(x_{i},y_{i}) can be obtained in a similar manner.

The proposed descriptor **r**_{i} is composed of the histogram of L_{c}(p,q) and L_{non}(p,q); it is a 10-dimensional feature vector. Figure 6 illustrates the **r**_{i} generation method using the reduced LBP histogram. The combined descriptor **g**_{i}, formed from **f**_{i} and **r**_{i}, is the proposed descriptor for CMFD and has 138 dimensions. Because the descriptor **r**_{i} is generated for a relatively large area, unlike **f**_{i}, it may be sufficiently robust to small pixel changes and quantization errors caused by image compression.
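As a sketch of how **r**_{i} and **g**_{i} could be assembled (pure NumPy; the restriction to interior pixels of the 16 × 16 window, the neighbor ordering, and the placeholder SIFT descriptor are all assumptions for illustration):

```python
import numpy as np

def reduced_lbp_descriptor(window):
    """10-bin reduced-LBP histogram r_i for a window Omega(x_i, y_i).
    Each interior pixel yields an 8-bit LBP over its 3x3 neighborhood;
    codes are reduced to 10 groups, and the group counts are normalized
    by the number of pixels considered."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]    # 8 circular neighbors
    hist = np.zeros(10)
    h, w = window.shape
    count = 0
    for p in range(1, h - 1):
        for q in range(1, w - 1):
            code = 0
            for n, (dp, dq) in enumerate(offsets):
                if window[p + dp, q + dq] >= window[p, q]:  # s(x) = 1 if x >= 0
                    code |= 1 << n
            bits = [(code >> n) & 1 for n in range(8)]
            transitions = sum(bits[n] != bits[(n + 1) % 8] for n in range(8))
            group = sum(bits) if transitions <= 2 else 9   # L_c vs. L_non
            hist[group] += 1
            count += 1
    return hist / count                                    # normalized occurrences

# g_i concatenates the 128-D SIFT descriptor f_i with the 10-D histogram r_i
window = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
r_i = reduced_lbp_descriptor(window)
f_i = np.zeros(128)                   # placeholder for the SIFT descriptor
g_i = np.concatenate([f_i, r_i])
print(g_i.shape)                      # -> (138,)
```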

With the proposed descriptor, the keypoint **k**_{B} is detected as the potentially matched keypoint with **k**_{A} because **k**_{B} has the closest distance (d_{1,A} = 0.1426) to **k**_{A}. The distance between **k**_{A} and **k**_{C} is calculated as d_{2,A} = 0.2380. Because d_{1,A}/d_{2,A} does not exceed the threshold t, we can determine that **k**_{A} and **k**_{B} are a matched pair. Based on this result, we conclude that the proposed LBP-based descriptor plays an important role in removing the effect of small fluctuations that occur when a keypoint is close to the boundary between the copy-moved portion and the authentic region.

#### 3.3. Estimation of Affine Transform and False Match Removal

Let (x_{i},y_{i}) and (x′_{i},y′_{i}) be the pixel locations from a region and its duplicate, respectively. These two locations are related by an affine transform as follows:

x′_{i} = t_{11}x_{i} + t_{12}y_{i} + x_{0}, y′_{i} = t_{21}x_{i} + t_{22}y_{i} + y_{0}, (7)

where t_{11}, t_{12}, t_{21}, t_{22}, x_{0}, and y_{0} are the transform parameters. To estimate these parameters, at least three pairs of corresponding keypoints that are not collinear are required. However, the parameters obtained directly may be inaccurate because of mismatched keypoints. To eliminate unreliable keypoint matches, the widely used RANSAC scheme is employed; the parameters obtained using the RANSAC algorithm exhibit a high degree of accuracy. Furthermore, the affine transform parameters can also be used to determine the correlation map of the duplicated region.

#### 3.4. Localization

Let **W** be the warped image obtained by transforming the image according to the affine transform. The correlation coefficient at a pixel location (x,y), ρ(x,y), is computed as

ρ(x,y) = Σ_{(u,v)∈Λ(x,y)} (I(u,v) − I_{μ})(W(u,v) − W_{μ}) / √( Σ_{(u,v)∈Λ(x,y)} (I(u,v) − I_{μ})² · Σ_{(u,v)∈Λ(x,y)} (W(u,v) − W_{μ})² ), (8)

where I_{μ} and W_{μ} are the average values of **I** and **W** in the area Λ(x,y). A Gaussian filter is applied to the correlation map to reduce noisy pixels, and a binary correlation map is obtained by thresholding: if the ρ(x,y) value at a point (x,y) is greater than a threshold, that point is set to true; otherwise, it is set to false.

#### 3.5. Summary of Proposed Method

For each SIFT keypoint **k**_{i} at the image location (x_{i},y_{i}), the conventional 128-dimensional descriptor **f**_{i} is generated. For all the pixels in a 16 × 16 window centered at that keypoint location, pixel-wise LBP values are calculated. Next, the 256-level LBP values are reduced to 10 types of values. A histogram of the reduced LBP values is generated, and this 10-dimensional histogram becomes the additional descriptor **r**_{i}. Then **g**_{i}, the combination of **f**_{i} and **r**_{i}, becomes the new descriptor for CMFD. For the final output, the false-match removal step using the RANSAC algorithm, followed by localization, is performed.

## 4. Experimental Results

#### 4.1. Dataset

#### 4.2. Detection Results on Various Datasets

#### 4.2.1. Results Obtained on MICC-F220 Dataset

#### 4.2.2. Results Obtained on CMH Dataset

The CMH dataset consists of 108 uncompressed forged images: 23 simply copy-moved images (CMH_{1}), 25 rotated images (CMH_{2}), 25 resized images (CMH_{3}), and 35 images that are both rotated and resized (CMH_{4}). Additionally, to address compression, we compressed the full images into the JPEG format using quality factors of 70%, 80%, and 90% (CMH_{5}). Table 4 presents the detection performance on this dataset. Every image has its own ground truth indicating the original and cloned regions in white. For comparison, LSIF [46], the SIFT-based forensic method (SIFT) [36], the Helmert transform-based method (HT) [50], the SIFT and J-linkage-based method (SIFTJL) [31], the Zernike moment-based method (ZM) [34], GDCMFD [8], the adaptive over-segmentation-based method (AO) [51], and segmentation-based CMFD (SCMFD) [15] are used.

The comparison in Table 4 covers the uncompressed CMH datasets (CMH_{1}, CMH_{2}, CMH_{3}, and CMH_{4}) at the pixel level. As shown in Table 4, all the CMFD methods except SCMFD exhibit good FPR values. However, only LSIF, HT, AO, and the proposed method exhibit valid TPR values. The proposed algorithm achieves the highest ACC, and HT exhibits the second-highest performance.

Next, we evaluated the performance on the compressed CMH_{5} test dataset. Table 5 shows the detection performance on the compressed CMH_{5} dataset at the pixel level. As shown in Table 5, the performance of many CMFD approaches, except those of LSIF, SIFT, and the proposed method, is degraded. In particular, the performance degradation of the block-based algorithms, such as HT, ZM, and GDCMFD, is considerable. The TPR, FPR, and ACC values of our method are almost the same as those for the uncompressed CMH datasets. This advantage of the proposed CMFD method can be attributed to the addition of the reduced LBP descriptor.

#### 4.2.3. Results Obtained on D Dataset

The D dataset is composed of four subsets (D_{0}, D_{1}, D_{2}, and D_{3}). The first subset, D_{0}, is made of 50 uncompressed images with simply translated copies. D_{1} is created by copy-pasting objects after rotation, and D_{2} is derived by applying scaling to the copies. The subset D_{3} comprises 50 original images without tampering. For comparison, HT [50], SIFTJL [31], GDCMFD [8], and AO [51] are used.

Table 6 shows the detection performance on the D_{0}, D_{1}, and D_{2} datasets at the pixel level. Because the D_{3} dataset contains only authentic images, only the FPRs are compared in Table 6. The proposed method does not identify any authentic image as a copy-move forgery and therefore has an FPR of zero. Figure 10 illustrates forgery localization examples for the D dataset. As shown in Figure 10, the proposed CMFD algorithm exhibits the best localization performance.

#### 4.2.4. Results Obtained on COVERAGE Dataset

## 5. Discussion

Even for the compressed dataset CMH_{5}, our approach exhibited almost the same detection accuracy as that for the uncompressed image datasets. In particular, our algorithm achieved the highest rank for the datasets wherein the images were geometrically transformed. HT also exhibited a fairly good ranking but fell short of the proposed method. Based on the results in Table 8, we can conclude that the proposed method yields a more uniform and consistent CMFD performance, regardless of the type of dataset.

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Zhang, Z.; Wang, C.; Zhou, X. A survey on passive image copy-move forgery detection. J. Inf. Process. Syst. **2018**, 14, 6–31.
2. Zheng, L.; Zhang, Y.; Thing, V. A survey on image tampering and its detection in real-world photos. J. Vis. Commun. Image Represent. **2019**, 58, 380–399.
3. Teerakanok, S.; Uehara, T. Copy-move forgery detection: A state-of-the-art technical review and analysis. IEEE Access **2019**, 7, 40550–40568.
4. Lee, J.C.; Chang, C.P.; Chen, W.K. Detection of copy-move image forgery using histogram of orientated gradients. Inf. Sci. **2015**, 321, 250–262.
5. Fadl, S.M.; Semary, N.A. Robust copy-move forgery revealing in digital images using polar coordinate system. Neurocomputing **2017**, 265, 57–65.
6. Bi, X.; Pun, C.M.; Yuan, X.C. Multi-level dense descriptor and hierarchical feature matching for copy-move forgery detection. Inf. Sci. **2016**, 345, 226–242.
7. Mahmood, T.; Mehmood, Z.; Shah, M.; Saba, T. A robust technique for copy-move forgery detection and localization in digital images via stationary wavelet and discrete cosine transform. J. Vis. Commun. Image Represent. **2018**, 53, 202–214.
8. Silva, E.; Carvalho, T.; Ferreira, A.; Rocha, A. Going deeper into copy-move forgery detection: Exploring image telltales via multi-scale analysis and voting processes. J. Vis. Commun. Image Represent. **2015**, 29, 16–32.
9. Davarzani, R.; Yaghmaie, K.; Mozaffari, S.; Tapak, M. Copy-move forgery detection using multiresolution local binary patterns. Forensic Sci. Int. **2013**, 231, 61–72.
10. Lynch, G.; Shih, F.Y.; Liao, H.-Y. An efficient expanding block algorithm for image copy-move forgery detection. Inf. Sci. **2013**, 239, 253–265.
11. Zhao, J.; Guo, J. Passive forensics for copy-move image forgery using a method based on DCT and SVD. Forensic Sci. Int. **2013**, 233, 158–166.
12. Sun, Y.; Ni, R.; Zhao, Y. Nonoverlapping blocks based copy-move forgery detection. Secur. Commun. Netw. **2018**, 2018, 1301290.
13. Gao, Y.; Gao, T.; Fan, L.; Yang, Q. A robust detection algorithm for copy-move forgery in digital images. Forensic Sci. Int. **2012**, 214, 33–43.
14. Zhong, J.; Gan, Y.; Young, J.; Huang, L.; Lin, P. A new block-based method for copy move forgery detection under image geometric transforms. Multimed. Tools Appl. **2017**, 76, 14887–14903.
15. Li, J.; Li, X.; Yang, B.; Sun, X. Segmentation-based image copy-move forgery detection scheme. IEEE Trans. Inf. Forensics Secur. **2015**, 10, 507–518.
16. Pun, C.M.; Chung, J.L. A two-stage localization for copy-move forgery detection. Inf. Sci. **2018**, 463–464, 33–55.
17. Li, Y. Image copy-move forgery detection based on polar cosine transform and approximate nearest neighbor searching. Forensic Sci. Int. **2013**, 224, 59–67.
18. Dixit, R.; Nakar, R. Copy–move forgery detection utilizing Fourier–Mellin transform log-polar features. J. Electron. Imaging **2018**, 27, 023007.
19. Bayram, S.; Sencar, H.T.; Memon, N. An efficient and robust method for detecting copy-move forgery. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 1053–1056.
20. Hosny, K.M.; Hamza, H.M.; Lashin, N.A. Copy-move forgery detection of duplicated objects using accurate PCET moments and morphological operators. Imag. Sci. J. **2018**, 66, 330–345.
21. Fridrich, A.J.; Soukal, B.D.; Lukáš, A.J. Detection of copy-move forgery in digital images. In Proceedings of the Digital Forensic Research Workshop, Cleveland, OH, USA, 5–8 August 2003; pp. 55–61.
22. Alkawaz, M.H.; Sulong, G.; Saba, T.; Rehman, A. Detection of copy-move image forgery based on discrete cosine transform. Neural Comput. Appl. **2018**, 30, 183–192.
23. Kumar, S.; Desai, J.; Mukherjee, S. A fast DCT based method for copy move forgery detection. In Proceedings of the 2013 IEEE Second International Conference on Image Information Processing, Shimla, India, 9–11 December 2013; pp. 649–654.
24. Muhammad, G.; Hussain, M.; Bebis, G. Passive copy move image forgery detection using undecimated dyadic wavelet transform. Digit. Investig. **2012**, 9, 49–57.
25. Mahmood, T.; Irtaza, A.; Mehmood, Z.; Mahmood, M.T. Copy–move forgery detection through stationary wavelets and local binary pattern variance for forensic analysis in digital images. Forensic Sci. Int. **2017**, 279, 8–21.
26. Hashmi, M.F.; Hambarde, A.R.; Keskar, A.G. Copy move forgery detection using DWT and SIFT features. In Proceedings of the 13th International Conference on Intelligent Systems Design and Applications, Bangi, Malaysia, 8–10 December 2013; pp. 188–193.
27. Wang, Y.; Tian, L.; Li, C. LBP-SVD based copy move forgery detection algorithm. In Proceedings of the 2017 IEEE International Symposium on Multimedia, Taichung, Taiwan, 11–13 December 2017; pp. 553–556.
28. Uliyan, D.M.; Jalab, H.A.; Wahab, A.W.A. Copy move image forgery detection using Hessian and center symmetric local binary pattern. In Proceedings of the 2015 IEEE Conference on Open Systems, Bandar Melaka, Malaysia, 24–26 August 2015; pp. 7–11.
29. Huang, H.; Guo, W.; Zhang, Y. Detection of copy-move forgery in digital images using SIFT algorithm. In Proceedings of the Pacific-Asia Workshop on Computational Intelligence and Industrial Application, Wuhan, China, 19–20 December 2008; pp. 272–276.
30. Muzaffer, G.; Ulutas, G. A fast and effective digital image copy move forgery detection with binarized SIFT. In Proceedings of the 2017 40th International Conference on Telecommunications and Signal Processing, Barcelona, Spain, 5–7 July 2017; pp. 595–598.
31. Jin, G.; Wan, X. An improved method for SIFT-based copy-move forgery detection using non-maximum value suppression and optimized J-Linkage. Signal Process. Image Commun. **2017**, 57, 113–125.
32. Shahroudnejad, A.; Rahmati, M. Copy-move forgery detection in digital images using affine-SIFT. In Proceedings of the 2016 2nd International Conference of Signal Processing and Intelligent Systems, Tehran, Iran, 14–15 December 2016.
33. Mahdian, B.; Saic, S. Detection of copy–move forgery using a method based on blur moment invariants. Forensic Sci. Int. **2007**, 171, 180–189.
34. Ryu, S.J.; Kirchner, M.; Lee, M.J.; Lee, H.K. Rotation invariant localization of duplicated image regions based on Zernike moments. IEEE Trans. Inf. Forensics Secur. **2013**, 8, 1355–1370.
35. Zandi, M.; Mahmoudi-Aznaveh, A.; Talebpour, A. Iterative copy-move forgery detection based on a new interest point detector. IEEE Trans. Inf. Forensics Secur. **2016**, 11, 2499–2512.
36. Amerini, I.; Ballan, L.; Caldelli, R.; Del Bimbo, A.; Serra, G. A SIFT-based forensic method for copy-move attack detection and transformation recovery. IEEE Trans. Inf. Forensics Secur. **2011**, 6, 1099–1110.
37. Malviya, A.V.; Ladhake, S.A. Pixel based image forensic technique for copy-move forgery detection using auto color correlogram. Procedia Comput. Sci. **2016**, 79, 383–390.
38. Uliyan, D.M.; Jalab, H.A.; Wahab, A.W.A.; Sadeghi, S. Image region duplication forgery detection based on angular radial partitioning and Harris key-points. Symmetry **2016**, 8, 62.
39. Warif, N.B.A.; Wahab, A.W.A.; Idris, M.Y.I.; Salleh, R.; Othman, F. SIFT-symmetry: A robust detection method for copy-move forgery with reflection attack. J. Vis. Commun. Image Represent. **2017**, 46, 219–232.
40. Lee, J.C. Copy-move image forgery detection based on Gabor magnitude. J. Vis. Commun. Image Represent. **2015**, 31, 320–334.
41. Pan, X.; Lyu, S. Region duplication detection using image feature matching. IEEE Trans. Inf. Forensics Secur. **2010**, 5, 857–867.
42. Li, Y.; Zhou, J. Fast and effective image copy-move forgery detection via hierarchical feature point matching. IEEE Trans. Inf. Forensics Secur. **2019**, 14, 1307–1322.
43. Ardizzone, E.; Bruno, A.; Mazzola, G. Copy–move forgery detection by matching triangles of keypoints. IEEE Trans. Inf. Forensics Secur. **2015**, 10, 2084–2094.
44. Wen, B.; Zhu, Y.; Subramanian, R.; Ng, T.; Shen, X.; Winkler, S. COVERAGE—A novel database for copy-move forgery detection. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 161–165.
45. Guo, G.M.; Liu, Y.F.; Wu, Z.J. Duplication forgery detection using improved DAISY descriptor. J. Comput. Inf. Syst. **2014**, 10, 9369–9377.
46. Vaishnavi, D.; Subashini, T.S. Application of local invariant symmetry features to detect and localize image copy move forgeries. J. Inf. Secur. Appl. **2019**, 44, 23–31.
47. Abdel-Basset, M.; Manogaran, G.; Fakhry, A.E.; El-Henawy, I. 2-levels of clustering strategy to detect and locate copy-move forgery in digital images. Multimed. Tools Appl. **2018**, 1–19.
48. Cozzolino, D.; Poggi, G.; Verdoliva, L. Efficient dense-field copy-move forgery detection. IEEE Trans. Inf. Forensics Secur. **2015**, 10, 2284–2297.
49. Elhaminia, B.; Harati, A.; Taherinia, A. A probabilistic framework for copy-move forgery detection based on Markov Random Field. Multimed. Tools Appl. **2019**, 78, 25591–25609.
50. Huang, H.Y.L.; Ciou, A.J. Copy-move forgery detection for image forensics using superpixel segmentation and the Helmert transformation. EURASIP J. Image Video Proc. **2019**, 68, 1–16.
51. Pun, C.M.; Yuan, X.C.; Bi, X.L. Image forgery detection using adaptive oversegmentation and feature point matching. IEEE Trans. Inf. Forensics Secur. **2015**, 10, 1705–1716.

**Figure 1.**Typical process of scale invariant feature transform (SIFT)-based copy-move forgery detection.

**Figure 3.**Example of keypoint matching error using the conventional SIFT descriptor [36].

**Figure 4.** Local binary pattern (LBP) generation method from all the pixels at (p,q) ∈ Ω(x_{i},y_{i}).

**Table 1.** Affine parameter estimation results using the conventional SIFT algorithm [36] and the proposed method on the MICC-F220 dataset. The averaged rotation angles and scales for various attacks are presented.

Attack: (Θ°, s_{x}, s_{y}) ^{1} | Conventional SIFT [36] | Proposed Method |
---|---|---|

0°, 1.0, 1.0 | 1.364°, 1.042, 0.974 | 0.252°, 1.007, 0.997 |

10°, 1.0, 1.0 | 9.186°, 1.051, 0.973 | 9.975°, 1.024, 0.974 |

20°, 1.0, 1.0 | 19.494°, 1.501, 0.941 | 19.756°, 1.013, 0.991 |

30°, 1.0, 1.0 | 30.581°, 1.228, 0.894 | 30.183°, 1.012, 0.992 |

40°, 1.0, 1.0 | 39.682°, 1.043, 0.960 | 39.906°, 1.013, 0.990 |

0°, 1.2, 1.2 | 1.004°, 1.190, 1.128 | 0.179°, 1.145, 1.031 |

0°, 1.3, 1.3 | 0.792°, 1.415, 1.278 | 0.183°, 1.264, 1.252 |

0°, 1.4, 1.2 | 0.525°, 1.327, 1.124 | 0.236°, 1.358, 1.154 |

10°, 1.2, 1.2 | 9.580°, 1.245, 1.143 | 10.280°, 1.146, 1.127 |

20°, 1.4, 1.2 | 21.038°, 1.317, 0.992 | 19.816°, 1.359, 1.167 |

^{1} Θ° is the rotation angle; s_{x} and s_{y} are the scaling parameters for the x and y coordinates, respectively.

**Table 2.** Description of the test datasets.

Dataset | Description |
---|---|

MICC-F220 | 110 tampered images and 110 untampered images with resolutions ranging from 722 × 480 to 800 × 600 |

CMH | 216 forged images with resolutions varying from 845 × 634 to 1296 × 972 (108 uncompressed and 108 compressed images) |

D | 970 tampered images and 50 authentic images with resolutions of 700 × 1000 (D_{0}, D_{1}, D_{2}: tampered, D_{3}: authentic) |

COVERAGE | 100 tampered images and 100 authentic images with resolutions of 400 × 486 |

**Table 3.**True positive rate (TPR) (%), false positive rate (FPR) (%), and accuracy (%) on the MICC-F220 dataset at an image level. The number in bold indicates the highest performance, and the number in italics represents the second place.

Method | TPR | FPR | ACC |
---|---|---|---|

DAISY [45] | 85.91 | 9.09 | 88.41 |

LSIF [46] | 83.64 | 5.45 | 89.09 |

Clustering [47] | 97.87 | 7.63 | 95.12 |

DF [48] | 84.55 | 17.27 | 75.36 |

MRF [49] | 62.00 | 40.00 | 61.00 |

ICMFD [35] | 78.18 | 48.18 | 69.08 |

GDCMFD [8] | 64.00 | 57.00 | 53.50 |

HFPM [42] | 100 | 1.82 | 99.09 |

Proposed | 99.10 | 5.45 | 96.82 |

**Table 4.** TPR (%), FPR (%), and ACC (%) on the uncompressed CMH (CMH_{1}, CMH_{2}, CMH_{3}, and CMH_{4}) datasets at the pixel level. Bold numbers indicate the highest performance, and italic numbers represent the second place.

Method | TPR | FPR | ACC |
---|---|---|---|

LSIF [46] | 80.68 | 0.28 | 90.02 |

SIFT [36] | 50.45 | 0.19 | 75.13 |

HT [50] | 91.71 | 1.98 | 94.86 |

SIFTJL [31] | 81.96 | 4 | 88.98 |

ZM [34] | 34.43 | 5.35 | 64.47 |

GDCMFD [8] | 71.23 | 1.54 | 84.84 |

AO [51] | 81.18 | 3.23 | 88.97 |

SCMFD [15] | 73.11 | 48.74 | 62.18 |

Proposed | 95.68 | 0.35 | 97.66 |

**Table 5.** TPR (%), FPR (%), and ACC (%) on the compressed CMH_{5} dataset at the pixel level. Bold numbers indicate the highest performance, and italic numbers represent the second place.

Method | TPR | FPR | ACC |
---|---|---|---|

LSIF [46] | 79.38 | 0.52 | 89.43 |

SIFT [36] | 48.29 | 0.2 | 74.04 |

HT [50] | 68.51 | 1.22 | 83.64 |

SIFTJL [31] | 45.59 | 3.65 | 70.97 |

ZM [34] | 28.6 | 0.23 | 63.72 |

GDCMFD [8] | 31.19 | 0.65 | 65.27 |

AO [51] | 45.76 | 2 | 71.88 |

SCMFD [15] | 31.36 | 48.8 | 41.27 |

Proposed | 95.8 | 0.36 | 97.72 |

**Table 6.**TPR (%), FPR (%), and ACC (%) on D dataset at pixel level. Bold numbers indicate the highest performance, and italic numbers represent the second place.

Method | D_{0} TPR | D_{0} FPR | D_{0} ACC | D_{1}, D_{2} TPR | D_{1}, D_{2} FPR | D_{1}, D_{2} ACC | D_{3} FPR |
---|---|---|---|---|---|---|---|

HT [50] | 84.88 | 3.39 | 90.75 | 85.64 | 1.06 | 92.29 | 0.05 |

SIFTJL [31] | 73.41 | 2.42 | 85.5 | 71.39 | 0.47 | 84.96 | 0.12 |

GDCMFD [8] | 64.14 | 1.89 | 81.13 | 46.39 | 0.62 | 72.89 | 0.39 |

AO [51] | 62.08 | 1.72 | 80.18 | 46.39 | 0.49 | 72.95 | 2.34 |

SCMFD [15] | 77.09 | 49.42 | 63.84 | 25.45 | 52.24 | 36.61 | 48.97 |

Proposed | 97.65 | 1.25 | 98.2 | 92.31 | 0.65 | 95.83 | 0 |

**Table 7.**TPR (%), FPR (%), and ACC (%) on the COVERAGE dataset at image level. Bold numbers indicate the highest performance, and italic numbers represent the second place.

Method | TPR | FPR | ACC |
---|---|---|---|

DF [48] | 59.34 | 21.98 | 68.68 |

SCMFD [15] | 87.91 | 63.74 | 62.09 |

ZM [34] | 46.15 | 15.38 | 65.39 |

GDCMFD [8] | 91.21 | 70.33 | 59.94 |

ICMFD [35] | 76.92 | 71.43 | 52.75 |

HFPM [42] | 80.22 | 41.76 | 69.23 |

Proposed | 78 | 43 | 67.5 |

**Table 8.** Top three methods for each dataset.

Dataset | Subset | 1st | 2nd | 3rd |
---|---|---|---|---|
MICC-F220 | – | HFPM | Proposed | Clustering |
CMH | CMH_{1,2,3,4} | Proposed | HT | LSIF |
CMH | CMH_{5} | Proposed | LSIF | HT |
D | D_{0} | Proposed | HT | SIFTJL |
D | D_{1,2} | Proposed | HT | SIFTJL |
D | D_{3} | Proposed | HT | SIFTJL |
COVERAGE | – | GDCMFD | DF | Proposed |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Park, J.Y.; Kang, T.A.; Moon, Y.H.; Eom, I.K.
Copy-Move Forgery Detection Using Scale Invariant Feature and Reduced Local Binary Pattern Histogram. *Symmetry* **2020**, *12*, 492.
https://doi.org/10.3390/sym12040492
