Article

Improved Image Splicing Forgery Detection by Combination of Conformable Focus Measures and Focus Measure Operators Applied on Obtained Redundant Discrete Wavelet Transform Coefficients

by Thamarai Subramaniam 1, Hamid A. Jalab 1,*, Rabha W. Ibrahim 2,3 and Nurul F. Mohd Noor 1

1 Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
2 Informetrics Research Group, Ton Duc Thang University, Ho Chi Minh 758307, Vietnam
3 Faculty of Mathematics & Statistics, Ton Duc Thang University, Ho Chi Minh 758307, Vietnam
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(11), 1392; https://doi.org/10.3390/sym11111392
Submission received: 8 October 2019 / Revised: 2 November 2019 / Accepted: 7 November 2019 / Published: 10 November 2019
(This article belongs to the Special Issue Recent Advances in Discrete and Fractional Mathematics)

Abstract:
The image is the foremost information carrier of the current digital era and also the easiest to manipulate, and such manipulation renders the integrity of this information carrier ambiguous. Image splicing is a technique commonly used to manipulate images by fusing regions of different images into one image. Over the last decade, it has been confirmed that various structures in science and engineering can be modeled more precisely by fractional calculus using integral or derivative operators, and many fractional-order-based techniques have been applied in the image-processing field. Recently, a specialized fractional calculus, called conformable calculus, was introduced. Herein, we employ a combination of conformable focus measures (CFMs) and focus measure operators (FMOs) applied to redundant discrete wavelet transform (RDWT) coefficients to improve image splicing forgery detection. The process of image splicing disorders the content of the tampered image and causes abnormalities in its features, and the spliced region's boundaries are usually blurred to avoid detection. To make use of this blurred information, both CFMs and FMOs are used to calculate the degree of blurring of the tampered region's boundaries. The two public image datasets IFS-TC and CASIA TIDE V2 are used to evaluate the proposed method. With a 24-D feature vector, the proposed method achieved an accuracy of 98.30% on the Cb channel of the IFS-TC dataset and 98.60% on the Cb channel of CASIA TIDE V2, exhibiting superior results compared with other image splicing detection methods.

1. Introduction

Information disseminated in the form of images has increased in recent years; as the saying "a picture is worth a thousand words" suggests, complex information is understood quickly through images. Currently, however, a digital image's truthfulness and integrity are sometimes ambiguous and radically challenged, and "seeing is believing" no longer holds for digital images. The advent of image-processing technologies allows easy image manipulation that leaves little or no visible evidence of the alteration. When an image is manipulated to conceal its true state, this is known as image tampering or forgery.
While it may sometimes be innocuous, image tampering can lead to adverse consequences in sectors such as health care, legal evidence in court cases, journalism, and online social media; it virtually impacts every facet of our society. Image forgeries can be created using various techniques, such as the copy-move and splicing methods shown in Figure 1 [1]. Copy-move refers to a forgery where a region (or regions) of an image is cloned within the same image, with the intention of duplicating, concealing, or emphasizing the pasted region in the tampered image [2,3]. Discovering copy-move tampering is particularly challenging because there are no significant visible alterations in the forged image's texture; researchers are actively exploring key-point and block-based approaches [4]. Image splicing, in contrast, refers to a forged digital image created by fusing regions from two or more images together. When the manipulation is expertly executed, the spliced regions of the forged image are visually undetectable [5,6].
Image tampering can be detected using active and passive authentication methods. An active detection method needs the original image: a specific digital key with a digital signature is embedded into the original image and is later used to check whether the digital watermark has been changed [7]. The main drawback of active methods is that a specialized camera or an expert is required to embed the watermark or fingerprint into the image. Passive authentication, also known as the blind method, requires no knowledge of the original image; instead, anomalous artifacts and the resulting inconsistencies in the tampered image are used to discover tampering, since a tampered image contains modified underlying statistics that are not visible to the human eye.
Various passive methods have been implemented to counter image tampering; they can be broadly categorized as follows.
  • Pixel-based: detects irregularities in the image at the pixel level; this is the simplest approach and the one most commonly used in copy-move and splicing detection [8].
  • Format-based: applied to images in JPEG format. JPEG compression makes tampering detection difficult; however, some traces of tampering are left behind or distributed across the entire image and can be exploited in the detection process [9]. Techniques such as double JPEG compression, JPEG blocking, and JPEG quantization are used for detection in compressed images.
  • Camera-based: uses the unique signature left during image acquisition and storage, in terms of the lens and sensor noise [10]. Some of the artifacts studied are color correlation, white balancing, quantization tables, filtering, and JPEG compression.
  • Physical-based: uses inconsistencies in the light source across the image, discovering anomalies in the 2D and 3D light directions as well as the light environment [11].
  • Geometry-based: focuses on objects and their positions; metric measurements and principal-point techniques are used for detection [11].
Forgery detection by both humans and machines becomes increasingly difficult as image forgery techniques evolve. Therefore, an efficient forgery detection method for authenticating image originality is a paramount priority.
Image splicing is considered to be one of the most common operations used to create forged images. Post-processing operations, such as blurring the spliced boundary, are used to make the forged image resemble a real one and to make forgery detection difficult. To address this issue, conformable focus measure (CFM) features and focus measure operators (FMOs) are applied in the RDWT domain to effectively detect the blurred spliced pieces in the tampered image. This study is organized as follows: Section 2 reviews related work, Section 3 describes the proposed method, the experimental results are discussed in Section 4, and Section 5 presents the conclusions.

2. Related Works

In recent years, various methods have been proposed by researchers, particularly in the field of image splicing forgery detection.
Moghaddasi et al. [12] applied the Run Length Run Number (RLRN) as a texture analysis scheme for detecting spliced images. Principal component analysis (PCA) and kernel PCA (KPCA) were used to reduce feature dimensionality in order to increase detection accuracy; however, this method was tested on a single color channel rather than all three color channels together and achieved only a moderate accuracy rate. Furthermore, Local Binary Patterns (LBP) [13,14,15], a local spatial structure that effectively represents image texture, have been used for detection, but LBP may produce very high feature dimensionality and therefore requires dimensionality reduction methods such as PCA or KPCA.
Various methods have also been deployed to detect image splicing using the discrete wavelet transform (DWT). Kashyap et al. [16] proposed wavelet decomposition with block matching to detect splicing forgery, achieving 87.75% accuracy. The thresholding operation used in this method causes information loss and may require a larger feature vector to compensate for the degraded detection performance; nevertheless, it is able to observe the smaller discrepancies in tampered images.
Zhao et al. [17] proposed a 2-D non-causal Markov model in which the image is modeled as a 2-D non-causal signal, capturing the underlying dependencies between the current node and its neighbors in the block DCT and discrete Meyer wavelet transform domains. This method achieved a 93.36% accuracy rate on the DVMM dataset of 128 × 128 BMP images and could detect images in real time in the IFS-TC challenge (0.901 score). The dependencies between adjacent pixels introduced by image splicing can be captured using Markov transition probabilities; however, the method used up to 14,240 features, resulting in a very high-dimensional feature vector and thus increased computational cost.
Isaac and Wilscy [18] proposed a splicing detection method based on the Gabor wavelet transform and Local Phase Quantization (LPQ). The Gabor wavelet, applied to the chroma component of an image, decomposes it at different scales and orientations, and the LPQ values obtained from the wavelet sub-bands are concatenated to generate a single feature vector; the Gabor wavelet captures texture features, while LPQ provides blur-invariant local texture information. The method obtained very good results on both CASIA v1 (99.80%) and DVMM color (99.50%); however, the authors indicated that it has a high processing time.
Zhang et al. [19] proposed a Markov-based approach in the DCT and Contourlet transform domains, extracting features from the different frequency bands of each block's DCT coefficients. Two datasets were used to evaluate the method, which achieved accuracies of 94.10% on DVMM and 91.80% on IFS-TC.
Agarwal and Chand [20] used Markov features in the undecimated wavelet transform (UWT) domain with a 2-D Markov model to classify spliced images. YCbCr images are decomposed with UWT to obtain the high-frequency sub-bands, from which 2-D first-order differences (FOD) are generated. With a 1029-D feature vector, their method obtained accuracies of 93.71% on DVMM and 95.35% on CASIA v1.
Park et al. [21] exploited the inter-scale co-occurrence matrix in the DWT domain with five levels of decomposition; the experiment yielded 96.20% accuracy on the Columbia gray dataset.
Existing detection systems exhibit limitations in terms of feature extraction complexity and high feature dimensionality. High feature dimensionality requires feature reduction methods such as PCA or KPCA, which increase the complexity of the detection process. The need for feature reduction is also an indication of poor feature quality, as only the most discriminating features are selected to obtain a better accuracy rate.
Therefore, we propose a combination of conformable focus measures (CFMs) and focus measure operators (FMOs) applied to redundant discrete wavelet transform (RDWT) coefficients to capture the splicing details with a low feature dimension.

3. Proposed Method

The proposed method is illustrated in Figure 2. Focus measures calculate the degree of sharpness of edges and are mainly used here to capture the splicing details [22,23].
The motivation for using focus measures as the main feature extraction tool is their ability to quantify the degree of blurring of the tampered image. Moreover, the shift invariance of the RDWT makes it possible to capture high-frequency and low-frequency artifacts in the different sub-bands.

3.1. Image Pre-Processing

The YCbCr color space is considered in this study, as it is widely used in the digital image processing field. The Y component is the luminance, representing the brightness of the pixel, while Cb and Cr are the chrominance-blue and chrominance-red components. Edge irregularities in tampered images are more easily detectable in the YCbCr color scheme [24], because YCbCr separates luminance from chrominance more effectively than RGB; therefore, the source images are converted from the RGB color space into YCbCr, as sketched below.
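For concreteness, a minimal Python sketch of this pre-processing step follows, using OpenCV as a stand-in for the authors' MATLAB pipeline (the library choice and file name are assumptions, not part of the original method):

```python
import cv2  # assumption: OpenCV replaces the MATLAB color-space conversion

def to_ycbcr_channels(path):
    """Load an image and return its Y, Cb, and Cr channels separately."""
    bgr = cv2.imread(path)                        # OpenCV reads images as BGR
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)                  # note OpenCV's Y, Cr, Cb order
    return y, cb, cr

# usage (hypothetical file): y, cb, cr = to_ycbcr_channels("spliced_sample.png")
```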

3.2. Feature Extraction

The main art of detecting forgeries is to extract discriminating image features that can identify the tampering; feature extraction is thus an important process in digital image forgery detection [25]. The feature extraction process is crucial and is generally implemented in either the spatial domain or a transform domain.
Qureshi and Deriche [26] showed that the spatial domain describes the content of an image directly from the pixel locations, while transform-domain methods work on a transform of the image [27]. A transform maps the image from one domain into another, such as the frequency domain, and can also reduce feature dimensionality, because fewer coefficients are needed per image block [21].
  • Redundant Discrete Wavelet Transform (RDWT)
Unlike the DWT, which is shift variant, the RDWT is shift invariant and maintains the original size of the approximation and detail coefficients at each decomposition level. The RDWT is widely used in steganography, as it is more robust and imperceptible than spatial-domain techniques [28,29,30]; its redundant frame expansion (wavelet coefficients) makes it possible to determine the exact variance in the original sample.
The RDWT is selected in this study because its redundancy and multiresolution analysis retain more information in each sub-band of the original image. Given an image f(x, y) of size (M, N), the RDWT decomposition at level j is represented as follows [29]:
$$L_{j+1}(k_x,k_y)=\sum_{l_x=-\infty}^{+\infty}\sum_{l_y=-\infty}^{+\infty} h(l_x)\,h(l_y)\,C_{j,k+2^{j}}(l_x,l_y),$$
where $k_x = 1,2,3,\ldots,M$ and $k_y = 1,2,3,\ldots,N$; $L_{j+1}(k_x,k_y)$ are the approximation (LL) coefficients; $L_{j+1}$ and $C_{j,k+2^{j}}$ are the low-frequency sub-bands at level $j$; $h$ is a low-pass filter; and $g$ is a high-pass filter.
$$\omega^{h}_{j+1}(k_x,k_y)=\sum_{l_x=-\infty}^{+\infty}\sum_{l_y=-\infty}^{+\infty} g(l_x)\,h(l_y)\,C_{j,k+2^{j}}(l_x,l_y),$$
$$\omega^{v}_{j+1}(k_x,k_y)=\sum_{l_x=-\infty}^{+\infty}\sum_{l_y=-\infty}^{+\infty} h(l_x)\,g(l_y)\,C_{j,k+2^{j}}(l_x,l_y),$$
$$\omega^{d}_{j+1}(k_x,k_y)=\sum_{l_x=-\infty}^{+\infty}\sum_{l_y=-\infty}^{+\infty} g(l_x)\,g(l_y)\,C_{j,k+2^{j}}(l_x,l_y).$$
The sub-images of horizontal (LH), vertical (HL), and diagonal (HH) detail coefficients are obtained from $\omega^{h}_{j+1}(k_x,k_y)$, $\omega^{v}_{j+1}(k_x,k_y)$, and $\omega^{d}_{j+1}(k_x,k_y)$. The process of RDWT decomposition is illustrated in Figure 3.
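As an illustration, the one-level decomposition above can be approximated with PyWavelets' stationary wavelet transform, which shares the RDWT's shift invariance and undecimated sub-bands. This is a sketch under assumptions: the paper does not specify the wavelet family, so `haar` is a placeholder.

```python
import numpy as np
import pywt  # assumption: PyWavelets' SWT as a stand-in for the RDWT

def rdwt_level1(block, wavelet="haar"):
    """One-level shift-invariant decomposition of an image block.

    Returns the LL, LH, HL, HH sub-bands, each the same size as the
    input, mirroring the redundancy of the RDWT equations above."""
    block = np.asarray(block, dtype=float)
    h, w = block.shape
    block = block[:h - h % 2, :w - w % 2]   # swt2 needs even side lengths
    (ll, (lh, hl, hh)), = pywt.swt2(block, wavelet, level=1)
    return ll, lh, hl, hh
```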
  • Focus Measures
Focus measures are values that quantify the degree of blurring [31] and the high-frequency information of an image. The spliced region's boundaries are usually blurred to avoid detection; consequently, focus measures allow these blurred edges to be quantified. The basic characteristics of focus measures are [32]:
  • Independence of the image content or structures.
  • Large variation with respect to the degree of blurring.
  • Minimal computational complexity.
Two focus measure operators (FMOs) and two conformable focus measures (CFMs) are used in the proposed method, owing to their ability to handle the gradients and the degree of blurring in the forged image [32].
FMOs:
  • Energy of Gradient (EOG) is the sum of the squared horizontal and vertical directional gradients. The edges of the spliced region(s) in a tampered image are more blurred than those in an authentic image, which is more in focus; therefore, EOG is selected to measure the degree of focus. EOG is based on first derivatives and is computed as follows [23]:
    $$EOG=\sum_{x=1}^{M-1}\sum_{y=1}^{N-1}\left[l_x(x,y)^2 + l_y(x,y)^2\right],$$
    where
    $$l_x(x,y)=l(x+1,y)-l(x,y)$$
    and
    $$l_y(x,y)=l(x,y+1)-l(x,y).$$
  • Energy of Laplacian (EOL) analyzes border sharpness and measures the amount of edge content in the analyzed region of an image [33]. The EOL is based on the second derivative and is obtained as follows [32] (a NumPy sketch of both operators follows this list):
    $$EOL=\sum_{x=2}^{M-1}\sum_{y=2}^{N-1}\left(\nabla_x^2\, l(x,y)+\nabla_y^2\, l(x,y)\right)^2,$$
    where
    $$\nabla_x^2\, l(x,y)+\nabla_y^2\, l(x,y)=l(x+1,y)+l(x-1,y)+l(x,y+1)+l(x,y-1)-4\,l(x,y).$$
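A minimal NumPy sketch computes both energies directly from the difference definitions above:

```python
import numpy as np

def energy_of_gradient(l):
    """EOG: sum of squared forward differences in x and y."""
    l = np.asarray(l, dtype=float)
    lx = l[1:, :-1] - l[:-1, :-1]    # l(x+1, y) - l(x, y)
    ly = l[:-1, 1:] - l[:-1, :-1]    # l(x, y+1) - l(x, y)
    return np.sum(lx**2 + ly**2)

def energy_of_laplacian(l):
    """EOL: sum of squared 4-neighbour Laplacian responses."""
    l = np.asarray(l, dtype=float)
    lap = (l[2:, 1:-1] + l[:-2, 1:-1] + l[1:-1, 2:] + l[1:-1, :-2]
           - 4.0 * l[1:-1, 1:-1])
    return np.sum(lap**2)
```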
Conformable Focus Measure (CFM):
The conformable calculus (CC) was originally referred to as conformable fractional calculus [33,34], although it satisfies only some of the agreed-upon properties of fractional calculus (derivatives of non-integer order). In general, a conformable calculus operator takes the following form: let α ∊ [0, 1]. A differential operator $\Delta^{\alpha}$ is conformable if and only if $\Delta^{0}$ is the identity operator and $\Delta^{1}$ is the classical differential operator. Precisely, $\Delta^{\alpha}$ is conformable if and only if, for a differentiable function φ = φ(x),
$$\Delta^{0}\varphi(x)=\varphi(x),\qquad \Delta^{1}\varphi(x)=\varphi'(x).$$
Recently, Anderson and Ulness [34] introduced a new formulation of the CC based on control theory, describing the behavior of a proportional-derivative controller in terms of an error function. The formula is given in the next definition.
Definition 1.
Let α ∊ [0, 1]; then the CC is given in the following form:
$$\Delta^{\alpha}\varphi(x)=\mu_1(\alpha,x)\,\varphi(x)+\mu_0(\alpha,x)\,\varphi'(x),$$
where the functions $\mu_1$ and $\mu_0$ satisfy the limits
$$\lim_{\alpha\to 0}\mu_1(\alpha,x)=1,\quad \lim_{\alpha\to 1}\mu_1(\alpha,x)=0,\quad \lim_{\alpha\to 0}\mu_0(\alpha,x)=0,\quad \lim_{\alpha\to 1}\mu_0(\alpha,x)=1.$$
To satisfy the above definition, we consider
$$\mu_1(\alpha,x)=(1-\alpha)\,x^{\alpha}\quad\text{and}\quad \mu_0(\alpha,x)=\alpha\,x^{1-\alpha},$$
where x represents the pixel coordinate of the image φ, and $\Delta^{\alpha}\varphi(x)$ is called the conformable differential operator of the image φ(x). Thus, $\mu_1$ and $\mu_0$ are the fractional tuning coefficients of the image and its derivative, respectively. The term φ′(x) represents the first derivative of the image, which is assumed to have a discrete form; we need the following result in the sequel. The mechanism of the conformable differential operator is as follows: the original image φ is weighted by the fractional term $\mu_1(\alpha,x)$, the first derivative φ′(x) is weighted by the second fractional term $\mu_0(\alpha,x)$, and these two parts are added to deliver the conformable enhanced image. The fractional behavior in this case comes from the fractional terms above. The difference between the usual fractional calculus and the conformable calculus is that the coefficients in fractional calculus (connection terms involving the gamma function) do not act on the image pixel coordinate x directly, whereas the coefficients in the CC do.
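To make the operator concrete, the following sketch evaluates $\Delta^{\alpha}$ on a 1-D discrete signal, with the first derivative approximated by a forward difference (the discretisation and the unit pixel grid starting at x = 1 are assumptions):

```python
import numpy as np

def conformable_derivative(phi, alpha):
    """Delta^alpha phi(x) = mu1(alpha, x) phi(x) + mu0(alpha, x) phi'(x),
    with mu1 = (1 - alpha) x^alpha and mu0 = alpha x^(1 - alpha)."""
    phi = np.asarray(phi, dtype=float)
    x = np.arange(1, phi.size + 1, dtype=float)  # pixel coordinates from 1
    dphi = np.empty_like(phi)
    dphi[:-1] = phi[1:] - phi[:-1]               # forward difference
    dphi[-1] = dphi[-2]                          # replicate at the boundary
    mu1 = (1 - alpha) * x**alpha
    mu0 = alpha * x**(1 - alpha)
    return mu1 * phi + mu0 * dphi
```

At α = 0, μ1 ≡ 1 and μ0 ≡ 0, so the operator is the identity; at α = 1 it reduces to the classical derivative, matching the limits in Definition 1.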
Proposition 1.
Let α ∊ [0, 1]; then φ(x) has the following second CC derivative:
$$\Delta^{2\alpha}\varphi(x)=\left(\mu_1^{2}+\alpha^{2}(1-\alpha)\right)\varphi(x)+\mu_0\,\mu_1\left(2+\alpha\, x^{-2\alpha}\right)\varphi'(x)+\mu_0^{2}\,\varphi''(x).$$
Proof 1.
The second conformable derivative satisfies
$$\Delta^{2\alpha}\varphi(x)=\Delta^{\alpha}\!\left[\Delta^{\alpha}\varphi(x)\right]=\mu_1\!\left[\mu_1(\alpha,x)\varphi(x)+\mu_0(\alpha,x)\varphi'(x)\right]+\mu_0\!\left[\mu_1(\alpha,x)\varphi(x)+\mu_0(\alpha,x)\varphi'(x)\right]'.$$
Since $\mu_1(\alpha,x)=(1-\alpha)x^{\alpha}$, we have $\mu_1'(\alpha,x)=(1-\alpha)\,x^{2\alpha-2}\,\mu_0(\alpha,x)$.
Also, we have
$$\mu_0(\alpha,x)=\alpha\, x^{1-\alpha},$$
and then
$$\mu_0'(\alpha,x)=\alpha\, x^{-2\alpha}\,\mu_1(\alpha,x).$$
Expanding the derivative by the product rule and substituting these identities yields the stated formula. This completes the proof. □
For multi-focus image synthesis, numerous characteristics, specifically focus measures, have been suggested in [35]. Since most of these focus measures are defined using first- or second-order differential formulas, which do not deeply describe the historical information of the image, we introduce first- and second-order CC operators to enhance the texture details. Furthermore, the order of the CC can be adjusted to suit the image content.
Our aim is to introduce a modification of the gradient (the conformable gradient) and its energy in terms of the CC, and to analyze the image using a modified Laplacian energy (the conformable Laplacian energy).
We extend the CC to a two-dimensional formula to suit the study of image processing. Using the definitions of $\mu_1$ and $\mu_0$, we have
$$\Delta^{\alpha}\varphi(x)=(1-\alpha)\,x^{\alpha}\,\varphi(x)+\alpha\,x^{1-\alpha}\,\varphi'(x).$$
Now, using the partial conformable calculus (PCC), we obtain the x-axis operator
$$\Delta^{\alpha}_{x}\varphi(x,y)=(1-\alpha)\,x^{\alpha}\,\varphi(x,y)+\alpha\,x^{1-\alpha}\,\varphi_x(x,y),$$
and on the y-axis we get
$$\Delta^{\alpha}_{y}\varphi(x,y)=(1-\alpha)\,y^{\alpha}\,\varphi(x,y)+\alpha\,y^{1-\alpha}\,\varphi_y(x,y).$$

3.2.1. Conformable Gradient of 2D-Images (CG)

The gradient of an image is defined as the vector of first derivatives in the x and y directions, respectively. Employing the two partial operators above, we have the following conformable gradient of a 2D image:
$$\Lambda^{\alpha}=\left(\Delta^{\alpha}_{x}\varphi(x,y),\ \Delta^{\alpha}_{y}\varphi(x,y)\right)^{T}.$$
Clearly, when α = 1 we recover the normal gradient. Consequently, the CG of an image φ can be written as follows:
$$\Delta^{\alpha}_{x}\varphi(x,y)=(1-\alpha)\,\varphi(x+1,y)^{\alpha}+\alpha\,\varphi_x(x,y)^{1-\alpha},$$
$$\Delta^{\alpha}_{y}\varphi(x,y)=(1-\alpha)\,\varphi(x,y+1)^{\alpha}+\alpha\,\varphi_y(x,y)^{1-\alpha},$$
where $\Delta^{\alpha}_{x}\varphi(x,y)$ and $\Delta^{\alpha}_{y}\varphi(x,y)$ are the horizontal and vertical gradients at the image pixel.
Image gradients measure image sharpness through the derivatives of the values of neighboring pixels.
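The following sketch discretises the conformable gradient above with forward shifts and forward differences; the absolute values guard against fractional powers of negative intensities, which is an implementation assumption rather than part of the original formulation:

```python
import numpy as np

def conformable_gradient(phi, alpha):
    """Horizontal and vertical conformable gradients of a 2D image."""
    phi = np.asarray(phi, dtype=float)
    shift_x = np.roll(phi, -1, axis=0)    # phi(x+1, y)
    shift_y = np.roll(phi, -1, axis=1)    # phi(x, y+1)
    dx = shift_x - phi                    # phi_x(x, y), forward difference
    dy = shift_y - phi                    # phi_y(x, y)
    gx = (1 - alpha) * np.abs(shift_x)**alpha + alpha * np.abs(dx)**(1 - alpha)
    gy = (1 - alpha) * np.abs(shift_y)**alpha + alpha * np.abs(dy)**(1 - alpha)
    return gx, gy
```

Its energy, analogous to EOG, is then `np.sum(gx**2 + gy**2)`.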

3.2.2. Conformable Laplacian of 2D-Images (CL)

The Laplacian of an image is defined as the vector of second derivatives in the x and y directions, respectively. It analyzes borders, edges, and corners in images.
By employing Proposition 1, we have the following CL of a 2D image:
$$\Upsilon^{\alpha}=(\Lambda^{\alpha})^{2}=\Lambda^{\alpha}(\Lambda^{\alpha})=\Lambda^{2\alpha}=\left(\Delta^{\alpha}_{x}\!\left(\Delta^{\alpha}_{x}\varphi(x,y)\right),\ \Delta^{\alpha}_{y}\!\left(\Delta^{\alpha}_{y}\varphi(x,y)\right)\right)^{T}.$$
Clearly, when α = 1 we recover the normal Laplacian operator. Consequently, we get the CL
$$\nabla_{x}^{2}\varphi(x,y)=\left(\mu_1^{2}+\alpha^{2}(1-\alpha)\right)\varphi(x-1,y)+\mu_0\,\mu_1\left(2+\alpha\, x^{-2\alpha}\right)\varphi_x(x+1,y)+\mu_0^{2}\,\varphi_{xx}(x,y),$$
$$\nabla_{y}^{2}\varphi(x,y)=\left(\mu_1^{2}+\alpha^{2}(1-\alpha)\right)\varphi(x,y-1)+\mu_0\,\mu_1\left(2+\alpha\, y^{-2\alpha}\right)\varphi_y(x,y+1)+\mu_0^{2}\,\varphi_{yy}(x,y),$$
where $\nabla_{x}^{2}\varphi(x,y)$ and $\nabla_{y}^{2}\varphi(x,y)$ are the horizontal and vertical Laplacian operators at the image pixel.
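A sketch of a CL-based focus value follows, applying the second conformable derivative of Proposition 1 along each axis and summing the squared responses in the spirit of EOL (np.gradient stands in for the partial derivatives; this discretisation is an assumption):

```python
import numpy as np

def conformable_laplacian_energy(phi, alpha):
    """Sum of squared conformable-Laplacian responses of a 2D image."""
    phi = np.asarray(phi, dtype=float)
    h, w = phi.shape
    xs = np.arange(1, h + 1, dtype=float)[:, None]   # pixel coordinate grids
    ys = np.arange(1, w + 1, dtype=float)[None, :]

    def second_cc(coord, axis):
        mu1 = (1 - alpha) * coord**alpha
        mu0 = alpha * coord**(1 - alpha)
        d1 = np.gradient(phi, axis=axis)             # first derivative
        d2 = np.gradient(d1, axis=axis)              # second derivative
        return ((mu1**2 + alpha**2 * (1 - alpha)) * phi
                + mu0 * mu1 * (2 + alpha * coord**(-2 * alpha)) * d1
                + mu0**2 * d2)

    cl = second_cc(xs, 0) + second_cc(ys, 1)
    return np.sum(cl**2)
```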
The decomposition of the RDWT into low- and high-frequency sub-bands provides detailed image information. Applying the focus measures to the RDWT coefficients allows the clarity of the gradients and the blurring of the source images to be evaluated, and the resulting feature distributions provide a statistical basis for distinguishing between authentic and spliced images. The advantage of using focus measures for feature extraction is illustrated in terms of the behavior of the mean and standard deviation in Figure 4.
The vertical axis of Figure 4 represents the mean of the extracted features, while the horizontal axis represents their standard deviation. As Figure 4 shows, the features cluster well into the two classes, authentic and tampered.
Based on the proposed method, the feature extraction process is as follows (an end-to-end sketch follows this list):
  • Source images are converted from RGB into the YCbCr color space, and the Y, Cb, and Cr channels are extracted separately.
  • Each color channel is divided into non-overlapping blocks of n × n, where n = 128, 64, 32, 16, 8.
  • One-level RDWT decomposition is applied to each image block to obtain four sub-bands (LL, LH, HL, and HH).
  • The focus measure operators (EOG and EOL) and the conformable focus measures (CG and CL) are calculated and combined individually for LH, which emphasizes vertical edges, HL, which emphasizes horizontal edges, and HH, which emphasizes diagonal edges. The LH, HL, and HH sub-bands are selected so as to use the high-frequency, detail coefficients.
  • The mean and standard deviation are calculated for each RDWT focus measure of the detail coefficients to form the feature vector.
  • The same process is used for extracting both authentic and spliced image features.
  • Authentic features are labeled as (1) and spliced features as (0), then combined to create the feature vector for classification.
  • An SVM is used to classify the images as authentic or spliced.
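Putting the steps together, a hedged end-to-end sketch of the per-channel feature extraction is given below. It reuses the helper sketches from earlier in this section; the block size n = 64 and CC order α = 0.5 are assumed settings, since the paper evaluates several block sizes and does not fix α here:

```python
import numpy as np

def image_feature_vector(channel, n=64, alpha=0.5):
    """24-D feature vector for one colour channel: four measures (EOG, EOL,
    CG energy, CL energy) on three detail sub-bands (LH, HL, HH) per block,
    then the mean and standard deviation of each score over all blocks."""
    h, w = channel.shape
    scores = []                                   # one 12-vector per block
    for i in range(0, h - n + 1, n):
        for j in range(0, w - n + 1, n):
            block = channel[i:i + n, j:j + n].astype(float)
            _, lh, hl, hh = rdwt_level1(block)
            row = []
            for band in (lh, hl, hh):
                gx, gy = conformable_gradient(band, alpha)
                row += [energy_of_gradient(band),
                        energy_of_laplacian(band),
                        np.sum(gx**2 + gy**2),    # conformable gradient energy
                        conformable_laplacian_energy(band, alpha)]
            scores.append(row)
    scores = np.asarray(scores)
    return np.concatenate([scores.mean(axis=0), scores.std(axis=0)])  # 24-D
```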

4. The Experimental Results

The experimental conditions, such as the color space selection, datasets, classifier, and classification performance compared with other methods, are discussed in this section. Publicly available image datasets are used for testing the proposed algorithm [36,37]. The DVMM dataset is not selected because its images are cut and pasted without any processing of the edges, and the dataset itself is old compared with the most current splicing datasets. Table 1 describes the two datasets used, and Figure 5 and Figure 6 display some samples from each.
  • CASIA TIDE V2 is a color image dataset created by the Institute of Automation, Chinese Academy of Sciences. It consists of 7408 authentic and 5122 spliced images of various resolutions. The CASIA TIDE V2 images are more realistic and challenging, as they underwent post-processing; the images range from 240 × 160 to 900 × 600 in resolution [36]. It is publicly available (https://www.kaggle.com/sophatvathana/casia-dataset).
  • IFS-TC phase-1 is an open color image dataset created by the IEEE Information Forensics and Security Technical Committee (IFS-TC). The dataset includes 1006 authentic and 450 spliced images with resolutions from 1024 × 575 to 1024 × 768 [37]. It is publicly available (https://signalprocessingsociety.org/newsletter/2014/01/ieee-ifs-tc-image-forensics-challenge-website-new-submissions).
Classification is performed with an SVM classifier to identify authentic and spliced images. Image splicing detection is a two-class classification procedure and needs an appropriate binary classifier to differentiate spliced from authentic images. Support Vector Machines (SVMs) have been used in many machine vision applications; the SVM algorithm does not depend on the feature dimension of the classification task, and for this reason its solution is globally optimal. In this study, a quadratic-kernel SVM with 10-fold cross-validation, from the SVM toolbox in MATLAB, is applied as the classifier. The SVM classifier was trained separately for each dataset; after training, a testing image set is used to demonstrate the efficiency of the proposed method.
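A scikit-learn equivalent of this classification setup might look as follows; the quadratic kernel maps to a degree-2 polynomial SVC, and the feature scaling step is an added assumption, not mentioned in the paper:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(X, y):
    """10-fold cross-validated accuracy of a quadratic-kernel SVM.

    X: matrix of 24-D feature vectors; y: labels, authentic (1) / spliced (0)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))
    return cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
```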
The image splicing detection performance of the proposed method is measured using the True Positive Rate (TPR), True Negative Rate (TNR), and accuracy. TP counts correctly identified authentic images and TN correctly identified spliced images; false positives (FP) are images that are actually authentic but detected as forged, while false negatives (FN) are images that are forged but detected as authentic. The detection rate is denoted as accuracy. These metrics are defined as follows:
$$TPR=\frac{TP}{TP+FN},\qquad TNR=\frac{TN}{TN+FP},\qquad Accuracy=\frac{TP+TN}{TP+TN+FP+FN}.$$
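These three metrics translate directly into code; for instance:

```python
def detection_metrics(tp, tn, fp, fn):
    """TPR, TNR, and accuracy from the confusion-matrix counts above."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return tpr, tnr, accuracy

# e.g. detection_metrics(tp=98, tn=98, fp=2, fn=2) -> (0.98, 0.98, 0.98)
```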
The proposed algorithm was implemented in MATLAB 2019b on the Windows platform [38]. The two public datasets, CASIA TIDE V2 and IFS-TC, are used to evaluate the proposed method.
Experiments were conducted to evaluate the detection ability of the proposed algorithm; the detailed results are given in Table 2 and Table 3.
The results for the first 20 observations of the testing sample are shown in Figure 7. We chose 20 images, 10 authentic and 10 tampered, and the trained SVM classified each of them as either authentic or tampered. Row 13 of the table shows an incorrect prediction, due to the complexity of that testing image.
Table 2 shows the results obtained on the IFS-TC dataset. Y, the luminance channel, displays the lowest accuracy rate, 95.50%. The Cr channel exhibited a detection accuracy of 98.10%, and Cb, the chrominance-blue channel, achieved the highest detection rate of 98.30% with 24-D features. Although the Cb and Cr channels exhibit better accuracy individually, combining them does not improve the detection accuracy, which is 97.30% for CbCr and 97.60% for all three color channels.
Table 3 shows the results on the CASIA TIDE V2 dataset, where the Cb channel achieved the highest detection accuracy of 98.60%. As with the IFS-TC dataset, Cb produced the highest accuracy, since the chroma channels contain more easily detectable information on edge irregularities [12]. The accuracy for all three color channels combined is 98.10%, which is 0.5% lower than the highest accuracy rate.
A comparative study was performed to further evaluate the accuracy against other proposed methods; Table 4 shows the results. The method of Zhao et al. (2015) [17] achieved 90.10% accuracy. The proposed method achieved better results than the two methods of Zhang et al. (2016) [19] and Vega et al. (2018) [39], whose accuracy rates are 96.69% and 96.19%, respectively. The methods of Zhang et al. (2016) [19] and Wang et al. (2018) [40] use large feature vectors of 16,524-D and 11,664-D, with accuracy rates of 96.69% and 95.94%, respectively.
The comparison with other methods on the CASIA TIDE V2 dataset is given in Table 5. The proposed method reached the highest accuracy, while the other methods used higher feature dimensions and underwent a feature reduction process. Most of the image splicing detection methods in Table 5 used high-dimensional features, which results in high computational costs for processing these features, whereas the proposed method produces a better detection accuracy with a smaller feature vector and without any feature reduction.
The focus measures applied to the RDWT coefficients succeed in revealing the blurred spliced pieces in the tampered image. The experimental results prove the ability of the proposed algorithm to improve the accuracy of image splicing detection to more than 98% by making use of the color information of forged images, as compared with other splicing detection methods.
In summary, the chrominance channels exhibited better detection results than luminance, because the anomalies caused by splicing are more detectable in these channels. The proposed method produces a better detection accuracy rate with a smaller feature vector and without the use of a feature reduction process.

5. Conclusions

Images are accepted as representing the truth, apart from those manipulated for artistic reasons. The credibility and authenticity of digital images have become difficult to ascertain as digital technology advances, and digital images can be easily manipulated even without professional knowledge of image editing software. Detecting manipulated images is increasingly vital in many fields, primarily journalism, social media, and so forth. Thus, the current study improved the classification accuracy with low-dimensional extracted feature vectors by using a combination of conformable focus measures (CFMs) and focus measure operators (FMOs) applied to redundant discrete wavelet transform (RDWT) coefficients. The proposed method demonstrated accuracy rates of 98.3% on the IFS-TC dataset and 98.6% on CASIA TIDE V2 with a low feature dimension of 24-D; the computational complexity was reduced by the smaller feature vector. The method achieved the best results compared with other methods. Future work will focus on modifying the proposed algorithm to detect other types of image forgery.

Author Contributions

Methodology and software, T.S.; Supervision and Software H.A.J.; Formal analysis, R.W.I.; Supervision, N.F.M.N.

Funding

This research was funded by RU Geran-Fakulti Program of University of Malaya, Malaysia. Grant no: GPF015D-2019.

Acknowledgments

We would like to thank the anonymous reviewers for their insightful comments and suggestions.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Uliyan, D.M.; Jalab, H.A.; Wahab, A.W.A.; Sadeghi, S. Image Region Duplication Forgery Detection Based on Angular Radial Partitioning and Harris Key-Points. Symmetry 2016, 8, 62.
  2. Uliyan, D.M.; Jalab, H.A.; Wahab, A.W.A.; Shivakumara, P.; Sadeghi, S. A novel forged blurred region detection system for image forensic applications. Expert Syst. Appl. 2016, 64, 1–10.
  3. Sadeghi, S.; Dadkhah, S.; Jalab, H.A.; Mazzola, G.; Uliyan, D. State of the art in passive digital image forgery detection: Copy-move image forgery. Pattern Anal. Appl. 2018, 21, 291–306.
  4. Sadeghi, S.; Jalab, H.A.; Wong, K.; Uliyan, D.; Dadkhah, S. Keypoint based authentication and localization of copy-move forgery in digital image. Malays. J. Comput. Sci. 2017, 30, 117–133.
  5. Moghaddasi, Z.; Jalab, H.A.; Noor, R.M. Image splicing forgery detection based on low-dimensional singular value decomposition of discrete cosine transform coefficients. Neural Comput. Appl. 2019, 31, 7867–7877.
  6. Ibrahim, R.; Moghaddasi, Z.; Jalab, H.; Noor, R. Fractional differential texture descriptors based on the machado entropy for image splicing detection. Entropy 2015, 17, 4775–4785.
  7. Yahya, A.-N.; Jalab, H.A.; Wahid, A.; Noor, R.M. Robust watermarking algorithm for digital images using discrete wavelet and probabilistic neural network. J. King Saud Univ. Comput. Inf. Sci. 2015, 27, 393–401.
  8. Prakash, C.S.; Maheshkar, S.; Maheshkar, V. Detection of copy-move image forgery with efficient block representation and discrete cosine transform. J. Intell. Fuzzy Syst. 2018, 35, 1–13.
  9. Zheng, L.; Zhang, Y.; Thing, V.L. A survey on image tampering and its detection in real-world photos. J. Vis. Commun. Image Represent. 2019, 58, 380–399.
  10. Lin, X.; Li, J.-H.; Wang, S.-L.; Cheng, F.; Huang, X.-S. Recent advances in passive digital image security forensics: A brief review. Engineering 2018, 4, 29–39.
  11. Ansari, M.D.; Ghrera, S.; Tyagi, V. Pixel-based image forgery detection: A review. IETE J. Educ. 2014, 55, 40–46.
  12. Moghaddasi, Z.; Jalab, H.A.; Md Noor, R.; Aghabozorgi, S. Improving RLRN image splicing detection with the use of PCA and kernel PCA. Sci. World J. 2014, 1–10.
  13. Hakimi, F.; Hariri, M.; GharehBaghi, F. Image splicing forgery detection using local binary pattern and discrete wavelet transform. In Proceedings of the 2015 2nd International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 5–6 November 2015; pp. 1074–1077.
  14. Kaur, M.; Gupta, S. A Passive Blind Approach for Image Splicing Detection Based on DWT and LBP Histograms. In Security in Computing and Communications, SSCC 2016; Mueller, P., Thampi, S.M., Bhuiyan, M.Z.A., Ko, R., Doss, R., Calero, J.M.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 625, pp. 318–327.
  15. Alhussein, M. Image Tampering Detection Based on Local Texture Descriptor and Extreme Learning Machine. In Proceedings of the 2016 UKSim-AMSS 18th International Conference on Computer Modelling and Simulation (UKSim), Cambridge, UK, 6–8 April 2016; pp. 196–199.
  16. Kashyap, A.; Suresh, B.; Agrawal, M.; Gupta, H.; Joshi, S.D. Detection of splicing forgery using wavelet decomposition. In Proceedings of the 2015 International Conference on Computing, Communication & Automation (ICCCA), Greater Noida, India, 15–16 May 2015; pp. 843–848.
  17. Zhao, X.; Wang, S.; Li, S.; Li, J. Passive image-splicing detection by a 2-D noncausal Markov model. IEEE Trans. Circuits Syst. Technol. 2015, 25, 185–199.
  18. Isaac, M.M.; Wilscy, M. Image Forgery Detection Based on Gabor Wavelets and Local Phase Quantization. Procedia Comput. Sci. 2015, 58, 76–83.
  19. Zhang, Q.; Lu, W.; Weng, J. Joint image splicing detection in DCT and Contourlet transform domain. J. Vis. Commun. Image Represent. 2016, 40, 449–458.
  20. Agarwal, S.; Chand, S. Image forgery detection using Markov features in undecimated wavelet transform. In Proceedings of the 2016 Ninth International Conference on Contemporary Computing (IC3), Noida, India, 1–13 August 2016; pp. 1–6.
  21. Park, T.H.; Han, J.G.; Moon, Y.H.; Eom, I.K. Image splicing detection based on inter-scale 2D joint characteristic function moments in wavelet domain. EURASIP J. Image Video Process. 2016, 2016, 30.
  22. Mir, H.; Xu, P.; Van Beek, P. An extensive empirical evaluation of focus measures for digital photography. In Proceedings of the SPIE, San Francisco, CA, USA, 3–5 February 2014; Volume 9023.
  23. Chun, M.G.; Kong, S.G. Focusing in thermal imagery using morphological gradient operator. Pattern Recognit. Lett. 2014, 38, 20–25.
  24. Jalab, H.A.; Subramaniam, T.; Ibrahim, R.; Kahtan, H.; Noor, N. New Texture Descriptor Based on Modified Fractional Entropy for Digital Image Splicing Forgery Detection. Entropy 2019, 21, 371.
  25. Jalab, H.A.; Hasan, A.M.; Moghaddasi, Z.; Wakaf, Z. Image Splicing Detection Using Electromagnetism-Like Based Descriptor. In Proceedings of the SAI Intelligent Systems Conference, London, UK, 21–22 September 2016; pp. 59–66.
  26. Qureshi, M.A.; Deriche, M. A bibliography of pixel-based blind image forgery detection techniques. Signal Process. Image Commun. 2015, 39, 46–74.
  27. Sharma, S.; Kumar, U. Review of Transform Domain Techniques for Image Steganography. Int. J. Sci. Res. 2015, 4, 194–197.
  28. Subhedar, M.S.; Mankar, V.H. Image steganography using redundant discrete wavelet transform and QR factorization. Comput. Electr. Eng. 2016, 54, 406–422.
  29. Gaur, S.; Srivastava, V.K. A hybrid RDWT-DCT and SVD based digital image watermarking scheme using Arnold transform. In Proceedings of the 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, Delhi-NCR, India, 2–3 February 2017; pp. 399–404.
  30. Bajaj, A. Robust and reversible digital image watermarking technique based on RDWT-DCT-SVD. In Proceedings of the 2014 International Conference on Advances in Engineering and Technology Research (ICAETR), Singapore, 29–30 March 2014; pp. 1–5.
  31. Guo, L.; Liu, L.; Sun, H. Focus Measure Based on the Image Moments. In Proceedings of the 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Changchun, Jilin, China, 5–8 August 2018; pp. 1151–1156.
  32. Faundez-Zanuy, M.; Mekyska, J.; Espinosa-Duró, V. On the focusing of thermal images. Pattern Recognit. Lett. 2011, 32, 1548–1557.
  33. Pertuz, S.; Puig, D.; Garcia, M.A. Analysis of focus measure operators for shape-from-focus. Pattern Recognit. 2013, 46, 1415–1432.
  34. Anderson, D.R.; Camrud, E.; Ulness, D.J. On the nature of the conformable derivative and its applications to physics. J. Fract. Calc. Appl. 2018, 10, 92–135.
  35. Huang, W.; Jing, Z. Evaluation of focus measures in multi-focus image fusion. Pattern Recognit. Lett. 2007, 28, 493–500.
  36. Dong, J.; Wang, W.; Tan, T. CASIA image tampering detection evaluation database. In Proceedings of the 2013 IEEE China Summit and International Conference on Signal and Information Processing, Beijing, China, 6–10 July 2013; pp. 422–426.
  37. IEEE IFS-TC Image Forensics Challenge: Image Corpus. Available online: http://ifc.recod.ic.unicamp.br/ (accessed on 7 October 2019).
  38. Image Processing Toolbox; The Mathworks, Inc.: Natick, MA, USA. Available online: http://www.mathworks (accessed on 7 October 2019).
  39. Armas Vega, E.A.; Sandoval Orozco, A.L.; Garcia Villalba, L.J.; Hernandez-Castro, J. Digital Images Authentication Technique Based on DWT, DCT and Local Binary Patterns. Sensors 2018, 18, 3372.
  40. Wang, R.; Lu, W.; Li, J.; Xiang, S.; Zhao, X.; Wang, J. Digital image splicing detection based on Markov features in QDCT and QWT domain. Int. J. Digit. Crime Forensics 2018, 10, 90–107.
  41. Li, C.; Ma, Q.; Xiao, L.; Li, M.; Zhang, A. Image splicing detection based on Markov features in QDCT domain. Neurocomputing 2017, 228, 29–36.
  42. Shen, X.; Shi, Z.; Chen, H. Splicing image forgery detection using textural features based on the grey level co-occurrence matrices. IET Image Process. 2017, 11, 44–53.
Figure 1. Examples of image forgery techniques. (a) Copy-move, (b) Splicing.
Figure 2. Proposed method.
Figure 3. RDWT decomposition at level 1.
Figure 4. The proposed feature distribution for authentic and spliced image detection.
Figure 5. CASIA TIDE V2 image samples (authentic and tampered) [36].
Figure 6. IFS-TC Phase 1 training samples (authentic and tampered) [37].
Figure 7. Sample results using CASIA V2 images. (A) Authentic images; (B) tampered images.
Table 1. Public image datasets.

| Dataset | Authentic | Spliced | Total | Format | Size | Method of Tampering |
|---|---|---|---|---|---|---|
| CASIA V2 [36] | 7408 | 5122 | 12,530 | JPEG, TIFF, BMP | 320 × 240 to 900 × 600 | Splicing with pre-processing and post-processing |
| IFS-TC [37] | 1006 | 450 | 1456 | PNG | 1024 × 575 to 1024 × 768 | Splicing with pre-processing and post-processing |
Table 2. Results obtained on the IFS-TC dataset.

| Channel | Dimensionality | TP (%) | TN (%) | Accuracy (%) |
|---|---|---|---|---|
| Y | 24-D | 95 | 96 | 95.50 |
| Cb | 24-D | 98 | 98 | 98.30 |
| Cr | 24-D | 98 | 98 | 98.10 |
| CbCr | 48-D | 97 | 98 | 97.30 |
| YCbCr | 72-D | 98 | 98 | 97.60 |
Table 3. Results obtained on the CASIA TIDE V2 dataset.

| Channel | Dimensionality | TP (%) | TN (%) | Accuracy (%) |
|---|---|---|---|---|
| Y | 24-D | 96 | 80 | 91.40 |
| Cb | 24-D | 99 | 99 | 98.60 |
| Cr | 24-D | 98 | 96 | 97.70 |
| CbCr | 48-D | 99 | 96 | 97.90 |
| YCbCr | 72-D | 99 | 96 | 98.10 |
Table 4. Comparison between the proposed and other methods on the IFS-TC dataset.

| Method | Feature Reduction | Dimensionality | TP (%) | TN (%) | Accuracy (%) |
|---|---|---|---|---|---|
| DMWT + Markov + BDCT [17] | SVM-RBF | NA | NA | NA | 90.10 |
| DCT + Contourlet transform [19] | LIBSVM + RBF | 16,524 | 98.06 | 95.31 | 96.69 |
| Markov + QDCT + QWT [40] | None | 11,664 | 96.75 | 95.13 | 95.94 |
| DCT + DWT + LBP [39] | SVM-RBF | NA | NA | NA | 96.19 |
| Proposed (Cb) | None | 24-D | 98 | 98 | 98.30 |
| Proposed (YCbCr) | None | 72-D | 98 | 97 | 97.40 |
Table 5. Comparison between the proposed and other methods on the CASIA V2 dataset.

| Method | Feature Reduction | Dimensionality | TP (%) | TN (%) | Accuracy (%) |
|---|---|---|---|---|---|
| LBP + DWT [13] | PCA | NA | NA | NA | 97.21 |
| Markov features in QDCT domain [41] | None | 972 | NA | NA | 92.38 |
| DWT and LBP histogram [14] | None | NA | NA | NA | 94.09 |
| Co-occurrence matrices in wavelet domain [21] | PCA | 100 | NA | NA | 95.40 |
| TF-GLCM (CbCr) [42] | LIBSVM + RBF | 96 | 97.72 | 97.80 | 97.70 |
| Proposed (Cb) | None | 24-D | 99 | 99 | 98.60 |
| Proposed (YCbCr) | None | 72-D | 99 | 96 | 98.10 |
