Improved Image Splicing Forgery Detection by Combination of Conformable Focus Measures and Focus Measure Operators Applied on Obtained Redundant Discrete Wavelet Transform Coefficients

The image is the best information carrier in the current digital era, and also the easiest to manipulate. Image manipulation causes the integrity of this information carrier to become ambiguous. The image splicing technique is commonly used to manipulate images by fusing regions from different images into one image. Over the last decade, it has been confirmed that various structures in science and engineering can be modelled more precisely by fractional calculus using integral or derivative operators, and many fractional-order-based techniques have been used in the image processing field. Recently, a new specific fractional calculus, called conformable calculus, was introduced. Herein, we employ a combination of conformable focus measures (CFMs) and focus measure operators (FMOs) applied to the obtained redundant discrete wavelet transform (RDWT) coefficients to improve image splicing forgery detection. The process of image splicing disorders the content of the tampered image and causes abnormalities in the image features. The spliced region's boundaries are usually blurred to avoid detection. To make use of this blurred information, both CFMs and FMOs are used to calculate the degree of blurring of the tampered region's boundaries for image splicing detection. Two public image datasets, IFS-TC and CASIA TIDE V2, are used to evaluate the proposed method. The proposed method achieved an accuracy rate of 98.30% for the Cb channel on the IFS-TC dataset and 98.60% for the Cb channel on CASIA TIDE V2 with a 24-D feature vector, exhibiting superior results compared with other image splicing detection methods.


Introduction
Information disseminated in the form of images has been increasing in recent years. As the saying goes, "a picture is worth a thousand words": complex information is understood quickly through images. Currently, the truthfulness and integrity of digital images are sometimes very ambiguous and radically challenged; "seeing is believing" no longer applies to current digital images. The advent of image processing technologies allows easy image manipulation that leaves little or no visible evidence of the alterations. When an image is manipulated to conceal its true state, this is known as image tampering or forgery.
Image tampering, far from being innocuous, can lead to adverse consequences in various sectors such as health care, legal evidence in court cases, journalism, and online social media; it virtually impacts every facet of our society. Image forgeries can be achieved through various techniques, such as the copy-move and splicing methods shown in Figure 1 [1]. Copy-move refers to a forgery where a region (or regions) of an image is cloned within the same original image. The intention of copy-move is to duplicate, conceal, or emphasize the pasted region in the tampered image [2,3]. Discovering copy-move tampering is more challenging because there are no significant visible alterations in the forged image texture; various researchers are actively exploring key-point- and block-based approaches [4]. Finally, image splicing tampering refers to a forged digital image created by fusing regions from two or more images together. When the manipulation is expertly executed, the spliced regions of the forged image are visually undetectable [5,6].
Image tampering can be detected using active and passive authentication methods. The active detection method needs the original image: a specific digital key with a digital signature is embedded into the original image and is used to check whether the digital watermark has been changed [7]. The main drawback of active methods is that a specialized camera or an expert is required to embed the watermark or fingerprint in the image. Passive authentication, known as the blind method, requires no knowledge of the original image; instead, anomalous artifacts in the tampered image and the resulting inconsistencies are used to discover tampering. A tampered image contains modified underlying statistics that are not visible to the human eye.

Various passive methods are implemented to counter image tampering and can be broadly categorized as follows:

1. Pixel-based: used to detect irregularities in the image at the pixel level; the simplest approach, and commonly used in copy-move and splicing detection [8].
2. Format-based: applied to images in JPEG format. JPEG compression makes tampering detection difficult; however, some traces of tampering are left behind or distributed across the entire image and can be exploited in the detection process [9]. Techniques such as double JPEG, JPEG blocks, and JPEG quantization are exploited for detection in compressed images.
3. Camera-based: uses the unique signatures left during image acquisition and storage, in terms of the lens and sensor noise [10]. Some of the artifacts studied are color correlation, white balancing, quantization tables, filtering, and JPEG compression.
4. Physical-based: uses inconsistencies in the light source across the image, discovering anomalies in the 2-D and 3-D light directions as well as in the light environment [11].
5. Geometry-based: focuses on objects and their positions; metric measurements and principal-point techniques are used for detection [11].

Forgery detection by humans and machines becomes increasingly difficult as image forgery techniques evolve. Therefore, an efficient forgery detection method for authenticating image originality is a paramount priority.
Image splicing is considered one of the most common operations for creating forged images. Some post-processing operations, such as blurring the spliced boundary, are used to make the forged image look like a real image and to make forgery detection difficult. To address this issue, conformable focus measure (CFM) features and focus measure operators (FMOs) are formulated on the obtained RDWT domain to effectively detect the blurred spliced pieces in the tampered image for splicing detection. This study is organized as follows: Section 2 deals with related works, Section 3 describes the proposed method, the experimental results are discussed in Section 4, and Section 5 presents the conclusions.

Related Works
In recent years, various methods have been proposed by researchers, particularly in the field of image splicing forgery detection.
Moghaddasi et al. [12] applied the Run Length Run Number (RLRN) as a texture analysis scheme for splicing detection. Principal component analysis (PCA) and kernel PCA reduction methods were used to reduce the feature dimensions in order to increase the detection accuracy. However, this method was tested on a single color channel, without testing all three color channels together, and achieved only a moderate accuracy rate. Furthermore, Local Binary Patterns (LBP) [13-15], a local spatial structure that is effective in representing image texture, have been used for detection. LBP may result in very high feature dimensionality and therefore requires the use of dimensionality reduction methods such as PCA or KPCA.
Various methods have been deployed to detect image splicing tampering using the discrete wavelet transform (DWT) of the image. A wavelet decomposition with a block-matching method was proposed by Kashyap et al. [16] to detect splicing forgery; the method achieved 87.75% accuracy. The thresholding operation used in this method causes information loss and may require a larger feature vector to compensate for the degraded detection performance. Nevertheless, it is able to observe smaller discrepancies in tampered images.
Zhao et al. [17] proposed a 2-D non-causal Markov model whereby the image is modelled as a 2-D non-causal signal. The underlying dependencies between the current node and its neighbors are captured in the block DCT and discrete Meyer wavelet transform domains. This method achieved a 93.36% accuracy rate on the DVMM dataset of 128 × 128 BMP images and could detect images in real time in the IFS-TC challenge (0.901 score). The dependencies between adjacent pixels due to image splicing can be captured using Markov transition probabilities. However, the method used up to 14,240 features, resulting in a very high feature dimensionality and thus increased computational cost.
Isaac and Wilscy [18] proposed a splicing detection method based on the Gabor wavelet transform and Local Phase Quantization (LPQ). The Gabor wavelet decomposes the image into different scales and orientations, applied to the chroma component, and LPQ values obtained from the wavelet sub-bands are concatenated to generate a single feature vector. The Gabor wavelet is used to capture texture features, and LPQ provides blur-invariant local texture information. The method obtained very good results on both CASIA v1 (99.80%) and DVMM color (99.50%); however, the authors indicated that their method has a higher processing time.
Zhang et al. [19] proposed a Markov-based approach in the DCT and Contourlet transform domains. Features are extracted from the different frequency bands of each block's DCT coefficients. Two datasets were used to evaluate the method, which achieved accuracies of 94.10% on DVMM and 91.80% on IFS-TC, respectively.
Agarwal and Chand [20] used Markov features in the undecimated wavelet transform (UWT) domain with a 2-D Markovian model to classify spliced images. YCbCr color space images are decomposed to obtain high-frequency sub-bands using UWT, and a 2-D first-order difference (FOD) of the images is then generated. With a 1029-D feature vector, their method obtained accuracies of 93.71% on DVMM and 95.35% on CASIA v1.
Park et al. [21] used the inter-scale co-occurrence matrix in the wavelet domain, exploiting the inter-scale co-occurrence matrix in the DWT domain with five levels of decomposition. The experiment yields a 96.20% accuracy on the Columbia gray dataset.
The existing detection systems exhibit some limitations in terms of feature extraction complexity and high feature dimensionality. High feature dimensionality requires feature reduction methods such as PCA or KPCA, which increase the complexity of the detection process. The use of feature reduction methods is also an indication of poor feature quality, as they select only the most discriminating features to obtain a better accuracy rate.
Therefore, we propose a combination of conformable focus measures (CFMs) and focus measure operators (FMOs) applied to the obtained redundant discrete wavelet transform (RDWT) coefficients to capture the splicing details with a low feature dimension.

Proposed Method
The proposed method is illustrated in Figure 2. Focus measures calculate the degree of sharpness of an edge, which is mainly used to capture splicing details [22,23].
The motive for using the focus measure as the main feature extraction tool is its ability to measure the degree of blurring of the tampered image. Moreover, the shift invariance of the RDWT is able to capture the high-frequency and low-frequency artifacts in the different sub-bands.

Image Pre-Processing
The YCbCr color space is considered for this study, as it is widely used in the digital image processing field. The Y component is the luminance, representing the brightness of the pixel; Cb is the chrominance-blue component and Cr the chrominance-red component. Edge irregularities in tampered images are easily detectable in the YCbCr color scheme [24], because YCbCr separates luminance from chrominance more effectively than RGB; therefore, the source images are converted from the RGB color space into the YCbCr color space.
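The conversion itself is a fixed linear map. The paper does not state which YCbCr variant it uses, so the following per-pixel sketch assumes the common ITU-R BT.601 full-range coefficients:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one full-range RGB pixel to YCbCr using the ITU-R BT.601
    coefficients (an assumption; the paper does not state its variant)."""
    y  =  0.299000 * r + 0.587000 * g + 0.114000 * b
    cb = -0.168736 * r - 0.331264 * g + 0.500000 * b + 128.0
    cr =  0.500000 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```

A pure white pixel maps to maximal luminance with neutral chroma (Cb = Cr = 128), which illustrates why splicing edges that are weak in the luminance plane can stand out in the chroma planes.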

Feature Extraction
The main art of detecting forgeries is to extract discriminating image features that are able to identify the tampering. Feature extraction is an important process in digital image forgery detection [25]; it is crucial and is generally implemented in the spatial domain or a transform domain.
Qureshi and Deriche [26] showed that the spatial domain describes the content of an image directly from the pixel locations, while transform-domain methods use a transform of the image [27]. A transform maps the image from one domain into another, such as the frequency domain, and is also able to reduce the feature dimensionality owing to its fewer coefficients per image block [21].
RDWT is shift-invariant, unlike DWT, which is shift-variant, and maintains the original size of the approximation and detail coefficients at each decomposition level. RDWT is widely used in steganography, as it is more robust and imperceptible than spatial-domain techniques [28-30]. Moreover, the RDWT contains a redundant frame expansion (wavelet coefficients), which can determine the exact variance in the original sample.
RDWT is selected in this study because it has redundancy properties and multiresolution analysis features that retain more information at each sub-band of the original image. Given an image f(x, y) of size M × N, the RDWT decomposition at level j is represented as in [29], where k_x = 1, 2, ..., M and k_y = 1, 2, ..., N; L_{j+1}(k_x, k_y) represents the approximation (LL) coefficients, the low-frequency sub-band of level j + 1, h is a low-pass filter, and g is a high-pass filter.
The sub-images for the horizontal (LH), vertical (HL), and diagonal (HH) detail coefficients are obtained as ω^h_{j+1}(k_x, k_y), ω^v_{j+1}(k_x, k_y), and ω^d_{j+1}(k_x, k_y). The process of RDWT decomposition is illustrated in Figure 3.
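Because no downsampling occurs, all four sub-bands retain the input size, which is what makes the RDWT shift-invariant. A minimal one-level sketch in pure Python, assuming the Haar filter pair and periodic boundary extension (the paper does not name the wavelet it uses):

```python
import math

def rdwt2_level1(img):
    """One-level redundant (undecimated) Haar DWT of a 2-D image given as a
    list of lists. No downsampling, so every sub-band keeps the input size."""
    s = 1.0 / math.sqrt(2.0)
    h = [s, s]    # low-pass Haar filter
    g = [s, -s]   # high-pass Haar filter

    def filt_h(a, f):
        # Filter along rows (horizontal direction), periodic boundary.
        rows, cols = len(a), len(a[0])
        return [[f[0] * a[r][c] + f[1] * a[r][(c + 1) % cols]
                 for c in range(cols)] for r in range(rows)]

    def filt_v(a, f):
        # Filter along columns (vertical direction), periodic boundary.
        rows, cols = len(a), len(a[0])
        return [[f[0] * a[r][c] + f[1] * a[(r + 1) % rows][c]
                 for c in range(cols)] for r in range(rows)]

    lo, hi = filt_h(img, h), filt_h(img, g)
    LL = filt_v(lo, h)   # approximation sub-band
    LH = filt_v(lo, g)   # detail sub-band
    HL = filt_v(hi, h)   # detail sub-band
    HH = filt_v(hi, g)   # diagonal detail sub-band
    return LL, LH, HL, HH
```

On a constant image, the three detail sub-bands are identically zero and only LL carries energy, matching the intuition that the high-frequency sub-bands respond to edges.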

Focus Measures
The focus measures are values that measure the degree of blurring [31] and the high-frequency information of an image. The spliced region's boundaries are usually blurred to avoid detection; consequently, the focus measure apparatus allows these blurred edges to be measured. The basic characteristics of focus measures are [32]:

• Content-independence of any image structure.
• Large variations with respect to the degree of blurring.

• Minimal computational complexity.

Two focus measure operators (FMOs) and two conformable focus measures (CFMs) are used in the proposed method due to their ability to handle the gradients and the degree of blurring in the forged image [32].
FMOs:

I. Energy of Gradient (EOG) is the sum of the squared horizontal and vertical directional gradients. The spliced region's edges in tampered images are more blurred than those in authentic images, which are more focused; therefore, EOG is selected to measure the degree of focus. EOG is based on first derivatives and is computed as follows [23]:

EOG = Σ_x Σ_y ( I_x(x, y)^2 + I_y(x, y)^2 ),

where I_x(x, y) = I(x + 1, y) − I(x, y) and I_y(x, y) = I(x, y + 1) − I(x, y).
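The EOG formula can be sketched directly, taking forward differences as the first derivatives:

```python
def energy_of_gradient(img):
    """EOG of a 2-D block (list of lists): sum of squared horizontal and
    vertical forward differences (discrete first derivatives)."""
    rows, cols = len(img), len(img[0])
    total = 0.0
    for x in range(rows - 1):
        for y in range(cols - 1):
            dx = img[x + 1][y] - img[x][y]   # I_x(x, y)
            dy = img[x][y + 1] - img[x][y]   # I_y(x, y)
            total += dx * dx + dy * dy
    return total
```

A blurred block yields smaller neighbouring-pixel differences and hence a lower EOG, which is precisely the cue used to flag blurred splicing boundaries.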
II. Energy of Laplacian (EOL) analyzes the border sharpness and measures the amount of edges present in the analyzed region [33] of an image. The EOL is based on the second derivative and is obtained as follows [32]:

EOL = Σ_x Σ_y ( I_xx(x, y) + I_yy(x, y) )^2,

where I_xx and I_yy are the second differences in the x and y directions.

Conformable Focus Measure (CFM): The conformable calculus (CC) was originally referred to as a conformable fractional calculus [33,34], although it satisfies only some of the agreed-upon properties of fractional calculus (derivatives of non-integer power). A conformable calculus operator, in general, takes the following form: let α ∈ [0, 1]. A differential operator ∆^α is conformable if and only if ∆^0 is the identity operator and ∆^1 is the classical differential operator; precisely, ∆^α is conformable if and only if, for a differentiable function ϕ = ϕ(x), ∆^0 ϕ(x) = ϕ(x) and ∆^1 ϕ(x) = ϕ′(x). Recently, Anderson and Ulness [34] introduced a new formula of CC, based on control theory, to describe the behavior of a proportional-derivative controller corresponding to an error function:

∆^α ϕ(x) = µ1(α, x) ϕ(x) + µ0(α, x) ϕ′(x),
where the functions µ1 and µ0 satisfy the limits

lim_{α→0} µ1(α, x) = 1, lim_{α→1} µ1(α, x) = 0,
lim_{α→0} µ0(α, x) = 0, lim_{α→1} µ0(α, x) = 1,

so that ∆^0 is the identity and ∆^1 is the classical derivative. In our setting, x represents the pixel of the image ϕ, and ∆^α ϕ(x) is called the conformable differential operator for the image ϕ(x). Therefore, µ1 and µ0 are the fractional tuning coefficients of the image and of its derivative, respectively. We need this result in the next subsection. Moreover, the term ϕ′(x) represents the first derivative of the image, which is assumed to have a discrete form. The mechanism of the conformable differential operator is as follows: the original image ϕ is enhanced by weighting it with the fractional term µ1(α, x), and the first derivative of the image, ϕ′(x), is enhanced by weighting it with the second fractional term µ0(α, x); these two enhancements are then added to deliver the conformable enhancement of the image. The fractional calculus in this case comes from the fractional terms in (8). The difference between the usual fractional calculus and conformable calculus is that the coefficients of fractional calculus (connection terms built from the gamma function) do not act directly on the image pixel x, whereas the coefficients of CC do.
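The operator ∆^α can be sketched on a discrete 1-D signal. The tuning coefficients µ1 = 1 − α and µ0 = α below are hypothetical, chosen only because they satisfy the conformability limits above (α = 0 gives the identity, α = 1 the plain difference); they are not the paper's own choice of µ1, µ0:

```python
def conformable_diff(signal, alpha, mu1, mu0):
    """Apply ∆^α to a discrete 1-D signal:
    ∆^α φ(x) = µ1(α, x)·φ(x) + µ0(α, x)·φ'(x),
    with φ'(x) taken as the forward difference φ(x+1) − φ(x)."""
    out = []
    for x in range(len(signal) - 1):
        d = signal[x + 1] - signal[x]   # discrete first derivative
        out.append(mu1(alpha, x) * signal[x] + mu0(alpha, x) * d)
    return out

# Hypothetical tuning coefficients satisfying the conformability limits.
mu1 = lambda a, x: 1.0 - a
mu0 = lambda a, x: a
```

With α = 0 the output reproduces the signal itself, and with α = 1 it reproduces the forward differences; intermediate α blends the image content with its derivative, which is the enhancement mechanism described above.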
For multi-focus image synthesis, numerous characteristics, specifically focus measures, have been suggested in [35]. Since most of these focus measures are computed using first- or second-order differential equations or formulas, which do not describe the historical information of the image in depth, we present first- and second-order CC to enhance the texture details. Furthermore, the order of the CC can be changed according to the image content.
Our aim is to introduce a modification of the gradient (the conformable gradient) and its energy in terms of the CC in Equation (15). Moreover, we analyze the image using the modified Laplacian energy (conformable Laplacian energy) based on Equation (15).
We extend Equation (15) into a two-dimensional formula to suit the study of image processing. By using the definitions of µ1 and µ0 and the partial conformable calculus (PCC), we obtain the operator on the x-axis,

∆^α_x ϕ(x, y) = µ1(α, x) ϕ(x, y) + µ0(α, x) ∂ϕ(x, y)/∂x,

and on the y-axis,

∆^α_y ϕ(x, y) = µ1(α, y) ϕ(x, y) + µ0(α, y) ∂ϕ(x, y)/∂y.

Conformable Gradient of 2D-Images (CG)
The gradient of an image is defined as the vector of first derivatives in the x and y directions, respectively. By employing definitions (11) and (12), we have the following conformable gradient (CG) of 2-D images:

∇^α ϕ(x, y) = ( ∆^α_x ϕ(x, y), ∆^α_y ϕ(x, y) ),

where ∆^α_x ϕ(x, y) and ∆^α_y ϕ(x, y) are the horizontal and vertical gradients of the image pixel. It is clear that when α = 1, we recover the normal gradient. The image gradients are used to measure image sharpness via the derivatives of the values of neighbouring pixels.
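Applying the per-axis operator in both directions gives a sketch of the conformable gradient. The coefficients µ1 = 1 − α, µ0 = α are again hypothetical placeholders (not the paper's choice), so α = 1 collapses to the ordinary forward-difference gradient:

```python
def conformable_gradient(img, alpha, mu1, mu0):
    """Conformable gradient (∆^α_x φ, ∆^α_y φ) of a 2-D image,
    using forward differences for the partial derivatives."""
    rows, cols = len(img), len(img[0])
    gx = [[0.0] * cols for _ in range(rows)]
    gy = [[0.0] * cols for _ in range(rows)]
    for x in range(rows - 1):
        for y in range(cols - 1):
            dx = img[x + 1][y] - img[x][y]   # ∂φ/∂x
            dy = img[x][y + 1] - img[x][y]   # ∂φ/∂y
            gx[x][y] = mu1(alpha, x) * img[x][y] + mu0(alpha, x) * dx
            gy[x][y] = mu1(alpha, y) * img[x][y] + mu0(alpha, y) * dy
    return gx, gy

# Hypothetical coefficients satisfying the conformability limits.
mu1 = lambda a, t: 1.0 - a
mu0 = lambda a, t: a
```

Summing the squares of the two returned maps then gives a conformable analogue of the EOG focus measure.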

Conformable Laplacian of 2D-Images (CL)
The Laplacian of an image is defined as the vector of second derivatives in the x and y directions, respectively. It analyzes the borders, edges, and corners in images.
By employing Proposition 1, we have the following CL of 2-D images. It is clear that when α = 1, we recover the normal Laplacian operator. Consequently, we obtain the CL in terms of ∇²_x ϕ(x, y) and ∇²_y ϕ(x, y), the horizontal and vertical Laplacian operators of the image pixel.

The decomposition of RDWT into low- and high-frequency sub-bands provides details of the image information. Using focus measures on the obtained RDWT coefficients allows the clarity of the gradients and the blurring of the source images to be evaluated. The feature distributions of the focus measures provide a statistical basis for distinguishing between authentic and spliced images. The advantage of using the focus measures for feature extraction is illustrated in terms of the mean and standard deviation behavior shown in Figure 4.

Based on the proposed method, the feature extraction process is as follows:
1. Source images are converted from RGB into the YCbCr color space, and the Y, Cb, and Cr channels are extracted separately.
2. Each color channel is divided into non-overlapping blocks of n × n, where n = 128, 64, 32, 16, 8.
3. One-level RDWT decomposition is applied to each partitioned image block to obtain four sub-bands (LL, LH, HL, and HH).
4. The focus measure operators based on (5) and (6), and the conformable focus measures (14) and (15), are calculated and combined individually for LH (which emphasizes vertical edges), HL (which emphasizes horizontal edges), and HH (which emphasizes diagonal edges). The high-frequency sub-bands LH, HL, and HH, i.e., the detail coefficients, are selected.
5. The mean and standard deviation are calculated for each RDWT focus measure detail coefficient to form the feature vector.
6. The same process is used for extracting authentic and spliced image features.
7. Authentic features are labeled as (1) and spliced features as (0), then combined to create the feature vector for classification.
8. An SVM is used to classify the images as authentic or spliced.
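Steps 2 and 5 above reduce to two small routines: partitioning a channel into non-overlapping n × n blocks, and collapsing the per-block focus measure values of a sub-band into a (mean, standard deviation) feature pair. A minimal sketch; `block_partition` and `mean_std` are illustrative helper names, not from the paper:

```python
import math

def block_partition(img, n):
    """Split a 2-D image (list of lists) into non-overlapping n x n blocks,
    discarding any ragged border."""
    rows, cols = len(img), len(img[0])
    return [[row[c:c + n] for row in img[r:r + n]]
            for r in range(0, rows - n + 1, n)
            for c in range(0, cols - n + 1, n)]

def mean_std(values):
    """Mean and population standard deviation of per-block focus measures;
    this (mean, std) pair is what enters the final feature vector."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, math.sqrt(var)
```

Running each block through a focus measure and feeding the results to `mean_std` yields two features per measure and sub-band, which is how the compact 24-D vector is assembled.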

The Experimental Results
The experimental conditions, such as the color space selection, datasets, classifier, and classification performance compared with other methods, are discussed. Publicly available image datasets are used for testing the proposed algorithm [36,37]. The DVMM dataset is not selected because its images are cut and pasted without any processing of the edges, and the dataset itself is old compared with the most current splicing datasets. Table 1 shows the descriptions of the two datasets, and Figures 5 and 6 display some samples from each dataset.

• CASIA TIDE V2 is a color image dataset created by the Institute of Automation, Chinese Academy of Sciences. It consists of 7408 authentic and 5122 spliced images at different resolutions. The CASIA TIDE V2 images are more realistic and challenging, as they underwent post-processing; the images range from 240 × 160 to 900 × 600 pixels [36]. It is a publicly available dataset (https://www.kaggle.com/sophatvathana/casia-dataset).
• IFS-TC phase-1 is an open color image dataset created by the IEEE Information Forensics and Security Technical Committee (IFS-TC). The dataset includes 1006 authentic and 450 spliced images with resolutions from 1024 × 575 to 1024 × 768 [37].
It is a publicly available dataset (https://signalprocessingsociety.org/newsletter/2014/01/ieee-ifs-tc-imageforensics-challenge-website-new-submissions).

The classification is performed with an SVM classifier to identify authentic and spliced images. Image splicing detection is a two-class classification procedure and requires an appropriate binary classifier to differentiate between spliced and authentic images. Support Vector Machines (SVMs) have been applied to many machine vision tasks; the SVM algorithm does not depend on the feature dimension of the classification task, and for this reason the resulting solution is globally optimal. In this study, a quadratic-kernel SVM with 10-fold cross-validation is applied as the classifier, using the complete SVM toolbox in MATLAB. The SVM classifier was trained separately for each dataset; after the training process, a testing image set is used to demonstrate the efficiency of the proposed method.
Table 2 shows the results obtained from the IFS-TC dataset. Y, the luminance channel, displays the lowest accuracy rate, 95.50%. The Cr channel exhibits a detection accuracy of 98.10%, and Cb, the chrominance-blue channel, achieves the highest detection rate of 98.30% with 24-D features. Although the Cb and Cr channels perform better individually, combining them does not improve the detection accuracy rate, which is 97.30%, with 97.60% for all three color channels combined.
Table 3 shows the results on the CASIA TIDE V2 dataset. The Cb channel achieved the highest detection accuracy rate, 98.6%. As with the IFS-TC dataset, Cb produced the highest accuracy, since the chroma channels contain more information on edge irregularities, which are easily detectable [12]. The accuracy rate for all three color channels combined is 98.10%, which is 0.5% lower than the highest rate.
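The Y, Cb, and Cr channels discussed here come from a YCbCr conversion of the RGB image. A minimal per-pixel sketch using the ITU-R BT.601 coefficients (the paper does not state which conversion constants it uses, so these are an assumption):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (ITU-R BT.601 coefficients)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b   # luminance
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b   # chrominance-blue
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b   # chrominance-red
    return y, cb, cr

# A neutral gray pixel carries no chrominance: Cb = Cr = 128.
print(rgb_to_ycbcr(128, 128, 128))
```

Feature extraction is then run on each channel separately, which is how the per-channel accuracy rates in Tables 2 and 3 are obtained.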
A comparison study was conducted to further evaluate the accuracy rate against other proposed methods. Table 4 shows the results of comparing the proposed method with other methods on the IFS-TC dataset. The method of Zhao et al. (2015) [17] achieved 90.10% accuracy. The proposed method achieved better results than the methods of Zhang et al. (2016) [19] and Vega et al. (2018) [39], whose accuracy rates were 96.69% and 96.19%, respectively. The methods of Zhang et al. (2016) [19] and Wang et al. (2018) [40] use larger feature vectors of 16524-D and 11664-D, with accuracy rates of 96.69% and 95.94%, respectively. The comparison results on the CASIA TIDE V2 dataset are stated in Table 5. The proposed method reached the highest accuracy, while the other methods relied on higher feature dimensions and underwent a feature reduction process. Most of the image splicing detection methods shown in Table 5 used high-dimensional features, which incur a high computational cost, whereas the proposed method produces a better detection accuracy rate with a smaller feature vector and without a feature reduction process. The focus measures extracted from the RDWT coefficients succeed in revealing the blurred splicing pieces in a tampered image. The experimental results prove the ability of the proposed algorithm to improve the accuracy of image splicing detection to more than 98% by making use of the color information of forged images, compared with other splicing detection methods.
In summary, the chrominance channels exhibit better detection results than the luminance channel because the anomalies caused by splicing are more detectable in these channels. The proposed method produces a better detection accuracy rate with a smaller feature vector and without a feature reduction process.

Conclusions
Images are expected to represent the truth, apart from those manipulated for artistic reasons. The credibility and authenticity of digital images have become difficult to ascertain as digital technology advances, and digital images can be easily manipulated even without professional knowledge of image editing software. Detecting manipulated images is therefore increasingly vital in many fields, primarily journalism and social media. The current study improved classification accuracy with low-dimensional extracted feature vectors by using the combination of conformable focus measures (CFMs) and focus measure operators (FMOs) applied to redundant discrete wavelet transform (RDWT) coefficients. The proposed method demonstrated accuracy rates of 98.3% on the IFS-TC dataset and 98.6% on CASIA TIDE V2 with a small feature dimension of 24-D; the smaller feature vector also reduced the computational complexity. The method achieved the best results compared with other methods. Future work will focus on modifying the proposed algorithm to detect other types of image forgeries.
∇h(i, j) and ∇v(i, j) denote the horizontal and vertical Laplacian operators at image pixel (i, j). The decomposition of the RDWT into low- and high-frequency sub-bands provides detailed image information, and applying the focus measures to the RDWT coefficients allows evaluation of the gradient clarity and blurring of the source images. The feature distributions of the focus measures provide a statistical basis for distinguishing between authentic and spliced images. The advantage of using the focus measures for feature extraction is illustrated in terms of the behavior of their mean and standard deviation, as shown in Figure 4.
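The pipeline just described can be sketched end to end: a one-level redundant (undecimated) Haar decomposition, a focus measure on each sub-band, and the (mean, standard deviation) summary plotted in Figure 4. This is only an illustrative sketch; the Haar filters, the variance focus measure, and the normalization are stand-ins for the paper's CFM/FMO definitions, not the authors' exact operators:

```python
import numpy as np

def rdwt_haar_level1(img):
    """One-level redundant (undecimated) Haar transform: no downsampling,
    so all four sub-bands keep the size of the input image."""
    right = np.roll(img, -1, axis=1)          # horizontal neighbor
    low_h, high_h = (img + right) / 2, (img - right) / 2
    def vertical(band):
        down = np.roll(band, -1, axis=0)      # vertical neighbor
        return (band + down) / 2, (band - down) / 2
    ll, lh = vertical(low_h)
    hl, hh = vertical(high_h)
    return ll, lh, hl, hh

def focus_features(img):
    """Variance-of-coefficients focus measure on each sub-band; blurred
    (spliced-boundary) content flattens the high-frequency bands."""
    return [float(np.var(b)) for b in rdwt_haar_level1(img)]

def summarize(features):
    # The (mean, standard deviation) pair used for the scatter in Figure 4.
    return float(np.mean(features)), float(np.std(features))

img = np.random.default_rng(1).random((64, 64))
print(summarize(focus_features(img)))
```

On a constant (fully blurred) region the three high-frequency sub-bands vanish and the focus features drop to zero, which is the behavior that separates the authentic and tampered clusters.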

Figure 4 .
Figure 4. The proposed feature distribution for authentic and spliced image detection.

The vertical axis of Figure 4 represents the mean of the extracted features, while the horizontal axis represents their standard deviation. It can be seen from Figure 4 that the features are well clustered into two classes, authentic and tampered. Based on the proposed method, the feature extraction process is as follows:

• CASIA TIDE V2 is a color image dataset created by the Institute of Automation, Chinese Academy of Sciences. It consists of 7408 authentic and 5122 spliced images of various resolutions, ranging from 240 × 160 to 900 × 600 [36]. The CASIA TIDE V2 images are more realistic and challenging, as they underwent post-processing. The dataset is publicly available (https://www.kaggle.com/sophatvathana/casia-dataset).
• IFS-TC Phase-1 is an open color image dataset created by the IEEE Information Forensics and Security Technical Committee (IFS-TC). It includes 1006 authentic and 450 spliced images with resolutions from 1024 × 575 to 1024 × 768 [37]. It is a publicly available dataset.

Table 2 .
Results obtained on the IFS-TC dataset.

Table 3 .
Results obtained on the CASIA TIDE V2 dataset.


Table 4 .
Comparison results between the proposed method and other methods on the IFS-TC dataset.

Table 5 .
Comparison results between the proposed method and other methods on the CASIA TIDE V2 dataset.