Article

Synergy of Internet of Things and Software Engineering Approach for Enhanced Copy–Move Image Forgery Detection Model

Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Building No. 3963, Al-Kharj 16273, Saudi Arabia
Electronics 2025, 14(4), 692; https://doi.org/10.3390/electronics14040692
Submission received: 10 January 2025 / Revised: 6 February 2025 / Accepted: 8 February 2025 / Published: 11 February 2025
(This article belongs to the Special Issue Signal and Image Processing Applications in Artificial Intelligence)

Abstract
The rapid growth of digital imagery and the rising demands on security measures have increased the need for innovative image analysis methods. Image analysis identifies, classifies, and monitors people, events, or objects in images or videos, and it significantly improves security by identifying and preventing attacks on security applications that rely on digital images. It is crucial in diverse security fields, comprising video analysis, anomaly detection, biometrics, object recognition, surveillance, and forensic investigations. By integrating advanced software engineering models with IoT capabilities, the proposed technique advances copy–move image forgery detection: IoT devices collect and transmit real-world data, enabling software solutions to detect and analyze image tampering with high accuracy and efficiency. This combination enhances detection capabilities and provides scalable, adaptive solutions against increasingly sophisticated forgery techniques. Copy–move forgery detection (CMFD) has become one of the most active research domains in blind image forensics. Most existing approaches depend on block-based methods, key-point methods, or a combination of the two. More recently, deep convolutional neural network (DCNN) techniques have been applied to image hashing, image forensics, image retrieval, and image classification, where they have outperformed conventional methods. To accomplish robust CMFD, this study develops a fusion of soft computing with a deep learning-based CMFD approach (FSCDL-CMFDA) to secure digital images. The FSCDL-CMFDA approach aims to integrate the benefits of metaheuristics with the DL model for an enhanced CMFD process. In the FSCDL-CMFDA method, histogram equalization is initially performed to improve the image quality. Furthermore, the Siamese convolutional neural network (SCNN) model is used to learn complex features from the pre-processed images, with its hyperparameters chosen by the golden jackal optimization (GJO) model. For the CMFD process, the FSCDL-CMFDA technique employs the regularized extreme learning machine (RELM) classifier. Finally, the detection performance of the RELM method is improved by the beluga whale optimization (BWO) technique. To demonstrate the enhanced performance of the FSCDL-CMFDA method, a comprehensive outcome analysis is conducted using the MNIST and CIFAR datasets. The experimental validation of the FSCDL-CMFDA method portrayed a superior accuracy value of 98.12% over existing models.

1. Introduction

Incorporating software engineering and IoT in copy–move image forgery recognition leverages the strengths of both fields to improve efficiency and accuracy. Software engineering models, such as advanced system design and machine learning (ML), are combined with IoT-enabled devices and sensors that deliver real-world data on image changes and usage. This interaction enables robust recognition methods that can identify and analyze tampered images with higher precision, adapting to novel forgery methods while using distributed computing resources to process data more efficiently. Recently, digital image forensics has received increased attention across different forensic fields for validating and verifying the content of images, such as medical and journalistic images, in forensic investigations and legal proceedings [1]. At the same time, image processing tools that make editing simple have become rapidly more accessible. In the information technology (IT) era, videos and images are the most prevalent information-sharing channels [2]. Images are frequently utilized for diverse purposes, such as evidence in numerous legal actions, military and security applications, media, social media, and newspapers. Because of the great importance of images and the ease with which users transmit them over the internet, validating the authenticity of images has become a significant security task [3]. In most scenarios, images are transferred between users either without any alteration or after deliberate modification [4]. Forged images are commonly produced using diverse techniques such as retouching, colorization, copy–move forgery (CMF), image splicing, re-sampling, morphing, and watermarking [5].
Generally, image splicing is described as a cut–paste method that forms an image by replacing a part of one image with content from one or more other images [6]. Alternatively, CMF alters an image by copying patches from one region and pasting them into other regions of the same image. This type of forgery yields an image with a homogeneous context and without heterogeneous cues such as differences in color, distortion, noise, shadows, or edges, making the tampering extremely challenging to recognize. Moreover, in image forgery localization, the attention is on designing a model rather than relying only on specific handcrafted features [7]. The model automatically identifies the relevant components using advanced deep neural networks (DNNs). Examples of such networks comprise convolutional neural networks (CNNs), deep autoencoders (DAEs), and deep belief networks (DBNs). Among these three neural networks (NNs), CNNs are the most popular in vision applications [8]. Such methods exploit local neighborhood pooling functions and trainable filters applied to raw input images, thus building hierarchies of features from concrete to abstract [9]. Because computer vision (CV) and image analysis with CNNs are highly advanced, CNNs typically offer exceptional performance in image forgery identification through combinations of simple linear and non-linear filtering functions (for example, convolution and rectification) [10].
This study develops a fusion of soft computing with a deep learning-based CMFD approach (FSCDL-CMFDA) to secure digital images. The FSCDL-CMFDA approach aims to integrate the benefits of metaheuristics with the DL model for an enhanced CMFD process. In the FSCDL-CMFDA method, histogram equalization is initially performed to improve the image quality. Furthermore, the Siamese convolutional neural network (SCNN) model is used to learn complex features from pre-processed images. Its hyperparameters are chosen by the golden jackal optimization (GJO) model. For the CMFD process, the FSCDL-CMFDA technique employs the regularized extreme learning machine (RELM) classifier. Finally, the detection performance of the RELM method is improved by the beluga whale optimization (BWO) technique. To demonstrate the enhanced performance of the FSCDL-CMFDA method, a comprehensive outcome analysis is conducted using the MNIST and CIFAR datasets. The key contributions of the FSCDL-CMFDA method are listed below.
  • Histogram equalization enhances the contrast of input images, improving their quality for enhanced feature extraction. This pre-processing step ensures that relevant details in the images are more prominent, assisting the detection of subtle manipulations. Improving image visibility assists in attaining more accurate results in subsequent forgery detection tasks.
  • The SCNN technique learns complex features from pre-processed images, enabling the model to distinguish between authentic and forged regions. This methodology allows for the effectual comparison of image pairs, improving the detection of subtle forgeries. SCNN enhances the model’s accuracy in identifying image manipulations by focusing on feature similarity.
  • The GJO model is utilized to fine-tune the SCNN’s hyperparameters, optimizing its performance and improving its learning efficiency. By adjusting the hyperparameters, the model attains improved accuracy in detecting image forgeries. This optimization ensures that the SCNN operates at its full potential, resulting in enhanced detection results.
  • The RELM classifier is applied to the CMFD process, efficiently identifying forged areas in images. Its regularization improves the robustness and generalization of the model, resulting in more accurate forgery detection. The method can reliably distinguish between authentic and manipulated image regions by implementing the RELM classifier.
  • The proposed FSCDL-CMFDA model uniquely incorporates the SCNN with GJO methods for hyperparameter tuning and BWO for further refinement. This integration provides a robust CMFD solution, improving efficiency and accuracy compared to conventional methods. The novelty is the hybrid use of advanced optimization techniques to fine-tune the model, significantly improving its performance in detecting complex forgeries.

2. Literature Survey

In [11], a DL-CMFD model was presented that categorizes images as forged or authentic, employing a CNN and contrast-limited adaptive histogram equalization (CLAHE). The CLAHE method reveals features that are otherwise difficult to identify in CMF images. The efficiency of the developed architecture was evaluated on benchmark databases. In [12], a fast and proficient CMF detection (CMFD) technique was built and tested. The developed algorithm implements a deep CNN (DCNN) that incorporates ResNet-50. ResNet employs skip connections, which alleviate the exploding and vanishing gradient problems that arise in standard CNN training. In [13], a DL-CNN method was designed employing multi-scale input with numerous stages of convolutional layers. These layers are separated into two blocks: an encoder and a decoder. In the encoder block, feature maps extracted from convolutional layers at several stages are integrated and down-sampled. Likewise, extracted feature maps are integrated and up-sampled in the decoder block. A sigmoid activation function was employed on the final feature map to categorize images as forged or non-forged. In [14], a transfer learning (TL)-based technique to recognize CMF employing a DCNN was developed. The DCNN was initialized with parameters of the pre-trained GoogLeNet and applied to recognize several forgeries. Additionally, an innovative optimizer, the Fractional Leader Harris Hawks Optimizer (FLHHO), was devised to adjust the biases and weights of the DCNN. The authors of [15] designed an Advanced Fake Image-Feature Network (AFIFN) based on DL techniques. Discrete Cosine Transformation (DCT) and YCrCb-based image pre-processing were exploited, and the model was trained to distinguish the features of real and forged images.
Similarly, a classification layer was included in the outlined AFIFN to identify whether the input images were fake. In [16], a DL-based technique was introduced to categorize CMF images. A hybrid DL approach called VI-NET was introduced to classify the fake images, combining two DL methods, InceptionV3 and VGG16. Further, the outputs of the two models are integrated and connected to two additional convolutional layers. In [17], a hybrid optimizer method and algorithm for CMFD was projected. Moreover, an innovative DL stacked sparse denoising autoencoder (SSDAE) was developed to classify the images as fake or authentic. Furthermore, the bias and weight parameters of the SSDAE technique were tuned by employing the Grasshopper Optimization Algorithm (GOA) and Spotted Hyena Optimizer (SHO) methods. Krishnaraj et al. [18] provided an automatic DL-based fusion method to identify and locate CMF (DLFM-CMDFC). This method integrates DenseNets and generative adversarial networks (GANs). An ELM model was employed, with its bias and weight values adjusted using the artificial fish swarm algorithm (AFSA). The network outputs are provided as inputs to the fusion component. Finally, forged images are recognized from the variance between the target and input fields.
Eltoukhy et al. [19] proposed a CNN-based method to detect copy–move video forgeries by analyzing individual frames. The methodology extracts features from pre-trained CNN models to classify each frame as genuine or forged, attaining higher accuracy and efficiency than existing techniques. Timothy and Santra [20] presented a DL methodology incorporating GNNs, CNNs, VGG16, MobileNet, and ResNet50 to detect image forgeries, focusing on compression discrepancies for accurate binary classification. Prakash, Jaiprakash, and Kumar [21] explored techniques for image authentication, focusing on ML and DL methods to enhance image forensics. Ghai, Kumar, and Gupta [22] developed a DL approach for detecting image forgeries, focusing on copy–move and splicing techniques and utilizing a pre-trained CNN to evaluate performance on public benchmark datasets. Suresh, Krishna, and Chaitanya [23] improved image forgery detection by employing advanced pre-processing, feature extraction, classification, and encryption techniques. Singh et al. [24] introduced a DL-based method employing MobileNet, ResNet, and Mask R-CNN to detect and localize copy–move and splicing forgeries in images by extracting features and quantifying manipulated areas. Kaur et al. [25] explored chaotic watermarking for tamper detection, focusing on securing digital content from unauthorized modifications and copyright violations through real-time watermark embedding and chaotic systems. Arivazhagan, Russel, and Saranyaa [26] proposed two DL models to detect copy–move forgery in images, utilizing a custom architecture and TL, demonstrating superior accuracy and robustness against distortions, with applications in digital forensics and copyright protection. Fatoni et al. [27] developed a DL method to distinguish real and fake images for improved accuracy. Kuznetsov et al. [28] introduced a novel CNN architecture for enhanced copy–move forgery detection, exploiting dataset heterogeneity and improving detection in complex image manipulations, which is significant for legal, journalistic, and scientific integrity.
The existing studies on copy–move forgery detection portray significant advancements, but diverse limitations remain. Many models face difficulty detecting complex forgeries in highly manipulated images, particularly in diverse datasets with varying image characteristics. Existing techniques often require substantial computational resources and are limited in their capability to handle large-scale datasets. Furthermore, while DL methods have been utilized, the integration of dataset heterogeneity and improvements in detection accuracy in real-world applications still need additional exploration. There is a requirement for more effectual architectures that can balance high accuracy with computational feasibility in real-time applications.

3. The Proposed Method

This study introduces an FSCDL-CMFDA approach to securing digital images. The method aims to integrate the benefits of metaheuristics with the DL model for an enhanced CMFD process. Figure 1 represents the entire procedure of the presented FSCDL-CMFDA method.

3.1. Image Pre-Processing

Initially, the FSCDL-CMFDA applies histogram equalization to improve the image quality [29]. This step is chosen to enhance the contrast of images, which is significant for improving feature extraction in forgery detection tasks. By redistributing the intensity levels of an image, histogram equalization enhances the visibility of details in both light and dark regions, making it easier to detect subtle manipulations. The technique is computationally efficient and does not require complex model training or extensive data, making it appropriate for pre-processing in real-time applications. Compared to other contrast enhancement methods such as adaptive histogram equalization or contrast-limited adaptive histogram equalization, histogram equalization provides a simpler, faster, and more effective enhancement of overall image quality. Its simplicity and efficiency in improving global contrast without over-complicating the pre-processing pipeline give it an edge for forgery detection tasks where clarity is essential.
Image contrast refers to the difference in brightness between the lightest and darkest regions of an image; histogram equalization increases the contrast of the input images.
Assume the input image im is an m × n matrix of integer pixel intensities in the interval [0, L−1]. Its normalized histogram is

$h_n = \dfrac{Np_n}{T}$ (1)

In Equation (1), n = 0, 1, …, L−1, where Np_n is the number of pixels with intensity n and T is the total number of pixels. L denotes the number of possible intensity levels (typically 256), and h_n describes the normalized histogram of the image for each possible intensity. Based on the previous definition, histogram equalization is given as follows:

$eq(m, n) = \left\lfloor (L-1) \sum_{k=0}^{im(m,n)} h_k \right\rfloor$ (2)

In Equation (2), ⌊·⌋ rounds down to the nearest integer. The technique is based on the random variables X and Y, and the intensities of h and eq lie in the interval [0, L−1]. Y is expressed as follows:

$Y = d(X) = (L-1) \int_{0}^{X} h_i(x)\, dx$ (3)

In Equation (3), pixel intensity and the number of pixels with that intensity are represented on the x-axis and y-axis, respectively. d denotes the cumulative distribution function of X scaled by (L−1) and is invertible and differentiable. h_i represents the histogram of im. Different ranges of pixel intensity characterize different colors in the histogram.
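The equalization mapping above can be sketched in a few lines of NumPy. This is a minimal illustration of Equations (1) and (2), not the paper's implementation; the function name and the toy image are invented for the example:

```python
import numpy as np

def histogram_equalization(im, L=256):
    """Equalize an integer grayscale image with intensities in [0, L-1].

    h_n = Np_n / T is the normalized histogram (Eq. 1); each pixel is then
    remapped via eq(m, n) = floor((L - 1) * sum_{k <= im(m,n)} h_k) (Eq. 2).
    """
    im = np.asarray(im, dtype=np.int64)
    counts = np.bincount(im.ravel(), minlength=L)  # Np_n per intensity level
    h = counts / im.size                           # normalized histogram h_n
    cdf = np.cumsum(h)                             # cumulative distribution
    mapping = np.floor((L - 1) * cdf).astype(np.int64)
    return mapping[im]                             # remap every pixel

# A tiny low-contrast image: intensities cluster in [100, 103]
img = np.array([[100, 100, 101, 101],
                [102, 102, 103, 103]])
eq = histogram_equalization(img)   # intensities spread across [63, 255]
```

After equalization, the four clustered intensity levels are stretched across the full dynamic range, which is exactly the contrast boost the pre-processing step relies on.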

3.2. Feature Extraction

The SCNN architecture learns complex features from pre-processed images [30]. This model was chosen due to its excellence in learning and comparing complex features from pre-processed images, specifically for tasks like forgery detection. SCNNs are designed to work with pairs of images, making them ideal for detecting subtle differences between original and forged regions. This architecture allows the model to learn the similarity or dissimilarity between two input images directly, which is significant for detecting manipulations. Compared to conventional CNNs, SCNNs have the advantage of better performance in tasks where the relationship between two images (e.g., forged and original) needs to be learned explicitly. Furthermore, SCNNs need less labeled data for training related to other architectures, making them more efficient and effective in detecting complex image forgeries. Their capability to generalize across diverse image types and forgeries provides a significant advantage over simpler models. Figure 2 demonstrates the infrastructure of SCNN.
The architecture of the SCNN consists of two identical CNN subnetworks and a comparison module. The input to the method is a pair of images. The images are passed through the subnetworks, producing two outputs for the comparison unit, which computes the degree of similarity between the subnetwork outputs. Given the degree of similarity, the loss function computes the error between the label and the similarity score. The outcome of the loss function, characterized by gradient descent, is propagated identically through both subnetworks, and the weights are updated by passing the full vector of system parameters through backpropagation (BP).
Generally, the weights work as follows in this model: the parameter vector is updated with the stochastic gradient method using the sum of the gradients delivered by the two subnetworks via the feedforward pass. The weights are shared between the networks during BP so that the two feature extractors behave identically, thus attaining the best performance of the model. The traditional way to build a probabilistic model from a CNN is to assign probabilities to the possible configurations of the modeled variables. Energy-based models (EBMs) instead assign an unnormalized energy to these configurations. An EBM takes a pair of data entries, contrasts them, and measures their degree of resemblance, or energy. In this way, elements can be classified from the EBM output according to the energy value.
The output is a binary value, where Y = 0 denotes a pair of images of the same class (a genuine pair) and Y = 1 a pair from different classes (an impostor pair). The weights shared between the SCNN subnetworks map the input pairs and extract the principal features to be compared in the final stage of the method. The Euclidean distance between the outputs of the two subnetworks is given as follows:

$E_w(X_1, X_2) = \left\| G_w(X_1) - G_w(X_2) \right\|_2$ (4)

Because the output layer of the neural networks uses a sigmoid activation function, the resulting values lie between zero and one. If the energy value is less than 0.5, the pair is classified as genuine; otherwise, it is classified as an impostor pair:

$V_{genuine}: \; E_w(X_1, X_2) < 0.5$ (5)

$V_{impostor}: \; E_w(X_1, X_2) > 0.5$ (6)
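The energy comparison and the genuine/impostor decision rule can be illustrated with a small sketch. The embedding vectors stand in for hypothetical subnetwork outputs, and the names `pair_energy` and `verdict` are invented for this example:

```python
import numpy as np

def pair_energy(g1, g2):
    """Euclidean distance E_w between the two subnet embeddings (Eq. 4)."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    return float(np.sqrt(np.sum((g1 - g2) ** 2)))

def verdict(g1, g2, threshold=0.5):
    """Genuine pair if energy < 0.5, impostor pair otherwise (Eqs. 5-6)."""
    return "genuine" if pair_energy(g1, g2) < threshold else "impostor"

# Sigmoid-range embeddings (values in [0, 1]) from two hypothetical subnets
close = verdict([0.30, 0.40, 0.50], [0.32, 0.41, 0.52])   # near-identical pair
far   = verdict([0.10, 0.90, 0.20], [0.85, 0.15, 0.80])   # very different pair
```

The near-identical embeddings fall below the 0.5 energy threshold and are labeled genuine, while the dissimilar pair exceeds it and is labeled an impostor.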
Next, the GJO model is used to choose the hyperparameters; the GJO background is presented in this section [31]. Like other swarm procedures, the approach begins by creating a population of agents using Equation (7):

$X_i = L + r \times (U - L), \quad i = 1, 2, \ldots, N$ (7)

Then, the fitness value of each X_i is evaluated; the fittest jackal is called the male jackal X_M, and the second fittest is the female jackal X_FM, which tracks the prey location. Thereafter, the prey matrix X_P is created using the formulation below as an initial value:

$X_P = \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1d} \\ X_{21} & X_{22} & \cdots & X_{2d} \\ \vdots & \vdots & \ddots & \vdots \\ X_{N1} & X_{N2} & \cdots & X_{Nd} \end{bmatrix}$ (8)

3.2.1. Steps of Exploration

Hunting is led by the male jackal (X_M), with the female (X_FM) following. This conveys the exploration procedure of GJO for discovering the prey, formulated as follows:

$X_1(t) = X_M(t) - E \times \left| X_M(t) - rl \times X_P(t) \right|$ (9)

$X_2(t) = X_{FM}(t) - E \times \left| X_{FM}(t) - rl \times X_P(t) \right|$ (10)

where X_1 and X_2 are the updated positions derived from X_M and X_FM, respectively. Furthermore, E signifies the escape energy of the prey, calculated as follows:

$E = E_0 \times E_1$ (11)

where E_0 and E_1 denote the initial energy and the decreasing energy, respectively, computed as follows:

$E_0 = 2 \times r - 1$ (12)

$E_1 = c_1 \times \left(1 - \dfrac{t}{T}\right)$ (13)

where T refers to the total number of iterations. Moreover, rl indicates a random value drawn from the Levy distribution:

$rl = 0.05 \times Levy$ (14)

Lastly, the jackals' positions are updated using Equation (15):

$X(t+1) = \dfrac{X_1(t) + X_2(t)}{2}$ (15)

3.2.2. Steps of Exploitation

In this phase, the exploitation behavior of GJO is presented. This procedure models the hunting of the male and female jackals as follows:

$X_1(t) = X_M(t) - E \times \left| rl \times X_M(t) - X_P(t) \right|$ (16)

$X_2(t) = X_{FM}(t) - E \times \left| rl \times X_{FM}(t) - X_P(t) \right|$ (17)

3.2.3. Switching Between Exploration and Exploitation

The prey's escape energy E governs the transition between the search phases. The agents perform exploration if |E| > 1; otherwise, they switch to exploitation.
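The exploration/exploitation loop described above can be rendered as a compact sketch. This is an illustrative reimplementation of the GJO update rules on a toy sphere objective, not the authors' code; the function names, the best-so-far memory, and the default parameters are assumptions:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy(dim, beta=1.5):
    """Mantegna-style Levy step, used for rl = 0.05 * Levy (Eq. 14)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def gjo(f, lb, ub, n=20, dim=2, T=200, c1=1.5):
    X = lb + rng.random((n, dim)) * (ub - lb)          # Eq. (7): initial population
    best_x, best_f = None, np.inf                      # best-so-far memory (sketch aid)
    for t in range(T):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)
        xm, xfm = X[order[0]].copy(), X[order[1]].copy()   # male / female jackals
        if fit[order[0]] < best_f:
            best_x, best_f = xm.copy(), float(fit[order[0]])
        E1 = c1 * (1 - t / T)                          # Eq. (13): decaying energy
        for i in range(n):
            E = (2 * rng.random() - 1) * E1            # Eqs. (11)-(12)
            rl = 0.05 * levy(dim)                      # Eq. (14)
            if abs(E) > 1:                             # exploration: Eqs. (9)-(10)
                x1 = xm - E * np.abs(xm - rl * X[i])
                x2 = xfm - E * np.abs(xfm - rl * X[i])
            else:                                      # exploitation: Eqs. (16)-(17)
                x1 = xm - E * np.abs(rl * xm - X[i])
                x2 = xfm - E * np.abs(rl * xfm - X[i])
            X[i] = np.clip((x1 + x2) / 2, lb, ub)      # Eq. (15): averaged update
    return best_x, best_f

# Minimize the 2-D sphere function as a stand-in for a hyperparameter loss
best_x, best_f = gjo(lambda x: float(np.sum(x ** 2)), lb=-5.0, ub=5.0)
```

In the actual pipeline, the objective `f` would score an SCNN configuration (e.g., its validation loss) rather than the sphere function used here for illustration.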

3.3. RELM-Based Classification Model

For the CMFD process, the FSCDL-CMFDA technique uses the RELM classifier [32]. This classifier was chosen for its capability to handle massive datasets efficiently while maintaining high accuracy. RELM integrates the advantages of ELMs, such as fast training speed and simplicity, with regularization techniques that help prevent overfitting and improve generalization. Unlike conventional classifiers that require complex optimization, RELM can achieve competitive results with fewer training parameters. This makes it particularly well-suited for tasks like CMFD, where the classifier needs to process complex features while ensuring robust performance. Additionally, the capability of the RELM model to adapt to diverse types of image manipulations gives it an edge over other classifiers, such as support vector machines (SVMs) or decision trees (DTs), which may face difficulty with high-dimensional data and complex feature spaces. RELM gives a significant advantage in forgery detection tasks by presenting a good balance between accuracy, efficiency, and computational cost.
ELM is an improved single-hidden-layer (HL) feedforward neural network (FFNN). Given M training samples

$(x_j, t_j), \; j = 1, \ldots, M, \quad x_j = [x_{j1}, x_{j2}, \ldots, x_{jm}]^T, \quad t_j = [t_{j1}, t_{j2}, \ldots, t_{jn}]^T$

x_j and t_j signify the input and output of the j-th sample, respectively. Let L be the number of HL nodes and g(w, b, x) the activation function. The ELM network contains m input, L hidden, and n output neurons. The formulation is as follows:

$t_j = \sum_{i=1}^{L} \beta_i \, g(w_i \cdot x_j + b_i), \quad j = 1, \ldots, M$ (18)

$\beta_i = [\beta_{i1}, \beta_{i2}, \ldots, \beta_{in}]^T$ and $w_i = [w_{i1}, w_{i2}, \ldots, w_{im}]^T$ represent the weight vectors relating the i-th hidden neuron to the output and input layers, respectively; and b_i specifies the bias of the i-th hidden node, which is generated at random. The objective of the ELM is given below:

$T = H\beta$ (19)

where

$H = \begin{bmatrix} g(\omega_1, b_1, x_1) & g(\omega_2, b_2, x_1) & \cdots & g(\omega_L, b_L, x_1) \\ g(\omega_1, b_1, x_2) & g(\omega_2, b_2, x_2) & \cdots & g(\omega_L, b_L, x_2) \\ \vdots & \vdots & \ddots & \vdots \\ g(\omega_1, b_1, x_M) & g(\omega_2, b_2, x_M) & \cdots & g(\omega_L, b_L, x_M) \end{bmatrix}$ (20)

Substituting Equation (20) into Equation (19), β is obtained by least squares and singular value decomposition:

$\beta = (H^T H)^{-1} H^T T$ (21)

Introducing a regularization coefficient increases the stability of the ELM and yields the RELM:

$\beta = (H^T H + C I)^{-1} H^T T$ (22)

In Equation (22), C specifies the regularization factor, and I represents the identity matrix.
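The closed-form RELM fit can be sketched in NumPy as follows. This is an illustrative version on a toy two-class problem, not the paper's implementation; the regularization form H^T H + C·I follows Equation (22), though some RELM formulations use I/C instead, and the hidden-layer size, activation, and C value here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def relm_train(X, T, L=50, C=0.1):
    """Random hidden layer, then beta = (H^T H + C I)^(-1) H^T T (Eq. 22)."""
    m = X.shape[1]
    W = rng.normal(size=(m, L))          # random input weights w_i
    b = rng.normal(size=L)               # random biases b_i
    H = np.tanh(X @ W + b)               # hidden-layer output matrix (Eq. 20)
    beta = np.linalg.solve(H.T @ H + C * np.eye(L), H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: label depends on which side of a line a point lies
X = rng.normal(size=(200, 2))
T = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
W, b, beta = relm_train(X, T)
pred = (relm_predict(X, W, b, beta) > 0.5).astype(float)
acc = float((pred == T).mean())          # training accuracy on the toy data
```

Because the hidden weights are random and only β is solved for, training reduces to one regularized linear solve, which is the source of the speed advantage discussed above.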

3.4. Hyperparameter Tuning Using BWO Model

Finally, the detection performance of the RELM technique is improved by the BWO method [33]. This model was chosen due to its exceptional ability to navigate complex, high-dimensional search spaces and optimize multiple parameters simultaneously. Unlike conventional optimization algorithms, BWO is inspired by the intelligent behaviors of beluga whales, enabling it to balance exploration and exploitation effectively. This is valuable in DL models, where finding optimal hyperparameters is significant for optimizing performance. The robustness of the BWO model against local optima ensures that the tuned hyperparameters result in improved convergence and higher accuracy. Moreover, it needs fewer iterations to find the optimal solution than conventional methods such as grid or random search, making it computationally efficient. BWO outperforms many other optimization techniques by giving faster convergence and better accuracy in tasks such as forgery detection, where hyperparameter fine-tuning significantly impacts model performance.
BWO is a recent optimizer inspired by the swimming, preying, and whale-fall behaviors of beluga whales (BWs) in the ocean. BWs are social animals that share information and communicate while searching for food. First, the fitness function (FF) matrix is given as follows:

$F(X) = \begin{bmatrix} f(x_{1,1}, x_{1,2}, \ldots, x_{1,d}) \\ f(x_{2,1}, x_{2,2}, \ldots, x_{2,d}) \\ \vdots \\ f(x_{n,1}, x_{n,2}, \ldots, x_{n,d}) \end{bmatrix}$ (23)

The pair-swimming motion of the BWs represents the exploration stage, defined mathematically as follows:

$X_{i,j}(t+1) = \begin{cases} X_{i,p_j}(t) + \left(X_{r,p_1}(t) - X_{i,p_j}(t)\right)(1 + r_1)\sin(2\pi r_2), & j \text{ even} \\ X_{i,p_j}(t) + \left(X_{r,p_1}(t) - X_{i,p_j}(t)\right)(1 + r_1)\cos(2\pi r_2), & j \text{ odd} \end{cases}$ (24)

where $X_{r,p_1}(t)$ is a whale chosen at random from the current population. The exploitation stage of BWO models the preying and hunting behavior of BWs. They update their positions relative to the best solution using the Levy flight strategy as follows:

$X_i(t+1) = r_3 X_{best}(t) - r_4 X_i(t) + C_1 \cdot LF \cdot \left(X_r(t) - X_i(t)\right)$ (25)

$C_1 = 2 r_4 \left(1 - \dfrac{t}{t_{max}}\right)$ (26)

where $X_{best}(t)$ denotes the best position, $X_r(t)$ a randomly chosen BW, and LF the Lévy flight function, expressed as follows:

$LF = 0.05 \times \dfrac{u \times \sigma}{|v|^{1/\beta}}$ (27)

$\sigma = \left( \dfrac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\, \beta \, 2^{(\beta-1)/2}} \right)^{1/\beta}$ (28)

u and v are random variables, and β is a constant. The balance factor B_f enables the transition between the exploration and exploitation stages; it is computed as follows:

$B_f = B_0 \left(1 - \dfrac{t}{2\, t_{max}}\right)$ (29)

where B_0 is a random value in the interval (0, 1). If B_f > 0.5, the BWs update their positions in the exploration stage; otherwise, they update their positions in the exploitation mode. The final phase of BWO models the whale fall: when killer whales attack, deceased BWs sink to the deep seabed. It is defined as follows:

$X_i(t+1) = r_5 X_i(t) - r_6 X_r(t) + r_7 X_{step}$ (30)

$X_{step} = (U_b - L_b) \exp\!\left(-\dfrac{C_2 \, t}{t_{max}}\right)$ (31)

$C_2 = 2 W_f \times n$ (32)

$W_f = 0.1 - 0.05 \dfrac{t}{t_{max}}$ (33)

Here, r_5, r_6, and r_7 are random variables between zero and one. The BWO pseudocode is shown in Algorithm 1.
Algorithm 1: BWO pseudocode
Start BWO
     Input: the BWO parameters N, t_max, U_b, and L_b.
     Output: the best position (X_best) of the population and the corresponding FF value.
     While t < t_max
          Compute the C_1, B_f, and W_f values using Equations (26), (29), and (33).
          If B_f > 0.5
               Update the positions of the BWs using Equation (24).
          Otherwise
               Update the positions of the BWs using Equation (25).
          End
          Evaluate the FFs for the new positions and select the optimum solution.
          If B_f ≤ W_f
               Update the positions of the BWs using Equation (30).
          End
          Evaluate the FFs for the new positions and select the optimum solution.
     End while
End BWO
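Algorithm 1 can be rendered as a compact, runnable sketch. This mirrors the pseudocode on a toy sphere objective rather than the RELM error rate; the greedy acceptance of improved positions is an added simplification for stability, and the parameter defaults and names are assumptions:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def bwo(f, lb, ub, n=30, dim=2, t_max=150, beta=1.5):
    """Minimal BWO sketch: sin/cos pair swimming (exploration), Levy-guided
    preying (exploitation), and whale fall, as in Algorithm 1."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    X = lb + rng.random((n, dim)) * (ub - lb)
    fit = np.array([f(x) for x in X])
    for t in range(t_max):
        best = X[np.argmin(fit)].copy()
        wf = 0.1 - 0.05 * t / t_max                      # Eq. (33)
        for i in range(n):
            Bf = rng.random() * (1 - t / (2 * t_max))    # Eq. (29), B0 ~ U(0,1)
            xr = X[rng.integers(n)].copy()               # randomly chosen whale X_r
            if Bf > 0.5:                                 # exploration: Eq. (24)
                r1, r2 = rng.random(2)
                j = rng.integers(dim)
                new = X[i].copy()
                new[j] += (xr[j] - X[i][j]) * (1 + r1) * math.sin(2 * math.pi * r2)
            else:                                        # exploitation: Eqs. (25)-(27)
                r3, r4 = rng.random(2)
                C1 = 2 * r4 * (1 - t / t_max)            # Eq. (26)
                LF = 0.05 * rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)
                new = r3 * best - r4 * X[i] + C1 * LF * (xr - X[i])
            if Bf <= wf:                                 # whale fall: Eqs. (30)-(32)
                r5, r6, r7 = rng.random(3)
                step = (ub - lb) * math.exp(-2 * wf * n * t / t_max)  # Eq. (31)
                new = r5 * X[i] - r6 * xr + r7 * step
            new = np.clip(new, lb, ub)
            fn = f(new)
            if fn < fit[i]:                              # greedy acceptance (sketch-only)
                X[i], fit[i] = new, fn
    return X[np.argmin(fit)], float(fit.min())

# Minimize the 2-D sphere function as a stand-in for the RELM error rate
best_x, best_f = bwo(lambda x: float(np.sum(x ** 2)), lb=-10.0, ub=10.0)
```

In the full pipeline, `f` would evaluate an RELM configuration via the error-rate fitness of Equation (34) instead of the sphere function.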
The BWO method uses an FF to increase classifier effectiveness. The FF is a positive number signifying the quality of the candidate solutions; here, minimizing the classifier's error rate is taken as the FF, as specified in Equation (34):

$fitness(x_i) = ClassifierErrorRate(x_i) = \dfrac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100$ (34)
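As a worked instance of Equation (34), two mislabeled samples out of eight give an error rate of 25% (the labels here are invented for illustration):

```python
def fitness(y_true, y_pred):
    """Eq. (34): fitness = classifier error rate in percent."""
    misclassified = sum(1 for a, b in zip(y_true, y_pred) if a != b)
    return misclassified / len(y_true) * 100.0

# 2 of 8 predictions are wrong -> error rate of 25%
err = fitness([0, 1, 1, 0, 1, 0, 0, 1],
              [0, 1, 0, 0, 1, 1, 0, 1])  # -> 25.0
```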

4. Result Analysis and Discussion

The outcome validation of the FSCDL-CMFDA method is examined by utilizing two open-access datasets: MNIST [34] and CIFAR [35]. The suggested technique is simulated using the Python 3.6.5 tool on a PC with an i5-8600k, 250 GB SSD, GeForce 1050Ti 4 GB, 16 GB RAM, and 1 TB HDD. The parameter settings are provided: learning rate: 0.01, activation: ReLU, epoch count: 50, dropout: 0.5, and batch size: 5.
The overall results obtained by the FSCDL-CMFDA technique on the MNIST dataset are represented in Table 1 and Figure 3. These results specify that the FSCDL-CMFDA technique attains improved performance under all runs. With run-1, the FSCDL-CMFDA technique provides a prec_n of 95.50%, reca_l of 96.27%, accu_y of 97.36%, and F_score of 97.19%. Also, with run-4, the FSCDL-CMFDA method presents a prec_n of 97.05%, reca_l of 98.41%, accu_y of 96.95%, and F_score of 97.70%. Likewise, with run-6, the FSCDL-CMFDA method attains a prec_n of 95.97%, reca_l of 97.57%, accu_y of 97.49%, and F_score of 97.12%. With run-8, the FSCDL-CMFDA method gains a prec_n of 97.88%, reca_l of 96.69%, accu_y of 96.24%, and F_score of 97.29%. Lastly, with run-10, the FSCDL-CMFDA technique offers a prec_n of 98.19%, reca_l of 96.30%, accu_y of 97.73%, and F_score of 96.14%.
The accuracy curves for training (TRA) and testing (TES) shown in Figure 4 for the FSCDL-CMFDA method on the MNIST dataset provide valuable insight into its behavior across epochs. Both TRA and TES accuracy improve steadily as the number of epochs increases, indicating the model's ability to learn recognizable patterns in the TRA and TES data. The upward trend in TES accuracy, in particular, highlights the model's capacity to generalize beyond the TRA dataset and produce accurate predictions on unseen data.
Figure 5 illustrates the TRA and TES loss values of the FSCDL-CMFDA technique on the MNIST dataset across epochs. The TRA loss decreases continuously as the model refines its weights to reduce classification error. The loss curves show how well the model fits the TRA data, underscoring its ability to capture patterns successfully. Notably, the parameters of the FSCDL-CMFDA model are continually adjusted to diminish the difference between its predictions and the actual TRA labels.
Similarly, the ROC curves produced by the FSCDL-CMFDA approach on the MNIST dataset are shown in Figure 6, demonstrating its ability to discriminate between classes. These curves offer valuable insight into how the trade-off between the false positive rate (FPR) and true positive rate (TPR) changes across classification thresholds. The results underscore the model's accurate classification of the different class labels, emphasizing its effectiveness on a range of classification challenges.
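The FPR/TPR trade-off underlying an ROC curve can be computed directly by sweeping a decision threshold over the detector's scores; a small sketch with hypothetical forgery scores (not the paper's actual detector outputs):

```python
def roc_points(scores, labels, thresholds):
    """(FPR, TPR) pairs for a binary detector at the given decision thresholds."""
    pos = sum(labels)           # number of positive (forged) samples
    neg = len(labels) - pos     # number of negative (authentic) samples
    points = []
    for t in thresholds:
        preds = [1 if s >= t else 0 for s in scores]
        tp = sum(p and l for p, l in zip(preds, labels))
        fp = sum(p and not l for p, l in zip(preds, labels))
        points.append((fp / neg, tp / pos))
    return points

# Hypothetical forgery scores and ground-truth labels
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(roc_points(scores, labels, [0.5, 0.2]))
```

Lowering the threshold raises both TPR and FPR, which is exactly the trade-off the ROC curve visualizes.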
Table 2 and Figure 7 show the comprehensive results of the FSCDL-CMFDA method on the CIFAR-10 dataset. These results indicate the FSCDL-CMFDA approach's improved performance across all runs. In run-1, the FSCDL-CMFDA approach obtains a precision of 98.17%, recall of 98.08%, accuracy of 96.59%, and F-score of 98.74%.
Moreover, in run-4, it gains a precision of 98.00%, recall of 98.03%, accuracy of 96.63%, and F-score of 96.62%. In run-6, it offers a precision of 99.14%, recall of 97.33%, accuracy of 97.82%, and F-score of 98.72%. Similarly, in run-8, it obtains a precision of 99.17%, recall of 97.737%, accuracy of 99.02%, and F-score of 98.19%. In run-10, it gives a precision of 97.94%, recall of 98.14%, accuracy of 97.36%, and F-score of 97.21%.
The accuracy curves for TRA and TES shown in Figure 8 for the FSCDL-CMFDA method on the CIFAR-10 dataset offer valuable insight into its efficacy across epochs. Both TRA and TES accuracy improve consistently as the epochs increase, indicating the model's ability to learn and recognize patterns in the TRA and TES data. The upward trend in TES accuracy emphasizes the model's adaptability beyond the TRA dataset and its ability to generate precise predictions on unseen data, highlighting strong generalization.
Figure 9 displays the TRA and TES loss values of the FSCDL-CMFDA approach on the CIFAR-10 dataset across epochs. The TRA loss lessens persistently as the model refines its weights to reduce classification error. The loss curves indicate how closely the model fits the TRA data, highlighting its ability to capture patterns effectively. The continual adjustment of the FSCDL-CMFDA model's parameters is aimed at reducing the difference between its predictions and the actual TRA labels.
Likewise, the ROC curves produced by the FSCDL-CMFDA approach on the CIFAR-10 dataset are illustrated in Figure 10, demonstrating its ability to distinguish between classes. These curves offer valuable insight into how the trade-off between FPR and TPR varies across classification thresholds. The results underscore the model's precise classification of the diverse class labels, confirming its efficacy on various classification challenges.
The comparative results of the FSCDL-CMFDA technique are presented in Table 3 and Figure 11 [36]. The outcomes reveal the weaker detection results of the CMFD, IFD-AOS-FPM, CMFD-BMIF, BB-KB-ICMFD, and CMFD-GAN-CNN models, which yield the lowest classification performance. The DLFM-CMDFC and RSADTL-CMFD models perform slightly better than these earlier models. However, the FSCDL-CMFDA technique is more effective than all existing ones, with a maximum precision of 98.12%, recall of 97.88%, and F-score of 98.14%. These results confirm the effective CMFD performance of the FSCDL-CMFDA technique in securing digital images.
Table 4 and Figure 12 present the computational time (CT) analysis of the FSCDL-CMFDA technique against existing models. The compared methods require the following CTs: CMFD 11.10 s, IFD-AOS-FPM 9.39 s, CMFD-BMIF 9.31 s, BB-KB-ICMFD 7.56 s, CMFD-GAN-CNN 6.72 s, DLFM-CMDFC 8.59 s, and RSADTL-CMFD 14.48 s, whereas the proposed FSCDL-CMFDA method achieves the lowest CT of 5.11 s. Thus, the FSCDL-CMFDA method offers the fastest processing time, improving its suitability for real-time applications where quick forgery detection is crucial.
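CT figures such as those in Table 4 are typically obtained by wall-clock timing of the detector over the test set; a minimal sketch, in which the detector and inputs are stand-ins rather than the FSCDL-CMFDA pipeline:

```python
import time

def measure_ct(detector, images, repeats=3):
    """Average wall-clock computational time (s) of a detector over repeated runs."""
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic, high-resolution timer
        for img in images:
            detector(img)
        durations.append(time.perf_counter() - start)
    return sum(durations) / repeats  # average over repeats to smooth jitter

# Hypothetical stand-in detector and inputs
dummy_detector = lambda img: sum(img) % 2
images = [[1, 2, 3]] * 1000
print(f"{measure_ct(dummy_detector, images):.4f} s")
```

Averaging over several repeats (and warming up caches beforehand) gives a more stable CT than a single run.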

5. Conclusions

In this article, an FSCDL-CMFDA technique to secure digital images is developed. The FSCDL-CMFDA technique aims to integrate the benefits of metaheuristics with the DL model for an enhanced CMFD process. In the FSCDL-CMFDA technique, histogram equalization is initially executed to improve the quality of images. Furthermore, the SCNN model learns complex features from pre-processed images, and the GJO model chooses its hyperparameters. For the CMFD process, the FSCDL-CMFDA technique employs the RELM classifier. Finally, the detection performance of the RELM technique was enriched by the BWO model. To demonstrate the enhanced performance of the FSCDL-CMFDA method, a comprehensive outcome analysis was conducted using the MNIST and CIFAR-10 datasets. The experimental validation of the FSCDL-CMFDA method demonstrated a superior precision of 98.12% over existing models. The limitations of the FSCDL-CMFDA method include its reliance on a limited set of datasets, which may affect the generalizability of the outcomes to more diverse or real-world scenarios. Moreover, the model's performance might degrade when faced with high levels of noise or extremely complex forgeries, which were not fully addressed in this study. Furthermore, the computational complexity and time required for training could hinder real-time applications. Future work should concentrate on expanding dataset diversity to include more challenging and varied image types. Additionally, exploring more advanced pre-processing techniques or investigating the model's performance on video could broaden its applicability. Developing more efficient algorithms that mitigate computational overhead while maintaining accuracy would also be a valuable direction for future research. Lastly, integrating unsupervised learning approaches to detect unknown types of forgeries could be explored.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University, project number (PSAU/2025/R/1446).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Parveen, A.; Khan, Z.H.; Ahmad, S.N. Block-based copy–move image forgery detection using DCT. Iran J. Comput. Sci. 2019, 2, 89–99. [Google Scholar] [CrossRef]
  2. Kang, L.; Cheng, X.-P. Copy-move forgery detection in digital image. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; pp. 2419–2421. [Google Scholar]
  3. Alkawaz, M.H.; Sulong, G.; Saba, T.; Rehman, A. Detection of copy move image forgery based on discrete cosine transform. Neural Comput. Appl. 2018, 30, 183–192. [Google Scholar] [CrossRef]
  4. Touati, R.; Ferchichi, I.; Messaoudi, I.; Oueslati, A.E.; Lachiri, Z.; Kharrat, M. Pre-Cursor microRNAs from Different Species classification based on features extracted from the image. J. Cybersecur. Inform. Manag. 2021, 3, 5–13. [Google Scholar] [CrossRef]
  5. Abidin, A.B.Z.; Majid, H.B.A.; Samah, A.B.A.; Hashim, H.B. Copy-move image forgery detection using deep learning methods: A review. In Proceedings of the 2019 6th International Conference on Research and Innovation in Information Systems (ICRIIS), Johor Bahru, Malaysia, 2–3 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  6. Kumar, N.; Meenpal, T. Salient keypoint-based copy–move image forgery detection. Austral. J. Forensic Sci. 2023, 55, 331–354. [Google Scholar] [CrossRef]
  7. Sabeena, M.; Abraham, L. Convolutional block attention based network for copy-move image forgery detection. Multimedia Tools Appl. 2023, 83, 2383–2405. [Google Scholar] [CrossRef]
  8. Sadeghi, S.; Dadkhah, S.; Jalab, H.A.; Mazzola, G.; Uliyan, D. State of the art in passive digital image forgery detection: Copy move image forgery. Pattern Anal. Appl. 2017, 21, 291–306. [Google Scholar] [CrossRef]
  9. Zheng, J.; Liu, Y.; Ren, J.; Zhu, T.; Yan, Y.; Yang, H. Fusion of block and keypoints based approaches for effective copy-move image forgery detection. Multidimens. Syst. Signal Process. 2016, 27, 989–1005. [Google Scholar] [CrossRef]
  10. Abbas, M.N.; Ansari, M.S.; Asghar, M.N.; Kanwal, N.; O’Neill, T.; Lee, B. Lightweight deep learning model for detection of copy move image forgery with post-processed attacks. In Proceedings of the 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herl’any, Slovakia, 21–23 January 2021; pp. 125–130. [Google Scholar]
  11. Kaur, N.; Jindal, N.; Singh, K. A deep learning framework for copy-move forgery detection in digital images. Multimed. Tools Appl. 2023, 82, 17741–17768. [Google Scholar] [CrossRef]
  12. Sharma, V.; Singh, N. Deep convolutional neural network with ResNet-50 learning algorithm for copy-move forgery detection. In Proceedings of the 2021 7th International Conference on Signal Processing and Communication (ICSC), Noida, India, 25–27 November 2021; pp. 146–150. [Google Scholar]
  13. Jaiswal, A.K.; Srivastava, R. Detection of copy-move forgery in digital image using multi-scale, multi-stage deep learning model. Neural Process. Lett. 2022, 54, 75–100. [Google Scholar] [CrossRef]
  14. Chaitra, B.; Reddy, P.B. An approach for copy-move image multiple forgery detection based on an optimized pre-trained deep learning model. Knowl.-Based Syst. 2023, 269, 110508. [Google Scholar]
  15. Ananthi, M.; Rajkumar, P.; Sabitha, R.; Karthik, S. A secure model on Advanced Fake Image-Feature Network (AFIFN) based on deep learning for image forgery detection. Pattern Recognit. Lett. 2021, 152, 260–266. [Google Scholar]
  16. Kumar, S.; Gupta, S.K.; Kaur, M.; Gupta, U. VI-NET: A hybrid deep convolutional neural network using VGG and inception V3 model for copy-move forgery classification. J. Vis. Commun. Image Represent. 2022, 89, 103644. [Google Scholar] [CrossRef]
  17. Gupta, R.; Singh, P.; Alam, T.; Agarwal, S. A deep neural network with hybrid spotted hyena optimizer and grasshopper optimization algorithm for copy move forgery detection. Multimed. Tools Appl. 2023, 82, 24547–24572. [Google Scholar] [CrossRef]
  18. Krishnaraj, N.; Sivakumar, B.; Kuppusamy, R.; Teekaraman, Y.; Thelkar, A.R. Design of Automated Deep Learning-Based Fusion Model for Copy-Move Image Forgery Detection. Comput. Intell. Neurosci. 2022, 2022, 8501738. [Google Scholar] [CrossRef] [PubMed]
  19. Eltoukhy, M.M.; Alsubaei, F.S.; Mortda, A.M.; Hosny, K.M. An efficient convolution neural network method for copy-move video forgery detection. Alex. Eng. J. 2025, 110, 429–437. [Google Scholar] [CrossRef]
  20. Timothy, D.P.; Santra, A.K. Detecting Digital Image Forgeries with Copy-Move and Splicing Image Analysis using Deep Learning Techniques. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 1299. [Google Scholar] [CrossRef]
  21. Prakash, C.S.; Jaiprakash, S.P.; Kumar, N. Toward Reliable Image Forensics: Deep Learning-Based Forgery Detection. In Emerging Threats and Countermeasures in Cybersecurity; Wiley: Hoboken, NJ, USA, 2025; pp. 23–37. [Google Scholar]
  22. Ghai, A.; Kumar, P.; Gupta, S. A deep-learning-based image forgery detection framework for controlling the spread of misinformation. Inf. Technol. People 2024, 37, 966–997. [Google Scholar] [CrossRef]
  23. Suresh, S.; Krishna, B.; Chaitanya, J. Enhancing Image Forgery Detection on Social Media via GrabCut Segmentation and RA Based MobileNet with MREA for Data Security. In Sustainable Development Using Private AI; CRC Press: Boca Raton, FL, USA, 2025; pp. 214–231. [Google Scholar]
  24. Singh, K.U.; Rao, A.; Kumar, A.; Varshney, N.; Chundawat, P.S.; Singh, T. Detecting and Locating Image Forgeries with Deep Learning. In Proceedings of the 2024 IEEE International Conference on Contemporary Computing and Communications (InC4), Bangalore, India, 15–16 March 2024; Volume 1, pp. 1–7. [Google Scholar]
  25. Kaur, H.; Singh, S.K.; Chhabra, A.; Bhardwaj, V.; Saini, R.; Kumar, S.; Arya, V. Chaotic Watermarking for Tamper Detection: Enhancing Robustness and Security in Digital Multimedia. In Digital Forensics and Cyber Crime Investigation; CRC Press: Boca Raton, FL, USA, 2025; pp. 101–112. [Google Scholar]
  26. Arivazhagan, S.; Russel, N.S.; Saranyaa, M. CNN-based approach for robust detection of copy-move forgery in images. Intel. Artif. 2024, 27, 80–91. [Google Scholar]
  27. Fatoni, F.; Kurniawan, T.B.; Dewi, D.A.; Zakaria, M.Z.; Muhayeddin, A.M.M. Fake vs Real Image Detection Using Deep Learning Algorithm. J. Appl. Data Sci. 2025, 6, 366–376. [Google Scholar] [CrossRef]
  28. Kuznetsov, O.; Frontoni, E.; Romeo, L.; Rosati, R. Enhancing copy-move forgery detection through a novel CNN architecture and comprehensive dataset analysis. Multimed. Tools Appl. 2024, 83, 59783–59817. [Google Scholar] [CrossRef]
  29. Alshehri, M. Breast Cancer Detection and Classification Using Hybrid Feature Selection and DenseXtNet Approach. Mathematics 2023, 11, 4725. [Google Scholar] [CrossRef]
  30. Santos, M.S.; Valadao, C.T.; Resende, C.Z.; Cavalieri, D.C. Predicting diabetic retinopathy stage using Siamese Convolutional Neural Network. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2024, 12, 2297017. [Google Scholar] [CrossRef]
  31. Attiya, I.; Al-qaness, M.A.; Abd Elaziz, M.; Aseeri, A.O. Boosting task scheduling in IoT environments using an improved golden jackal optimization and artificial hummingbird algorithm. AIMS Math. 2024, 9, 847–867. [Google Scholar] [CrossRef]
  32. Li, J.; Zhang, X.; Yao, Y.; Qi, Y.; Peng, L. Regularized Extreme Learning Machine Based on Remora Optimization Algorithm for Printed Matter Illumination Correction. IEEE Access 2024, 12, 3718–3735. [Google Scholar] [CrossRef]
  33. Ali, H.H.; Ebeed, M.; Fathy, A.; Jurado, F.; Babu, T.S.; Mahmoud, A.A. A New Hybrid Multi-Population GTO-BWO Approach for Parameter Estimation of Photovoltaic Cells and Modules. Sustainability 2023, 15, 11089. [Google Scholar] [CrossRef]
  34. The MNIST Database of Handwritten Digits. Available online: http://yann.lecun.com/exdb/mnist (accessed on 24 February 2023).
  35. CIFAR. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 24 February 2023).
  36. Maashi, M.; Alamro, H.; Mohsen, H.; Negm, N.; Mohammed, G.P.; Ahmed, N.A.; Ibrahim, S.S.; Alsaid, M.I. Modeling of Reptile Search Algorithm with Deep Learning Approach for Copy Move Image Forgery Detection. IEEE Access 2023, 11, 87297–87304. [Google Scholar] [CrossRef]
Figure 1. Overall process of the FSCDL-CMFDA approach for image forgery detection.
Figure 2. Structure of the SCNN model with convolution, pooling, and fully connected layers.
Figure 3. Classifier outcome of FSCDL-CMFDA model with MNIST dataset: (a) precision, (b) recall, (c) accuracy, and (d) F-score.
Figure 4. Accuracy curve of FSCDL-CMFDA model under MNIST dataset.
Figure 5. Training and validation loss curves of the FSCDL-CMFDA model on the MNIST dataset.
Figure 6. ROC curve of FSCDL-CMFDA technique under MNIST dataset.
Figure 7. Classifier outcome of FSCDL-CMFDA technique under CIFAR-10 dataset: (a) precision, (b) recall, (c) accuracy, and (d) F-score.
Figure 8. Accuracy curve of FSCDL-CMFDA model under CIFAR-10 dataset.
Figure 9. Loss curve of FSCDL-CMFDA method with CIFAR-10 dataset.
Figure 10. ROC curves showing classification performance on the CIFAR-10 dataset.
Figure 11. Comparison of accuracy, precision, recall, and F1-score of FSCDL-CMFDA versus existing methods.
Figure 12. Computational time comparison of FSCDL-CMFDA and existing methods.
Table 1. Classifier outcome of FSCDL-CMFDA model under MNIST dataset.

MNIST Dataset
No. of Runs    Precision    Recall    Accuracy    F-score
Run-1          95.50        96.27     97.36       97.19
Run-2          96.23        97.11     95.55       96.17
Run-3          95.86        95.75     98.02       96.85
Run-4          97.05        98.41     96.95       97.70
Run-5          95.92        97.37     97.66       98.05
Run-6          95.97        97.57     97.49       97.12
Run-7          96.94        96.37     96.01       95.88
Run-8          97.88        96.69     96.24       97.29
Run-9          96.15        96.66     98.20       97.28
Run-10         98.19        96.30     97.73       96.14
Average        96.57        96.85     97.12       96.97
Table 2. Classifier outcome of FSCDL-CMFDA model under CIFAR-10 dataset.

CIFAR-10 Dataset
No. of Runs    Precision    Recall    Accuracy    F-score
Run-1          98.17        98.08     96.59       98.74
Run-2          97.75        96.87     97.56       98.85
Run-3          97.85        99.06     98.55       98.87
Run-4          98.00        98.03     96.63       96.62
Run-5          97.05        97.50     96.83       98.95
Run-6          99.14        97.33     97.82       98.72
Run-7          97.07        99.10     99.05       97.16
Run-8          99.17        97.737    99.02       98.19
Run-9          99.02        97.001    97.21       98.09
Run-10         97.94        98.14     97.36       97.21
Average        98.12        97.88     97.66       98.14
Table 3. Comparative result of FSCDL-CMFDA approach with other models.

Methods          Precision    Recall    F-score
CMFD             68.29        78.98     65.06
IFD-AOS-FPM      63.53        83.36     64.46
CMFD-BMIF        65.09        80.69     69.43
BB-KB-ICMFD      68.41        79.69     70.95
CMFD-GAN-CNN     70.11        80.70     88.27
DLFM-CMDFC       96.97        96.91     96.88
RSADTL-CMFD      97.63        97.40     97.66
FSCDL-CMFDA      98.12        97.88     98.14
Table 4. CT analysis of FSCDL-CMFDA methodology compared with other methods.

Methods          CT (s)
CMFD             11.10
IFD-AOS-FPM      9.39
CMFD-BMIF        9.31
BB-KB-ICMFD      7.56
CMFD-GAN-CNN     6.72
DLFM-CMDFC       8.59
RSADTL-CMFD      14.48
FSCDL-CMFDA      5.11

Share and Cite

MDPI and ACS Style

Assiri, M. Synergy of Internet of Things and Software Engineering Approach for Enhanced Copy–Move Image Forgery Detection Model. Electronics 2025, 14, 692. https://doi.org/10.3390/electronics14040692

