Article

Plug-and-Play PRNU Enhancement Algorithm with Guided Filtering

by Yufei Liu, Yanhui Xiao * and Huawei Tian
School of National Security, People’s Public Security University of China, Beijing 100038, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(23), 7701; https://doi.org/10.3390/s24237701
Submission received: 10 October 2024 / Revised: 13 November 2024 / Accepted: 26 November 2024 / Published: 2 December 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

As a weak high-frequency signal embedded in digital images, Photo Response Non-Uniformity (PRNU) is particularly vulnerable to interference from low-frequency components during the extraction process, which affects its reliability in real-world forensic applications. Previous studies have not successfully identified the effective frequency band of PRNU, leaving low-frequency interference insufficiently suppressed and impairing PRNU’s utility in scenarios such as source camera identification, image integrity verification, and identity verification. Additionally, due to differing operational mechanisms, current mainstream PRNU enhancement algorithms cannot be integrated to improve their performance further. To address these issues, we conducted a frequency-by-frequency analysis of the estimated PRNU and discovered that it predominantly resides in the frequency band above 10 Hz. Based on this finding, we propose a guided-filtering PRNU enhancement algorithm that can function as a plug-and-play module, seamlessly integrating with existing mainstream enhancement techniques to further boost PRNU performance. Specifically, we use the PRNU components below 10 Hz as a guide image and apply guided filtering to reconstruct the low-frequency interference components. Filtering out these reconstructed components retains the high-frequency PRNU signal, and an appropriate enhancement coefficient further suppresses residual low-frequency interference while amplifying the high-frequency components. Extensive experiments on the publicly available Dresden and Daxing digital device forensics datasets confirm the efficiency and robustness of the proposed method, making it highly suitable for reliable forensic analysis in practical settings.

1. Introduction

With the rapid expansion of mobile internet and the widespread use of imaging devices like smartphones, digital images have become a crucial information medium on social media platforms. Meanwhile, accessible editing software such as Photoshop, Lightroom, Canva, and Meitu has lowered the barrier to image manipulation, enabling malicious actors to alter and distribute images for illicit purposes. Ensuring image authenticity and integrity has thus become a priority in areas like forensic identification and criminal investigation [1]. In the field of digital image forensics, Source Camera Identification (SCI) based on Photo Response Non-Uniformity (PRNU) has received considerable attention. PRNU is a unique physical pattern embedded in digital images by sensor manufacturing defects and silicon non-uniformity; because of its uniqueness, ubiquity, and stability, it is often referred to as the “fingerprint” of an imaging device, and comparing PRNU signals allows the source of an image to be verified [2]. PRNU has even been accepted as evidence in U.S. courts to verify the source and integrity of digital images [3,4,5] and holds promise for identity verification tasks as well [6]. This paper primarily explores PRNU’s application in SCI tasks.
PRNU manifests in digital images as a common high-frequency pattern noise [7], making it possible to estimate it by calculating the common components of the image’s noise residuals [2]. According to [8], a more effective noise extraction algorithm yields noise residuals that contain more PRNU components, so the choice of noise extraction algorithm is crucial for PRNU extraction. As a result, many studies have focused on developing better-performing noise extraction algorithms to extract the PRNU as comprehensively as possible [9,10,11,12]. Among these studies, Cortiana et al. [9] design a PRNU extraction method based on the BM3D noise extraction algorithm, achieving promising results. To address the complexity and long computational time of BM3D, Zeng et al. [10] propose a PRNU extraction method based on content-adaptive guided image filtering, which achieves performance comparable to the BM3D-based method with significantly reduced complexity. Later, to alleviate the difficulty of noise extraction in regions around strong image edges, Zeng et al. [11] introduce a method based on the dual-tree complex wavelet transform, which outperforms the BM3D algorithm in regions around strong edges. With the rise of deep learning, noise extraction methods based on deep neural networks have achieved significant success, further advancing PRNU extraction techniques. One representative example is [12], which proposes an effective PRNU fingerprint extraction algorithm based on a densely connected hierarchical denoising network (DHDN). Because DHDN captures real-world noise more effectively, DHDN-based PRNU extraction methods significantly outperform BM3D-based methods in SCI tasks.
The aforementioned studies focus on improving noise extraction techniques to preserve the integrity of PRNU as much as possible. However, digital imaging post-processing pipelines inherently introduce various types of low-frequency interference that can obscure or distort the PRNU signal, which is primarily high-frequency. These low-frequency interferences include sensor-induced artifacts, demosaicing and compression effects, etc. [6,13]. Such interferences often overlap with or even mask the PRNU signal, making it difficult to extract an accurate PRNU profile. However, noise extraction algorithms are too coarse [14] and cannot effectively filter out these low-frequency interference signals [15].
Therefore, many studies not only work on improving noise extraction algorithms but also explore methods to enhance PRNU [4,16,17]. In one such study, Chen et al. [4] propose a Removing Shared Components (RSC) method, which uses zero-mean and Wiener filtering operations to eliminate both periodic and non-periodic Non-Unique Artifacts (NUAs), resulting in a smoother PRNU. Based on the assumption that PRNU follows a Gaussian distribution, Lin et al. [16] propose the Spectrum Equalization Algorithm (SEA), which enhances the PRNU components by suppressing anomalous peaks in the Fourier transform domain. Owing to their ease of use and effectiveness, RSC and SEA have become the most widely used PRNU enhancement algorithms. Furthermore, Rao et al. [17] introduce a Principal Component Analysis (PCA)-based method for suppressing random artifacts, named the DC method, which shows improved performance over SEA. However, these enhancement methods overlook the fact that PRNU is a high-frequency signal and therefore retain low-frequency interference. Additionally, due to differences in their mechanisms, these methods cannot be integrated to further enhance PRNU performance. To address this limitation, Gupta et al. [15] propose a method that applies the Discrete Cosine Transform (DCT) to PRNU after SEA processing; by setting appropriate thresholds in the DCT domain, the low-frequency components are directly filtered out, further enhancing PRNU. However, this method is not ideal as a flexible plug-and-play module because it does not clearly define the range of low-frequency components and simply removes all of them, requiring appropriate hyper-parameters to be selected for each specific circumstance. Moreover, directly subtracting the low-frequency components to suppress them introduces a large sampling error.
Based on the above analysis, to address the issue of PRNU being affected by low-frequency interference and the limitations of current enhancement algorithms that cannot be integrated due to the unknown characteristics of PRNU itself, we propose a universal and efficient PRNU enhancement scheme, focusing on the intrinsic properties of PRNU. The main contributions of this paper can be summarized in the following two aspects:
  • We conduct a comprehensive frequency-by-frequency analysis of PRNU to identify its primary frequency range, offering new insights into the spectral characteristics of PRNU and its vulnerability to low-frequency interference;
  • We propose a novel guided-filtering PRNU enhancement algorithm that effectively reconstructs and eliminates low-frequency interference, enhancing the high-frequency PRNU components. This algorithm can be seamlessly integrated with existing mainstream enhancement techniques as a plug-and-play module, ensuring improved PRNU performance with low computational complexity.
The paper is organized as follows: Section 2 reviews work related to the research object of this paper. Section 3 details the proposed guided-filtering PRNU enhancement algorithm. Section 4 presents extensive experiments evaluating the performance of the proposed algorithm. Finally, Section 5 concludes this paper.

2. Related Work

As a physically unclonable hardware fingerprint, PRNU is widely employed for SCI tasks at the individual device level. This section summarizes two key research areas: “hardware fingerprint” and “source camera identification”.

2.1. Hardware Fingerprint

Hardware fingerprint-based identification utilizes unique physical characteristics inherent in device components, which are universal, unique, permanent, and measurable [18]. Beyond PRNU, researchers have explored other hardware fingerprints, including radio-frequency (RF), micro-electro-mechanical system (MEMS), and audio fingerprints. Each of these fingerprints leverages the physical variations generated during manufacturing, making them secure, unclonable, and difficult to tamper with.
For example, RF fingerprints exploit device-specific signal variations during wireless transmission [19,20], while MEMS fingerprints leverage minor discrepancies in sensor outputs (e.g., from gyroscopes and accelerometers) for mobile device verification [21]. Audio fingerprints similarly capture slight physical differences in microphones and speakers as identifiable features in recorded or played audio [22]. However, since these hardware fingerprints are indirectly analyzed through output signals, noise suppression remains critical for achieving reliable identification [23].

2.2. Source Camera Identification

Research on SCI can be categorized into device model-level and individual device-level identification. Model-level methods leverage unique hardware and software characteristics of device models, such as lens distortion, color filter array (CFA) patterns, and compression parameters [24,25,26,27,28,29,30]. Among these, CFA features are particularly effective due to their robustness and distinguishability [31]. Additionally, some research focuses on extracting statistical features from images to effectively distinguish device models [32,33].
Individual device-level SCI focuses on identifying specific devices within the same model, often using PRNU as the most reliable feature [31,34,35]. Other approaches, such as dark spots, dead pixels, and dark current noise, have been explored but face challenges in robustness and practicality [36,37,38]. Recently, deep learning-based SCI has shown promise, achieving notable improvements in both model and device-level tasks. However, these methods often rely on closed datasets, limiting their applicability in open environments, where PRNU continues to be a valuable tool [39,40].

3. Materials and Methods

The workflow of the PRNU guided-filter enhancement algorithm proposed in this paper is shown in Figure 1. It consists of three main modules: the PRNU extraction module, the guided-filtering high-frequency effective component enhancement module, and the similarity calculation module. Among them, the guided-filtering high-frequency effective component enhancement module is the core contribution of this paper.
In a real-world open-environment SCI task, we begin by using the PRNU extraction module to obtain an initial estimated reference fingerprint. Next, the high-frequency component enhancement module suppresses the various types of low-frequency noise introduced by the post-processing pipeline, further improving PRNU performance. Finally, the similarity calculation module assesses the similarity between the reference and query fingerprints, determining the final image attribution based on a defined threshold. The details of each module are as follows.

3.1. PRNU Extraction Module

3.1.1. Noise Extraction Stage

At this stage, noise extraction is performed on the original color or grayscale image to obtain the image noise residual. The noise extracted by the noise extraction algorithm can be explicitly modeled in the following form [2]:
W = I_n - F(I_n) = I K + \varepsilon,
where $W$ represents the image noise residual, $I_n$ denotes the natural image containing various types of noise, $F$ is the noise extraction algorithm, $I$ refers to the denoised image, $K$ represents the estimated PRNU, and $\varepsilon$ encompasses other noise components (mainly additive noise) and random errors. It should be noted that, unless otherwise specified, all matrix operations mentioned in this paper are element-wise operations.
Since the query fingerprint can only be extracted from a single query image, we directly use the noise residual as a substitute. This allows us to approximate PRNU based on the noise information available in that specific image.
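To make the model concrete, the following minimal Python sketch (our illustration, not the authors’ code; the paper’s experiments use MATLAB with a DHDN denoiser) computes a noise residual, with scikit-image’s wavelet denoiser standing in for $F$:

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(img_u8: np.ndarray) -> np.ndarray:
    """Compute W = I_n - F(I_n) for a grayscale uint8 image.

    denoise_wavelet stands in for the paper's DHDN denoiser; any
    denoising function F can be substituted here.
    """
    img = img_u8.astype(np.float64) / 255.0             # scale to [0, 1]
    denoised = denoise_wavelet(img, channel_axis=None)  # F(I_n)
    return img - denoised                               # residual carries I*K + eps
```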

3.1.2. Combination Stage

PRNU is a very weak high-frequency signal that can easily be affected by semantic information and other noise present in the image. Therefore, to extract a relatively pure reference fingerprint, a substantial number of images are needed. Typically, we assume that the additive noise and error terms follow a Gaussian distribution and use the maximum likelihood estimation method to derive the formula for estimating reference fingerprints [41]:
\hat{K} = \frac{\sum_{k=1}^{d} I_k W_k}{\sum_{k=1}^{d} I_k^2},
where $d$ represents the number of noise residuals used to calculate a reference fingerprint. Generally speaking, the larger the number of noise residuals, the higher the quality of the fingerprint. This is because having more samples helps to average out the noise and provides a more accurate estimate of the underlying PRNU.
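A direct sketch of Equation (2) (illustrative Python under our assumptions: grayscale images and residuals of equal size, element-wise operations):

```python
import numpy as np

def estimate_fingerprint(images, residuals):
    """MLE reference fingerprint: K_hat = sum(I_k * W_k) / sum(I_k ** 2),
    with all products and the division taken element-wise over d pairs."""
    num = np.zeros_like(residuals[0], dtype=np.float64)
    den = np.zeros_like(residuals[0], dtype=np.float64)
    for I, W in zip(images, residuals):
        I = I.astype(np.float64)
        num += I * W        # accumulate I_k * W_k
        den += I * I        # accumulate I_k ** 2
    return num / (den + 1e-12)  # small constant guards against division by zero
```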
The extracted noise residuals often contain significant semantic information, which can interfere with PRNU performance. Since PRNU is a type of multiplicative noise intertwined with this semantic content, separating them is challenging. As shown in Figure 2, ideal images for PRNU estimation should be smooth and bright, such as clear blue-sky images. In contrast, images with complex textures or exposure issues are less suitable for PRNU extraction. However, in real-world scenarios, it may be difficult to control the imaging device, making it challenging to obtain enough high-quality images.

3.1.3. Enhancement Stage

PRNU can be affected by low-frequency interference introduced during the post-processing pipeline, including sensor-induced artifacts, demosaicing, compression effects, etc. This is especially noticeable in devices from the same brand, where similar internal image processing leads to comparable fingerprints across different cameras.
Consequently, after the initial extraction of reference fingerprints, further processing is necessary to enhance the effective components and suppress interference. Popular enhancement algorithms include the Removing Shared Components (RSC) method [4] and the Spectrum Equalization Algorithm (SEA) [16]. This paper also analyzes the decorrelation method (DC) proposed in [17].
In the RSC algorithm, a zero-mean operation is applied to each row and column of the reference fingerprint to eliminate periodic artifacts introduced by demosaicing. Wiener filtering in the frequency domain further suppresses non-periodic artifacts. The SEA algorithm smooths the reference fingerprint by removing abnormal peaks in the frequency domain, while the DC method suppresses random artifacts by reducing principal components with eigenvalues exceeding the theoretical variance of the reference fingerprint in the PCA domain.
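For illustration, the row/column zero-mean step of RSC can be sketched as follows (a minimal NumPy version under our assumptions; it covers only the zero-mean operation and omits the frequency-domain Wiener filtering step):

```python
import numpy as np

def zero_mean(K: np.ndarray) -> np.ndarray:
    """RSC zero-mean step: subtract each row mean, then each column mean,
    suppressing periodic (e.g., demosaicing) artifacts in the fingerprint."""
    K = K - K.mean(axis=1, keepdims=True)  # zero-mean every row
    K = K - K.mean(axis=0, keepdims=True)  # zero-mean every column
    return K
```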

3.2. Guided-Filter High-Frequency Effective Component Enhancement Module

3.2.1. High-Frequency Enhancement Principle Based on Guided Filtering

Guided filtering [42] is a classical edge-preserving smoothing algorithm that reconstructs an image by applying linear transformations to local windows of a guiding image, ensuring that the filtered image closely approximates the target image. This technique has found widespread applications in tasks such as denoising, dehazing, deraining, detail enhancement, and image segmentation. Its fundamental principle is as follows.
For each filter window $w_k$, the guided filtering operation is performed such that the output image $O$ remains as consistent as possible with the input image $I$:
O_i = a_k G_i + b_k, \quad \forall i \in w_k,
where $O_i$ and $G_i$ represent the pixel values at position $i$ within window $w_k$ in the filtered output image and the guide image, respectively. The term $a_k$ refers to the filtering parameter corresponding to window $w_k$, which determines how closely the filtered image aligns with the guide image, while $b_k$ is the bias parameter that adjusts the local intensity offset within the window. These parameters help preserve the structural details of the image while performing smoothing or enhancement operations.
Based on the minimization of the distance between the filtered image and the target image, the optimization objective is set for each filtering window:
\min_{a_k, b_k} \sum_{i \in w_k} \left( \left\| a_k G_i + b_k - I_i \right\|^2 + \varepsilon a_k^2 \right),
where $\varepsilon$ represents the regularization coefficient, which constrains the coefficient $a_k$ and thereby indirectly modifies the filtering result, and $\| \cdot \|$ denotes the $L_2$ norm.
Taking the partial derivatives of the objective function with respect to the parameters $a_k$ and $b_k$ and setting them to zero yields the closed-form solution:
a_k = \frac{\mathrm{Cov}(G, I)_{w_k}}{\mathrm{Var}(G)_{w_k} + \varepsilon},
b_k = \mathrm{mean}(I)_{w_k} - a_k \, \mathrm{mean}(G)_{w_k},
where $\mathrm{Cov}(\cdot)_{w_k}$, $\mathrm{Var}(\cdot)_{w_k}$, and $\mathrm{mean}(\cdot)_{w_k}$ represent the covariance, variance, and mean of the specified quantity within the filter window $w_k$, respectively.
Substituting into the original equation, the output image for the filter window w k can be expressed as
O_i = \sum_{k:\, i \in w_k} \left( \mathrm{mean}(I)_{w_k} + a_k \left( G_i - \mathrm{mean}(G)_{w_k} \right) \right),
where the index $k : i \in w_k$ runs over all filter windows $w_k$ that contain the $i$th pixel.
Subsequently, the final output image is obtained by applying mean filtering to the computed results from all the windows, which will not be elaborated further. It can be observed that the filtered output image essentially represents a weighted average of the low-frequency components of the original image $I$ and the high-frequency components of the guide image $G$, with the weight $a_k$ being directly influenced by the regularization parameter $\varepsilon$.
Based on the mechanism of guided filtering, it is evident that both the parameter selection and the final mean filtering operation may smooth the image. Therefore, an additional operation can be performed to enhance the high-frequency components of the image:
\hat{O} = \lambda (I - O) + O,
where $\hat{O}$ represents the image after enhancing the high-frequency information and $\lambda$ indicates the enhancement intensity.
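The following Python sketch implements Equations (3)–(8) with box filters (function and variable names are ours, not the paper’s; we treat $r$ as the window radius, an assumption about the authors’ “diameter” convention):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _box(x: np.ndarray, r: int) -> np.ndarray:
    """Mean over a (2r+1) x (2r+1) window around each pixel."""
    return uniform_filter(x, size=2 * r + 1, mode='reflect')

def guided_filter(G: np.ndarray, I: np.ndarray, r: int, eps: float) -> np.ndarray:
    """Guided filtering of input I with guide G, Eqs. (3)-(7)."""
    mean_G, mean_I = _box(G, r), _box(I, r)
    cov_GI = _box(G * I, r) - mean_G * mean_I   # Cov(G, I) per window
    var_G = _box(G * G, r) - mean_G * mean_G    # Var(G) per window
    a = cov_GI / (var_G + eps)                  # Eq. (5)
    b = mean_I - a * mean_G                     # Eq. (6)
    return _box(a, r) * G + _box(b, r)          # final mean filtering over windows

def enhance_high_freq(I: np.ndarray, O: np.ndarray, lam: float) -> np.ndarray:
    """Eq. (8): re-amplify the high-frequency detail removed by filtering."""
    return lam * (I - O) + O
```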

3.2.2. PRNU High-Frequency Effective Component Enhancement

As shown in the experimental analysis in Section 4.3, PRNU contains negligible components below 10 Hz, which we identify as low-frequency interference, while components above 10 Hz are recognized as the effective high-frequency elements. Based on the guided filtering enhancement principle, we construct a low-frequency fingerprint $K_{low}$ with minimal PRNU content. Subtracting $K_{low}$ from the original fingerprint $K_{raw}$ reveals the high-frequency components $K_{high}$, which are then amplified to obtain the enhanced fingerprint $K_{adv}$, with a parameter $\lambda$ controlling the strength of the enhancement.
According to Equation (7), the filtered PRNU is a weighted average of the low-frequency portion of the original fingerprint and the high-frequency portion of the guide fingerprint. By using the fingerprint components below 10 Hz, denoted as $K_{u10}$, as the guide image, guided filtering is applied to $K_{raw}$ to achieve low-frequency reconstruction, obtaining the desired low-frequency fingerprint $K_{low}$. Since an ideal band-pass filter does not exist in practice, directly using the band-pass filtered result is not recommended; instead, the low-frequency fingerprint should be reconstructed from the original PRNU.
In summary, we propose a plug-and-play PRNU high-frequency enhancement method, with detailed steps in Equations (9)–(11), a schematic diagram in Figure 3, and an illustrative code sketch after Step 3.
  • Step 1 Low-frequency interference component reconstruction
Based on guided filtering, the PRNU components below the 10 Hz frequency band, denoted as $K_{u10}$, are used as the guide image to reconstruct the original PRNU $K_{raw}$. This results in a low-frequency fingerprint $K_{low}$, which more accurately reflects the true low-frequency interference components of PRNU:
K_{low}^{i} = \sum_{k:\, i \in w_k} \left( \mathrm{mean}(K_{raw})_{w_k} + a_k \left( K_{u10}^{i} - \mathrm{mean}(K_{u10})_{w_k} \right) \right),
where $a_k = \frac{\mathrm{Cov}(K_{u10}, K_{raw})_{w_k}}{\mathrm{Var}(K_{u10})_{w_k} + \varepsilon}$. It is important to note that $K_{low}$ contains almost no PRNU components and essentially represents low-frequency interference. For clarity, it is referred to in this paper as the “low-frequency fingerprint”.
  • Step 2 Low-frequency interference component filtering
By filtering out the low-frequency interference component $K_{low}$, the high-frequency effective PRNU component $K_{high}$ can be obtained:
K_{high} = K_{raw} - K_{low}.
  • Step 3 High-frequency effective component enhancement
By enhancing the high-frequency effective component $K_{high}$, the final high-frequency enhanced fingerprint $K_{adv}$ can be obtained:
K_{adv} = \lambda K_{high} + K_{raw}.
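Putting the three steps together, a minimal end-to-end sketch follows (our illustrative Python, reusing the `guided_filter` helper from the sketch in Section 3.2.1; we assume the sub-10 Hz band is the radial region around the DC component of the centered spectrum, which is our reading of the paper’s band definition):

```python
import numpy as np
# assumes guided_filter(G, I, r, eps) from the Section 3.2.1 sketch is in scope

def ideal_lowpass(K: np.ndarray, cutoff: float = 10.0) -> np.ndarray:
    """Keep Fourier components within `cutoff` of the DC term, yielding
    the guide image K_u10 (the sub-10 Hz band)."""
    h, w = K.shape
    F = np.fft.fftshift(np.fft.fft2(K))
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    F[radius >= cutoff] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

def enhance_prnu(K_raw: np.ndarray, r: int = 5, eps: float = 0.01,
                 lam: float = 5.0) -> np.ndarray:
    """Steps 1-3 (Eqs. (9)-(11)) with the paper's hyper-parameter settings."""
    K_u10 = ideal_lowpass(K_raw, cutoff=10.0)      # guide image
    K_low = guided_filter(K_u10, K_raw, r, eps)    # Step 1: reconstruction
    K_high = K_raw - K_low                         # Step 2: Eq. (10)
    return lam * K_high + K_raw                    # Step 3: Eq. (11) -> K_adv
```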

3.3. Similarity Calculation Module

The current mainstream solution in academia for addressing the image provenance problem based on PRNU is the generalized likelihood ratio test [43], framed as a two-channel hypothesis testing problem. The simplified statistic can be viewed as a form of cosine similarity, commonly referred to as Normalized Cross-Correlation (NCC):
H_0: K_1 \neq K_2, \quad H_1: K_1 = K_2.
NCC(s_1, s_2; X, Y) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left( X(i,j) - \bar{X} \right) \left( Y(i+s_1, j+s_2) - \bar{Y} \right)}{\left\| X - \bar{X} \right\| \cdot \left\| Y - \bar{Y} \right\|},
where $X = W$ and $Y = I \hat{K}$; $m$ and $n$ represent the height and width of the query fingerprint (i.e., the noise residual), respectively; and $s_1$ and $s_2$ denote the horizontal and vertical shifts used when comparing query fingerprints with reference fingerprints. In this paper, both $s_1$ and $s_2$ use a step length of 1 pixel.
To further enhance the robustness of this SCI method, the Peak to Correlation Energy (PCE) has been proposed [44]:
PCE(X, Y) = \frac{\mathrm{sgn}\left( NCC(s_{peak}; X, Y) \right) \cdot NCC(s_{peak}; X, Y)^2}{\frac{1}{mn - |\mathcal{N}|} \sum_{s \notin \mathcal{N}} NCC(s_1, s_2; X, Y)^2},
where $\mathcal{N}$ represents the narrow neighborhood around the location $s_{peak}$ at which NCC attains its maximum value (i.e., $NCC(s_{peak}; X, Y)$), and $\mathrm{sgn}(\cdot)$ denotes the sign function. In this paper, $\mathcal{N}$ is set to a rectangular area of size 11 × 11.
The calculation steps of PCE indicate that this statistic amplifies the practical significance of the correlation metric NCC. Specifically, for scenarios that conform to the $H_1$ hypothesis, where images from the source device are compared with their corresponding PRNU fingerprint, the computed PCE value will be substantially high. Conversely, in scenarios that conform to the $H_0$ hypothesis, the calculated PCE value will approach zero.
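An illustrative sketch of Equations (13) and (14) follows (our Python, not the authors’ MATLAB; it computes NCC for all cyclic shifts via the FFT and excludes an 11 × 11 neighborhood around the peak, and normalization details may differ from the authors’ implementation):

```python
import numpy as np

def pce(X: np.ndarray, Y: np.ndarray, peak_window: int = 11) -> float:
    """Peak-to-Correlation Energy between query residual X and Y = I * K_hat."""
    X = X - X.mean()
    Y = Y - Y.mean()
    # NCC for every cyclic shift (s1, s2), computed via the FFT
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(X) * np.conj(np.fft.fft2(Y))))
    ncc = xcorr / (np.linalg.norm(X) * np.linalg.norm(Y))
    peak = np.unravel_index(np.argmax(np.abs(ncc)), ncc.shape)
    # exclude the neighborhood N around the peak from the energy term
    half = peak_window // 2
    rows = np.arange(peak[0] - half, peak[0] + half + 1) % ncc.shape[0]
    cols = np.arange(peak[1] - half, peak[1] + half + 1) % ncc.shape[1]
    mask = np.ones(ncc.shape, dtype=bool)
    mask[np.ix_(rows, cols)] = False
    energy = np.mean(ncc[mask] ** 2)      # (1 / (mn - |N|)) * sum of NCC^2
    p = ncc[peak]
    return float(np.sign(p) * p ** 2 / energy)
```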

4. Experiment and Discussion

In this section, we provide a detailed explanation and analysis of the experimental design and results. To measure the actual frequency range of PRNU and to validate the performance of the proposed enhancement module in various scenarios, we designed two main experiments: PRNU frequency band analysis experiments and PRNU enhancement experiments. Additionally, an operation time analysis of the algorithm is presented at the end.

4.1. Experimental Environment and Data Preparation

Regarding the experimental hardware configuration, we use an Intel Core i7-9750H CPU (Intel Corporation, Santa Clara, CA, USA) with a base frequency of 2.60 GHz. For the software configuration, we utilize the MATLAB R2023a scientific computing environment.
In this paper, we select publicly available Dresden [45] and Daxing [46] digital image forensics datasets for our experiments. The Dresden dataset consists of 16,961 JPEG images and 1491 RAW images captured by 74 cameras across 25 models from 14 brands, making it one of the most widely used datasets for SCI experiments. The Daxing dataset includes 43,400 JPEG images captured by 90 commercial smartphones across 22 models from 5 brands, representing the largest smartphone forensics dataset to date. Given the differences in imaging characteristics between cameras and smartphones, using both datasets lends greater credibility to the experimental results. Additionally, both datasets include images from various real-world scenarios, accounting for different lighting conditions and levels of texture complexity. This helps simulate forensic tasks in realistic environments.
To facilitate the experiments and evaluate the algorithm’s performance across different image resolutions, each image in the two datasets is uniformly cropped from the top-left corner into three sizes: 128 × 128, 256 × 256, and 512 × 512. This effectively creates images with three distinct resolutions, forming the experimental dataset used in this study. As mentioned in Section 3.1, flat-field images like blue sky are ideal for extracting PRNU. However, in real-world scenarios, most images contain complex textures and may include overexposed or underexposed areas. Therefore, to closely simulate real-world conditions, for each imaging device in the two datasets, the first 150 images (in natural order) are selected. The first 50 images are used for reference fingerprint estimation, while the remaining 100 images are utilized for query fingerprint extraction.

4.2. Evaluation Metrics

We select the area under the ROC curve (AUC), the true positive rate at a false positive rate of 10−3 (TPR@FPR10−3), and the Kappa statistic as evaluation metrics to assess the performance of PRNU. These metrics are chosen to provide a comprehensive evaluation of the detection capabilities and consistency of the PRNU enhancement.
For each imaging device in two datasets, the first 50 images (in natural order) are selected to estimate the reference fingerprint for that device. Then, the extracted query fingerprints from each of the remaining 100 images for each device are compared individually with the corresponding reference fingerprints. PCE values are finally calculated for each comparison, resulting in a PCE matrix that reflects the correlation between the reference and query fingerprints for performance evaluation.

4.2.1. AUC and TPR@FPR10−3

Using the PCE matrix, we can calculate the True Positive Rate (TPR) and False Positive Rate (FPR) at various thresholds to generate the corresponding ROC curves. From these, we derive the AUC and TPR@FPR10−3 values.
TPR = \frac{TP}{TP + FN}, \quad FPR = \frac{FP}{FP + TN},
where T P (True Positive), F N (False Negative), F P (False Positive), and T N (True Negative) represent the following: T P is the number of images correctly classified as belonging to the same devices (true matches), F N is the number of images incorrectly classified as belonging to different devices (missed matches), F P is the number of images incorrectly classified as belonging to the same devices (false matches), and T N is the number of images correctly classified as belonging to different devices (true non-matches).

4.2.2. Kappa Statistic

The Kappa statistic is a useful metric for assessing classification consistency. In the PCE matrix, the highest value in each group is designated as the positive sample, while the remaining entries are classified as negative. By comparing these classifications with expected outcomes, we can derive the TOP-1 confusion matrix. From this confusion matrix, the Kappa statistic is then calculated using the formula shown below:
\kappa = \frac{p_o - p_e}{1 - p_e},
where $p_o$ is the observed agreement (the proportion of correctly classified images) and $p_e$ is the expected agreement by chance. They can be calculated, respectively, by
p_o = \frac{TP + TN}{TP + FP + TN + FN},
p_e = \frac{(TP + FN)(TP + FP) + (FP + TN)(FN + TN)}{(TP + FP + TN + FN)^2}.
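For instance, a direct Python translation of Equations (16)–(18) from the TOP-1 confusion-matrix counts (an illustrative helper, not the authors’ code):

```python
def kappa_statistic(tp: int, fp: int, tn: int, fn: int) -> float:
    """Cohen's kappa from TOP-1 confusion-matrix counts, Eqs. (16)-(18)."""
    total = tp + fp + tn + fn
    p_o = (tp + tn) / total
    p_e = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / total ** 2
    return (p_o - p_e) / (1 - p_e)
```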

4.3. PRNU Frequency Band Analysis Experiment

As a kind of pattern noise generated during the imaging process, PRNU is regarded as a high-frequency signal [2]. However, no published study to date has analyzed its actual frequency range. Therefore, we employ an ideal band-pass filter based on the Fourier transform and conduct a band-by-band analysis of PRNU.
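A sketch of such an ideal band-pass filter is given below (our Python; as before, we assume a band is defined by the radial frequency index measured from the centered DC component, which is our reading of the paper’s convention):

```python
import numpy as np

def ideal_bandpass(img: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Zero all Fourier components whose radial frequency index lies
    outside [lo, hi), then transform back to the spatial domain."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    F[(radius < lo) | (radius >= hi)] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# band-by-band analysis in 5 Hz steps, as in Section 4.3.2:
# bands = [ideal_bandpass(K, f, f + 5) for f in range(0, 60, 5)]
```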

4.3.1. Visualization Analysis

PRNU appears as pattern noise within images, but due to its lack of semantic content, direct analysis is challenging. To visually examine its characteristics across different frequency bands, we selected an image containing both complex textured areas (high-frequency regions) and smooth areas (low-frequency regions), clearly separated from each other. We present a frequency-by-frequency visualization of the color image, grayscale image, and noise residuals extracted using noise extraction algorithms. To capture image noise that contains as many PRNU components as possible, we adopt the method from [12] using a Densely connected Hierarchical Denoising Network (DHDN) for noise extraction. The visualization results are displayed in Figure 4.
From the analysis of Figure 4, it is evident that the noise extraction algorithm primarily functions as a high-pass filter, removing the low-frequency semantic information below 10 Hz from the original image. Simultaneously, the 10–50 Hz frequency band in the noise image retains most of the high-frequency details of the image. All three types of images show that the high-frequency band above 50 Hz contains almost no semantic information. Based on the mechanism of PRNU generation, it is known that PRNU, as a high-frequency multiplicative noise, is embedded within the image. Therefore, it can be inferred that its frequency range is likely above 10 Hz.

4.3.2. Experimental Analysis

The PRNU frequency band analysis experiments are conducted on the Dresden and Daxing datasets, with each dataset using an image resolution of 128 × 128 pixels. The experiments comprise four groups: “Baseline”, “RSC”, “SEA”, and “DC”. During the noise residual extraction stage, the DHDN noise extraction algorithm is employed for all four experimental groups. In the enhancement stage, “Baseline” indicates that no enhancement algorithm is applied, while “RSC”, “SEA”, and “DC” represent three of the most commonly used enhancement algorithms. The segmented frequency bands of the PRNU (i.e., the reference fingerprint) are directly utilized in SCI experiments, and the presence of PRNU components in each frequency band is indirectly inferred from the observed performance metrics.
In the actual experiments, a step size of 5 Hz is employed for the frequency-by-frequency analysis. The values of the corresponding metrics in the 0–5 Hz and 5–10 Hz frequency bands are extremely low, while each step interval within the 10–50 Hz band exhibits significantly higher metric values. In contrast, the overall metric values in the frequency band above 50 Hz are relatively low. To conserve space and highlight the key findings, the experimental results have been summarized and consolidated, as presented in Figure 5.
The experimental results presented in Figure 5 indicate that for both Dresden and Daxing datasets, as well as for the RSC, SEA, and DC enhancement schemes, the reference fingerprints within the 10–50 Hz frequency band achieve performance levels comparable to those of the full-frequency band ones. Furthermore, it is noteworthy that the metrics corresponding to the low-frequency band below 10 Hz are extremely low, while the metrics for the high-frequency band above 50 Hz are significantly lower than those in the 10–50 Hz band yet still higher than those in the sub-10 Hz band.
Combining the results of the experimental analysis, it is evident that the PRNU, as a high-frequency multiplicative pattern noise, primarily exists in the frequency band above 10 Hz in images. Therefore, enhancing this specific frequency band can lead to the design of a plug-and-play PRNU enhancement algorithm aimed at achieving further improvements in its performance.

4.4. PRNU Enhancement Experiment

Following the algorithm hyper-parameter analysis in Section 4.4.4, the guided filtering window diameter $r$, regularization coefficient $\varepsilon$, and enhancement coefficient $\lambda$ are set to 5, 0.01, and 5, respectively. To comprehensively evaluate the proposed algorithm’s effectiveness and robustness, we conduct experiments on non-JPEG and JPEG compression scenarios, as well as an image texture complexity analysis experiment.

4.4.1. Non-JPEG Compression Scene Enhancement Experiments

To thoroughly evaluate the performance of the proposed algorithm, experiments are conducted in non-JPEG compression scenarios using Dresden and Daxing datasets. The analysis encompassed three image resolutions: 128 × 128, 256 × 256, and 512 × 512, along with three basic PRNU enhancement algorithms: RSC, SEA, and DC. Additionally, various experimental scenarios are established by combining these basic schemes with the approaches from [15] and the proposed methods, respectively.
Specifically, for each image resolution in both datasets, ten experimental setups are created: “Baseline”, “RSC”, “SEA”, “DC”, “RSC + HF”, “SEA + HF”, “DC + HF”, “RSC + Ours”, “SEA + Ours”, and “DC + Ours”. During the noise residual extraction stage, all ten experiments utilized the DHDN noise extraction algorithm. For the enhancement stage, “Baseline”, “RSC”, “SEA”, and “DC” retained the same meanings as previously defined. The experiments “RSC + HF”, “SEA + HF”, and “DC + HF” apply the algorithm from [15] on top of the RSC, SEA, and DC base enhancement schemes, respectively. Meanwhile, “RSC + Ours”, “SEA + Ours”, and “DC + Ours” incorporate the proposed algorithm as a plug-and-play module into the RSC, SEA, and DC enhancement schemes. The experimental results on the Dresden dataset are summarized in Table 1, while those on the Daxing dataset are placed in Table A1, Appendix A, for better readability.
Since the algorithm proposed in [15] does not clearly define the specific range of PRNU low-frequency components to be removed, its hyperparameters cannot be generalized to all scenarios. The results in Table 1 and Table A1 demonstrate that this approach still leads to the degradation of the high-frequency effective components of PRNU, thus negatively affecting its performance. In contrast, the enhancement algorithm proposed in this paper explicitly identifies the range of low-frequency interference components that need to be removed, resulting in improved PRNU performance across all cases.
Given that PRNU lacks semantic information, we further use spectrum visualization to demonstrate the effectiveness of the proposed algorithm. Using the Dresden experimental dataset, we take the imaging device named “Agfa_DC-504_0” (the first device in natural order) and its 128 × 128 resolution images as an example and visualize the frequency spectrum of the PRNU after processing by each of the 10 enhancement schemes. For easier observation and analysis, the most effective frequency range of PRNU, 10–50 Hz, is highlighted in red.
As shown in Figure 6a, the frequency spectrum of PRNU without enhancement displays multiple local peaks, distributed in both the low-frequency range (below 10 Hz) and the high-frequency range (above 10 Hz), with interference-induced peaks significantly affecting PRNU performance. Figure 6b demonstrates that the RSC enhancement scheme provides only limited suppression of low-frequency interference below 10 Hz.
Figure 6c,d shows that, while the SEA and DC enhancement schemes strengthen high-frequency PRNU components above 10 Hz, they do not effectively suppress low-frequency interference below this threshold. Therefore, each of these methods has room for improvement.
While the enhancement algorithm proposed in [15] recognizes this issue, it does not clearly define the frequency range of the low-frequency interference components. Instead, it treats the problem as a black-box algorithm requiring parameter tuning, which makes it unsuitable for all cases. As seen in Figure 6e–g, this approach removes the low-frequency interference but also damages the effective high-frequency components of the PRNU, resulting in a decrease in performance.
In contrast, our algorithm, based on extensive experiments, explicitly identifies the frequency ranges for both the low-frequency interference and the high-frequency effective components of the PRNU. As shown in Figure 6h–j, our algorithm not only suppresses the low-frequency interference but also significantly enhances the high-frequency components, further improving the performance of PRNU.
In summary, the proposed algorithm demonstrates effectiveness across images with different resolutions and various base enhancement algorithms, validating the utility of the module. However, the experimental results also indicate that not all performance metrics show improvement in every scenario.
Upon analysis, this can be attributed to two main factors: first, the inherent variability and fluctuation in the metrics themselves; second, the use of a single set of parameters across all scenarios, which may not be the optimal set for every case. Additionally, since most of the algorithm’s parameters are continuous variables, the step size used for parameter optimization may have been too large, potentially leading to missed optimal solutions.

4.4.2. JPEG Compression Scene Enhancement Experiments

Reference [47] points out that social platforms apply minimal compression to standard and small digital images, but for larger images, the compression rate exceeds 80%, significantly reducing PRNU performance. In this paper, images with a resolution of 128 × 128 are used to evaluate four JPEG compression scenarios with quality factors of 90, 80, 70, and 60. Ten sets of experiments are conducted for each scenario, with nine sets of experiments designed in the same manner as previously described. The experimental results on the Dresden dataset are summarized in Table 2, while those on the Daxing dataset are placed in Table A2, Appendix A, for better readability.
Analyzing the experimental results shown in Table 2 and Table A2, we observe that under the different JPEG compression scenarios, the scheme proposed in [15] impairs the high-frequency effective components of PRNU and thus fails to enhance it. In contrast, the algorithm proposed in this paper consistently enhances PRNU performance across all scenarios, thereby validating its robustness.
Additionally, we observe that in a few cases the proposed algorithm likewise does not improve all performance metrics of PRNU. Considering the impact of JPEG compression on the high-frequency components of PRNU, this could be another contributing factor, alongside the reasons discussed in Section 4.4.1.

4.4.3. The Effect of Image Texture Complexity Analysis Experiment

To confirm the effectiveness of our proposed algorithm in general scenarios, we conduct reference fingerprint enhancement experiments on natural images in Section 4.4.1. To further evaluate the algorithm’s sensitivity to image texture complexity, we conduct additional tests on both flat field and textured images in this subsection.
First, we reorganize the Daxing dataset using its 128 × 128 resolution images. Specifically, we use entropy as a measure of image texture complexity (higher entropy indicates more complex textures, and vice versa) to rank the 150 images from each imaging device in the dataset. From each device’s set, we select the 50 images with the most complex textures and the 50 smoothest images to form two sub-datasets, $T$ for complex textures and $F$ for flat fields, to be used for reference fingerprint extraction. The remaining 50 images form dataset $S$, designated for SCI experiments.
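A minimal sketch of this entropy-based ranking follows (illustrative Python; we assume 8-bit grayscale images and the standard Shannon entropy of the gray-level histogram, as the paper does not spell out its exact entropy definition):

```python
import numpy as np

def shannon_entropy(img_u8: np.ndarray) -> float:
    """Shannon entropy of the gray-level histogram: the texture-complexity
    score used to rank a device's images."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Rank one device's 150 images by entropy (ascending = smoother first):
# ranked = sorted(images, key=shannon_entropy)
# F, S, T = ranked[:50], ranked[50:100], ranked[-50:]  # flat / SCI / textured
```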
For comparison, we use a non-enhanced PRNU extraction as the “Baseline” and the RSC enhancement method as a basic enhancement technique. The experimental results are shown in Table 3.
As shown in Table 3, the performance of reference fingerprints extracted from flat-field images is significantly better than that of fingerprints extracted from textured images, indicating that complex textures notably impact PRNU. However, our proposed algorithm improves PRNU performance in both scenarios. Specifically, for fingerprints extracted from textured images, the TPR@FPR10−3 increased from 0.3689 to 0.5551, significantly closing the performance gap with ideal fingerprints extracted from flat-field images, which achieved a metric value of 0.5613. These results demonstrate the robustness of the proposed algorithm against varying image textures.

4.4.4. Algorithm Hyper-Parameter Analysis Experiment

The proposed algorithm involves several key hyper-parameters, including the guided filter window diameter $r$, the regularization coefficient $\varepsilon$, and the enhancement factor $\lambda$. From the analysis in Section 3.2.1, it can be observed that as $r$ and $\varepsilon$ increase, the high-frequency information contained in the resulting low-frequency fingerprint $K_{low}$ decreases, but at the cost of reduced accuracy in the reconstruction of $K_{low}$. When $K_{low}$ deviates from the actual low-frequency components of the original fingerprint $K_{raw}$, the corresponding high-frequency fingerprint $K_{high}$ also deviates from the true high-frequency components of PRNU, thereby affecting the final enhancement effect. As for $\lambda$, a larger value leads to a higher degree of high-frequency enhancement for PRNU, but it also increases computational complexity; beyond a certain point, the gain from enhancement diminishes and may even introduce interference factors that negatively impact overall performance. Therefore, the specific hyper-parameter settings must be determined through a comprehensive analysis of extensive experimental results.
We conduct hyper-parameter analysis experiments on 128 × 128 resolution images from the Dresden dataset, using the RSC enhancement scheme as the “Baseline”. Based on empirical insights from [42], we set $\lambda$ to 5 and explore various values for $r$ and $\varepsilon$. As shown in Figure 7a–c, when $r$ and $\varepsilon$ are set to 5 and 0.01, respectively, the best results are achieved in terms of AUC, TPR, and Kappa, with values of 0.8633, 0.2586, and 0.4449, respectively. Next, we fix $r$ and $\varepsilon$ at 5 and 0.01 and experiment with different values of $\lambda$. As illustrated in Figure 7d–f, the optimal PRNU performance is achieved when $\lambda$ is set to 5. Notably, when $r$ and $\varepsilon$ are set appropriately, PRNU performance improves consistently regardless of the specific value of $\lambda$. This further supports the approach of reconstructing and selectively filtering PRNU’s confirmed low-frequency components rather than relying solely on hyper-parameter tuning to filter them out directly [15], and it enhances the proposed algorithm’s ability to generalize to out-of-distribution scenarios [48].
Ultimately, the optimal hyper-parameter combination is determined to be $\{ r = 5, \varepsilon = 0.01, \lambda = 5 \}$.

4.5. Running Time Analysis

Due to the variability in algorithm execution time, we repeat each experiment 10 times and take the arithmetic mean to obtain the average running time per reference fingerprint under different enhancement schemes. Analyzing the results in Table 4, it can be observed that the running time of the proposed enhancement algorithm is independent of the choice of image dataset and underlying enhancement algorithm and depends only on image resolution. Since the proposed algorithm not only filters out the low-frequency interference components of PRNU but also enhances the high-frequency effective components, its execution time is slightly higher than that of the method presented in [15]. Across the various scenarios, the execution time of the proposed algorithm falls between those of the RSC and SEA enhancement algorithms and is significantly lower than that of the DC enhancement algorithm, thereby validating the efficiency of the proposed method.

5. Conclusions

In this paper, we conducted experimental analyses that reveal PRNU primarily resides in frequency bands above 10 Hz. Low-frequency components below 10 Hz are identified as interference, while components above this threshold represent effective high-frequency PRNU signals. Based on these insights, we designed a guided filtering PRNU enhancement algorithm as a plug-and-play module, which can be integrated with existing enhancement techniques. With low computational complexity, this algorithm enhances PRNU performance, even when affected by JPEG compression and complex image textures.
Beyond SCI tasks, our method holds promise for image integrity authentication and identity verification applications. It can be seamlessly integrated with future algorithms, demonstrating significant potential for use in multimedia forensics. Additionally, this algorithm shows particular promise for improving image authenticity verification in cybersecurity systems, where accurate image authentication is critical.
However, given the diverse demands of real-world applications, the chosen hyper-parameters may not be universally optimal. A promising path for future work is the development of an adaptive PRNU high-frequency component enhancement approach tailored to different scenarios. This adaptive approach would ensure robust PRNU performance across various application contexts, further extending the utility of the proposed method in real-world cybersecurity and digital forensics.

Author Contributions

Conceptualization, Y.L.; methodology, Y.L.; software, Y.L.; validation, Y.L.; formal analysis, Y.X. and H.T.; investigation, Y.L.; resources, Y.X. and H.T.; data curation, Y.X. and H.T.; writing—original draft preparation, Y.L.; writing—review and editing, Y.X. and H.T.; visualization, Y.L.; supervision, Y.X. and H.T.; project administration, Y.X. and H.T.; funding acquisition, Y.X. and H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Public Security Science and Technology Plan Technical Research Project (grant NO. 2022JSYJC22) and the Fundamental Research Funds for the Central Universities (grant NO. 2022JKF02003).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The experimental datasets used in this article are all available for free and open access.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Non-JPEG compression scene enhancement experiments on the Daxing dataset.

Resolution | Enhancement Scheme | AUC | TPR@FPR10−3 | Kappa
128 × 128 | Baseline | 0.7395 | 0.0031 | 0.3036
128 × 128 | RSC | 0.8743 | 0.2540 | 0.4583
128 × 128 | RSC + HF | 0.8582 | 0.2096 | 0.4028
128 × 128 | RSC + Ours | 0.8763 | 0.2558 | 0.4662
128 × 128 | SEA | 0.8682 | 0.2667 | 0.4691
128 × 128 | SEA + HF | 0.8472 | 0.2513 | 0.4101
128 × 128 | SEA + Ours | 0.8683 | 0.2741 | 0.4702
128 × 128 | DC | 0.8788 | 0.2336 | 0.4735
128 × 128 | DC + HF | 0.8680 | 0.2042 | 0.4139
128 × 128 | DC + Ours | 0.8816 | 0.2394 | 0.4738
256 × 256 | Baseline | 0.7208 | 0.0039 | 0.3891
256 × 256 | RSC | 0.9249 | 0.2704 | 0.6557
256 × 256 | RSC + HF | 0.9170 | 0.2167 | 0.5983
256 × 256 | RSC + Ours | 0.9275 | 0.2714 | 0.6639
256 × 256 | SEA | 0.9268 | 0.3547 | 0.6748
256 × 256 | SEA + HF | 0.9142 | 0.3020 | 0.6217
256 × 256 | SEA + Ours | 0.9274 | 0.3514 | 0.6799
256 × 256 | DC | 0.9296 | 0.2749 | 0.6666
256 × 256 | DC + HF | 0.9266 | 0.2242 | 0.6197
256 × 256 | DC + Ours | 0.9327 | 0.2778 | 0.6739
512 × 512 | Baseline | 0.6960 | 0.0046 | 0.4237
512 × 512 | RSC | 0.9563 | 0.3372 | 0.8081
512 × 512 | RSC + HF | 0.9512 | 0.2614 | 0.7691
512 × 512 | RSC + Ours | 0.9576 | 0.3361 | 0.8137
512 × 512 | SEA | 0.9587 | 0.4118 | 0.8245
512 × 512 | SEA + HF | 0.9538 | 0.3304 | 0.7947
512 × 512 | SEA + Ours | 0.9588 | 0.4112 | 0.8278
512 × 512 | DC | 0.9570 | 0.3242 | 0.7879
512 × 512 | DC + HF | 0.9586 | 0.2439 | 0.7655
512 × 512 | DC + Ours | 0.9611 | 0.3279 | 0.8072
Table A2. JPEG compression scene enhancement experiments on the Daxing dataset.

Quality Factor | Enhancement Scheme | AUC | TPR@FPR10−3 | Kappa
90 | Baseline | 0.7435 | 0.0036 | 0.2863
90 | RSC | 0.8672 | 0.2398 | 0.4393
90 | RSC + HF | 0.8476 | 0.1996 | 0.3819
90 | RSC + Ours | 0.8686 | 0.2500 | 0.4462
90 | SEA | 0.8632 | 0.2590 | 0.4502
90 | SEA + HF | 0.8402 | 0.2272 | 0.3864
90 | SEA + Ours | 0.8629 | 0.2591 | 0.4524
90 | DC | 0.8739 | 0.2078 | 0.4531
90 | DC + HF | 0.8610 | 0.1797 | 0.3924
90 | DC + Ours | 0.8762 | 0.2124 | 0.4536
80 | Baseline | 0.7079 | 0.0028 | 0.2761
80 | RSC | 0.8444 | 0.2414 | 0.3880
80 | RSC + HF | 0.8192 | 0.1957 | 0.3147
80 | RSC + Ours | 0.8459 | 0.2457 | 0.3948
80 | SEA | 0.8449 | 0.2622 | 0.4088
80 | SEA + HF | 0.8155 | 0.2314 | 0.3310
80 | SEA + Ours | 0.8446 | 0.2691 | 0.4099
80 | DC | 0.8540 | 0.2616 | 0.4216
80 | DC + HF | 0.8357 | 0.2081 | 0.3383
80 | DC + Ours | 0.8564 | 0.2632 | 0.4215
70 | Baseline | 0.6746 | 0.0022 | 0.2449
70 | RSC | 0.8246 | 0.1939 | 0.3384
70 | RSC + HF | 0.7930 | 0.1431 | 0.2582
70 | RSC + Ours | 0.8272 | 0.1967 | 0.3409
70 | SEA | 0.8318 | 0.2257 | 0.3694
70 | SEA + HF | 0.7980 | 0.1869 | 0.2800
70 | SEA + Ours | 0.8321 | 0.2256 | 0.3704
70 | DC | 0.8369 | 0.2327 | 0.3740
70 | DC + HF | 0.8112 | 0.1724 | 0.2835
70 | DC + Ours | 0.8399 | 0.2314 | 0.3758
60 | Baseline | 0.6461 | 0.0011 | 0.2148
60 | RSC | 0.8022 | 0.1524 | 0.2819
60 | RSC + HF | 0.7628 | 0.1002 | 0.2078
60 | RSC + Ours | 0.8041 | 0.1532 | 0.2862
60 | SEA | 0.8133 | 0.1654 | 0.3161
60 | SEA + HF | 0.7705 | 0.1409 | 0.2291
60 | SEA + Ours | 0.8136 | 0.1650 | 0.3191
60 | DC | 0.8178 | 0.1982 | 0.3204
60 | DC + HF | 0.7796 | 0.1290 | 0.2278
60 | DC + Ours | 0.8200 | 0.1982 | 0.3198

References

  1. Bencherqui, A.; Amine Tahiri, M.; Karmouni, H.; Alfidi, M.; Motahhir, S.; Abouhawwash, M.; Askar, S.S.; Wen, S.; Qjidaa, H.; Sayyouri, M. Optimal algorithm for color medical encryption and compression images based on DNA coding and a hyperchaotic system in the moments. Eng. Sci. Technol. Int. J. 2024, 50, 101612. [Google Scholar] [CrossRef]
  2. Lukas, J.; Fridrich, J.; Goljan, M. Digital camera identification from sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2006, 1, 205–214. [Google Scholar] [CrossRef]
  3. Korus, P.; Memon, N. Computational sensor fingerprints. IEEE Trans. Inf. Forensics Secur. 2022, 17, 2508–2523. [Google Scholar] [CrossRef]
  4. Chen, M.; Fridrich, J.; Goljan, M.; Lukás, J. Determining image origin and integrity using sensor noise. IEEE Trans. Inf. Forensics Secur. 2008, 3, 74–90. [Google Scholar] [CrossRef]
  5. Mohanty, M.; Zhang, M.; Asghar, M.R.; Russello, G. e-PRNU: Encrypted Domain PRNU-Based Camera Attribution for Preserving Privacy. IEEE Trans. Dependable Secur. Comput. 2021, 18, 426–437. [Google Scholar] [CrossRef]
  6. Liu, L.; Fu, X.; Chen, X.; Wang, J.; Ba, Z.; Lin, F.; Lu, L.; Ren, K. Fits: Matching camera fingerprints subject to software noise pollution. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, Copenhagen, Denmark, 26–30 November 2023; pp. 1660–1674. [Google Scholar] [CrossRef]
  7. Manisha; Li, C.-T.; Lin, X.; Kotegar, K.A. Beyond PRNU: Learning Robust Device-Specific Fingerprint for Source Camera Identification. Sensors 2022, 22, 7871. [Google Scholar] [CrossRef]
  8. Chierchia, G.; Parrilli, S.; Poggi, G.; Sansone, C.; Verdoliva, L. On the influence of denoising in prnu based forgery detection. In Proceedings of the 2nd ACM Workshop on Multimedia in Forensics, Security and Intelligence, Firenze, Italy, 29 October 2010; pp. 117–122. [Google Scholar]
  9. Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F.G. Performance Comparison of Denoising Filters for Source Camera Identification. In Media Watermarking, Security, and Forensics III; SPIE: Bellingham, WA, USA, 2011; pp. 60–65. [Google Scholar] [CrossRef]
  10. Zeng, H.; Kang, X. Fast source camera identification using content adaptive guided image filter. J. Forensic Sci. 2016, 61, 520–526. [Google Scholar] [CrossRef]
  11. Zeng, H.; Wan, Y.; Deng, K.; Peng, A. Source camera identification with dual-tree complex wavelet transform. IEEE Access 2020, 8, 18874–18883. [Google Scholar] [CrossRef]
  12. Xiao, Y.; Tian, H.; Cao, G.; Yang, D.; Li, H. Effective PRNU extraction via densely connected hierarchical network. Multimed. Tools Appl. 2022, 81, 20443–20463. [Google Scholar] [CrossRef]
  13. Montibeller, A.; Pérez-González, F. An adaptive method for camera attribution under complex radial distortion corrections. IEEE Trans. Inf. Forensics Secur. 2023, 19, 385–400. [Google Scholar] [CrossRef]
  14. Fernández-Menduiña, S.; Pérez-González, F. On the information leakage quantification of camera fingerprint estimates. EURASIP J. Inf. Secur. 2021, 2021, 6. [Google Scholar] [CrossRef]
15. Gupta, B.; Tiwari, M. Improving performance of source-camera identification by suppressing peaks and eliminating low-frequency defects of reference SPN. IEEE Signal Process. Lett. 2018, 25, 1340–1343.
16. Lin, X.; Li, C.-T. Preprocessing reference sensor pattern noise via spectrum equalization. IEEE Trans. Inf. Forensics Secur. 2016, 11, 126–140.
17. Rao, Q.; Wang, J. Suppressing random artifacts in reference sensor pattern noise via decorrelation. IEEE Signal Process. Lett. 2017, 24, 809–813.
18. Baldini, G.; Steri, G. A survey of techniques for the identification of mobile phones using the physical fingerprints of the built-in components. IEEE Commun. Surv. Tutor. 2017, 19, 1761–1789.
19. Suski II, W.C.; Temple, M.A.; Mendenhall, M.J.; Mills, R.F. Radio frequency fingerprinting commercial communication devices to enhance electronic security. Int. J. Electron. Secur. Digit. Forensics 2008, 1, 301–322.
20. Brik, V.; Banerjee, S.; Gruteser, M.; Oh, S. Wireless device identification with radiometric signatures. In Proceedings of the 14th ACM International Conference on Mobile Computing and Networking, San Francisco, CA, USA, 14–19 September 2008; pp. 116–127.
21. Bo, C.; Zhang, L.; Li, X.-Y.; Huang, Q.; Wang, Y. SilentSense: Silent user identification via touch and movement behavioral biometrics. In Proceedings of the 19th Annual International Conference on Mobile Computing & Networking, Miami, FL, USA, 30 September–4 October 2013; pp. 187–190.
22. Bojinov, H.; Michalevsky, Y.; Nakibly, G.; Boneh, D. Mobile device identification via sensor fingerprinting. arXiv 2014, arXiv:1408.1416.
23. Lai, Y.; Qi, Y.; He, Y.; Mu, N. A survey of research on smartphone fingerprinting identification techniques. J. Inf. Secur. Res. 2019, 5, 865–878.
24. San Choi, K.; Lam, E.Y.; Wong, K.K. Automatic source camera identification using the intrinsic lens radial distortion. Opt. Express 2006, 14, 11551–11565.
25. San Choi, K.; Lam, E.Y.; Wong, K.K. Source camera identification by JPEG compression statistics for image forensics. In Proceedings of the TENCON 2006—2006 IEEE Region 10 Conference, Hong Kong, China, 14–17 November 2006; pp. 1–4.
26. Deng, Z.; Gijsenij, A.; Zhang, J. Source camera identification using auto-white balance approximation. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 57–64.
27. Long, Y.; Huang, Y. Image based source camera identification using demosaicking. In Proceedings of the 2006 IEEE Workshop on Multimedia Signal Processing, Victoria, BC, Canada, 3–6 October 2006; pp. 419–424.
28. Bayram, S.; Sencar, H.T.; Memon, N. Classification of digital camera-models based on demosaicing artifacts. Digit. Investig. 2008, 5, 49–59.
29. Chen, C.; Stamm, M.C. Camera model identification framework using an ensemble of demosaicing features. In Proceedings of the 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Rome, Italy, 16–19 November 2015; pp. 1–6.
30. Van, L.T.; Emmanuel, S.; Kankanhalli, M.S. Identifying source cell phone using chromatic aberration. In Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, Beijing, China, 2–5 July 2007; pp. 883–886.
31. Jiang, X.; Wei, S.; Zhao, R.; Zhao, Y.; Du, X.; Du, G. Survey of imaging device source identification. J. Beijing Jiaotong Univ. 2019, 43, 48–57.
32. Avcibas, I.; Sankur, B.; Sayood, K. Statistical evaluation of image quality measures. J. Electron. Imaging 2001, 1, 206–223.
33. Holub, V.; Fridrich, J. Low-complexity features for JPEG steganalysis using undecimated DCT. IEEE Trans. Inf. Forensics Secur. 2014, 10, 219–228.
34. Martín-Rodríguez, F.; Isasi-de-Vicente, F.; Fernández-Barciela, M. A Stress Test for Robustness of Photo Response Nonuniformity (Camera Sensor Fingerprint) Identification on Smartphones. Sensors 2023, 23, 3462.
35. Shaya, O.A.; Yang, P.; Ni, R.; Zhao, Y.; Piva, A. A New Dataset for Source Identification of High Dynamic Range Images. Sensors 2018, 18, 3801.
36. Geradts, Z.J.; Bijhold, J.; Kieft, M.; Kurosawa, K.; Kuroki, K.; Saitoh, N. Methods for identification of images acquired with digital cameras. In Enabling Technologies for Law Enforcement and Security; SPIE: Bellingham, WA, USA, 2001; pp. 505–512.
37. Kurosawa, K.; Kuroki, K.; Saitoh, N. CCD fingerprint method—identification of a video camera from videotaped images. In Proceedings of the 1999 International Conference on Image Processing (Cat. No. 99CH36348), Kobe, Japan, 24–28 October 1999; pp. 537–540.
38. Dirik, A.E.; Sencar, H.T.; Memon, N. Digital single lens reflex camera identification from traces of sensor dust. IEEE Trans. Inf. Forensics Secur. 2008, 3, 539–552.
39. Yang, P.; Ni, R.; Zhao, Y.; Zhao, W. Source camera identification based on content-adaptive fusion residual networks. Pattern Recognit. Lett. 2019, 119, 195–204.
40. You, C.; Zheng, H.; Guo, Z.; Wang, T.; Wu, X. Multiscale content-independent feature fusion network for source camera identification. Appl. Sci. 2021, 11, 6752.
41. Chen, M.; Fridrich, J.; Goljan, M.; Lukáš, J. Source digital camcorder identification using sensor photo response non-uniformity. In Security, Steganography, and Watermarking of Multimedia Contents IX; SPIE: Bellingham, WA, USA, 2007; pp. 517–528.
42. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409.
43. Goljan, M.; Fridrich, J. Camera identification from cropped and scaled images. In Security, Forensics, Steganography, and Watermarking of Multimedia Contents X; SPIE: Bellingham, WA, USA, 2008; pp. 154–166.
44. Goljan, M. Digital camera identification from images—estimating false acceptance probability. In International Workshop on Digital Watermarking; Springer: Berlin/Heidelberg, Germany, 2008; pp. 454–468.
45. Gloe, T.; Böhme, R. The ‘Dresden Image Database’ for benchmarking digital image forensics. In Proceedings of the 2010 ACM Symposium on Applied Computing, Sierre, Switzerland, 22–26 March 2010; pp. 1584–1590.
46. Tian, H.; Xiao, Y.; Cao, G.; Zhang, Y.; Xu, Z.; Zhao, Y. Daxing smartphone identification dataset. IEEE Access 2019, 7, 101046–101053.
47. Bertini, F.; Sharma, R.; Montesi, D. Are social networks watermarking us or are we (unawarely) watermarking ourself? J. Imaging 2022, 8, 132.
48. Ye, N.; Zeng, Z.; Zhou, J.; Zhu, L.; Duan, Y.; Wu, Y.; Wu, J.; Zeng, H.; Gu, Q.; Wang, X.; et al. OoD-Control: Generalizing Control in Unseen Environments. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 7421–7433.
Figure 1. The workflow of the guided-filtering PRNU enhancement algorithm.
Figure 2. Ideal and non-ideal images for PRNU estimation.
Figure 3. The calculation steps of the guided-filtering enhancement of the high-frequency effective PRNU components.
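To make the steps in Figure 3 concrete, the following is a minimal Python sketch of such a pipeline, not the authors' reference implementation. It assumes that "Hz" denotes a radial frequency index in the centered 2-D DFT, uses a plain box-window guided filter in the spirit of He et al. [42], and the helper names (low_pass, guided_filter, enhance_prnu) and defaults (r = 5, ε = 0.01, λ = 5, mirroring the values fixed in Figure 7) are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=5, eps=0.01):
    """Gray-scale guided filter (after He et al. [42]) with a (2r+1) x (2r+1) box window."""
    size = 2 * r + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    # Averaging the per-window linear coefficients gives the filter output.
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def low_pass(w, cutoff=10):
    """Keep only components below `cutoff` (radial index in the centered 2-D DFT)."""
    spec = np.fft.fftshift(np.fft.fft2(w))
    h, ww = w.shape
    yy, xx = np.mgrid[0:h, 0:ww]
    spec[np.hypot(yy - h / 2, xx - ww / 2) >= cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def enhance_prnu(w, r=5, eps=0.01, lam=5.0, cutoff=10):
    guide = low_pass(w, cutoff)                      # low-frequency components as the guide
    interference = guided_filter(guide, w, r, eps)   # reconstructed low-frequency interference
    return lam * (w - interference)                  # suppress low band, amplify the remainder
```

The design point this sketch illustrates is that the guide image carries only low-frequency content, so the filter output approximates the low-frequency interference; subtracting it leaves the high-frequency PRNU signal, which the coefficient λ then amplifies.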
Figure 4. Image decomposition results across different frequency bands. The rows, from top to bottom, show the color image, the grayscale image, and the noise image. The columns, from left to right, show the full-band image, the components below 10 Hz, the 10–50 Hz band, and the components above 50 Hz.
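A band decomposition like the one visualized in Figure 4 can be produced with simple radial DFT masks. The sketch below is illustrative only; the cutoffs 10 and 50 follow the caption, and the radial-index reading of "Hz" is our assumption.

```python
import numpy as np

def band_split(img, low=10, high=50):
    """Split a grayscale image into <low, low-to-high, and >high radial-frequency bands."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    masks = (radius < low,
             (radius >= low) & (radius < high),
             radius >= high)
    return [np.real(np.fft.ifft2(np.fft.ifftshift(spec * m))) for m in masks]
```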
Figure 5. Results of the PRNU frequency-band analysis experiments. The first row (a–c) shows the results on the Dresden dataset, while the second row (d–f) presents those on the Daxing dataset. In each subfigure, “Full band” denotes the use of full-band PRNU in the SCI experiments.
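The per-band SCI scores behind Figure 5 rest on correlating a camera fingerprint with a query image's noise residual. As a hedged illustration (the detection statistic actually used may differ, e.g., PCE [43,44]), a normalized cross-correlation score can be computed as follows:

```python
import numpy as np

def ncc(fingerprint, residual):
    """Normalized cross-correlation between a camera fingerprint and a noise residual."""
    a = fingerprint - fingerprint.mean()
    b = residual - residual.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12  # guard against zero norm
    return float(np.sum(a * b) / denom)
```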
Figure 6. PRNU spectrum under different enhancement schemes. (a) “Baseline” (no enhancement); (b) “RSC”; (c) “SEA”; (d) “DC”; (e) “RSC + HF”; (f) “SEA + HF”; (g) “DC + HF”; (h) “RSC + Ours”; (i) “SEA + Ours”; (j) “DC + Ours”.
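Spectra such as those in Figure 6 can be rendered as log-magnitude images of the centered 2-D DFT; the small helper below is our own illustrative sketch, not code from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_spectrum(w, title=""):
    """Display the log-magnitude spectrum of an estimated PRNU, in the style of Figure 6."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(w)))
    plt.imshow(np.log1p(mag), cmap="gray")
    plt.title(title)
    plt.axis("off")
    plt.show()
```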
Figure 7. The results of the algorithm hyper-parameter analysis experiment. (a–c) present the experimental analysis results for varying r and ε values with λ fixed at 5, while (d–f) show the results for varying λ values with r and ε fixed at 5 and 0.01, respectively.
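A sweep such as the one summarized in Figure 7 might be scripted as below. This is a hedged sketch: it reuses the hypothetical enhance_prnu from the Figure 3 sketch, the grid values are arbitrary, and score_sci is a dummy stand-in for a real SCI metric such as AUC.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))   # stand-in for an estimated PRNU

def score_sci(x):
    return float(np.mean(x ** 2))     # placeholder score, for illustration only

# enhance_prnu comes from the sketch given after Figure 3.
for r in (2, 5, 8):
    for eps in (1e-3, 1e-2, 1e-1):
        enhanced = enhance_prnu(w, r=r, eps=eps, lam=5.0)  # lambda fixed at 5, as in Figure 7a-c
        print(f"r={r}, eps={eps:g}: score={score_sci(enhanced):.4f}")
```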
Table 1. Non-JPEG-compression scenario enhancement experiments on the Dresden dataset.

Resolution   Enhancement Scheme   AUC      TPR@FPR=10⁻³   Kappa
128 × 128    Baseline             0.7530   0.0011         0.3389
             RSC                  0.8626   0.2555         0.4405
             RSC + HF             0.8324   0.1958         0.3617
             RSC + Ours           0.8633   0.2586         0.4449
             SEA                  0.8570   0.2715         0.4280
             SEA + HF             0.8216   0.2219         0.3437
             SEA + Ours           0.8570   0.2727         0.4319
             DC                   0.8710   0.2964         0.4645
             DC + HF              0.8420   0.2153         0.3786
             DC + Ours            0.8728   0.2928         0.4684
256 × 256    Baseline             0.7423   0.0011         0.4337
             RSC                  0.9234   0.4520         0.6671
             RSC + HF             0.9024   0.3800         0.5912
             RSC + Ours           0.9250   0.4546         0.6704
             SEA                  0.9233   0.5034         0.6556
             SEA + HF             0.8984   0.4459         0.5725
             SEA + Ours           0.9234   0.5011         0.6611
             DC                   0.9279   0.4849         0.6823
             DC + HF              0.9071   0.3942         0.5973
             DC + Ours            0.9307   0.4911         0.6868
512 × 512    Baseline             0.7083   0.0014         0.4760
             RSC                  0.9631   0.6518         0.8299
             RSC + HF             0.9489   0.5941         0.7658
             RSC + Ours           0.9642   0.6576         0.8308
             SEA                  0.9629   0.7115         0.8205
             SEA + HF             0.9475   0.6692         0.7651
             SEA + Ours           0.9640   0.7215         0.8273
             DC                   0.9575   0.6362         0.8171
             DC + HF              0.9479   0.5672         0.7559
             DC + Ours            0.9640   0.6559         0.8356
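For readers reproducing tables of this form, the three reported metrics can be computed with scikit-learn. The sketch below is illustrative; in particular, the thresholding rule used for Kappa is a placeholder of ours, since the tables do not state one.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score, roc_curve

def table_metrics(y_true, y_score, fpr_target=1e-3):
    """AUC, TPR at a fixed FPR, and Cohen's kappa for one binary matching experiment."""
    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # Largest TPR among operating points that keep FPR <= fpr_target.
    tpr_at = tpr[np.searchsorted(fpr, fpr_target, side="right") - 1]
    # Kappa requires hard decisions; the median threshold here is a placeholder.
    kappa = cohen_kappa_score(y_true, y_score > np.median(y_score))
    return auc, tpr_at, kappa
```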
Table 2. JPEG-compression scenario enhancement experiments on the Dresden dataset.

Quality Factor   Enhancement Scheme   AUC      TPR@FPR=10⁻³   Kappa
90               Baseline             0.7464   0.0011         0.3322
                 RSC                  0.8552   0.2499         0.4262
                 RSC + HF             0.8227   0.1858         0.3425
                 RSC + Ours           0.8560   0.2522         0.4299
                 SEA                  0.8529   0.2664         0.4221
                 SEA + HF             0.8158   0.2127         0.3321
                 SEA + Ours           0.8531   0.2659         0.4232
                 DC                   0.8648   0.2858         0.4497
                 DC + HF              0.8334   0.2084         0.3664
                 DC + Ours            0.8666   0.2878         0.4577
80               Baseline             0.7306   0.0011         0.3132
                 RSC                  0.8444   0.2295         0.4001
                 RSC + HF             0.8098   0.1704         0.3131
                 RSC + Ours           0.8453   0.2318         0.4026
                 SEA                  0.8438   0.2424         0.3956
                 SEA + HF             0.8043   0.1886         0.3056
                 SEA + Ours           0.8439   0.2431         0.3988
                 DC                   0.8553   0.2724         0.4288
                 DC + HF              0.8213   0.1897         0.3359
                 DC + Ours            0.8573   0.2726         0.4326
70               Baseline             0.7365   0.0011         0.3073
                 RSC                  0.8438   0.2274         0.3892
                 RSC + HF             0.8088   0.1603         0.2995
                 RSC + Ours           0.8450   0.2296         0.3956
                 SEA                  0.8428   0.2404         0.3874
                 SEA + HF             0.8017   0.1774         0.2908
                 SEA + Ours           0.8428   0.2373         0.3897
                 DC                   0.8556   0.2696         0.4230
                 DC + HF              0.8209   0.1846         0.3256
                 DC + Ours            0.8574   0.2714         0.4277
60               Baseline             0.7376   0.0009         0.3006
                 RSC                  0.8355   0.2091         0.3669
                 RSC + HF             0.7994   0.1497         0.2818
                 RSC + Ours           0.8365   0.2112         0.3713
                 SEA                  0.8354   0.2278         0.3774
                 SEA + HF             0.7944   0.1623         0.2807
                 SEA + Ours           0.8356   0.2278         0.3804
                 DC                   0.8488   0.2566         0.4057
                 DC + HF              0.8135   0.1707         0.3113
                 DC + Ours            0.8505   0.2564         0.4113
Table 3. The results of the effect of image texture complexity analysis experiment (F/T sub-columns as in the original).

Enhancement Scheme   AUC (F)   AUC (T)   TPR@FPR=10⁻³ (F)   TPR@FPR=10⁻³ (T)   Kappa (F)   Kappa (T)
Baseline             0.8477    0.7996    0.2211             0.0044             0.5681      0.4229
RSC                  0.9162    0.8784    0.5544             0.3689             0.6649      0.5899
RSC + Ours           0.9248    0.8787    0.5613             0.5551             0.6676      0.5942
Table 4. Running time comparison (unit: ms).

Dataset   Enhancement Scheme   128 × 128   256 × 256   512 × 512
Dresden   RSC                  1.25        3.48        19.56
          RSC + HF             3.58        10.19       46.89
          RSC + Ours           4.43        18.06       108.49
          SEA                  9.57        14.83       72.61
          SEA + HF             10.40       20.03       97.03
          SEA + Ours           11.70       28.92       132.01
          DC                   47.49       191.23      765.68
          DC + HF              49.11       198.77      790.65
          DC + Ours            50.53       207.36      844.11
Daxing    RSC                  1.26        3.41        19.75
          RSC + HF             3.42        9.85        44.37
          RSC + Ours           4.42        17.55       108.68
          SEA                  8.36        14.79       72.65
          SEA + HF             10.58       19.92       95.57
          SEA + Ours           11.23       29.14       131.78
          DC                   47.32       192.04      765.63
          DC + HF              49.57       199.49      788.22
          DC + Ours            50.68       210.02      844.19
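A simple way to gather timings in the format of Table 4 is a perf_counter loop over the three resolutions. The harness below is a sketch under stated assumptions: it times only the hypothetical enhance_prnu from the Figure 3 sketch on random input, not the actual RSC/SEA/DC implementations measured in the table.

```python
import time
import numpy as np

# Hypothetical timing harness in the spirit of Table 4; enhance_prnu is the
# sketch given after Figure 3, and the resolutions match the table's columns.
rng = np.random.default_rng(0)
for n in (128, 256, 512):
    w = rng.standard_normal((n, n))
    t0 = time.perf_counter()
    for _ in range(20):                # average over 20 runs for stability
        enhance_prnu(w)
    print(f"{n} x {n}: {(time.perf_counter() - t0) / 20 * 1e3:.2f} ms")
```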