

Special Issue "Advances on Sensor Pattern Noise used in Multimedia Forensics and Counter Forensic"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 July 2019).

Special Issue Editors

Prof. Dr. Luis Javier Garcia Villalba
Guest Editor
Dr. Mario Blaum
Guest Editor
IBM Almaden Research Center, 650 Harry Rd, San Jose, CA 95120, USA
Interests: error-correcting codes; fault tolerance; parallel processing; cryptography; modulation codes for magnetic recording; timing algorithms; holographic storage; parallel communications; neural networks; finite group theory
Dr. Julio Hernandez-Castro
Guest Editor
University of Kent, UK
Interests: cryptology; lightweight crypto; steganography; steganalysis; computer and network security; computer forensics; CAPTCHAs; RFID Security
Dr. Ana Lucila Sandoval Orozco
Guest Editor
Group of Analysis, Security and Systems (GASS), Universidad Complutense de Madrid (UCM), 28040 Madrid, Spain
Interests: computer and network security; multimedia forensics; error-correcting codes; information theory

Special Issue Information

Dear Colleagues,

Digital multimedia contents (images, audio, video, etc.) now play an important role in people's daily lives, in part due to the increasing popularity of smartphones and the steady capacity improvements of personal computers and network infrastructure. Even today, it is still common for people to trust what they see rather than what they read. Multimedia Forensics (MF) deals with the recovery of information that can be used directly to measure the trustworthiness of digital multimedia content.

The authenticity of a digital image can be verified through the noise characteristics of its imaging sensor. In particular, sensor pattern noise (SPN) has been used in source camera identification (SCI) and forgery detection. However, this technique can also be used maliciously to track or inculpate innocent people. Accordingly, in recent years, the security issues related to multimedia contents have attracted much attention from both academic researchers and industrial practitioners.
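The SCI workflow mentioned above typically estimates a camera's photo-response non-uniformity (PRNU) fingerprint from noise residuals of several images and matches a query image against it by correlation. The following numpy-only sketch illustrates that idea under simplifying assumptions: the mean-filter denoiser and the synthetic data below are stand-ins (production systems use wavelet denoising and peak-to-correlation-energy statistics).

```python
import numpy as np

def noise_residual(img):
    """Approximate the sensor noise residual as the image minus a
    local-mean denoised version (real systems use wavelet denoising)."""
    H, W = img.shape
    p = np.pad(img.astype(float), 1, mode="edge")
    # 3x3 mean filter built from the nine shifted copies of the image
    denoised = sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
    return img - denoised

def reference_pattern(images):
    """Estimate a camera's SPN reference by averaging the residuals of
    many images taken with that camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized cross-correlation used as the matching statistic."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In practice the correlation is compared against a threshold calibrated on known cameras; values near zero indicate a non-matching sensor.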

Based on this motivation, this Special Issue invites researchers in all related fields (including, but not limited to, image and video signal processing, machine learning, computer vision and pattern recognition, cyber security, and digital forensics) to join us in pinpointing next-generation image and video forensics solutions, capable of processing image and video data using the recently developed deep learning paradigm and other new modelling and learning techniques. The core data used in your work should be visual data (images and videos). The potential topics of interest for this Special Issue are listed below. Submissions may comprise original research, serious dataset collection and benchmarking, or critical surveys.

Potential topics include, but are not limited to:

  • Camera sensor fingerprint recognition
  • Camera identification from sensor fingerprints
  • Counter forensics
  • Cyber threat analysis for image and video data
  • Digital image and video forgeries using Sensor Pattern Noise
  • Forensic classification of imaging sensor types
  • Image and video forgery detection
  • Image sensor forgery
  • Machine learning techniques in image and video forensics
  • Metadata generation, video database indexing, searching and browsing
  • Mobile device sensor forensic analysis
  • Multi-camera systems in mobile devices
  • Multimedia authentication using sensor pattern noise
  • Multimedia fingerprinting in mobile devices
  • Multimedia processing history identification in mobile devices
  • Multimedia source identification in mobile devices
  • PRNU-based forgery detection
  • PRNU pattern in multimedia forensics
  • Sensitive content detection (porn and child porn detection, violence detection)
  • Sensor imperfections used in counter forensics techniques
  • Source identification of digital image and video
  • Surveillance for forensics and security applications
  • Sensor format and image and video quality
  • Visual analytics for forensics and security applications
  • Visual information hiding: Designs and attacks
Prof. Luis Javier García Villalba
Dr. Mario Blaum
Dr. Julio Hernandez-Castro
Dr. Ana Lucila Sandoval Orozco
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research

Open Access Article
Vehicle Counting in Video Sequences: An Incremental Subspace Learning Approach
Sensors 2019, 19(13), 2848; https://doi.org/10.3390/s19132848 - 27 Jun 2019
Cited by 7
Abstract
The counting of vehicles plays an important role in measuring the behavior patterns of traffic flow in cities, as streets and avenues can get crowded easily. To address this problem, some Intelligent Transport Systems (ITSs) have been implemented in order to count vehicles with already established video surveillance infrastructure. With this in mind, in this paper, we present an on-line learning methodology for counting vehicles in video sequences based on Incremental Principal Component Analysis (Incremental PCA). This incremental learning method allows us to identify the maximum variability (i.e., motion detection) between a previous block of frames and the current one by using only the first projected eigenvector. Once the projected image is obtained, we apply dynamic thresholding to perform image binarization. Then, a series of post-processing steps are applied to enhance the binary image containing the objects in motion. Finally, we count the number of vehicles by implementing a virtual detection line in each of the road lanes. These lines determine the instants at which vehicles pass completely through them. Results show that our proposed methodology is able to count vehicles with 96.6% accuracy at 26 frames per second on average, dealing with both camera jitter and sudden illumination changes caused by the environment and the camera's auto exposure.
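The projection-and-threshold core of this abstract can be sketched in a few lines. This is an illustrative assumption-laden version: a power-iteration eigenvector stands in for a true incremental PCA update, the mean + k*std threshold rule and the synthetic frames are made up for the sketch, and the paper's post-processing and virtual counting lines are omitted.

```python
import numpy as np

def first_eigenvector(frames, iters=50):
    """Leading principal component of a block of flattened frames,
    found by power iteration (a lightweight stand-in for an
    incremental PCA update)."""
    X = frames - frames.mean(axis=0)               # center the block
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])  # initial guess
    for _ in range(iters):
        v = X.T @ (X @ v)                          # multiply by covariance
        v /= np.linalg.norm(v)
    return v

def motion_mask(frames, shape, k=1.0):
    """Project the latest frame (last row of `frames`) onto the block's
    first eigenvector and binarize with a dynamic threshold."""
    v = first_eigenvector(frames)
    proj = np.abs((frames[-1] - frames.mean(axis=0)) * v).reshape(shape)
    thr = proj.mean() + k * proj.std()             # assumed threshold rule
    return proj > thr
```

Pixels whose projected magnitude exceeds the dynamic threshold form the binary motion image that the paper then cleans up and counts.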

Open Access Article
Multi-Layer Feature Based Shoeprint Verification Algorithm for Camera Sensor Images
Sensors 2019, 19(11), 2491; https://doi.org/10.3390/s19112491 - 31 May 2019
Abstract
As a kind of forensic evidence, shoeprints are treated as being as important as fingerprint and DNA evidence in forensic investigations. Shoeprint verification is used to determine whether two shoeprints could, or could not, have been made by the same shoe. Successful shoeprint verification has tremendous evidentiary value, and the result can link a suspect to a crime, or even link crime scenes to each other. In forensic practice, shoeprint verification is performed manually by forensic experts; however, it is too dependent on the experts' experience. This is a meaningful and challenging problem, and there are few attempts to tackle it in the literature. In this paper, we propose a multi-layer feature-based method to conduct shoeprint verification automatically. First, we extract multi-layer features; then we conduct multi-layer feature matching and calculate the total similarity score. Finally, we draw a verification conclusion according to the total similarity score. We conducted extensive experiments to evaluate the effectiveness of the proposed method on two shoeprint datasets. Experimental results showed that the proposed method achieved good performance, with an equal error rate (EER) of 3.2% on the MUES-SV1KR2R dataset and an EER of 10.9% on the MUES-SV2HS2S dataset.

Open Access Article
An Improved Recognition Approach for Noisy Multispectral Palmprint by Robust L2 Sparse Representation with a Tensor-Based Extreme Learning Machine
Sensors 2019, 19(2), 235; https://doi.org/10.3390/s19020235 - 09 Jan 2019
Cited by 5
Abstract
Over the past decades, recognition technologies for multispectral palmprints have attracted more and more attention due to their abundant spatial and spectral characteristics compared with the single-spectrum case. Enlightened by this, an innovative robust L2 sparse representation with tensor-based extreme learning machine (RL2SR-TELM) algorithm is put forward, using an adaptive image-level fusion strategy, to accomplish multispectral palmprint recognition. Firstly, we construct a robust L2 sparse representation (RL2SR) optimization model to calculate the linear representation coefficients. To suppress the effect of noise contamination, we introduce a logistic function into the RL2SR model to evaluate the representation residual. Secondly, we propose a novel weighted sparse and collaborative concentration index (WSCCI) to calculate the fusion weight adaptively. Finally, we put forward a TELM approach to carry out the classification task. It can deal with high-dimensional data directly and preserves the image spatial information well. Extensive experiments were carried out on the benchmark multispectral palmprint database provided by PolyU. The experimental results validate that our RL2SR-TELM algorithm outperforms a number of state-of-the-art multispectral palmprint recognition algorithms both when the images are noise-free and when they are contaminated by different noises.

Open Access Article
Digital Images Authentication Technique Based on DWT, DCT and Local Binary Patterns
Sensors 2018, 18(10), 3372; https://doi.org/10.3390/s18103372 - 09 Oct 2018
Cited by 6
Abstract
In the last few years, the world has witnessed ground-breaking growth in the use of digital images and their applications in modern society. In addition, image editing applications have made the modification of digital photos trivial, and this compromises the authenticity and veracity of a digital image. These applications allow tampering with the content of an image without leaving visible traces. In addition to this, the ease of distributing information through the Internet has caused society to accept everything it sees as true without questioning its integrity. This paper proposes a digital image authentication technique that combines the analysis of local texture patterns with the discrete wavelet transform and the discrete cosine transform to extract features from each of the blocks of an image. Subsequently, it uses a support vector machine to create a model that allows verification of the authenticity of the image. Experiments were performed with falsified images from public databases widely used in the literature that demonstrate the efficiency of the proposed method.

Open Access Article
Digital Image Tamper Detection Technique Based on Spectrum Analysis of CFA Artifacts
Sensors 2018, 18(9), 2804; https://doi.org/10.3390/s18092804 - 25 Aug 2018
Cited by 6
Abstract
The existence of mobile devices with high-performance cameras and powerful image processing applications eases the alteration of digital images for malicious purposes. This work presents a new digital image tamper detection technique based on CFA artifacts arising from the differences in the distribution of acquired and interpolated pixels. The experimental evidence supports the capabilities of the proposed method for detecting a broad range of manipulations, e.g., copy-move, resizing, rotation, filtering and colorization. This technique exposes tampered areas by computing the probability of each pixel being interpolated and then applying the DCT to small blocks of the probability map. The value of the coefficient of the highest frequency in each block is used to decide whether the analyzed region has been tampered with or not. The results shown here were obtained from tests made on a publicly available dataset of tampered images for forensic analysis. Affected zones are clearly highlighted if the method detects CFA inconsistencies. The analysis can be considered successful if the modified zone, or an important part of it, is accurately detected. By analyzing a publicly available dataset of images modified with different methods, we reach 86% accuracy, a good result for a method that does not require previous training.
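The block-DCT decision step described in this abstract can be sketched as follows, assuming an interpolation-probability map has already been estimated (that estimation step is omitted); the block size and function names here are illustrative, not the paper's.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def high_freq_map(prob_map, block=8):
    """For each block x block tile of the interpolation-probability map,
    return the magnitude of the highest-frequency 2D DCT coefficient."""
    D = dct_matrix(block)
    H, W = prob_map.shape
    out = np.zeros((H // block, W // block))
    for i in range(H // block):
        for j in range(W // block):
            tile = prob_map[i * block:(i + 1) * block,
                            j * block:(j + 1) * block]
            C = D @ tile @ D.T          # 2D DCT of the tile
            out[i, j] = abs(C[-1, -1])  # highest-frequency coefficient
    return out
```

In an untampered image, the 2x2 periodicity of the Bayer CFA (acquired vs. interpolated pixels) concentrates energy in that highest-frequency coefficient, so blocks whose value collapses toward zero are candidates for tampering.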

Open Access Article
Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-Vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor
Sensors 2018, 18(7), 2296; https://doi.org/10.3390/s18072296 - 15 Jul 2018
Cited by 22
Abstract
Finger-vein recognition, one of the conventional biometrics, resists fake attacks, is cheaper, and features a higher level of user convenience than other biometrics because it uses miniaturized devices. However, the recognition performance of finger-vein recognition methods may decrease due to a variety of factors, such as image misalignment caused by finger position changes during image acquisition, or illumination variation caused by non-uniform near-infrared (NIR) light. To solve such problems, multimodal biometric systems that are able to simultaneously recognize both finger-veins and fingerprints have been researched. However, because the image-acquisition positions for finger-veins and fingerprints are different, not to mention that finger-vein images must be acquired in NIR light environments and fingerprints in visible light environments, either two sensors must be used, or the size of the image acquisition device must be enlarged. Hence, there are multimodal biometrics based on finger-veins and finger shapes. However, such methods recognize individuals based on handcrafted features, which present certain limitations in terms of performance improvement. To solve these problems, finger-vein and finger shape multimodal biometrics using a near-infrared (NIR) light camera sensor and a deep convolutional neural network (CNN) are proposed in this research. Experimental results obtained using two types of open databases, the Shandong University homologous multi-modal traits (SDUMLA-HMT) and the Hong Kong Polytechnic University Finger Image Database (version 1), revealed that the proposed method features superior performance compared to conventional methods.

Open Access Article
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sensors 2018, 18(5), 1575; https://doi.org/10.3390/s18051575 - 15 May 2018
Cited by 18
Abstract
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, we extract palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples are used.

Open Access Article
Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning
Sensors 2018, 18(4), 1296; https://doi.org/10.3390/s18041296 - 23 Apr 2018
Cited by 13
Abstract
Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, these images, CGs and NIs alike, are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content. These filters are also used to reveal the residual signal as well as the SPN introduced by the digital camera device. Different from traditional methods of distinguishing CGs from NIs, the proposed method utilizes a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority vote scheme to obtain the classification results for the full-size images. The experiments have demonstrated that (1) the proposed method with three HPFs achieves better results than with only one HPF or none, and that (2) the proposed method with three HPFs achieves 100% accuracy even when the NIs undergo JPEG compression with a quality factor of 75.

Open Access Article
Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors
Sensors 2018, 18(3), 699; https://doi.org/10.3390/s18030699 - 26 Feb 2018
Cited by 37
Abstract
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on using handcrafted image features designed by the expert knowledge of designers, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
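The handcrafted half of the hybrid feature described in this abstract, a local binary pattern histogram, is simple to compute. The sketch below uses a single-scale 8-neighbour LBP (the paper's MLBP is multi-level), and the `deep_vec` argument is a placeholder standing in for the CNN features.

```python
import numpy as np

def lbp_histogram(gray):
    """Single-scale 8-neighbour local binary pattern histogram
    (the paper's MLBP applies this at multiple levels)."""
    H, W = gray.shape
    c = gray[1:-1, 1:-1]                      # interior (centre) pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code += (nb >= c).astype(int) << bit  # set bit if neighbour >= centre
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                  # normalized 256-bin histogram

def hybrid_feature(deep_vec, gray):
    """Concatenate CNN features (placeholder here) with the LBP histogram."""
    return np.concatenate([deep_vec, lbp_histogram(gray)])
```

The concatenated vector is what an SVM would then classify into the real or presentation-attack class.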
