Special Issue "Advances on Sensor Pattern Noise used in Multimedia Forensics and Counter Forensic"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 31 July 2019

Special Issue Editors

Guest Editor
Prof. Dr. Luis Javier Garcia Villalba

Group of Analysis, Security and Systems (GASS), Department of Software Engineering and Artificial Intelligence (DISIA), Faculty of Computer Science and Engineering, Office 431, Universidad Complutense de Madrid (UCM), Calle Profesor José García Santesmases, 9, Ciudad Universitaria, 28040 Madrid, Spain
Phone: +34 91 394 76 38
Interests: anonymity; computer security; cyber security; cryptography; information security; intrusion detection; malware; privacy; trust
Guest Editor
Dr. Mario Blaum

IBM Almaden Research Center, San Jose, CA, USA
Interests: error-correcting codes; fault tolerance; parallel processing; cryptography; modulation codes for magnetic recording; timing algorithms; holographic storage; parallel communications; neural networks; finite group theory
Guest Editor
Dr. Julio Hernandez-Castro

University of Kent, UK
Interests: cryptology; lightweight crypto; steganography; steganalysis; computer and network security; computer forensics; CAPTCHAs; RFID Security
Guest Editor
Dr. Ana Lucila Sandoval Orozco

Universidad Complutense de Madrid, Spain
Interests: multimedia forensics; computer and network security; error-correcting codes; information theory

Special Issue Information

Dear Colleagues,

Digital multimedia content (images, audio, video, etc.) now plays an important role in people’s daily lives, partly due to the increasing popularity of smartphones and the steady improvement in the capacity of personal computers and network infrastructure. Even today, people commonly trust what they see rather than what they read. Multimedia Forensics (MF) deals with the recovery of information that can be used directly to measure the trustworthiness of digital multimedia content.

The authenticity of a digital image can be verified through the noise characteristics of its imaging sensor. In particular, sensor pattern noise (SPN) has been used for source camera identification (SCI) and forgery detection. However, this technique can also be used maliciously to track or inculpate innocent people. Accordingly, in recent years, the security issues related to multimedia content have attracted much attention from both academic researchers and industrial practitioners.
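As background for readers less familiar with SPN, the sketch below illustrates the classic PRNU-based source camera identification test: a camera fingerprint is estimated from several images known to come from one device, and a query image's noise residual is then correlated with it. This is a minimal sketch under stated assumptions (8-bit grayscale NumPy arrays, a wavelet denoiser, illustrative function names); it is not any specific author's method, and the decision threshold is left open.

```python
# Minimal PRNU-based source camera identification sketch (illustrative only).
# Assumes 8-bit grayscale images as NumPy arrays; the denoiser choice and the
# decision threshold are assumptions, not a prescribed implementation.
import numpy as np
from skimage.restoration import denoise_wavelet

def to_float(img):
    return img.astype(np.float64) / 255.0      # assume 8-bit grayscale input

def noise_residual(img):
    """Noise residual W = I - F(I), with F a wavelet denoising filter."""
    return img - denoise_wavelet(img, rescale_sigma=True)

def estimate_fingerprint(images):
    """Maximum-likelihood-style PRNU estimate K from several images of one camera."""
    images = [to_float(i) for i in images]
    num = sum(noise_residual(i) * i for i in images)
    den = sum(i ** 2 for i in images)
    return num / (den + 1e-8)

def correlation(query_img, fingerprint):
    """Normalized correlation between the query residual and I*K."""
    img = to_float(query_img)
    w = noise_residual(img)
    s = img * fingerprint
    w, s = w - w.mean(), s - s.mean()
    return float(np.sum(w * s) / (np.linalg.norm(w) * np.linalg.norm(s) + 1e-12))

# A correlation well above a threshold (set, e.g., from a false-alarm analysis)
# supports the hypothesis that the query image came from that camera.
```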

Based on this motivation, this Special Issue invites researchers in all related fields (including, but not limited to, image and video signal processing, machine learning, computer vision and pattern recognition, cyber security, and digital forensics) to join us in pinpointing the next generation of image and video forensics solutions, capable of processing image and video data using the recently developed deep learning paradigm and other new modelling and learning techniques. The core data used in your work should be visual data (images and videos). The potential topics of interest of this Special Issue are listed below. Submissions may present original research, substantial dataset collection and benchmarking, or critical surveys.

Potential topics include, but are not limited to:

  • Camera sensor fingerprint recognition
  • Camera identification from sensor fingerprints
  • Counter forensics
  • Cyber threat analysis for image and video data
  • Digital image and video forgeries using Sensor Pattern Noise
  • Forensic classification of imaging sensor types
  • Image and video forgery detection
  • Image sensor forgery
  • Machine learning techniques in image and video forensics
  • Metadata generation, video database indexing, searching and browsing
  • Mobile device sensor forensic analysis
  • Multi-camera systems in mobile devices
  • Multimedia authentication using sensor pattern noise
  • Multimedia fingerprinting in mobile devices
  • Multimedia processing history identification in mobile devices
  • Multimedia source identification in mobile devices
  • PRNU-based forgery detection
  • PRNU pattern in multimedia forensics
  • Sensitive content detection (porn and child porn detection, violence detection)
  • Sensor imperfections used in counter forensics techniques
  • Source identification of digital image and video
  • Surveillance for forensics and security applications
  • Sensor format and image and video quality
  • Visual analytics for forensics and security applications
  • Visual information hiding: Designs and attacks
Prof. Luis Javier García Villalba
Dr. Mario Blaum
Dr. Julio Hernandez-Castro
Dr. Ana Lucila Sandoval Orozco
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

Open Access Article: Digital Images Authentication Technique Based on DWT, DCT and Local Binary Patterns
Sensors 2018, 18(10), 3372; https://doi.org/10.3390/s18103372
Received: 19 August 2018 / Revised: 26 September 2018 / Accepted: 30 September 2018 / Published: 9 October 2018
Abstract
In the last few years, the world has witnessed ground-breaking growth in the use of digital images and their applications in modern society. In addition, image editing applications have made the modification of digital photos trivial, which compromises the authenticity and veracity of a digital image. These applications allow tampering with the content of an image without leaving visible traces. Moreover, the ease of distributing information through the Internet has led society to accept everything it sees as true without questioning its integrity. This paper proposes a digital image authentication technique that combines the analysis of local texture patterns with the discrete wavelet transform and the discrete cosine transform to extract features from each block of an image. Subsequently, it uses a support vector machine to create a model that allows verification of the authenticity of the image. Experiments performed with forged images from public databases widely used in the literature demonstrate the efficiency of the proposed method.
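The following hedged sketch shows one plausible shape of the per-block feature pipeline the abstract outlines (LBP texture histograms combined with DWT and DCT statistics, fed to an SVM). The block size, wavelet, chosen statistics and helper names are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative per-block LBP + DWT + DCT feature pipeline with an SVM classifier.
import numpy as np
import pywt
from scipy.fftpack import dct
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def block_features(block):
    # Local texture: uniform LBP histogram of the block (10 bins for P=8).
    lbp = local_binary_pattern(block, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Frequency content: DWT sub-band energies and low-order DCT coefficients.
    cA, (cH, cV, cD) = pywt.dwt2(block, "haar")
    dwt_energy = [np.mean(np.abs(c)) for c in (cA, cH, cV, cD)]
    dct_coeffs = dct(dct(block.astype(float), axis=0, norm="ortho"),
                     axis=1, norm="ortho")[:4, :4].ravel()
    return np.concatenate([hist, dwt_energy, dct_coeffs])

def image_features(img, block=32):
    feats = [block_features(img[y:y + block, x:x + block])
             for y in range(0, img.shape[0] - block + 1, block)
             for x in range(0, img.shape[1] - block + 1, block)]
    return np.mean(feats, axis=0)   # one aggregate descriptor per image

# clf = SVC(kernel="rbf").fit(X_train, y_train)   # authentic vs. tampered labels
```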

Open Access Article: Digital Image Tamper Detection Technique Based on Spectrum Analysis of CFA Artifacts
Sensors 2018, 18(9), 2804; https://doi.org/10.3390/s18092804
Received: 17 July 2018 / Revised: 10 August 2018 / Accepted: 23 August 2018 / Published: 25 August 2018
Abstract
The existence of mobile devices with high-performance cameras and powerful image processing applications eases the alteration of digital images for malicious purposes. This work presents a new digital image tamper detection technique based on CFA artifacts arising from the differences in the distribution of acquired and interpolated pixels. The experimental evidence supports the capability of the proposed method to detect a broad range of manipulations, e.g., copy-move, resizing, rotation, filtering and colorization. This technique exposes tampered areas by computing the probability of each pixel being interpolated and then applying the DCT on small blocks of the probability map. The value of the coefficient for the highest frequency in each block is used to decide whether the analyzed region has been tampered with or not. The results shown here were obtained from tests made on a publicly available dataset of tampered images for forensic analysis. Affected zones are clearly highlighted if the method detects CFA inconsistencies. The analysis is considered successful if the modified zone, or an important part of it, is accurately detected. By analyzing a publicly available dataset of images modified with different methods, we reach 86% accuracy, a good result for a method that does not require previous training.
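To make the decision stage concrete, the sketch below illustrates the block-wise test described in the abstract: given a per-pixel map of the probability of being interpolated, take the DCT of each small block and inspect the highest-frequency coefficient. How the probability map itself is estimated (e.g., by fitting the CFA interpolation model) is not reproduced here; the map, block size and threshold are assumptions of this illustration, not the authors' code.

```python
# Block-wise CFA-consistency check on a given per-pixel interpolation
# probability map (illustrative only; prob_map is assumed precomputed).
import numpy as np
from scipy.fftpack import dct

def tamper_map(prob_map, block=8, threshold=0.05):
    h, w = prob_map.shape
    out = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = prob_map[by*block:(by+1)*block, bx*block:(bx+1)*block]
            d = dct(dct(blk, axis=0, norm="ortho"), axis=1, norm="ortho")
            # In untouched regions the periodic CFA interpolation leaves a strong
            # highest-frequency component; its absence flags a suspect block.
            out[by, bx] = np.abs(d[-1, -1]) < threshold
    return out
```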

Open Access Article: Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-Vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor
Sensors 2018, 18(7), 2296; https://doi.org/10.3390/s18072296
Received: 22 June 2018 / Revised: 11 July 2018 / Accepted: 12 July 2018 / Published: 15 July 2018
Abstract
Finger-vein recognition, one of the conventional biometrics, resists fake attacks, is cheaper, and offers a higher level of user convenience than other biometrics because it uses miniaturized devices. However, the recognition performance of finger-vein recognition methods may decrease due to a variety of factors, such as image misalignment caused by finger position changes during image acquisition or illumination variation caused by non-uniform near-infrared (NIR) light. To solve such problems, multimodal biometric systems that can simultaneously recognize both finger-veins and fingerprints have been researched. However, because the image-acquisition positions for finger-veins and fingerprints are different, and because finger-vein images must be acquired in NIR light environments and fingerprints in visible light environments, either two sensors must be used or the size of the image acquisition device must be enlarged. Hence, there are multimodal biometrics based on finger-veins and finger shapes. However, such methods recognize individuals based on handcrafted features, which presents certain limitations in terms of performance improvement. To solve these problems, a finger-vein and finger shape multimodal biometric system using an NIR camera sensor and based on a deep convolutional neural network (CNN) is proposed in this research. Experimental results obtained using two open databases, the Shandong University homologous multi-modal traits (SDUMLA-HMT) database and the Hong Kong Polytechnic University Finger Image Database (version 1), reveal that the proposed method features superior performance to conventional methods.
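As a rough, assumption-laden illustration of the feature-level fusion idea (not the network proposed in the paper), the PyTorch sketch below fuses two small CNN branches, one for the finger-vein image and one for the finger-shape image, before a shared classifier. Layer sizes, pooling and the fusion strategy are placeholders.

```python
# Two-branch CNN feature fusion sketch (illustrative architecture only).
import torch
import torch.nn as nn

class SmallBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))

    def forward(self, x):
        return self.features(x).flatten(1)          # (N, 32*4*4)

class FusionNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.vein, self.shape = SmallBranch(), SmallBranch()
        self.classifier = nn.Linear(2 * 32 * 4 * 4, num_classes)

    def forward(self, vein_img, shape_img):
        # Feature-level fusion: concatenate branch features, then classify.
        fused = torch.cat([self.vein(vein_img), self.shape(shape_img)], dim=1)
        return self.classifier(fused)

# logits = FusionNet(num_classes=100)(vein_batch, shape_batch)
```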

Open Access Article: An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sensors 2018, 18(5), 1575; https://doi.org/10.3390/s18051575
Received: 18 April 2018 / Revised: 10 May 2018 / Accepted: 11 May 2018 / Published: 15 May 2018
Abstract
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang’s method to segment only the region of interest. Next, palmprint features are extracted using the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU (multispectral palmprint images) and CASIA and Tongji (contactless palmprint images). The results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples is used.
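For readers who want a concrete starting point, the sketch below captures the flavour of the described pipeline: HOG descriptors combined with steerable-Gaussian-filter responses, dimensionality reduction, and a final classifier. It is only an approximation under stated assumptions: oriented first-derivative-of-Gaussian filters stand in for the steerable Gaussian filter, and PCA plus a ridge classifier stand in for the autoencoder and the regularized ELM.

```python
# HOG + Gaussian-derivative feature sketch with stand-in reduction/classifier.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline

def hog_sgf_features(roi, sigma=2.0):
    # Oriented first-derivative-of-Gaussian responses approximate a steerable basis.
    gx = gaussian_filter(roi.astype(float), sigma, order=(0, 1))
    gy = gaussian_filter(roi.astype(float), sigma, order=(1, 0))
    hog_vec = hog(roi, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    sgf_vec = hog(np.hypot(gx, gy), orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([hog_vec, sgf_vec])

# model = make_pipeline(PCA(n_components=128), RidgeClassifier(alpha=1.0))
# model.fit(np.stack([hog_sgf_features(r) for r in train_rois]), train_labels)
```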

Open Access Article: Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning
Sensors 2018, 18(4), 1296; https://doi.org/10.3390/s18041296
Received: 7 March 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 23 April 2018
Abstract
Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, these images (CGs and NIs) are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content. These filters also reveal the residual signal as well as the SPN introduced by the digital camera device. Unlike traditional methods for distinguishing CGs from NIs, the proposed method uses a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority-vote scheme to obtain the classification results for the full-size images. The experiments demonstrate that (1) the proposed method with three HPFs achieves better results than with only one HPF or no HPF, and (2) the proposed method with three HPFs achieves 100% accuracy even when the NIs undergo JPEG compression with a quality factor of 75.
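The sketch below, offered as an illustration rather than the authors' code, shows the two steps the abstract makes explicit: high-pass filtering of image patches to suppress content and expose residual/SPN-like signals, and a majority vote over per-patch classifier decisions. The three kernels are generic SRM-style examples and may differ from the paper's filters; the patch classifier is assumed given.

```python
# High-pass residual channels plus majority voting over patch decisions.
import numpy as np
from scipy.ndimage import convolve

# Example SRM-style high-pass kernels (first-order, Laplacian, second-order);
# the paper's exact three filters may differ.
HPF_KERNELS = [
    np.array([[0, 0, 0], [1, -1, 0], [0, 0, 0]], float),
    np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float),
    np.array([[-1, 2, -1], [2, -4, 2], [-1, 2, -1]], float) / 4.0,
]

def highpass_channels(patch):
    """Stack the three filtered residuals as the classifier's input channels."""
    return np.stack([convolve(patch.astype(float), k) for k in HPF_KERNELS])

def classify_image(patches, patch_classifier):
    """Majority vote over per-patch labels (0 = computer-generated, 1 = natural)."""
    votes = [patch_classifier(highpass_channels(p)) for p in patches]
    return int(np.mean(votes) >= 0.5)
```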

Open Access Article: Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors
Sensors 2018, 18(3), 699; https://doi.org/10.3390/s18030699
Received: 30 January 2018 / Revised: 19 February 2018 / Accepted: 24 February 2018 / Published: 26 February 2018
Cited by 3
Abstract
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed using the expert knowledge of their designers, such as Gabor filters, the local binary pattern (LBP), the local ternary pattern (LTP), and the histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training a feature extractor that can enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of feature, called hybrid features, which has stronger discrimination ability than either single type of image feature. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
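As a hedged sketch of the hybrid-feature idea (deep CNN features concatenated with multi-level LBP histograms and classified by an SVM), the code below assumes a grayscale face crop and an externally provided deep_feature_fn standing in for whatever CNN extractor is used; the MLBP radii and bin counts are illustrative.

```python
# Hybrid deep + MLBP feature fusion sketch; deep_feature_fn is an assumed callable.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def mlbp_histogram(face_gray, radii=(1, 2, 3)):
    feats = []
    for r in radii:
        # Uniform LBP at several radii approximates a multi-level LBP descriptor.
        lbp = local_binary_pattern(face_gray, P=8 * r, R=r, method="uniform")
        hist, _ = np.histogram(lbp, bins=8 * r + 2, range=(0, 8 * r + 2),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)

def hybrid_features(face_gray, deep_feature_fn):
    # Feature-level fusion: deep features concatenated with handcrafted MLBP.
    return np.concatenate([deep_feature_fn(face_gray), mlbp_histogram(face_gray)])

# svm = SVC(kernel="rbf").fit(X_hybrid_train, y_train)   # real vs. attack labels
```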
