Special Issue "Advances on Sensor Pattern Noise used in Multimedia Forensics and Counter Forensic"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 31 December 2018

Special Issue Editors

Guest Editor
Prof. Dr. Luis Javier Garcia Villalba

Universidad Complutense de Madrid, 28040 Madrid, Spain
Phone: +34 91 394 76 38
Interests: anonymity; computer security; cyber security; cryptography; information security; intrusion detection; malware; privacy; trust
Guest Editor
Dr. Mario Blaum

IBM Almaden Research Center, San Jose, CA, USA
Interests: error-correcting codes; fault tolerance; parallel processing; cryptography; modulation codes for magnetic recording; timing algorithms; holographic storage; parallel communications; neural networks; finite group theory
Guest Editor
Dr. Julio Hernandez-Castro

University of Kent, UK
Interests: cryptology; lightweight crypto; steganography; steganalysis; computer and network security; computer forensics; CAPTCHAs; RFID Security
Guest Editor
Dr. Ana Lucila Sandoval Orozco

Universidad Complutense de Madrid, Spain
Interests: multimedia forensics; computer and network security; error-correcting codes; information theory

Special Issue Information

Dear Colleagues,

Digital multimedia content (images, audio, video, etc.) now plays an important role in people's daily lives, in part due to the increasing popularity of smartphones and the steady improvement of personal computers and network infrastructure. Even today, it is common for people to trust what they see rather than what they read. Multimedia Forensics (MF) deals with the recovery of information that can be directly used to measure the trustworthiness of digital multimedia content.

The authenticity of a digital image can be verified through the noise characteristics of its imaging sensor. In particular, sensor pattern noise (SPN) has been used for source camera identification (SCI) and forgery detection. However, the same technique can be used maliciously to track or incriminate innocent people. Accordingly, in recent years, the security issues related to multimedia content have attracted much attention from both academic researchers and industrial practitioners.
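For readers new to the technique, the SPN workflow mentioned above (estimate a camera fingerprint from noise residuals, then correlate a query image's residual against it) can be illustrated in plain Python. This is a deliberately simplified, hypothetical sketch: a 3x3 mean filter stands in for the wavelet denoiser used in practice, images are small 2-D lists of floats, and all function names are our own.

```python
def denoise(img):
    """3x3 mean filter standing in for a proper wavelet denoiser."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = sum(vals) / len(vals)
    return out

def residual(img):
    """Noise residual: the image minus its denoised version."""
    den = denoise(img)
    return [[img[y][x] - den[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

def estimate_fingerprint(images):
    """Average the residuals of several images from the same camera."""
    h, w = len(images[0]), len(images[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for img in images:
        res = residual(img)
        for y in range(h):
            for x in range(w):
                acc[y][x] += res[y][x] / len(images)
    return acc

def correlation(a, b):
    """Normalized cross-correlation between two 2-D arrays."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    ma = sum(flat_a) / len(flat_a)
    mb = sum(flat_b) / len(flat_b)
    num = sum((x - ma) * (y - mb) for x, y in zip(flat_a, flat_b))
    da = sum((x - ma) ** 2 for x in flat_a) ** 0.5
    db = sum((y - mb) ** 2 for y in flat_b) ** 0.5
    return num / (da * db) if da and db else 0.0
```

In real systems, attribution is decided by comparing the correlation (or a peak-to-correlation-energy statistic) against a threshold calibrated on images from known cameras; the fingerprint is typically estimated from flat, well-lit images so that content leaves little trace in the residual.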

Based on this motivation, this Special Issue invites researchers in all related fields (including, but not limited to, image and video signal processing, machine learning, computer vision and pattern recognition, cyber security, and digital forensics) to join us in identifying the next generation of image and video forensics solutions, capable of processing image and video data with the recently developed deep learning paradigm and other new modelling and learning techniques. The core data used in your work should be visual data (images and videos). The potential topics of interest for this Special Issue are listed below. Submissions may present original research, substantial dataset collection and benchmarking, or critical surveys.

Potential topics include, but are not limited to:

  • Camera sensor fingerprint recognition
  • Camera identification from sensor fingerprints
  • Counter forensics
  • Cyber threat analysis for image and video data
  • Digital image and video forgeries using sensor pattern noise
  • Forensic classification of imaging sensor types
  • Image and video forgery detection
  • Image sensor forgery
  • Machine learning techniques in image and video forensics
  • Metadata generation, video database indexing, searching and browsing
  • Mobile device sensor forensic analysis
  • Multi-camera systems in mobile devices
  • Multimedia authentication using sensor pattern noise
  • Multimedia fingerprinting in mobile devices
  • Multimedia processing history identification in mobile devices
  • Multimedia source identification in mobile devices
  • PRNU-based forgery detection
  • PRNU pattern in multimedia forensics
  • Sensitive content detection (porn and child porn detection, violence detection)
  • Sensor imperfections used in counter forensics techniques
  • Source identification of digital image and video
  • Surveillance for forensics and security applications
  • Sensor format and image and video quality
  • Visual analytics for forensics and security applications
  • Visual information hiding: Designs and attacks
Prof. Luis Javier García Villalba
Dr. Mario Blaum
Dr. Julio Hernandez-Castro
Dr. Ana Lucila Sandoval Orozco
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

Open Access Article: An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sensors 2018, 18(5), 1575; https://doi.org/10.3390/s18051575
Received: 18 April 2018 / Revised: 10 May 2018 / Accepted: 11 May 2018 / Published: 15 May 2018
Abstract
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, we extract palmprint features using the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples is used.
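To make the HOG stage of this abstract concrete, here is a toy, pure-Python orientation histogram for a single cell. It is only a sketch of the descriptor's core idea, not the authors' HOG-SGF implementation, which adds steerable Gaussian filtering, an auto-encoder, and RELM classification on top.

```python
import math

def hog_cell(img):
    """Orientation histogram (9 unsigned bins, 20 degrees each) for one
    cell: central-difference gradients, magnitude-weighted voting."""
    h, w = len(img), len(img[0])
    hist = [0.0] * 9
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang // 20) % 9] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]  # L1-normalized histogram
```

A vertical step edge puts all its mass in the 0-degree bin and a horizontal one in the 90-degree bin; the full descriptor concatenates many such cell histograms with block normalization.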

Open Access Article: Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning
Sensors 2018, 18(4), 1296; https://doi.org/10.3390/s18041296
Received: 7 March 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 23 April 2018
Abstract
Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, the CGs and NIs are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content, and to reveal the residual signal together with the SPN introduced by the digital camera device. Unlike traditional methods of distinguishing CGs from NIs, the proposed method utilizes a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority vote scheme to obtain the classification results for the full-size images. The experiments demonstrate that (1) the proposed method with three HPFs achieves better results than with only one HPF or no HPF, and (2) the proposed method with three HPFs achieves 100% accuracy even when the NIs undergo JPEG compression with a quality factor of 75.
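Two steps of the pipeline this abstract describes are simple enough to sketch directly: high-pass filtering of a patch and the majority vote that lifts patch decisions to an image decision. The Laplacian-style kernel below is a generic stand-in (the paper's three HPF kernels are not reproduced here), and the label strings are our own convention.

```python
def high_pass(img):
    """Apply a 3x3 Laplacian-style high-pass kernel to suppress image
    content (low frequencies) and expose the noise residual."""
    k = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[dy + 1][dx + 1] * img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

def majority_vote(patch_labels):
    """Image-level decision from per-patch classifier outputs:
    'CG' (computer-generated) wins only on a strict majority."""
    cg = sum(1 for lab in patch_labels if lab == "CG")
    return "CG" if cg * 2 > len(patch_labels) else "NI"
```

On a constant patch the filter response is zero everywhere in the interior, which is exactly the point: only content-independent structure such as SPN survives to reach the CNN.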

Open Access Article: Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors
Sensors 2018, 18(3), 699; https://doi.org/10.3390/s18030699
Received: 30 January 2018 / Revised: 19 February 2018 / Accepted: 24 February 2018 / Published: 26 February 2018
Abstract
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed using expert domain knowledge, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training feature extractors that can complement handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, with stronger discrimination ability than either type alone. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
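The MLBP and feature-fusion steps in this abstract can also be sketched in a few lines. This is a simplified take, not the authors' implementation: a basic 8-neighbour LBP at configurable radius, with multi-level meaning "concatenate histograms over several radii", and a hypothetical fusion function that appends them to a stand-in deep-feature vector.

```python
def lbp_histogram(img, radius=1):
    """256-bin histogram of 8-neighbour LBP codes at a given radius.
    Each pixel's code sets one bit per neighbour that is >= the center."""
    h, w = len(img), len(img[0])
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, radius), (radius, radius), (radius, 0),
               (radius, -radius), (0, -radius)]
    hist = [0] * 256
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def hybrid_features(cnn_features, img):
    """Hypothetical fusion: concatenate deep features with LBP histograms
    at radii 1 and 2 before handing the vector to an SVM."""
    return list(cnn_features) + lbp_histogram(img, 1) + lbp_histogram(img, 2)
```

The resulting vector simply grows by 256 bins per radius; in the paper the deep part comes from a trained CNN, whereas here `cnn_features` is just a placeholder list.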
