
Sensors in Multimedia Forensics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2023)

Special Issue Editor


Dr. Juan Camilo Vásquez-Correa
Guest Editor
Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Mikeletegi 57, 20009 Donostia-San Sebastián, Spain
Interests: speech recognition; natural language processing; deep learning; paralinguistics in speech

Special Issue Information

Dear Colleagues,

With the rise of social media, digital multimedia content such as images, videos, and audio recordings has become an integral part of our daily lives, and thanks to the ease of digital content creation, the amount of content has grown exponentially in recent years. This has resulted in new challenges in the realm of multimedia forensics—the analysis, authentication, and processing of online content.

Multimedia forensics has become increasingly important due to the widespread use of digital media in various domains such as law enforcement, journalism, and entertainment. The field has advanced significantly with the development of new techniques and tools for multimedia analysis, processing, and authentication. In this context, sensor technologies have emerged as a promising avenue for advancing the field of multimedia forensics.

In multimedia forensics, sensors can be used to capture data related to the creation and manipulation of digital multimedia content. Data from image, video, audio, and keystroke dynamics can be analyzed to identify and authenticate multimedia content, detect tampering, and extract information such as camera settings, location, and time of capture.

This Special Issue aims to explore the role of sensor technologies in advancing the field of multimedia forensics. Accepted papers will showcase the latest research and developments in sensor-based approaches for multimedia analysis, processing, and authentication. Overall, this Special Issue will provide a comprehensive overview of the latest trends and advancements in sensor technologies for multimedia forensics.

Potential topics include (but are not limited to):

  • Multimedia forensics;
  • Image analysis in multimedia scenarios;
  • Video processing from online sources;
  • Multimedia audio processing;
  • Digital forensics;
  • Image/video/audio deep-fake detection;
  • Biometric identification for online applications;
  • Tampering detection.

Dr. Juan Camilo Vásquez-Correa
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

35 pages, 10075 KiB  
Article
AuCFSR: Authentication and Color Face Self-Recovery Using Novel 2D Hyperchaotic System and Deep Learning Models
by Achraf Daoui, Mohamed Yamni, Torki Altameem, Musheer Ahmad, Mohamed Hammad, Paweł Pławiak, Ryszard Tadeusiewicz and Ahmed A. Abd El-Latif
Sensors 2023, 23(21), 8957; https://doi.org/10.3390/s23218957 - 3 Nov 2023
Abstract
Color face images are often transmitted over public channels, where they are vulnerable to tampering attacks. To address this problem, this paper introduces a novel scheme called Authentication and Color Face Self-Recovery (AuCFSR) for ensuring the authenticity of color face images and recovering the tampered areas in these images. AuCFSR uses a new two-dimensional hyperchaotic system, called the two-dimensional modular sine-cosine map (2D MSCM), to embed authentication and recovery data into the least significant bits of color image pixels. This produces high-quality output images with a high security level. When a tampered color face image is detected, AuCFSR executes two deep learning models: the CodeFormer model to enhance the visual quality of the recovered color face image and the DeOldify model to improve its colorization. Experimental results demonstrate that AuCFSR outperforms recent similar schemes in tamper detection accuracy, security level, and visual quality of the recovered images.
(This article belongs to the Special Issue Sensors in Multimedia Forensics)
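The embedding step described in the abstract (hiding authentication bits in pixel LSBs at positions keyed by a chaotic map) can be sketched as follows. The recurrence below is an illustrative 2D sine-cosine-style map, not the authors' exact 2D MSCM, and all function names and parameters are hypothetical:

```python
import numpy as np

def chaotic_key_stream(x0, y0, n, a=3.7, b=3.8):
    """Illustrative coupled sine-cosine map iterated modulo 1
    (a stand-in for the paper's 2D MSCM)."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x = (a * np.sin(np.pi * y) + x) % 1.0
        y = (b * np.cos(np.pi * x) + y) % 1.0
        xs[i] = x
    return xs

def embed_lsb(channel, bits, key):
    """Write `bits` into pixel LSBs, one bit per pixel, at positions
    permuted by the chaotic key stream (the secret embedding order)."""
    flat = channel.flatten()  # copy, dtype uint8
    stream = chaotic_key_stream(key[0], key[1], flat.size)
    positions = np.argsort(stream)[: len(bits)]
    flat[positions] = (flat[positions] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(channel.shape)

def extract_lsb(channel, n_bits, key):
    """Recover the embedded bits; requires the same chaotic key."""
    flat = channel.flatten()
    stream = chaotic_key_stream(key[0], key[1], flat.size)
    positions = np.argsort(stream)[:n_bits]
    return flat[positions] & 1

# Round-trip on a toy single-channel image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(img, bits, key=(0.123, 0.456))
recovered = extract_lsb(stego, len(bits), key=(0.123, 0.456))
```

Because only the least significant bit of each touched pixel changes, no pixel value moves by more than 1, which is why such schemes preserve visual quality.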

10 pages, 1577 KiB  
Communication
A Pre-Training Framework Based on Multi-Order Acoustic Simulation for Replay Voice Spoofing Detection
by Changhwan Go, Nam In Park, Oc-Yeub Jeon and Chanjun Chun
Sensors 2023, 23(16), 7280; https://doi.org/10.3390/s23167280 - 20 Aug 2023
Abstract
Voice spoofing attempts to break into a specific automatic speaker verification (ASV) system by forging the user's voice, using methods such as text-to-speech (TTS), voice conversion (VC), and replay attacks. Recently, deep learning-based voice spoofing countermeasures have been developed. However, replay attacks are difficult to collect at scale because they require a physical recording process. To overcome this problem, this study proposes a pre-training framework based on multi-order acoustic simulation for replay voice spoofing detection. Multi-order acoustic simulation uses existing clean signal and room impulse response (RIR) datasets to generate audio that simulates the various acoustic configurations of original and replayed recordings. The acoustic configuration refers to factors such as the microphone type, reverberation, time delay, and noise that may occur between a speaker and microphone during the recording process. We assume that a deep learning model trained on such simulated audio can distinguish the acoustic configurations of original and replayed recordings. To validate this, we performed pre-training to classify the audio generated by the multi-order acoustic simulation into three classes: clean signal, audio simulating the acoustic configuration of the original audio, and audio simulating the acoustic configuration of the replayed audio. We then used the pre-trained weights to initialize a replay voice spoofing detection model and fine-tuned it on an existing replay voice spoofing dataset. To evaluate the effectiveness of the proposed method, we compared it against the conventional method without pre-training using accuracy and F1-score: the conventional method achieved an accuracy of 92.94% and an F1-score of 86.92%, while the proposed method achieved an accuracy of 98.16% and an F1-score of 95.08%.
(This article belongs to the Special Issue Sensors in Multimedia Forensics)
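The "multi-order" idea in the abstract (one RIR convolution approximates the original recording's acoustic path; a second RIR on top approximates the playback-plus-re-recording path of a replay) can be sketched as below. The RIRs here are synthetic stand-ins, not the datasets used in the paper, and the function names are hypothetical:

```python
import numpy as np

def toy_rir(delays, gains, length=256):
    """Synthetic RIR: a direct-path impulse plus a few echoes
    (illustrative only; real RIR datasets are measured)."""
    h = np.zeros(length)
    h[0] = 1.0
    for d, g in zip(delays, gains):
        h[d] += g
    return h

def simulate(clean, rir_chain, noise_std=0.0, seed=0):
    """Convolve a clean signal with a chain of RIRs, truncating back to
    the original length, and optionally add recording noise."""
    x = clean
    for rir in rir_chain:
        x = np.convolve(x, rir)[: len(clean)]
    if noise_std > 0:
        x = x + np.random.default_rng(seed).normal(0.0, noise_std, size=x.shape)
    return x

# Three pre-training classes: clean, first-order (original), second-order (replay)
rng = np.random.default_rng(1)
clean = rng.standard_normal(16000)               # stand-in for a clean utterance
room = toy_rir([50, 120], [0.5, 0.25])           # "original recording" path
replay = toy_rir([30, 90], [0.6, 0.3])           # playback + re-record path
first_order = simulate(clean, [room])            # class: original audio
second_order = simulate(clean, [room, replay], noise_std=0.01)  # class: replay
```

Pre-training a classifier on these three classes, then reusing its weights to initialize the spoofing detector, is the transfer step the abstract describes; no physical re-recording is needed to generate the pre-training data.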
