Article

Objective Video Quality Assessment Method for Face Recognition Tasks

by Mikołaj Leszczuk 1,*,†, Lucjan Janowski 1,†, Jakub Nawała 1,† and Atanas Boev 2,†
1 AGH University of Science and Technology, Institute of Telecommunications, 30-059 Kraków, Poland
2 Huawei Technologies Düsseldorf GmbH, 40549 Düsseldorf, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Academic Editor: Jyh-Cheng Chen
Electronics 2022, 11(8), 1167; https://doi.org/10.3390/electronics11081167
Received: 31 December 2021 / Revised: 7 March 2022 / Accepted: 21 March 2022 / Published: 7 April 2022
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
Nowadays, there are many metrics for overall Quality of Experience (QoE), both Full Reference (FR) ones, such as Peak Signal-to-Noise Ratio (PSNR) or Structural Similarity (SSIM), and No Reference (NR) ones, such as Video Quality Indicators (VQI), which are successfully used in video processing systems to evaluate videos whose quality is degraded by different processing scenarios. However, they are not suitable for video sequences used for recognition tasks (Target Recognition Videos, TRV). Therefore, correctly estimating the performance of the video processing pipeline in both manual and Computer Vision (CV) recognition tasks is still a major research challenge, and there is a need for objective methods to evaluate video quality for recognition tasks. In response to this need, we show in this paper that it is possible to develop a new concept of an objective model for evaluating video quality for face recognition tasks. The model is trained, tested and validated on a representative set of image sequences. The set of degradation scenarios is based on the model of a digital camera and how the luminous flux reflected from the scene eventually becomes a digital image. The resulting degraded images are evaluated using a CV library for face recognition as well as VQI. The measured accuracy of the model, expressed as the value of the F-measure parameter, is 0.87.
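To make the quantities named in the abstract concrete, the following is a minimal sketch of how a Full Reference metric (PSNR) and the F-measure used to report model accuracy are computed. This is illustrative only: the images, counts, and helper function here are hypothetical stand-ins, not the paper's actual pipeline or data.

```python
import numpy as np

def psnr(reference, degraded, data_range=1.0):
    """Peak Signal-to-Noise Ratio (dB) between two same-shaped images."""
    mse = np.mean((reference - degraded) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Hypothetical frames: a pristine reference and a noise-degraded copy,
# standing in for one of the paper's camera-model degradation scenarios.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
deg = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
print(f"PSNR: {psnr(ref, deg):.2f} dB")

# F-measure (F1) from face-recognition detection counts. The counts below
# are invented purely to reproduce the reported accuracy of 0.87.
tp, fp, fn = 87, 13, 13
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
print(f"F-measure: {f_measure:.2f}")
```

An FR metric such as PSNR needs the undistorted reference frame, which is exactly why the abstract argues such metrics cannot drive recognition-task quality assessment on their own: what matters for TRV is downstream detection performance, summarized here by the F-measure.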
Keywords: video quality indicators (VQI); target recognition video (TRV); computer vision (CV); metrics; evaluation; video quality assessment; face recognition
MDPI and ACS Style

Leszczuk, M.; Janowski, L.; Nawała, J.; Boev, A. Objective Video Quality Assessment Method for Face Recognition Tasks. Electronics 2022, 11, 1167. https://doi.org/10.3390/electronics11081167

