Search Results (4)

Search Parameters:
Keywords = video quality indicators (VQI)

21 pages, 19345 KiB  
Communication
Objective Video Quality Assessment Method for Object Recognition Tasks
by Mikołaj Leszczuk, Lucjan Janowski, Jakub Nawała and Atanas Boev
Electronics 2024, 13(9), 1750; https://doi.org/10.3390/electronics13091750 - 1 May 2024
Viewed by 2088
Abstract
In the field of video quality assessment for object recognition tasks, accurately predicting the impact of different quality factors on recognition algorithms remains a significant challenge. Our study introduces a novel evaluation framework designed to address this gap by focusing on machine vision rather than human perceptual quality metrics. We used advanced machine learning models and custom Video Quality Indicators to enhance the predictive accuracy of object recognition performance under various conditions. Our model achieves a mean square error (MSE) of 672.4 and a correlation coefficient of 0.77, which underscores the effectiveness of our approach in real-world scenarios. These findings highlight not only the robustness of our methodology but also its potential applicability in critical areas such as surveillance and telemedicine.
(This article belongs to the Special Issue Machine Learning, Image Analysis and IoT Applications in Industry)
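The abstract reports agreement between predicted and measured recognition performance as a mean square error (MSE) of 672.4 and a correlation coefficient of 0.77. Below is a minimal sketch of how such agreement metrics are typically computed; the scores and scale are illustrative placeholders, not data from the paper.

```python
# Illustrative sketch only: compares a model's predicted recognition performance
# against observed performance using MSE and Pearson correlation, the two metrics
# cited in the abstract. The values below are made up for demonstration.
import numpy as np
from scipy.stats import pearsonr

predicted = np.array([82.0, 64.5, 71.3, 90.1, 55.7])  # model predictions (assumed scale)
measured = np.array([78.4, 60.2, 75.9, 88.0, 49.3])   # recognition scores actually observed

mse = np.mean((predicted - measured) ** 2)   # mean square error
corr, _ = pearsonr(predicted, measured)      # Pearson correlation coefficient

print(f"MSE: {mse:.1f}, correlation: {corr:.2f}")
```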

32 pages, 24400 KiB  
Article
Objective Video Quality Assessment and Ground Truth Coordinates for Automatic License Plate Recognition
by Mikołaj Leszczuk, Lucjan Janowski, Jakub Nawała, Jingwen Zhu, Yuding Wang and Atanas Boev
Electronics 2023, 12(23), 4721; https://doi.org/10.3390/electronics12234721 - 21 Nov 2023
Cited by 3 | Viewed by 2651
Abstract
In the realm of modern video processing systems, traditional metrics such as the Peak Signal-to-Noise Ratio and Structural Similarity are often insufficient for evaluating videos intended for recognition tasks, like object or license plate recognition. Recognizing the need for specialized assessment in this domain, this study introduces a novel approach tailored to Automatic License Plate Recognition (ALPR). We developed a robust evaluation framework using a dataset with ground truth coordinates for ALPR. This dataset includes video frames captured under various conditions, including occlusions, to facilitate comprehensive model training, testing, and validation. Our methodology simulates quality degradation using a digital camera image acquisition model, representing how luminous flux is transformed into digital images. The model’s performance was evaluated using Video Quality Indicators within an OpenALPR library context. Our findings show that the model achieves a high F-measure score of 0.777, reflecting its effectiveness in assessing video quality for recognition tasks. The proposed model presents a promising avenue for accurate video quality assessment in ALPR tasks, outperforming traditional metrics in typical recognition application scenarios. This underscores the potential of the methodology for broader adoption in video quality analysis for recognition purposes.
(This article belongs to the Special Issue Advanced Technologies for Image/Video Quality Assessment)
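The reported F-measure of 0.777 is the harmonic mean of precision and recall over plate recognitions compared against the ground-truth coordinates. A short sketch of that calculation under made-up counts (the paper itself obtains detections from the OpenALPR library):

```python
# Illustrative sketch only: F-measure from detection counts. The counts below are
# invented for demonstration and are not the paper's results.
def f_measure(true_positives: int, false_positives: int, false_negatives: int) -> float:
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

print(f"F-measure: {f_measure(90, 25, 35):.3f}")  # -> 0.750 with these example counts
```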

17 pages, 26663 KiB  
Article
“In the Wild” Video Content as a Special Case of User Generated Content and a System for Its Recognition
by Mikołaj Leszczuk, Marek Kobosko, Jakub Nawała, Filip Korus and Michał Grega
Sensors 2023, 23(4), 1769; https://doi.org/10.3390/s23041769 - 4 Feb 2023
Cited by 2 | Viewed by 2393
Abstract
In the five years between 2017 and 2022, IP video traffic tripled, according to Cisco. User-Generated Content (UGC) accounts for much of this IP video traffic. The development of widely accessible knowledge and affordable equipment makes it possible to produce UGC of a quality that is practically indistinguishable from professional content, although at the beginning of UGC creation, this content was frequently characterized by amateur acquisition conditions and unprofessional processing. In this research, we focus only on UGC whose quality is clearly different from that of professional content. For the purposes of this paper, “in the wild” content is treated as a particular case of the more general idea of UGC. Studies on UGC recognition are scarce, and according to the literature, there are currently no operational algorithms that distinguish UGC from other content. In this study, we demonstrate that the XGBoost (Extreme Gradient Boosting) machine learning algorithm can be used to develop a novel objective “in the wild” video content recognition model. The final model is trained and tested using video sequence databases with professional content and “in the wild” content, and achieves an accuracy of 0.916. Due to the comparatively high accuracy of the model, a free implementation is made accessible to the research community as an easy-to-use Python package installable with pip.
(This article belongs to the Special Issue Nature-Inspired Algorithms for Sensor Networks and Image Processing)
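As a rough illustration of the approach described in the abstract, the sketch below trains a binary XGBoost classifier to separate “in the wild” from professional content; the features, data, and hyperparameters are placeholders, and the authors' actual model is the one distributed via their pip-installable package.

```python
# Illustrative sketch only: a binary XGBoost classifier in the spirit of the model
# described in the abstract. Features and labels are random placeholders.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # placeholder per-video features (e.g., NR quality indicators)
y = rng.integers(0, 2, size=500)  # 1 = "in the wild", 0 = professional (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```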

23 pages, 12745 KiB  
Article
Objective Video Quality Assessment Method for Face Recognition Tasks
by Mikołaj Leszczuk, Lucjan Janowski, Jakub Nawała and Atanas Boev
Electronics 2022, 11(8), 1167; https://doi.org/10.3390/electronics11081167 - 7 Apr 2022
Cited by 5 | Viewed by 2668
Abstract
Nowadays, there are many metrics for overall Quality of Experience (QoE), both Full Reference (FR) ones, such as Peak Signal-to-Noise Ratio (PSNR) or Structural Similarity (SSIM), and No Reference (NR) ones, such as Video Quality Indicators (VQI), which are successfully used in video processing systems to evaluate videos whose quality is degraded by different processing scenarios. However, they are not suitable for video sequences used for recognition tasks (Target Recognition Videos, TRV). Therefore, correctly estimating the performance of the video processing pipeline in both manual and Computer Vision (CV) recognition tasks is still a major research challenge, and objective methods to evaluate video quality for recognition tasks are needed. In response to this need, we show in this paper that it is possible to develop a new objective model for evaluating video quality for face recognition tasks. The model is trained, tested and validated on a representative set of image sequences. The set of degradation scenarios is based on the model of a digital camera and how the luminous flux reflected from the scene eventually becomes a digital image. The resulting degraded images are evaluated using a CV library for face recognition as well as VQI. The measured accuracy of the model, expressed as the F-measure, is 0.87.
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
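As a rough sketch of the evaluation loop described in the abstract, the code below runs a face detector on a single (possibly degraded) frame; the abstract does not name the CV library used, so OpenCV's Haar cascade and the frame path are illustrative stand-ins.

```python
# Illustrative sketch only: detect faces in one frame of a degraded sequence.
# OpenCV's bundled Haar cascade is a stand-in for the unnamed CV library, and
# "frame.png" is a placeholder path.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("frame.png")
if frame is None:
    raise FileNotFoundError("frame.png is a placeholder; point this at a real frame")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Comparing detections against ground truth over many frames yields the F-measure.
print(f"Detected {len(faces)} face(s)")
```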