Article

DLNR-SIQA: Deep Learning-Based No-Reference Stitched Image Quality Assessment

Mixed Reality and Interaction Lab, Department of Software, Sejong University, Seoul 143-747, Korea
* Author to whom correspondence should be addressed.
Sensors 2020, 20(22), 6457; https://doi.org/10.3390/s20226457
Received: 30 September 2020 / Revised: 4 November 2020 / Accepted: 10 November 2020 / Published: 12 November 2020
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)
Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn researchers' attention to different fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based Stitched Image Quality Assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images, and they rely on computationally complex procedures for quality assessment. Motivated by these limitations, in this paper we propose a novel three-fold Deep Learning-based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (Region-based Convolutional Neural Network) on manually annotated, stitching-error-based cropped images from two publicly available datasets. In the second fold, we segment and localize the various stitching errors present in the immersive content. Finally, based on the distorted regions present in the immersive content, we measure the overall quality of the stitched images. Unlike existing methods that only measure image quality using deep features, the proposed method can efficiently segment and localize stitching errors and estimate image quality by investigating the segmented regions.
We also carried out extensive qualitative and quantitative comparisons with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on two publicly available datasets, where the proposed system outperformed existing state-of-the-art techniques.
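The final fold described above scores a stitched image from the distorted regions that the segmentation stage identifies. The abstract does not give the paper's exact scoring formula, so the following is only a minimal illustrative sketch, assuming a simple mapping in which quality decreases with the fraction of the image covered by the union of the segmented error masks; the function name and the linear mapping are hypothetical, not taken from the paper.

```python
import numpy as np

def stitched_image_quality(error_masks, image_shape):
    """Illustrative quality score in [0, 1] for a stitched image,
    computed from binary masks of segmented stitching-error regions.

    error_masks: list of HxW boolean arrays (one per detected error region)
    image_shape: (H, W) of the stitched image
    """
    h, w = image_shape
    if not error_masks:
        return 1.0  # no detected distortion -> highest quality
    # Union of all distorted regions, so overlapping masks are not double-counted
    union = np.zeros((h, w), dtype=bool)
    for m in error_masks:
        union |= m
    distorted_fraction = union.sum() / (h * w)
    # Hypothetical mapping: quality falls linearly with distorted area
    return float(1.0 - distorted_fraction)
```

For example, a single mask covering half the image yields a score of 0.5 under this assumed linear mapping; the paper's actual scoring may weight error types or regions differently.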
Keywords: computer vision; deep learning; image quality assessment; image segmentation; immersive contents
MDPI and ACS Style

Ullah, H.; Irfan, M.; Han, K.; Lee, J.W. DLNR-SIQA: Deep Learning-Based No-Reference Stitched Image Quality Assessment. Sensors 2020, 20, 6457. https://doi.org/10.3390/s20226457

AMA Style

Ullah H, Irfan M, Han K, Lee JW. DLNR-SIQA: Deep Learning-Based No-Reference Stitched Image Quality Assessment. Sensors. 2020; 20(22):6457. https://doi.org/10.3390/s20226457

Chicago/Turabian Style

Ullah, Hayat, Muhammad Irfan, Kyungjin Han, and Jong Weon Lee. 2020. "DLNR-SIQA: Deep Learning-Based No-Reference Stitched Image Quality Assessment" Sensors 20, no. 22: 6457. https://doi.org/10.3390/s20226457

