Image and Video Quality Assessment

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 14168

Special Issue Editors


Dr. Seyed Ali Amirshahi
Guest Editor
Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
Interests: image quality assessment; video quality assessment; computational aesthetics; perception

Dr. Mekides Assefa Abebe
Guest Editor
Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
Interests: perceptual imaging; high dynamic range imaging; computer vision; image and video quality

Special Issue Information

Dear Colleagues,

With the advancement of technology, the need to automatically evaluate the quality of images and videos has become an important part of most image processing and computer vision applications. While current objective image and video quality metrics show a high correlation with subjective scores, there is still considerable room for improvement. This includes, but is not limited to, differences in metric performance across datasets and distortions, the handling of multiple distortions, run-time performance, and memory requirements.

In this Special Issue, we aim to address these issues. We encourage contributions presenting methods, techniques, tools, and ideas on how the state of the art could be advanced. We seek original contributions in image and video quality assessment, including but not limited to the following topics:

  • Large scale datasets for image and video quality assessment;
  • Novel methods for subjective evaluations (in particular crowdsourcing);
  • Objective image and video quality assessment;
  • Image and video quality enhancement;
  • Human perception;
  • Aesthetic quality assessment of images and videos;
  • Image and video quality assessment for different environments, including but not limited to printing, virtual reality, high dynamic range, displays, video conferencing, etc.;
  • Medical image and video quality assessment.

Dr. Seyed Ali Amirshahi
Dr. Mekides Assefa Abebe
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image quality
  • video quality
  • quality assessment
  • human visual system
  • quality enhancement
  • subjective datasets

Published Papers (4 papers)


Research

18 pages, 10158 KiB  
Article
SEDIQA: Sound Emitting Document Image Quality Assessment in a Reading Aid for the Visually Impaired
by Jane Courtney
J. Imaging 2021, 7(9), 168; https://doi.org/10.3390/jimaging7090168 - 30 Aug 2021
Cited by 4 | Viewed by 2201
Abstract
For visually impaired people (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in optical character recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera to text which can then be read aloud. However, all of these reading aids suffer from a key issue—the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function—no small task for VIPs. In this work, a sound-emitting document image quality assessment metric (SEDIQA) is proposed which allows the user to hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also includes testing of OCR performance against image degradations, to identify the most significant contributors to accuracy reduction. The proposed no-reference image quality assessor (NR-IQA) is validated alongside established NR-IQAs and this work includes insights into the performance of these NR-IQAs on document images. SEDIQA is found to consistently select the best image for OCR accuracy. The full system includes a document image enhancement technique which introduces improvements in OCR accuracy with an average increase of 22% and a maximum increase of 68%. Full article
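
As a rough illustration of the best-image-selection idea described above, the sketch below scores candidate captures with a generic no-reference sharpness proxy (variance of the Laplacian) and keeps the highest-scoring one for OCR. This is only a minimal sketch: the sharpness proxy and the file names are assumptions for illustration, not the SEDIQA metric or the sound-emitting system from the paper.

```python
# Minimal sketch: pick the sharpest of several document captures before OCR.
# Uses variance of the Laplacian as a generic sharpness proxy, NOT the SEDIQA
# metric described in the paper; file names are hypothetical.
import cv2  # pip install opencv-python


def sharpness_score(image_path: str) -> float:
    """Return a simple no-reference focus score for a document image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # More high-frequency detail (crisper text edges) -> larger Laplacian variance.
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def pick_best_capture(image_paths: list[str]) -> str:
    """Select the capture most likely to give good OCR accuracy."""
    return max(image_paths, key=sharpness_score)


if __name__ == "__main__":
    candidates = ["capture_1.png", "capture_2.png", "capture_3.png"]  # hypothetical files
    print("Best capture for OCR:", pick_best_capture(candidates))
```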
(This article belongs to the Special Issue Image and Video Quality Assessment)

13 pages, 16018 KiB  
Article
Evaluation of 360° Image Projection Formats; Comparing Format Conversion Distortion Using Objective Quality Metrics
by Ikram Hussain and Oh-Jin Kwon
J. Imaging 2021, 7(8), 137; https://doi.org/10.3390/jimaging7080137 - 05 Aug 2021
Cited by 1 | Viewed by 2570
Abstract
Currently available 360° cameras normally capture several images covering a scene in all directions around a shooting point. The captured images are spherical in nature and are mapped to a two-dimensional plane using various projection methods. Many projection formats have been proposed for 360° videos. However, standards for the quality assessment of 360° images are limited. In this paper, various projection formats are compared to explore the problem of distortion caused by the mapping operation, which has been a considerable challenge in recent approaches. The performances of various projection formats, including equi-rectangular, equal-area, cylindrical, cube-map, and their modified versions, are evaluated based on the conversion causing the least amount of distortion when the format is changed. The evaluation is conducted using sample images selected based on several attributes that determine perceptual image quality. The evaluation results based on the objective quality metrics show that the hybrid equi-angular cube-map format is the most appropriate common format for 360° image services where format conversions are frequently demanded. This study presents a ranking of these formats that is useful for identifying the best image format for a future standard. Full article
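
To make the round-trip evaluation protocol more concrete, the sketch below shows the standard equirectangular pixel-to-sphere mapping and a PSNR measurement between a source image and its format-converted-and-back version. It is a minimal sketch under the assumption that a converter such as the placeholder erp_to_cubemap_and_back exists; the actual converters and metrics used in the paper are not reproduced here.

```python
# Minimal sketch: measure round-trip projection-conversion distortion with PSNR.
# The converter under test is a hypothetical placeholder; only the ERP mapping
# and the objective metric are shown.
import numpy as np


def equirect_pixel_to_sphere(u: int, v: int, width: int, height: int):
    """Map an equirectangular (ERP) pixel centre to (longitude, latitude) in radians."""
    lon = ((u + 0.5) / width - 0.5) * 2.0 * np.pi   # range [-pi, pi]
    lat = (0.5 - (v + 0.5) / height) * np.pi        # range [-pi/2, pi/2]
    return lon, lat


def psnr(original: np.ndarray, converted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a source image and its round-trip conversion."""
    mse = np.mean((original.astype(np.float64) - converted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


if __name__ == "__main__":
    erp = np.random.randint(0, 256, (512, 1024, 3), dtype=np.uint8)  # stand-in ERP image
    # round_trip = erp_to_cubemap_and_back(erp)  # hypothetical converter under test
    round_trip = erp  # placeholder so the sketch runs end to end
    print("Centre pixel direction:", equirect_pixel_to_sphere(512, 256, 1024, 512))
    print(f"Round-trip PSNR: {psnr(erp, round_trip):.2f} dB")
```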
(This article belongs to the Special Issue Image and Video Quality Assessment)

15 pages, 673 KiB  
Article
No-Reference Image Quality Assessment with Multi-Scale Orderless Pooling of Deep Features
by Domonkos Varga
J. Imaging 2021, 7(7), 112; https://doi.org/10.3390/jimaging7070112 - 10 Jul 2021
Cited by 4 | Viewed by 3184
Abstract
The goal of no-reference image quality assessment (NR-IQA) is to evaluate the perceptual quality of digital images without using their distortion-free, pristine counterparts. NR-IQA is an important part of multimedia signal processing, since digital images can undergo a wide variety of distortions during storage, compression, and transmission. In this paper, we propose a novel architecture that extracts deep features from the input image at multiple scales to improve the effectiveness of feature extraction for NR-IQA using convolutional neural networks. Specifically, the proposed method extracts deep activations for local patches at multiple scales and maps them onto perceptual quality scores with the help of trained Gaussian process regressors. Extensive experiments demonstrate that the introduced algorithm performs favorably against state-of-the-art methods on three large benchmark datasets with authentic distortions (LIVE In the Wild, KonIQ-10k, and SPAQ). Full article
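
The general recipe sketched in the abstract, deep features pooled at several input scales and regressed to quality scores with a Gaussian process, might look roughly like the following. The backbone (ResNet-18), the scale set, the RBF kernel, and the toy training data are all assumptions for illustration; this is not the paper's architecture or training procedure.

```python
# Minimal sketch: multi-scale deep features from a pretrained CNN, mapped to
# quality scores with a Gaussian process regressor. Backbone, scales, kernel,
# and the toy data are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Pretrained backbone with the classifier head removed (global-pooled 512-d output).
backbone = torch.nn.Sequential(*list(resnet18(weights=ResNet18_Weights.DEFAULT).children())[:-1]).eval()


@torch.no_grad()
def multiscale_features(image: torch.Tensor, scales=(224, 384, 512)) -> torch.Tensor:
    """Concatenate globally pooled deep features extracted at several input scales."""
    feats = []
    for s in scales:
        # ImageNet mean/std normalization omitted for brevity.
        resized = F.interpolate(image.unsqueeze(0), size=(s, s), mode="bilinear", align_corners=False)
        feats.append(backbone(resized).flatten())  # 512-d vector per scale
    return torch.cat(feats)


if __name__ == "__main__":
    # Hypothetical training set: images as 3xHxW tensors with subjective MOS labels.
    images = [torch.rand(3, 500, 500) for _ in range(8)]
    mos = [3.1, 4.2, 2.5, 3.8, 1.9, 4.6, 2.2, 3.3]
    X = torch.stack([multiscale_features(img) for img in images]).numpy()
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), normalize_y=True).fit(X, mos)
    print("Predicted quality for the first image:", gpr.predict(X[:1]))
```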
(This article belongs to the Special Issue Image and Video Quality Assessment)

21 pages, 2847 KiB  
Article
No-Reference Image Quality Assessment with Global Statistical Features
by Domonkos Varga
J. Imaging 2021, 7(2), 29; https://doi.org/10.3390/jimaging7020029 - 05 Feb 2021
Cited by 22 | Viewed by 5006
Abstract
The perceptual quality of digital images is often deteriorated during storage, compression, and transmission. The most reliable way of assessing image quality is to ask people to provide their opinions on a number of test images. However, this is an expensive and time-consuming process which cannot be applied in real-time systems. In this study, a novel no-reference image quality assessment method is proposed. The introduced method uses a set of novel quality-aware features which globally characterizes the statistics of a given test image, such as extended local fractal dimension distribution feature, extended first digit distribution features using different domains, Bilaplacian features, image moments, and a wide variety of perceptual features. Experimental results are demonstrated on five publicly available benchmark image quality assessment databases: CSIQ, MDID, KADID-10k, LIVE In the Wild, and KonIQ-10k. Full article
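
For intuition, the sketch below computes simplified versions of two feature families named in the abstract, a first-digit (Benford-style) distribution of DCT coefficients and global intensity moments, and concatenates them into a feature vector. The exact feature definitions and the suggested downstream regressor are assumptions; the paper's full feature set and pipeline are not reproduced.

```python
# Minimal sketch: two simplified global statistical feature families (first-digit
# distribution of DCT coefficients and intensity moments). Feature definitions
# and the suggested regressor are illustrative assumptions.
import numpy as np
from scipy.fft import dctn
from scipy.stats import skew, kurtosis


def first_digit_distribution(gray: np.ndarray) -> np.ndarray:
    """Distribution of leading digits (1-9) of the image's 2-D DCT coefficients."""
    coeffs = np.abs(dctn(gray.astype(np.float64), norm="ortho")).ravel()
    coeffs = coeffs[coeffs > 1e-8]  # drop (near-)zero coefficients
    digits = (coeffs / 10.0 ** np.floor(np.log10(coeffs))).astype(int)
    hist = np.bincount(digits, minlength=10)[1:10]
    return hist / hist.sum()


def global_moment_features(gray: np.ndarray) -> np.ndarray:
    """Mean, spread, skewness, and kurtosis of the intensity distribution."""
    flat = gray.astype(np.float64).ravel()
    return np.array([flat.mean(), flat.std(), skew(flat), kurtosis(flat)])


if __name__ == "__main__":
    gray = np.random.rand(256, 256) * 255.0  # stand-in test image
    features = np.concatenate([first_digit_distribution(gray), global_moment_features(gray)])
    print("Global feature vector:", np.round(features, 3))
    # In a complete NR-IQA model, such features would be regressed onto subjective
    # scores (e.g., with sklearn.svm.SVR) using a benchmark database.
```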
(This article belongs to the Special Issue Image and Video Quality Assessment)
