Sensors
  • Article
  • Open Access

23 January 2023

Subjective Assessment of Objective Image Quality Metrics Range Guaranteeing Visually Lossless Compression

Department of Electronics Engineering, Sejong University, Seoul 05006, Republic of Korea
Author to whom correspondence should be addressed.
This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents and to the Section Intelligent Sensors.

Abstract

The usage of media such as images and videos has increased extensively in recent years. It has become impractical to store images and videos acquired by camera sensors in their raw form due to their huge storage size. Generally, image data are compressed with a compression algorithm and then stored or transmitted to another platform. Thus, image compression helps to reduce the storage size and transmission cost of images and videos. However, image compression might cause visual artifacts, depending on the compression level. In this regard, performance evaluation of compression algorithms is an essential task for reconstructing images with visually or near-visually lossless quality in the case of lossy compression. The performance of compression algorithms is assessed by both subjective and objective image quality assessment (IQA) methodologies. In this paper, subjective and objective IQA methods are integrated to evaluate the range of image quality metric (IQM) values that guarantees visually or near-visually lossless compression by the JPEG 1 standard (ISO/IEC 10918). A novel “Flicker Test Software” is developed for conducting the proposed subjective and objective evaluation study. In the flicker test, the selected test images are subjectively analyzed by subjects at different compression levels. The IQMs are calculated at the previous compression level, at which the images were still visually lossless for each subject. The results analysis shows that the objective IQMs whose values are most closely packed, i.e., have the smallest standard deviation while guaranteeing visually lossless JPEG 1 compression, are the feature similarity index measure (FSIM), the multiscale structural similarity index measure (MS-SSIM), and the information content weighted SSIM (IW-SSIM), with average values of 0.9997, 0.9970, and 0.9970, respectively.

1. Introduction

Nowadays, it is common practice to collect and share a great number of pictures due to advancements in image-acquiring devices, such as digital cameras and smartphones with high-definition image-capturing capabilities, and the popularity of social media platforms []. Therefore, there is a constant need for efficient image compression techniques to reduce the storage size and transmission cost of this huge amount of image data []. On a daily basis, vision sensors capture billions of images, which are compressed with an image codec before they are stored or transferred. In fact, image compression is a fundamental tool that makes it possible to store and share an extensive amount of digital data, such as images and videos []. Although image compression is undoubtedly useful, lossy compression standards may introduce distortions in reconstructed images that the human eye can detect when comparing them to the originals []. The severity of this alteration in image quality depends on the type of media, the level to which the image has been compressed, and other display and viewing conditions []. Image compression techniques cause different types of visual abnormalities in images, such as blocking artifacts, color shift, blurring, and ringing artifacts, which degrade image quality []. Therefore, when introducing a new image compression technique, quality assessment techniques should be used to evaluate its performance and to gauge the severity of the visual abnormalities produced [].
To assess the visual quality of compressed images, both objective and subjective methods of image quality evaluation are used []. These two types of methods are described in many studies and are used in the performance evaluation of both traditional and learning-based image codecs []. In the objective case, image quality is assessed by calculating IQMs that quantify it numerically. Objective metrics are mathematical models intended to predict image quality automatically and reproducibly, ideally matching the judgments of human subjects. Several IQMs are defined based on the availability of reference images []. In the subjective case, a group of subjects observes image quality and reports opinions based on the perceived quality []. To conduct a standardized subjective test of images, several recommendations have been proposed that, when followed, deliver reliable results []. Objective methodologies of image quality evaluation are quick and economical, while subjective methods are time-consuming and expensive. Further, subjective methods depend on the physical condition and mood of the viewers, which makes them impractical in many real-life applications. However, subjective evaluation is considered more reliable and robust because it relies directly on the opinions of human subjects, who are the ultimate users of digital media applications [].
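As a concrete illustration of a full-reference objective IQM, the widely used PSNR can be computed directly from pixel differences. The sketch below is a minimal pure-Python version with made-up pixel values, not the implementation used in this study:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: lossless reconstruction
    return 10.0 * math.log10(max_val ** 2 / mse)

# A tiny 4-pixel "image" and a slightly distorted copy (illustrative values).
ref = [52, 55, 61, 59]
dist = [52, 54, 61, 60]
print(round(psnr(ref, dist), 2))  # → 51.14
```

Higher PSNR means the reconstruction is closer to the original; metrics such as FSIM and MS-SSIM instead model structural and perceptual similarity rather than raw pixel error.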
In the current era, the availability of advanced image-capturing and display devices has increased researchers’ interest in designing visually lossless image compression techniques []. Up to a specific compression level, human eyes cannot perceive the very tiny artifacts that appear in reconstructed images. Therefore, trustworthy methodologies for standardizing the evaluation of visually or near-visually lossless compression have been released by the Joint Photographic Experts Group (JPEG) []. Aiming to create a solution for visually lossless compression assessment, this paper integrates the aforementioned subjective and objective IQA methods to evaluate the objective metrics that guarantee an image’s visually lossless compression with the JPEG 1 standard. For the subjective case, a two-alternative forced choice (2AFC)-based strategy is adopted, in which the subject has to determine the visual difference between two images. Human subjects analyze the two test images through a 2AFC-based flicker test at different compression levels. The subject lowers the compression quality until reaching a just-noticeable-difference level, at which flickering, i.e., a visual difference, between the original and the corresponding compressed image is observed. The IQMs are then determined at the previous compression level, at which the images were still visually lossless for that subject. To perform this subjective flicker test and calculate the objective metrics, a novel platform, “Flicker Test Software”, was developed that compresses the images using the JPEG 1 standard at different compression levels, runs the flicker test, and then calculates the objective IQMs. Furthermore, the results of the objective IQMs that best characterize visually lossless compression are discussed.
The contributions of this work are summarized in the following points.
  • This study performs a subjective quality assessment of JPEG 1 standard compressed images and evaluates the range of objective IQM values that guarantees visually or near-visually lossless compression of the images.
  • A unique platform, “Flicker Test Software”, is designed that compresses the images using the widely deployed JPEG 1 standard at different compression levels, performs a flicker test for the subjective assessment of visually or near-visually lossless compressed images, and evaluates the objective IQMs.
  • A subjective test campaign based on the flicker test is conducted with 25 participants, each individually assessing ten raw images at different compression quality levels. The objective metrics for the test images are determined at the visually or near-visually lossless compression level observed by each subject.
  • The objective IQMs FSIM, MS-SSIM, and IW-SSIM show the smallest standard deviations, with a close range of values, and thus best guarantee visually or near-visually lossless compression of the images.
The rest of the paper is organized as follows: Section 2 details the related works and discusses previous IQA methods. Section 3 describes the implementation of the proposed method. In Section 4, the experiments performed in this study are explained and the results are discussed in detail. Section 5 concludes this study and presents future directions.

3. Proposed Methodology

This section of the paper provides a complete overview of the proposed method for the subjective and objective IQMs evaluation of visually lossless image compression. For the subjective evaluation, the flicker test procedures proposed by the JPEG committee for visually or near-visually lossless compressed images are incorporated, and a novel 2AFC-based flicker test is presented [].
In this proposed framework, the novel “Flicker Test Software” is developed using MATLAB (R2022b) and Unity3D (2021.3.3f1) to conduct the subjective test and calculate the objective metrics for the evaluation of visually lossless compressed images. The subjective test approach is related to the psychophysical adaptive staircase method, which is commonly used in just-noticeable-difference experiments []. In this method, the observer starts from a particular threshold and watches for a change in the stimulus. The threshold intensity is changed at each step, and the observer makes a decision based on the perceived difference. This process continues until the difference becomes visible to the observer and the decision changes.
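The descending staircase described above can be sketched as follows. Here `respond` is a hypothetical stand-in for the subject's 2AFC answer (True when flickering is noticed at a given quality factor), and the step size of one q-value unit is an assumption for illustration:

```python
def staircase(respond, q_start=100, step=1):
    """Descend the quality factor until the observer first reports a
    visible difference; return the last visually lossless level.

    `respond(q)` is a hypothetical callback for the subject's 2AFC answer:
    True when flickering is noticed at quality factor q.
    """
    q = q_start
    while q > 0 and not respond(q):
        q -= step
    # Report the previous level, where no flicker was seen
    # (capped at q_start if flicker appears at the very first level).
    return min(q + step, q_start)

# Simulated observer who notices artifacts below quality factor 83.
last_lossless = staircase(lambda q: q < 83)
print(last_lossless)  # → 83
```

In the actual software the subject drives this loop manually, and the IQMs are computed at the returned q-value, where the image was still visually lossless.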
In the proposed method, the subject compresses the image using the JPEG 1 standard with the highest quality factor, subjectively compares the reconstructed and original images, and looks for a visual difference using the 2AFC-based flicker test. The subject decreases the quality factor step by step until he or she observes a visual difference between the original image and its reconstruction.
For the compression task, one of the most popular and widely used standards, JPEG 1, is employed []. JPEG 1 has become one of the most successful compression standards in multimedia technologies worldwide. It is used for compression in diverse applications, such as digital photography, medical imaging, web-based applications, and multimedia storage. For performing JPEG 1 compression, the open-source “libjpeg-turbo” JPEG image codec is utilized, which can be accessed via the JPEG official website []. The overall framework of the proposed method and its workflow is presented in Figure 1. The “Flicker Test Software” is described further in the following section.
Figure 1. The overall framework of the proposed flicker test for performing subjective assessment and IQM evaluation at the visually lossless compression level. The subject enters his/her information and starts the test. The program selects an image from the test images directory, performs JPEG 1 compression at a quality value (q-value) of 100, and then displays both the original and reconstructed images in the Unity flicker test, where the subject looks for visual artifacts by toggling between the images. When the subject does not observe any flickering, the image is reconstructed at a lower q-value. When the subject does observe a difference, the IQMs are calculated at the previous q-value (the visually lossless stage), and the test moves to the next image.

Flicker Test Software

Visually or near-visually lossless compressed images contain very tiny artifacts that cannot be observed by human eyes under normal conditions. To subjectively detect these small changes, the flicker test is a promising solution and has been used by researchers for IQA []. The developed “Flicker Test Software” has two parts: first, the selected image is encoded and decoded with the JPEG 1 compression standard using “libjpeg-turbo” invoked from MATLAB; second, the reconstructed image and the corresponding original image are displayed in a flicker viewer designed in Unity3D for subjective evaluation. Figure 2 shows the interface of the developed framework.
Figure 2. The user interface of the developed “Flicker Test Software”.
To conduct the subjective test using the “Flicker Test Software”, the subject enters his or her details (name, age, and gender) and starts the test. At this step, the current image in the hierarchy is reconstructed with the JPEG 1 standard at the maximum q-value. The reconstructed JPEG 1 image and its corresponding original are displayed at the same coordinates in the image viewer designed in Unity3D. The subject switches between the two images with a toggle button and watches for flickering while the images alternate in the same position. If no flickering is noticed, the subject lowers the quality level and inspects the newly reconstructed image again. Finally, when the observer notices flickering between the original and reconstructed images at a particular compression level, the objective metrics are calculated at the previous q-value, at which the images were still visually lossless for the subject. The subject then moves to the next image and repeats the subjective test for all the assigned test images.

4. Experimentation and Results

This section briefly describes the experimental setup of the proposed method, the selected test images, and the evaluation of the IQMs guaranteeing the visually or near-visually lossless compression of images.

4.1. Experimental Setup and Display Configuration

The recommendations of ITU-R BT.500-11 regarding system and display configuration for subjective assessment are followed []. The tests are conducted in a laboratory under controlled lighting conditions. The system is connected to a 32-inch BenQ PD3200U monitor with 4K ultra-high-definition resolution. The images are displayed at their actual size to avoid distortion introduced by the display device. While conducting the test, the subjects are allowed to sit at their preferred comfortable viewing distance relative to the display size.

4.1.1. Test Subjects

For the subjective assessment, twenty-five subjects participated in the subjective flicker test. The subjects were mostly research students who were accustomed to multimedia applications and had knowledge of image quality and artifacts. Before starting the test, each participant was briefed on the subjective test and the software in order to become familiar with the procedure, and then performed a demo test. The subjects were instructed to perform the test in a relaxed state to obtain authentic results. The subjects were not bound to any time limit; however, the time taken by each subject to perform a single test was recorded. At the end of the test, a gift was provided to every participant.

4.1.2. Test Images

For the test images, ten raw images were used in the subjective test. These images were selected from the well-known JPEG-AI test dataset, which is commonly used for assessing image compression frameworks []. They provide a balanced set of types and categories in terms of image content and spatial resolution. Figure 3 shows the ten selected images used for the IQA test.
Figure 3. The thumbnails and resolutions of selected raw images from the JPEG-AI test dataset for the subjective test.
These sample images possess a variety of image quality attributes []. The values of the zero crossing (ZC), colorfulness, and sum modified Laplacian (SML) attributes of the selected images are shown in Figure 4. These graphs show a variety of metric values, which confirms the diversity of the sample images.
Figure 4. Image quality attributes ZC, SML, and Colorfulness of the test images.

4.1.3. Objective Image Quality Metrics

Objective IQMs are calculated for the compressed images at the visually lossless point observed by each subject. In this study, we used the well-known IQMs that the JPEG committee used for assessing learning-based image codecs during the development of the learning-based image coding standard []. Several objective IQMs were evaluated by JPEG members to find the best-performing metrics in the compression domain based on human perception. The suggested IQMs for evaluating compression methods are FSIM, MS-SSIM, IW-SSIM, VIF, the normalized Laplacian pyramid distance (NLPD), PSNR-HVS, VMAF, and PSNR. These IQMs, along with the specified color spaces and channels, are given in Table 1.
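To give a flavor of the structural-similarity family, the sketch below computes a single-window (global) SSIM in pure Python. This is a deliberately simplified form: the MS-SSIM and IW-SSIM metrics used in this study add local windowing, multiple scales, and information-content weighting on top of this core formula. The constants follow the common choices C1 = (0.01L)^2 and C2 = (0.03L)^2, and the signal values are illustrative:

```python
def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM over two equal-length pixel sequences
    (a simplified global form; production MS-SSIM/IW-SSIM implementations
    use local windows and multiple scales)."""
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                      # means
    vx = sum((a - mx) ** 2 for a in x) / n               # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

ref = [52, 55, 61, 59, 70, 61, 76, 61]
dist = [52, 54, 61, 60, 68, 63, 76, 60]
print(ssim_global(ref, ref))   # identical signals score 1.0
print(ssim_global(ref, dist))  # distortion pushes the score below 1.0
```

Scores near 1.0 indicate that luminance, contrast, and structure are nearly preserved, which is why the averages reported later for MS-SSIM and IW-SSIM (0.9970) sit so close to the upper bound.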
Table 1. Objective IQMs and the specified color space and channel used for metric calculation.

4.2. Results and Discussion

In this section, the resultant data from the subjective and objective assessments are analyzed. The “Flicker Test Software” stored the results for each subject while the flicker test was conducted. These data include information about the subject and image, the test duration, and the calculated objective quality metrics for each image at the visually lossless compression level. Figure 5 shows the time taken by each subject to perform a complete single subjective test on the selected images. The average time for conducting a single subjective test on the selected images in the proposed study is fifty-three minutes.
Figure 5. The time cost for each subject conducting the subjective test for the images.
Table 2 shows the noted q-value and bits per pixel (bpp) recorded as the results of the subjective flicker test. These values are at the point where the images are visually lossless for the subjects while conducting the subjective flicker test. Test images with corresponding minimum q-value (Min q-value), maximum q-value (Max q-value), and the average of the q-value (Avg q-value) recorded while conducting the subjective flicker test by 25 subjects are presented. Similarly, the Min bpp, Max bpp, and Avg bpp are also presented in the table.
Table 2. Test images with corresponding q-values and bpp values recorded by the flicker test in the proposed subjective test at the point of visually lossless compression.
In the overall subjective flicker test, the minimum q-value noted for compression is 65 and the maximum is 100. This is because, in a few images, high-frequency color regions are distorted even at the first compression level and are easily perceived by the human eye. In the case of bpp, the overall minimum value noted across the flicker test is 0.3525 and the maximum is 8.3588. The average bpp value for the visually lossless compressed images observed by the subjects across the flicker test is 1.9502. These results confirm that the ranges of the q-value and bpp are too wide to reliably indicate the visually lossless compression level of images.
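For reference, the bpp figures above are simply the compressed bit count divided by the pixel count. A minimal sketch, using made-up file-size and resolution values:

```python
def bits_per_pixel(compressed_bytes, width, height):
    """bpp = total compressed bits divided by the number of pixels."""
    return compressed_bytes * 8 / (width * height)

# e.g. a hypothetical 1,000,000-byte JPEG of a 1920x1080 image
print(round(bits_per_pixel(1_000_000, 1920, 1080), 4))  # → 3.858
```

The wide observed spread (0.3525 to 8.3588 bpp) reflects how strongly image content drives the compressed size at a fixed perceptual quality.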
The objective quality metrics calculated for the visually lossless compressed images are presented in Table 3. These metrics are calculated in the prescribed channel and color spaces as mentioned in Table 1. The table presents the corresponding average values of the FSIM, MS-SSIM, IW-SSIM, VIF, NLPD, PSNR-HVS, VMAF, and PSNR by each subject recorded from the flicker test.
Table 3. The calculated average IQMs for the corresponding images at the visually lossless compression level in the overall subjective test.
The varied nature of the selected images (presented in Figure 4) helps to produce a diverse range of results. The IQMs present a diverse range of values for the corresponding images at the visually lossless compression level. The overall average FSIM value noted is 0.9997, guaranteeing visually lossless compression of the images in the conducted subjective test. Similarly, the overall average MS-SSIM value is 0.9970, the average IW-SSIM value is the same (0.9970), the average VIF value is 0.9930, and the average NLPD value is 0.0542. PSNR-HVS and PSNR show average values of 44.65 and 42.08, respectively. The average VMAF value guaranteeing visual losslessness of the compressed images in the overall flicker test is 94.83.
The objective metrics show different trends for the corresponding images. Figure 6 shows the line trends of the objective IQMs for the corresponding images at the stage that are observed as visually lossless by the subjects during the subjective flicker test.
Figure 6. Representation of the line trends of the IQMs for corresponding test images at the point of visually lossless compressed level observed by subjects.
The overall statistical analysis of the IQM values for the test images is presented in Table 4. It shows the overall minimum (Min value), maximum (Max value), average (Avg value), and standard deviation (Std) for the targeted metrics calculated.
Table 4. Statistical analysis of the IQMs for the corresponding test image guarantees visually lossless compression of the images in the flicker test.
The statistical analysis of the objective metrics reveals that the FSIM values range from a minimum of 0.9985 to a maximum of 1.0000, guaranteeing visually lossless compression of the images. The average FSIM value over the entire subjective flicker test is 0.9997. Consequently, the best metric for guaranteeing the visual losslessness of JPEG 1 compressed images is FSIM, whose values show the smallest standard deviation, 0.0003. The next best metric for predicting the visual losslessness of JPEG 1 compressed images is MS-SSIM, with an overall average of 0.9970 and values ranging from 0.9882 to 0.9998; these values are almost in the same range, with a standard deviation of 0.0025. IW-SSIM, with a standard deviation of 0.0026, is the next best metric guaranteeing visual losslessness of the compressed images; its values range from 0.9877 to 0.9998, with an average of 0.9970 for the particular set of test images. Further, VIF also shows satisfactory results, with a standard deviation of 0.0054 and an overall average of 0.9930; its values range from a minimum of 0.9772 to a maximum of 0.9992. The VMAF values range from a minimum of 90.01 to a maximum of 97.16, with an average of 94.83 and a standard deviation of 1.5799; VMAF can also satisfactorily guarantee the visually lossless compression of the images. NLPD shows an average of 0.0542, values ranging from 0.0169 to 0.1014, and a standard deviation of 0.0209; its performance is weaker, with a wider spread of results compared to the previous metrics. The PSNR-HVS results fall between a minimum of 37.8483 and a maximum of 51.8247, with an average of 44.6545 and a standard deviation of 3.2799. The PSNR results range from 32.9527 to 50.6389, with an average of 42.0773 and a standard deviation of 3.7473.
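The Min/Max/Avg/Std summaries discussed above can be reproduced with the standard library. A small sketch, where the FSIM scores are hypothetical and the sample standard deviation is assumed (the paper does not state which definition it uses):

```python
import statistics

def summarize(values):
    """Min/Max/Avg/Std summary in the style of Table 4.
    Uses the sample standard deviation (an assumption)."""
    return {
        "min": min(values),
        "max": max(values),
        "avg": statistics.mean(values),
        "std": statistics.stdev(values),
    }

# Hypothetical per-subject FSIM scores at the visually lossless level.
fsim = [0.9995, 0.9997, 0.9998, 0.9996, 1.0000]
s = summarize(fsim)
print(s["min"], s["max"])  # → 0.9995 1.0
```

A small standard deviation, as observed for FSIM, MS-SSIM, and IW-SSIM, is precisely what makes a metric usable as a threshold for visually lossless compression.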

5. Conclusions and Future Work

This paper conducted subjective and objective image quality evaluations for the visually lossless assessment of JPEG 1 compressed images. For this purpose, a platform was developed that accomplished the compression task of images at different quality levels and performed the calculation of IQMs. In the case of the subjective test, a unique concept of the flicker test was used in order to observe the flickering in compressed and reference original images. The subjective activity was performed by 25 students on the test images from the JPEG-AI test dataset. Each image was subjectively observed by all the subjects at different compression levels. The IQMs of the images were calculated at the compression level when the compressed and original images were visually lossless for the subject in the flicker test. The results analysis discussed the range of the quality metrics that guarantee the visually or near-visually lossless compression of the images. The calculated values of the FSIM, MS-SSIM, and IW-SSIM can be effectively utilized with average values of 0.9997, 0.9970, and 0.9970, respectively, to predict the compression level of the images and reconstruct them at the visually lossless compressed quality.
Furthermore, this work can be extended to the performance evaluation of other state-of-the-art image compression algorithms. Moreover, recent IQMs can be incorporated into the presented framework for further validation. The proposed subjective test methodology can also be performed in a crowdsourcing-based environment using additional image databases. Our next step is to integrate machine and deep learning approaches to predict the compression level and quality range for reconstructing visually or near-visually lossless compressed images from unknown raw images.

Author Contributions

Conceptualization, A. and O.-J.K.; methodology, A.; software, A. and F.U.; validation, A. and F.U.; formal analysis, Y. and S.J.; resources, O.-J.K.; data curation, A.; writing—original draft preparation, A.; writing—review and editing, A.; visualization, A. and F.U.; supervision, O.-J.K.; project administration, J.L.; funding acquisition, O.-J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Institute for Information and Communications Technology Promotion (IITP) funded by the Korean Government, development of JPEG systems standard for snack culture content, under Grant 2020-0-00347.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abi-Jaoude, E.; Naylor, K.T.; Pignatiello, A. Smartphones, social media use and youth mental health. Can. Med. Assoc. J. 2020, 192, E136–E141. [Google Scholar] [CrossRef] [PubMed]
  2. Aljuaid, H.; Parah, S.A. Secure patient data transfer using information embedding and hyperchaos. Sensors 2021, 21, 282. [Google Scholar] [CrossRef] [PubMed]
  3. Lungisani, B.A.; Lebekwe, C.K.; Zungeru, A.M.; Yahya, A. Image compression techniques in wireless sensor networks: A survey and comparison. IEEE Access 2022, 10, 82511–82530. [Google Scholar] [CrossRef]
  4. Varga, D. No-reference video quality assessment using multi-pooled, saliency weighted deep features and decision fusion. Sensors. 2022, 22, 2209. [Google Scholar] [CrossRef]
  5. Wakin, M.; Romberg, J.; Choi, H.; Baraniuk, R. Rate-distortion optimized image compression using wedge lets. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002. [Google Scholar]
  6. Sun, M.; He, X.; Xiong, S.; Ren, C.; Li, X. Reduction of JPEG compression artifacts based on DCT coefficients prediction. Neurocomputing 2020, 384, 335–345. [Google Scholar] [CrossRef]
  7. Jenadeleh, M.; Pedersen, M.; Saupe, D. Blind quality assessment of iris images acquired in visible light for biometric recognition. Sensors 2020, 20, 1308. [Google Scholar] [CrossRef]
  8. Dumic, E.; Bjelopera, A.; Nüchter, A. Dynamic point cloud compression based on projections, surface reconstruction and video compression. Sensors 2021, 22, 197. [Google Scholar] [CrossRef] [PubMed]
  9. Zhai, G.; Min, X. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 1–52. [Google Scholar] [CrossRef]
  10. Opozda, S.; Sochan, A. The survey of subjective and objective methods for quality assessment of 2D and 3D images. Theor. Appl. Inform. 2014, 26, 39–67. [Google Scholar]
  11. Lin, H.; Chen, G.; Jenadeleh, M.; Hosu, V.; Reips, U.-D.; Hamzaoui, R.; Saupe, D. Large-scale crowdsourced subjective assessment of picture wise just noticeable difference. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5859–5873. [Google Scholar] [CrossRef]
  12. ITU-R Recommendation, B.T. 500-11. Methodology for the Subjective Assessment of the Quality of Television Pictures; ITU: Geneva, Switzerland, 2002. [Google Scholar]
  13. Testolina, M.; Ebrahimi, T. Review of subjective quality assessment methodologies and standards for compressed images evaluation. In Applications of Digital Image Processing XLIV; SPIE: Bellingham, MA, USA, 2021; Volume 11842, pp. 302–315. [Google Scholar]
  14. ISO/IEC 29170-2:2015; Information Technology—Advanced Image Coding and Evaluation—Part 2: Evaluation Procedure for Nearly Lossless Coding. ISO: Geneva, Switzerland, 2021.
  15. Jiang, J.; Wang, X.; Li, B.; Tian, M.; Yao, H. Multi-Dimensional Feature Fusion Network for No-Reference Quality Assessment of In-the-Wild Videos. Sensors 2021, 21, 5322. [Google Scholar] [CrossRef]
  16. Zhang, H.; Hu, X.; Gou, R.; Zhang, L.; Zheng, B.; Shen, Z. Rich Structural Index for Stereoscopic Image Quality Assessment. Sensors 2022, 22, 499. [Google Scholar] [CrossRef] [PubMed]
  17. Mahdaoui, A.E.; Ouahabi, A.; Moulay, M.S. Image denoising using a compressive sensing approach based on regularization constraints. Sensors 2022, 22, 2199. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, X.; Kwong, S.; Kuo, C.-C.J. Data-Driven Transform-Based Compressed Image Quality Assessment. IEEE Trans. Circuits Syst. Video Technology. 2020, 31, 3352–3365. [Google Scholar] [CrossRef]
  19. Testolina, M.; Upenik, E.; Ascenso, J.; Pereira, F.; Ebrahimi, T. Performance evaluation of objective image quality metrics on conventional and learning-based compression artifacts. In Proceedings of the 13th International Conference on Quality of Multimedia Experience (QoMEX), Online, 14–17 June 2021. [Google Scholar]
  20. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar] [CrossRef] [PubMed]
  21. Li, X. Blind image quality assessment. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002. [Google Scholar]
  22. Varga, D. A Human Visual System Inspired No-Reference Image Quality Assessment Method Based on Local Feature Descriptors. Sensors 2022, 22, 6775. [Google Scholar] [CrossRef] [PubMed]
  23. Stępień, I.; Oszust, M. A Brief Survey on No-Reference Image Quality Assessment Methods for Magnetic Resonance Images. J. Imaging. 2022, 8, 160. [Google Scholar] [CrossRef]
  24. Xu, S.; Jiang, S.; Min, W. No-reference/blind image quality assessment: A survey. IETE Tech. Rev. 2017, 34, 223–245. [Google Scholar] [CrossRef]
  25. Kamble, V.; Bhurchandi, K. No-reference image quality assessment algorithms: A survey. Optik 2015, 126, 1090–1097. [Google Scholar] [CrossRef]
  26. Lu, W.; Sun, W.; Min, X.; Zhu, W.; Zhou, Q.; He, J.; Wang, Q.; Zhang, Z.; Wang, T.; Zhai, G. Deep Neural Network for Blind Visual Quality Assessment of 4K Content. arXiv 2022, arXiv:2206.04363. [Google Scholar] [CrossRef]
  27. Golestaneh, S.A.; Dadsetan, S.; Kitani, K. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022. [Google Scholar]
  28. Lu, W.; Sun, W.; Min, X.; Zhu, W.; Zhou, Q.; He, J.; Wang, Q.; Zhang, Z.; Wang, T.; Zhai, G. No-reference panoramic image quality assessment based on multi-region adjacent pixels correlation. PLoS ONE 2022, 17, e0266021. [Google Scholar]
  29. Lee, S.; Park, S. A new image quality assessment method to detect and measure strength of blocking artifacts. Signal Process. Image Commun. 2012, 27, 31–38. [Google Scholar] [CrossRef]
  30. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  31. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
  32. Su, S.; Yan, Q.; Zhu, Y.; Zhang, C.; Ge, X.; Sun, J.; Zhang, Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020. [Google Scholar]
  33. Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. Generalizable no-reference image quality assessment via deep meta-learning. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1048–1060. [Google Scholar] [CrossRef]
  34. Ma, Y.; Zhang, W.; Yan, J.; Fan, C.; Shi, W. Blind image quality assessment in multiple bandpass and redundancy domains. Digit. Signal Process. 2018, 80, 37–47. [Google Scholar] [CrossRef]
  35. Li, D.; Jiang, T.; Jiang, M. Norm-in-norm loss with faster convergence and better performance for image quality assessment. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020. [Google Scholar]
  36. Ying, Z.; Niu, H.; Gupta, P.; Mahajan, D.; Ghadiyaram, D.; Bovik, A. From patches to pictures (PaQ-2-PiQ): Mapping the perceptual space of picture quality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  37. Liu, J.; Zhou, W.; Xu, J.; Li, X.; An, S.; Chen, Z. LIQA: Lifelong Blind Image Quality Assessment. arXiv 2021, arXiv:2104.14115. [Google Scholar] [CrossRef]
  38. Zhang, W.; Li, D.; Ma, C.; Zhai, G.; Yang, X.; Ma, K. Continual learning for blind image quality assessment. IEEE Trans. Pattern Anal. Mach. Intell. 2022, early access. [Google Scholar] [CrossRef]
  39. Sun, S.; Yu, T.; Xu, J.; Zhou, W.; Chen, Z. GraphIQA: Learning distortion graph representations for blind image quality assessment. IEEE Trans. Multimed. 2022, early access. [Google Scholar] [CrossRef]
  40. Balanov, A.; Schwartz, A.; Moshe, Y. Reduced-reference image quality assessment based on dct subband similarity. In Proceedings of the 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, Portugal, 6–8 June 2016. [Google Scholar]
  41. Gu, K.; Zhai, G.; Yang, X.; Zhang, W. A new reduced-reference image quality assessment using structural degradation model. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013. [Google Scholar]
  42. Gu, K.; Zhai, G.; Yang, X.; Zhang, W.; Liu, M. Subjective and objective quality assessment for images with contrast change. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013. [Google Scholar]
  43. Wu, J.; Lin, W.; Shi, G.; Li, L.; Fang, Y. Orientation selectivity based visual pattern for reduced-reference image quality assessment. Inf. Sci. 2016, 351, 18–29. [Google Scholar] [CrossRef]
  44. Phadikar, B.S.; Maity, G.K.; Phadikar, A. Full reference image quality assessment: A survey. In Industry Interactive Innovations in Science, Engineering and Technology; Springer: Cham, Switzerland, 2018; pp. 197–208. [Google Scholar]
  45. George, A.; Livingston, S.J. A survey on full reference image quality assessment algorithms. Int. J. Res. Eng. Technol. 2013, 2, 303–307. [Google Scholar]
  46. Pedersen, M.; Hardeberg, J.Y. Survey of Full-Reference Image Quality Metrics. 2009. Available online: https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/144194/rapport052009_elektroniskversjon.pdf?sequence=1 (accessed on 18 December 2022).
  47. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  48. Wang, Z.; Bovik, A.C. Modern image quality assessment. Synth. Lect. Image Video Multimed. Process. 2006, 2, 1–156. [Google Scholar]
  49. Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015, 30, 57–77. [Google Scholar] [CrossRef]
  50. Gu, K.; Zhai, G.; Yang, X.; Zhang, W. Hybrid no-reference quality metric for singly and multiply distorted images. IEEE Trans. Broadcast. 2014, 60, 555–567. [Google Scholar] [CrossRef]
  51. Damera-Venkata, N.; Kite, T.D.; Geisler, W.S.; Evans, B.L.; Bovik, A.C. Image quality assessment based on a degradation model. IEEE Trans. Image Process. 2000, 9, 636–650. [Google Scholar] [CrossRef]
  52. Wang, Z.; Li, Q. Information content weighting for perceptual image quality assessment. IEEE Trans. Image Process. 2010, 20, 1185–1198. [Google Scholar] [CrossRef]
  53. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006. [Google Scholar]
  54. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016. [Google Scholar]
  55. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  56. Prashnani, E.; Cai, H.; Mostofi, Y.; Sen, P. Pieapp: Perceptual image-error assessment through pairwise preference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  57. Gu, J.; Cai, H.; Chen, H.; Ye, X.; Ren, J.; Dong, C. Image quality assessment for perceptual image restoration: A new dataset, benchmark and metric. arXiv 2020, arXiv:2011.15002. [Google Scholar]
  58. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 October 2003. [Google Scholar]
  59. Chen, G.-H.; Yang, C.-L.; Po, L.-M.; Xie, S.-L. Edge-based structural similarity for image quality assessment. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006. [Google Scholar]
  60. Liu, A.; Lin, W.; Narwaria, M. Image quality assessment based on gradient similarity. IEEE Trans. Image Process. 2011, 21, 1500–1512. [Google Scholar]
  61. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695. [Google Scholar] [CrossRef]
  62. Zhang, B.; Sander, P.V.; Bermak, A. Gradient magnitude similarity deviation on multiple scales for color image quality assessment. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017. [Google Scholar]
  63. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
  64. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef] [PubMed]
  65. Reisenhofer, R.; Bosse, S.; Kutyniok, G.; Wiegand, T. A Haar wavelet-based perceptual similarity index for image quality assessment. Signal Process. Image Commun. 2018, 61, 33–43. [Google Scholar] [CrossRef]
  66. Nafchi, H.Z.; Shahkolaei, A.; Hedjam, R.; Cheriet, M. Mean deviation similarity index: Efficient and reliable full-reference image quality evaluator. IEEE Access 2016, 4, 5579–5590. [Google Scholar] [CrossRef]
  67. Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Image quality assessment: Unifying structure and texture similarity. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2567–2581. [Google Scholar] [CrossRef] [PubMed]
  68. Sheikh, H.R.; Bovik, A.C. A visual information fidelity approach to video quality assessment. In Proceedings of the First International Workshop on Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, AZ, USA, 2005; pp. 2117–2128. Available online: https://utw10503.utweb.utexas.edu/publications/2005/hrs_vidqual_vpqm2005.pdf (accessed on 18 December 2022).
  69. Mohammadi, P.; Ebrahimi-Moghadam, A.; Shirani, S. Subjective and objective quality assessment of image: A survey. arXiv 2014, arXiv:1406.7799. [Google Scholar]
  70. ITU-T Recommendation P.910: Subjective Video Quality Assessment Methods for Multimedia Applications; ITU: Geneva, Switzerland, 2008. [Google Scholar]
  71. ITU-R Recommendation BT.814-1: Specification and Alignment Procedures for Setting of Brightness and Contrast of Displays; ITU: Geneva, Switzerland, 1994. [Google Scholar]
  72. ITU-R Recommendation BT.1129-2: Subjective Assessment of Standard Definition Digital Television (SDTV) Systems; ITU: Geneva, Switzerland, 1998. [Google Scholar]
  73. Cheng, Z.; Akyazi, P.; Sun, H.; Katto, J.; Ebrahimi, T. Perceptual quality study on deep learning based image compression. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019. [Google Scholar]
  74. Ascenso, J.; Akyazi, P.; Pereira, F.; Ebrahimi, T. Learning-based image coding: Early solutions reviewing and subjective quality evaluation. In Optics, Photonics and Digital Technologies for Imaging Applications; SPIE: Bellingham, WA, USA, 2020; Volume 11353, pp. 164–176. [Google Scholar]
  75. Egger-Lampl, S.; Redi, J.; Hoßfeld, T.; Hirth, M.; Möller, S.; Naderi, B.; Keimel, C.; Saupe, D. Crowdsourcing quality of experience experiments. In Evaluation in the Crowd: Crowdsourcing and Human-Centered Experiments; Springer: Cham, Switzerland, 2017; pp. 154–190. [Google Scholar]
  76. Chen, K.-T.; Wu, C.-C.; Chang, Y.-C.; Lei, C.-L. A crowdsourceable QoE evaluation framework for multimedia content. In Proceedings of the 17th ACM International Conference on Multimedia, Beijing, China, 19–24 October 2009. [Google Scholar]
  77. Willème, A.; Mahmoudpour, S.; Viola, I.; Fliegel, K.; Pospíšil, J.; Ebrahimi, T.; Schelkens, P.; Descampe, A.; Macq, B. Overview of the JPEG XS core coding system subjective evaluations. In Applications of Digital Image Processing XLI; SPIE: Bellingham, WA, USA, 2018; Volume 10752, pp. 512–523. [Google Scholar]
  78. Hoffman, D.M.; Stolitzka, D. A new standard method of subjective assessment of barely visible image artifacts and a new public database. J. Soc. Inf. Disp. 2014, 22, 631–643. [Google Scholar] [CrossRef]
  79. Cornsweet, T.N. The staircase-method in psychophysics. Am. J. Psychol. 1962, 75, 485–491. [Google Scholar] [CrossRef]
  80. Hudson, G.; Léger, A.; Niss, B.; Sebestyén, I.; Vaaben, J. JPEG-1 standard 25 years: Past, present, and future reasons for a success. J. Electron. Imaging 2018, 27, 040901. [Google Scholar] [CrossRef]
  81. Libjpeg-Turbo. Available online: https://libjpeg-turbo.org/Main/HomePage (accessed on 18 October 2022).
  82. JPEG—JPEG, A.I. Available online: https://jpeg.org/jpegai/dataset.html (accessed on 1 November 2022).
  83. Choi, S.; Kwon, O.-J.; Lee, J. A method for fast multi-exposure image fusion. IEEE Access 2017, 5, 7371–7380. [Google Scholar] [CrossRef]
  84. ISO/IEC JTC 1/SC29/WG1 N100106; ICQ JPEG AI Common Training and Test Conditions. ISO: Geneva, Switzerland, 2022.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
