Article

Saliency-Guided Local Full-Reference Image Quality Assessment

Domonkos Varga
Ronin Institute, Montclair, NJ 07043, USA
Signals 2022, 3(3), 483-496; https://doi.org/10.3390/signals3030028
Submission received: 28 April 2022 / Revised: 27 May 2022 / Accepted: 29 June 2022 / Published: 11 July 2022

Abstract

Research and development of image quality assessment (IQA) algorithms have been a focus of the computer vision and image processing community for decades. The intent of IQA methods is to estimate the perceptual quality of digital images so that it correlates as highly as possible with human judgements. Full-reference image quality assessment algorithms, which have full access to the distortion-free images, usually contain two phases: local image quality estimation and pooling. Previous works have utilized visual saliency in the final pooling stage. Specifically, visual saliency was utilized as weights in the weighted averaging of local image quality scores, emphasizing image regions that are salient to human observers. In contrast to this common practice, visual saliency is applied in the computation of local image quality in this study, based on the observation that local image quality is determined both by local image degradation and by visual saliency simultaneously. Experimental results on KADID-10k, TID2013, TID2008, and CSIQ have shown that the proposed method was able to improve the state-of-the-art’s performance at low computational costs.

1. Introduction

Image quality assessment (IQA) is a hot research topic in the image processing community, since it can be applied in a wide range of practical and important applications, such as the optimization of computer vision system parameters [1] and monitoring the quality of image displays [2]; it is also helpful for benchmarking image and video compression and denoising algorithms [3,4]. In practice, the final perceivers and users of digital images are human beings. As a consequence, the most reliable way of evaluating the perceptual quality of digital images is subjective user studies involving a group of vision experts and human observers, either in a laboratory environment [5] or in a crowdsourcing experiment [6]. Further, these subjective evaluation methods result in publicly available benchmark IQA databases, which can be used in objective IQA. Namely, objective IQA tries to construct mathematical algorithms that estimate the perceptual quality of digital images as consistently as possible with human judgement. To help the development of IQA methods, publicly available benchmark databases, such as TID2013 [7] and CSIQ [8], contain digital images with their mean opinion score (MOS) values, which are the averages of individual human quality ratings. Traditionally, IQA is divided into three classes [9]: full-reference (FR), no-reference (NR), and reduced-reference (RR). The main difference is the degree of access to the reference, i.e., the distortion-free, images. Namely, FR-IQA algorithms have full access to the reference images, NR-IQA ones have no access to them, and RR-IQA methods have limited access to the reference images.
Human vision research has increased the understanding of the human visual system. During observation of an image, people tend to focus on the visually significant parts of a scene. As a consequence, the content of the image is not treated equally. This is why considering visual saliency in FR-IQA has gained a lot of attention in the literature [10,11,12]. The traditional and well-known PSNR [13] was first improved using visual saliency computation by dividing the input images into different regions and assigning weights to them according to their estimated saliency. This work was followed by a line of papers deriving visual saliency from different features. Specifically, Zhang et al. [14] utilized the phase congruency feature [15] as a visual saliency measure to create weights for defined feature similarity metrics. Wang and Li [16] compiled local distortion maps from the reference and distorted images. To quantify visual quality, these local distortion maps were pooled together by using saliency as a weighting function. The common point of these algorithms is that visual saliency is used as a weighting function in the pooling of local image quality maps.

1.1. Motivation and Contributions

Over the course of evolution, humans have come to pay different amounts of attention to different regions of a visual scene as it is viewed. This mechanism obviously affects human perceptual quality judgements. Previously published FR-IQA methods [14,16,17,18] utilize visual saliency in the final pooling stage of the algorithm. To be more specific, visual saliency is used as a weight in the weighted averaging of local image quality scores, emphasizing image regions that are more salient to the human visual system. The main contribution of this paper was the following. As opposed to other previously published algorithms, visual saliency was utilized by this work in the computation of local image quality and not in the pooling stage, which is usual in the literature. In fact, the human visual system perceives image degradation with higher probability in visually significant regions and tends to neglect degradation in visually less significant regions. In other words, local quality degradation is simultaneously influenced by both local image degradation and visual saliency. The effect of visual saliency on the perception of local image quality is demonstrated below. If humans observe Figure 1a, they tend to focus their attention on the child’s face rather than on the vegetation in the background. To be specific, we added the same amount of noise to two regions of different visual significance. In Figure 1a, it was added to the region containing the vegetation in the left corner, while in Figure 1b it was added to the face. Comparing these two figures, the human visual system was less likely to perceive the degradation if it was not in a visually salient area.
Based on the above-mentioned observations, a visual-saliency-guided FR-IQA framework was introduced in which visual saliency adaptively adjusts local image quality. Within this framework, the ESSIM (edge strength similarity-based image quality metric) [19] was further developed. Furthermore, it was demonstrated that the proposed method was able to improve state-of-the-art performance at low computational costs on four large and widely accepted IQA benchmark databases, i.e., KADID-10k [20], TID2013 [7], TID2008 [21], and CSIQ [8].

1.2. Organization of the Paper

After this introduction, related and previous papers are reviewed in Section 2. Next, our proposed method is described in Section 3. Databases, evaluation metrics, and implementation details are given in Section 4. Numerical experimental results and a comparison to the state-of-the-art are presented in Section 5. Finally, conclusions are drawn in Section 6.

2. Related Work

As already mentioned, FR-IQA algorithms predict the perceptual quality of distorted images with full access to the reference images. The traditional and probably simplest methods for FR-IQA are the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR), which rely on the measurement of the distortion energy of the images. However, their performance lags behind other more sophisticated metrics, since their results are not consistent with human quality perception [22]. Hence, many FR-IQA methods have tried to build on the properties of the human visual system (HVS) to compile effective algorithms. For example, the structural similarity index (SSIM) [23] utilizes the observation that the HVS is sensitive to contrast and structural distortions. More specifically, SSIM [23] applies local sliding windows on the reference and distorted images and measures luminance, contrast, and structure similarity using predefined functions containing average, variance, and covariance computations. Finally, the perceptual quality of the distorted image is obtained by taking the arithmetic mean of the similarity values of the local sliding windows. Brunet et al. [24] investigated the mathematical properties of SSIM and pointed out that SSIM satisfies the identity of indiscernibles and symmetry, but not the triangle inequality. SSIM quickly became popular in the signal processing community and has attracted a significant amount of further research [25]. For example, Wang et al. [26] extended SSIM into MS-SSIM by conducting multi-scale processing. Further, Sampat et al. [27] utilized complex wavelet domains to define a new structural similarity index. In contrast, Chen et al. [28] modified SSIM to compare edge information between the reference and the distorted images. In addition to structural degradation, the image gradient is also a popular feature in FR-IQA. Specifically, Liu et al. [29] defined an FR-IQA metric using gradient similarity between the reference and distorted images. Similarly, Zhu and Wang [30] utilized gradient similarity, but scale information was also incorporated into their metric.
Motivated by the success of deep learning in many image processing tasks [31,32,33], deep learning has also appeared in FR-IQA. A significant number of proposed works compare the deep activations of convolutional neural networks (CNNs) for a reference–distorted image pair to establish an FR quality metric. For example, Amirshahi et al. [34] extracted feature maps using an AlexNet [35] CNN. Subsequently, the extracted feature maps were compared using histogram intersection kernels [36], and a similarity value was produced for each feature map. To obtain the perceptual quality, the arithmetic mean of these similarity values was taken. Later, this approach was developed further [37] by replacing the histogram intersection kernels with traditional image similarity metrics. Another line of work focused on training end-to-end deep architectures on large IQA benchmark databases [38,39,40]. Chubarau and Clark [39] investigated vision transformers and self-attention for FR-IQA. Namely, a context-aware sampling procedure was introduced to extract patches from the reference and the distorted images. Next, the patches were encoded by a vision transformer.
Another class of FR-IQA methods uses existing and available FR-IQA metrics to construct a new metric with improved performance. For instance, Okarma [41] studied the characteristics of three different metrics and introduced a combined metric by taking the exponentiated product of the three examined metrics. In contrast, other proposals utilized optimization techniques to find optimal weights for a linear combination of already existing FR-IQA techniques [42,43,44]. Instead of optimization, Lukin et al. [45] trained a neural network from scratch, using traditional FR-IQA metrics as image features, to obtain improved estimation performance.

3. Proposed Method

In this section, our proposed method is introduced and described. Specifically, the ESSIM [19] algorithm is first briefly reviewed to put our contribution, the saliency-guided determination of local image quality, into context. Then, the proposed SG-ESSIM is described in detail.

3.1. ESSIM Method

To quantify the quality degradation of a distorted image $g$ with respect to a reference image $f$, the ESSIM is defined as
$$\mathrm{ESSIM}(f, g) = \frac{1}{N} \sum_{i=1}^{N} \frac{2\, E(f,i)\, E(g,i) + C}{\big(E(f,i)\big)^2 + \big(E(g,i)\big)^2 + C}, \tag{1}$$
where $N$ is the total number of pixels and $C$ is a small constant to avoid division by zero. To be more specific, $C$ has two roles in Equation (1): as already mentioned, it is necessary to avoid having a denominator equal to zero; secondly, $C$ is also a scaling factor, and as $C \to \infty$ the similarity measure tends to 1. Further, $E(f,i)$ characterizes the edge strength around pixel $i$ in the reference image $f$ and is determined as
$$E(f,i) = \max\!\big(E_i^{1,3}(f),\, E_i^{2,4}(f)\big), \tag{2}$$
where $E_i^{2,4}(f)$ is the edge strength in the diagonal directions and is determined as
$$E_i^{2,4}(f) = \big| f_i^{2} - f_i^{4} \big|^{p}. \tag{3}$$
Similarly, $E_i^{1,3}(f)$ stands for the edge strength in the vertical and horizontal directions and is computed as
$$E_i^{1,3}(f) = \big| f_i^{1} - f_i^{3} \big|^{p}. \tag{4}$$
In Equations (3) and (4), $f_i^{j}$ stands for the directional derivative of the reference image $f$ at pixel $i$ in direction $j$. The directions applied in the ESSIM are $0^\circ$, $45^\circ$, $90^\circ$, and $135^\circ$. Further, the directional derivatives in the ESSIM are implemented with the help of $5 \times 5$ Scharr operators [46]. Moreover, $p$ is used to adapt the edge strength in the algorithm. The edge strength of the distorted image $g$ around pixel $i$ is defined as
$$E(g,i) = \begin{cases} E_i^{1,3}(g), & \text{if } E(f,i) = E_i^{1,3}(f), \\ E_i^{2,4}(g), & \text{if } E(f,i) = E_i^{2,4}(f), \end{cases} \tag{5}$$
in order to guarantee that the edge strengths of $f$ and $g$ are compared in the same direction. Otherwise, $E_i^{1,3}(g)$ and $E_i^{2,4}(g)$ are determined similarly to Equations (3) and (4).
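For readers who prefer code to equations, the following is a minimal MATLAB sketch of the ESSIM computation summarized by Equations (1)–(5). The 3 × 3 derivative kernels are simple stand-ins for the 5 × 5 Scharr operators of [46], and the values passed for p and C are left to the caller; the original settings are those recommended in [19], so this sketch is illustrative rather than the reference implementation.

```matlab
% Minimal ESSIM sketch (Equations (1)-(5)); not the reference implementation.
function score = essim_sketch(f, g, p, C)
    f = double(f); g = double(g);
    % Directional derivative kernels (placeholders for the 5x5 Scharr operators)
    k0   = [-1 0 1; -2 0 2; -1 0 1];          % 0 degrees (horizontal derivative)
    k90  = k0.';                              % 90 degrees (vertical derivative)
    k45  = [ 0  1  2; -1  0  1; -2 -1  0];    % 45-degree diagonal
    k135 = [-2 -1  0; -1  0  1;  0  1  2];    % 135-degree diagonal

    % Edge strengths in the vertical/horizontal and diagonal direction pairs
    Ef13 = abs(imfilter(f, k0,  'replicate') - imfilter(f, k90,  'replicate')).^p;
    Ef24 = abs(imfilter(f, k45, 'replicate') - imfilter(f, k135, 'replicate')).^p;
    Eg13 = abs(imfilter(g, k0,  'replicate') - imfilter(g, k90,  'replicate')).^p;
    Eg24 = abs(imfilter(g, k45, 'replicate') - imfilter(g, k135, 'replicate')).^p;

    % Equation (2): per-pixel maximum over the two direction pairs (reference)
    Ef = max(Ef13, Ef24);
    % Equation (5): use the same direction pair for the distorted image
    Eg = Eg24;
    idx = Ef13 >= Ef24;
    Eg(idx) = Eg13(idx);

    % Equation (1): local edge-strength similarity, pooled by averaging
    SM = (2 .* Ef .* Eg + C) ./ (Ef.^2 + Eg.^2 + C);
    score = mean(SM(:));
end
```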

3.2. SG-ESSIM Method

As one can observe from the derivation of the ESSIM [19], the impact of visual saliency is not incorporated into this metric. In this section, we describe how we enhanced the ESSIM by using visual saliency in the measurement of local image quality. In the proposed method, local image quality degradation is jointly characterized by the objective degradation of edge strength and by visual significance.
Our algorithm first determines the edge strength maps of the reference and distorted images as recommended by the authors of the ESSIM [19]. Subsequently, it calculates the saliency-guided local image quality map. Finally, it pools the local image quality map to obtain the overall perceptual quality of the distorted image.
Using Equations (2) and (5), which give $E(f,i)$ and $E(g,i)$ in the ESSIM, our visual-saliency-guided local similarity map is defined as
$$SM\big(E(f,i), E(g,i)\big) = \frac{2 \cdot E(f,i) \cdot E(g,i) + H\big(V(i)\big)}{\big(E(f,i)\big)^2 + \big(E(g,i)\big)^2 + H\big(V(i)\big)}, \tag{6}$$
where $V(i)$ stands for the visual saliency measure at pixel location $i$. Moreover, $H(\cdot)$ is a decreasing function defined as
$$H\big(V(i)\big) = K \cdot e^{-\frac{V(i)}{h}}, \tag{7}$$
where $e$ is Euler’s number, $K$ is a scaling factor, and $h$ is an attenuation factor (Equation (7) is a decreasing function because the similarity measure defined by Equation (9) also needs to be regulated). To be more specific, $V(i)$ takes the edge strength maps of the reference and distorted images and returns their pixel-wise maximum:
$$V(i) = \max\!\big(E(f,i),\, E(g,i)\big). \tag{8}$$
Equation (8) implies that regions with strong edges are more salient to the human visual system than those with weak edges, since edge information conveys essential information about the visual scene to humans [47]. A larger value of $V(i) = \max\!\big(E(f,i), E(g,i)\big)$ indicates a higher visual saliency at pixel location $i$. Moreover, the computation of $V(i)$ does not result in a significant increase in the computational costs, since $E(f,i)$ and $E(g,i)$ are determined anyway in the ESSIM [19]. Substituting Equations (7) and (8) into Equation (6), we obtain the following equation for the visual-saliency-guided local similarity map:
$$SM\big(E(f,i), E(g,i)\big) = \frac{2 \cdot E(f,i) \cdot E(g,i) + K \cdot e^{-\frac{\max(E(f,i),\, E(g,i))}{h}}}{\big(E(f,i)\big)^2 + \big(E(g,i)\big)^2 + K \cdot e^{-\frac{\max(E(f,i),\, E(g,i))}{h}}}. \tag{9}$$
Finally, the overall predicted perceptual quality score of the image was estimated by averaging the visual saliency-guided (SG) local similarity map
$$\mathrm{SG\text{-}ESSIM} = \frac{1}{N} \sum_{i=1}^{N} SM\big(E(f,i), E(g,i)\big), \tag{10}$$
where N is the total number of pixels.
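As a compact illustration, a MATLAB sketch of Equations (7)–(10) is given below. It assumes that the edge strength maps Ef and Eg, corresponding to E(f,i) and E(g,i), have already been computed as in the ESSIM (for instance, with the sketch in Section 3.1); the function name and argument order are our own and are not taken from the released source code.

```matlab
% Saliency-guided pooling of the SG-ESSIM (Equations (7)-(10)); illustrative only.
function score = sg_essim_pool(Ef, Eg, K, h)
    V  = max(Ef, Eg);            % Equation (8): edge-strength-based saliency map
    H  = K .* exp(-V ./ h);      % Equation (7): decreasing weighting term
    SM = (2 .* Ef .* Eg + H) ./ (Ef.^2 + Eg.^2 + H);   % Equation (9)
    score = mean(SM(:));         % Equation (10): average pooling over all pixels
end
```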
The hyperparameters of the SG-ESSIM that have to be set are K and h. These parameters were determined on a subset of the TID2008 [21] database, which contained 5 random reference images and the 340 corresponding distorted images. Specifically, those hyperparameter values were chosen for which the highest Spearman’s rank order correlation coefficient between the ground truth and the predicted values was obtained. As a result, we chose K = 51,000 and h = 0.5 in our MATLAB implementation. Since the SG-ESSIM is not a machine-learning-based approach, this random subset was also used in the evaluation process. A sketch of such a selection procedure is given below.
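The hyperparameter selection can be reproduced, in spirit, with a simple grid search such as the following sketch. The grids, the variable names (refImgs, distImgs, mos), and the wrapper function sg_essim are illustrative assumptions rather than the exact procedure; corr with the 'Spearman' option requires the Statistics and Machine Learning Toolbox.

```matlab
% Illustrative grid search for K and h on a tuning subset; not the exact procedure.
Ks = [1e3, 1e4, 5e4, 1e5];      % candidate scaling factors (assumed grid)
hs = [0.1, 0.25, 0.5, 1, 2];    % candidate attenuation factors (assumed grid)
best = -Inf;
for K = Ks
    for h = hs
        pred = zeros(numel(mos), 1);
        for n = 1:numel(mos)
            % sg_essim: hypothetical wrapper combining the two sketches above
            pred(n) = sg_essim(refImgs{n}, distImgs{n}, K, h);
        end
        srocc = corr(pred, mos, 'Type', 'Spearman');
        if srocc > best
            best = srocc; bestK = K; besth = h;   % keep the best (K, h) pair
        end
    end
end
```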

4. Materials

In this section, the description of the applied benchmark IQA databases is given first. Second, the applied evaluation metrics and protocol are presented. Finally, the implementation details of the proposed method are given.

4.1. Databases

In this study, four large publicly available IQA benchmark databases (KADID-10k [20], TID2013 [7], TID2008 [21], and CSIQ [8]) were used to evaluate and compare the state-of-the-art to the proposed method. The most important information about the applied databases is summarized in Table 1. Namely, these databases contained a small set of distortion-free reference images from which distorted images were derived using different distortion types at different distortion levels. Moreover, the distorted images were annotated with subjective quality ratings. Therefore, they were suitable and accepted for the evaluation and ranking of FR-IQA metrics in the literature [48]. Figure 2 depicts the empirical MOS distributions in the applied databases.

4.2. Evaluation Metrics and Protocol

The ranking of FR-IQA methods relies on measuring the correlation between the predicted and ground truth quality scores of benchmark IQA databases, such as KADID-10k [20]. To be more specific, the performance of an FR-IQA metric was evaluated using three different criteria in this study: Pearson’s linear correlation coefficient (PLCC), Spearman’s rank order correlation coefficient (SROCC), and Kendall’s rank order correlation coefficient (KROCC). In addition, a nonlinear logistic regression was applied before the calculation of the PLCC, as recommended in [49], because a nonlinear relationship exists between the ground truth and predicted scores:
$$Q = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + e^{\beta_2 (Q_p - \beta_3)}} \right) + \beta_4 Q_p + \beta_5, \tag{11}$$
where $Q$ and $Q_p$ are the fitted and predicted scores, respectively. In addition, the regression described by Equation (11) is determined by the parameters $\beta_i$ ($i \in \{1, 2, \ldots, 5\}$).
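In MATLAB terms, the evaluation protocol can be sketched as follows: the five-parameter logistic of Equation (11) is fitted between the raw predictions and the MOS values before the PLCC is computed, while the SROCC and KROCC are computed on the raw predictions. The vectors pred and mos and the initial guess beta0 are assumptions for illustration; nlinfit and corr belong to the Statistics and Machine Learning Toolbox.

```matlab
% Five-parameter logistic mapping of Equation (11), then PLCC/SROCC/KROCC.
logistic = @(b, Qp) b(1) .* (0.5 - 1 ./ (1 + exp(b(2) .* (Qp - b(3))))) ...
                    + b(4) .* Qp + b(5);
beta0 = [max(mos) - min(mos), 0.1, mean(pred), 0.1, mean(mos)];  % rough start
beta  = nlinfit(pred, mos, logistic, beta0);   % fit predicted scores to MOS
Q     = logistic(beta, pred);                  % fitted scores of Equation (11)

plcc  = corr(Q,    mos, 'Type', 'Pearson');    % PLCC on the mapped scores
srocc = corr(pred, mos, 'Type', 'Spearman');   % rank correlations on raw scores
krocc = corr(pred, mos, 'Type', 'Kendall');
```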

4.3. Implementation Details

MATLAB R2021a was used to implement the proposed FR-IQA metric using the functions of the Image Processing Toolbox. The computer configuration applied in our experiments is outlined in Table 2.

5. Experimental Results and Analysis

In this section, our experimental results and analysis are presented using four common benchmark IQA databases: KADID-10k [20], TID2013 [7], TID2008 [21], and CSIQ [8]. More specifically, the proposed SG-ESSIM method was compared to thirteen other state-of-the-art FR-IQA metrics (2stepQA [50], CSV [51], DISTS [52], ESSIM [19], GSM [29], IW-SSIM [16], MAD [8], MS-SSIM [26], PSNR, ReSIFT [53], RVSIM [54], SSIM [23], SUMMER [55]), whose original source codes were made publicly available by their authors, on the above-mentioned databases in terms of correlations with the ground truth quality ratings. Furthermore, the examined state-of-the-art methods were applied with the hyperparameter values (if any) recommended as defaults by their original authors in their MATLAB implementations. In addition to this, a detailed comparison on TID2013 [7] and TID2008 [21] with respect to the different noise types and noise levels is also presented.
The comparison, in terms of the PLCC, SROCC, and KROCC, on KADID-10k [20], TID2013 [7], TID2008 [21], and CSIQ [8] is summarized in Table 3 and Table 4. From these numerical results, it could be seen that the proposed SG-ESSIM was able to provide the best results in terms of the SROCC and KROCC on KADID-10k [20] and the PLCC on TID2013 [7], respectively. Moreover, it gave the best results for all correlation values on TID2008 [21]. On CSIQ [8], the second best results in terms of the SROCC and KROCC were achieved by the proposed method. To illustrate the overall performance of the examined methods on these databases, direct and weighted average (using the numbers of images as weights) performance values are summarized in Table 5. It could be seen that the proposed method was able to deliver the second best results in terms of the SROCC and KROCC, both in direct and weighted averages. In Table 6 and Table 7, detailed results can be seen on TID2013 [7] and TID2008 [21] with respect to the different distortion types. Similarly, Table 8 and Table 9 illustrate the detailed results with respect to the distortion levels.
TID2013 [7] contained images with 24 different distortion types, namely AGN (additive Gaussian noise), ANC (additive noise in color components), SCN (spatially correlated noise), MN (masked noise), HFN (high frequency noise), IN (impulse noise), QN (quantization noise), GB (Gaussian blur), DEN (image denoising), JPEG (JPEG compression), JP2K (JPEG2000 compression), JGTE (JPEG transmission errors), J2TE (JPEG2000 transmission errors), NEPN (noneccentricity pattern noise), BLOCK (local block-wise distortions of different intensity), MS (mean shift), CC (contrast change), CCS (change in color saturation), MGN (multiplicative Gaussian noise), CN (comfort noise), LCNI (lossy compression of noisy images), ICQD (image color quantization with dither), CA (chromatic aberrations), and SSR (sparse sampling and reconstruction). TID2008 [21] consisted of the first 17 distortion types of TID2013 [7].
From the presented results, it could be observed that the SG-ESSIM was able to produce the best SROCC values on five distortion types of TID2013 [7] and on four distortion types of TID2008 [21]. On the other hand, it provided the second best results on nine distortion types of TID2013 [7] and on seven distortion types of TID2008 [21]. Moreover, the SG-ESSIM was able to produce the best SROCC values on three distortion levels of TID2013 [7], while it gave the second best results on the remaining two distortion levels. On TID2008 [21], it provided the best results on three distortion levels and the second best result on the remaining distortion level.
In Figure 3, the execution time (logarithmic scale) versus SROCC scatter plot measured on KADID-10k [20] is depicted. From this figure, it could be seen that the proposed SG-ESSIM method was the fourth fastest among the examined algorithms. Considering execution time and estimation performance together, the SG-ESSIM provided a competitive result against the state-of-the-art. Similarly, Figure 4 depicts the execution time versus the SROCC on the CSIQ [8] database. Here, we could also observe that, considering execution time and performance together, the proposed method was able to achieve a competitive result.

6. Conclusions

The goal of FR-IQA is to predict the perceptual quality of digital images with full access to the distortion-free, reference images. FR-IQA methods usually contain two stages: local image quality estimation and pooling. Traditionally, visual saliency has been utilized in the pooling stage as a weight for the weighted averaging of local image quality. Unlike this traditional approach, we applied visual saliency in the computation of local image quality, motivated by the fact that local image quality is determined both by local image degradation and by visual saliency simultaneously. Experimental results on four publicly available benchmark IQA databases showed that the proposed algorithm was able to outperform the state-of-the-art and was characterized by low computational costs. Future work could involve the incorporation of local and global saliency into a novel FR-IQA metric. The source code of the proposed method is available at: https://github.com/Skythianos/SG-ESSIM (accessed on 12 June 2022).

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

In this paper, the following publicly available benchmark databases were used: 1. KADID-10k: http://database.mmsp-kn.de/kadid-10k-database.html (accessed on 12 May 2022), 2. TID2013: http://www.ponomarenko.info/tid2013.htm (accessed on 12 May 2022), 3. TID2008: http://www.ponomarenko.info/tid2008.htm (accessed on 12 May 2022), 4. CSIQ: https://isp.uv.es/data_quality.html (accessed on 12 May 2022).

Acknowledgments

We thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations were used in this manuscript:
AGN: additive Gaussian noise;
ANC: additive noise in color components;
CA: chromatic aberrations;
CC: contrast change;
CCS: change in color saturation;
CN: comfort noise;
CNN: convolutional neural network;
CPU: central processing unit;
DEN: image denoising;
FR-IQA: full-reference image quality assessment;
GB: Gaussian blur;
GPU: graphics processing unit;
HFN: high frequency noise;
HVS: human visual system;
ICQD: image color quantization with dither;
IN: impulse noise;
IQA: image quality assessment;
JGTE: JPEG transmission error;
JPEG: joint photographic experts group;
KROCC: Kendall’s rank order correlation coefficient;
LCNI: lossy compression of noisy images;
MGN: multiplicative Gaussian noise;
MN: masked noise;
MOS: mean opinion score;
MS: mean shift;
MSE: mean squared error;
NEPN: noneccentricity pattern noise;
NR-IQA: no-reference image quality assessment;
PLCC: Pearson’s linear correlation coefficient;
PSNR: peak signal-to-noise ratio;
QN: quantization noise;
RR-IQA: reduced-reference image quality assessment;
SCN: spatially correlated noise;
SG: saliency guided;
SROCC: Spearman’s rank order correlation coefficient;
SSR: sparse sampling and reconstruction

References

  1. Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Comparison of full-reference image quality models for optimization of image processing systems. Int. J. Comput. Vis. 2021, 129, 1258–1281.
  2. Chen, B.; Zhu, L.; Zhu, H.; Yang, W.; Lu, F.; Wang, S. The Loop Game: Quality Assessment and Optimization for Low-Light Image Enhancement. arXiv 2022, arXiv:2202.09738.
  3. Goyal, B.; Gupta, A.; Dogra, A.; Koundal, D. An adaptive bitonic filtering based edge fusion algorithm for Gaussian denoising. Int. J. Cogn. Comput. Eng. 2022, 3, 90–97.
  4. Saito, Y.; Miyata, T. Recovering Texture with a Denoising-Process-Aware LMMSE Filter. Signals 2021, 2, 286–303.
  5. Chubarau, A.; Akhavan, T.; Yoo, H.; Mantiuk, R.K.; Clark, J. Perceptual image quality assessment for various viewing conditions and display systems. Electron. Imaging 2020, 2020, 67-1.
  6. Saupe, D.; Hahn, F.; Hosu, V.; Zingman, I.; Rana, M.; Li, S. Crowd workers proven useful: A comparative study of subjective video quality assessment. In Proceedings of the QoMEX 2016: 8th International Conference on Quality of Multimedia Experience, Lisbon, Portugal, 6–8 June 2016.
  7. Ponomarenko, N.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Jin, L.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Color image database TID2013: Peculiarities and preliminary results. In Proceedings of the European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 June 2013; pp. 106–111.
  8. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006.
  9. Zhai, G.; Min, X. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 1–52.
  10. Liu, H.; Heynderickx, I. Visual attention in objective image quality assessment: Based on eye-tracking data. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 971–982.
  11. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281.
  12. Zhang, W.; Borji, A.; Wang, Z.; Le Callet, P.; Liu, H. The application of visual saliency models in objective image quality assessment: A statistical evaluation. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 1266–1278.
  13. Ma, Q.; Zhang, L. Saliency-based image quality assessment criterion. In Proceedings of the International Conference on Intelligent Computing, Shanghai, China, 15–18 September 2008; pp. 1124–1133.
  14. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
  15. Kovesi, P.; Robotics and Vision Research Group. Image features from phase congruency. Videre J. Comput. Vis. Res. 1999, 1, 1–26.
  16. Wang, Z.; Li, Q. Information content weighting for perceptual image quality assessment. IEEE Trans. Image Process. 2010, 20, 1185–1198.
  17. Shi, C.; Lin, Y. Full reference image quality assessment based on visual salience with color appearance and gradient similarity. IEEE Access 2020, 8, 97310–97320.
  18. Varga, D. Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency. Electronics 2022, 11, 559.
  19. Zhang, X.; Feng, X.; Wang, W.; Xue, W. Edge strength similarity for image quality assessment. IEEE Signal Process. Lett. 2013, 20, 319–322.
  20. Lin, H.; Hosu, V.; Saupe, D. KADID-10k: A large-scale artificially distorted IQA database. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–3.
  21. Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008-a database for evaluation of full-reference visual quality assessment metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45.
  22. Yang, X.; Sun, Q.; Wang, T. Image quality assessment via spatial structural analysis. Comput. Electr. Eng. 2018, 70, 349–365.
  23. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  24. Brunet, D.; Vrscay, E.R.; Wang, Z. On the mathematical properties of the structural similarity index. IEEE Trans. Image Process. 2011, 21, 1488–1499.
  25. Nilsson, J.; Akenine-Möller, T. Understanding SSIM. arXiv 2020, arXiv:2006.13846.
  26. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402.
  27. Sampat, M.P.; Wang, Z.; Gupta, S.; Bovik, A.C.; Markey, M.K. Complex wavelet structural similarity: A new image similarity index. IEEE Trans. Image Process. 2009, 18, 2385–2401.
  28. Chen, G.H.; Yang, C.L.; Po, L.M.; Xie, S.L. Edge-based structural similarity for image quality assessment. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006; Volume 2, p. II.
  29. Liu, A.; Lin, W.; Narwaria, M. Image quality assessment based on gradient similarity. IEEE Trans. Image Process. 2011, 21, 1500–1512.
  30. Zhu, J.; Wang, N. Image quality assessment by visual gradient similarity. IEEE Trans. Image Process. 2011, 21, 919–933.
  31. Ma, J.; Wu, J.; Li, L.; Dong, W.; Xie, X.; Shi, G.; Lin, W. Blind image quality assessment with active inference. IEEE Trans. Image Process. 2021, 30, 3650–3663.
  32. Chetouani, A.; Pedersen, M. Image Quality Assessment without Reference by Combining Deep Learning-Based Features and Viewing Distance. Appl. Sci. 2021, 11, 4661.
  33. Vosta, S.; Yow, K.C. A CNN-RNN Combined Structure for Real-World Violence Detection in Surveillance Cameras. Appl. Sci. 2022, 12, 1021.
  34. Amirshahi, S.A.; Pedersen, M.; Yu, S.X. Image quality assessment by comparing CNN features between images. J. Imaging Sci. Technol. 2016, 60, 60410-1.
  35. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
  36. Barla, A.; Franceschi, E.; Odone, F.; Verri, A. Image kernels. In Proceedings of the International Workshop on Support Vector Machines, Niagara Falls, ON, Canada, 10 August 2002; pp. 83–96.
  37. Amirshahi, S.A.; Pedersen, M.; Beghdadi, A. Reviving traditional image quality metrics using CNNs. Color Imaging Conf. 2018, 26, 241–246.
  38. Ahn, S.; Choi, Y.; Yoon, K. Deep learning-based distortion sensitivity prediction for full-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 344–353.
  39. Chubarau, A.; Clark, J. VTAMIQ: Transformers for Attention Modulated Image Quality Assessment. arXiv 2021, arXiv:2110.01655.
  40. Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219.
  41. Okarma, K. Combined full-reference image quality metric linearly correlated with subjective assessment. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 13–17 June 2010; pp. 539–546.
  42. Okarma, K. Combined image similarity index. Opt. Rev. 2012, 19, 349–354.
  43. Okarma, K. Extended hybrid image similarity–combined full-reference image quality metric linearly correlated with subjective scores. Elektron. Elektrotechnika 2013, 19, 129–132.
  44. Oszust, M. Full-reference image quality assessment with linear combination of genetically selected quality measures. PLoS ONE 2016, 11, e0158333.
  45. Lukin, V.V.; Ponomarenko, N.N.; Ieremeiev, O.I.; Egiazarian, K.O.; Astola, J. Combining full-reference image visual quality metrics by neural network. In Proceedings of the Human Vision and Electronic Imaging XX, SPIE, San Francisco, CA, USA, 9–12 February 2015; Volume 9394, pp. 172–183.
  46. Levkine, G. Prewitt, Sobel and Scharr gradient 5 × 5 convolution matrices. Image Process. Artic. Second. Draft. 2012, 1–17.
  47. Kim, D.J.; Lee, H.C.; Lee, T.S.; Lee, K.W.; Lee, B.H. Edge-Based Gaze Planning for Salient Proto-Objects. Appl. Mech. Mater. 2013, 330, 1003–1007.
  48. Pedersen, M.; Hardeberg, J.Y. Full-reference image quality metrics: Classification and evaluation. Found. Trends® Comput. Graph. Vis. 2012, 7, 1–80.
  49. Sheikh, H.R.; Bovik, A.C.; De Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Process. 2005, 14, 2117–2128.
  50. Yu, X.; Bampis, C.G.; Gupta, P.; Bovik, A.C. Predicting the quality of images compressed after distortion in two steps. IEEE Trans. Image Process. 2019, 28, 5757–5770.
  51. Temel, D.; AlRegib, G. CSV: Image quality assessment based on color, structure, and visual system. Signal Process. Image Commun. 2016, 48, 92–103.
  52. Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Image quality assessment: Unifying structure and texture similarity. arXiv 2020, arXiv:2004.07728.
  53. Temel, D.; AlRegib, G. ReSIFT: Reliability-weighted sift-based image quality assessment. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2047–2051.
  54. Yang, G.; Li, D.; Lu, F.; Liao, Y.; Yang, W. RVSIM: A feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process. 2018, 2018, 1–15.
  55. Temel, D.; AlRegib, G. Perceptual image quality assessment through spectral analysis of error representations. Signal Process. Image Commun. 2019, 70, 37–46.
Figure 1. HVS’s perception of local image quality in relation to visual saliency: (a) degradation in non-salient region, (b) degradation in salient region.
Figure 2. Empirical MOS distributions in benchmark IQA databases: (a) KADID-10k [20], (b) TID2013 [7], (c) TID2008 [21], and (d) CSIQ [8].
Figure 3. Execution time (logarithmic scale) versus SROCC measured on KADID-10k [20].
Figure 4. Execution time (logarithmic scale) versus SROCC measured on CSIQ [8].
Table 1. Summary of benchmark databases used in this study.
| Database | Ref. Images | Dist. Images | Dist. Types | Dist. Levels |
| KADID-10k [20] | 81 | 10,125 | 25 | 5 |
| TID2013 [7] | 25 | 3000 | 24 | 5 |
| TID2008 [21] | 25 | 1700 | 17 | 4 |
| CSIQ [8] | 30 | 866 | 6 | 4–5 |
Table 2. Computer configuration applied in our experiments.
Computer model: STRIX Z270H Gaming
Operating system: Windows 10
Memory: 15 GB
CPU: Intel(R) Core(TM) i7-7700K CPU 4.20 GHz (8 cores)
GPU: Nvidia GeForce GTX 1080
Table 3. Comparison of the SG-ESSIM to several other state-of-the-art algorithms on KADID-10k [20] and TID2013 [7]. The highest values are typed in bold, while the second highest ones are underlined.
| FR-IQA Metric | KADID-10k [20] PLCC | KADID-10k [20] SROCC | KADID-10k [20] KROCC | TID2013 [7] PLCC | TID2013 [7] SROCC | TID2013 [7] KROCC |
| 2stepQA [50] | 0.768 | 0.771 | 0.571 | 0.736 | 0.733 | 0.550 |
| CSV [51] | 0.671 | 0.669 | 0.531 | 0.852 | 0.848 | 0.657 |
| DISTS [52] | 0.809 | 0.814 | 0.626 | 0.759 | 0.711 | 0.524 |
| ESSIM [19] | 0.644 | 0.823 | 0.634 | 0.740 | 0.797 | 0.627 |
| GSM [29] | 0.780 | 0.780 | 0.588 | 0.789 | 0.787 | 0.593 |
| IW-SSIM [16] | 0.781 | 0.756 | 0.524 | 0.832 | 0.778 | 0.598 |
| MAD [8] | 0.716 | 0.724 | 0.535 | 0.827 | 0.778 | 0.600 |
| MS-SSIM [26] | 0.819 | 0.821 | 0.630 | 0.794 | 0.785 | 0.604 |
| PSNR | 0.479 | 0.676 | 0.488 | 0.616 | 0.646 | 0.467 |
| ReSIFT [53] | 0.648 | 0.628 | 0.468 | 0.630 | 0.623 | 0.471 |
| RVSIM [54] | 0.728 | 0.719 | 0.540 | 0.763 | 0.683 | 0.520 |
| SSIM [23] | 0.670 | 0.671 | 0.489 | 0.618 | 0.616 | 0.437 |
| SUMMER [55] | 0.719 | 0.723 | 0.540 | 0.623 | 0.622 | 0.472 |
| SG-ESSIM | 0.739 | 0.838 | 0.650 | 0.878 | 0.805 | 0.636 |
Table 4. Comparison of SG-ESSIM to several other state-of-the-art algorithms on TID2008 [21] and CSIQ [8]. The highest values are typed in bold, while the second highest ones are underlined.
| FR-IQA Metric | TID2008 [21] PLCC | TID2008 [21] SROCC | TID2008 [21] KROCC | CSIQ [8] PLCC | CSIQ [8] SROCC | CSIQ [8] KROCC |
| 2stepQA [50] | 0.757 | 0.769 | 0.574 | 0.841 | 0.849 | 0.655 |
| CSV [51] | 0.852 | 0.851 | 0.659 | 0.933 | 0.933 | 0.766 |
| DISTS [52] | 0.705 | 0.668 | 0.488 | 0.930 | 0.930 | 0.764 |
| ESSIM [19] | 0.658 | 0.876 | 0.696 | 0.814 | 0.933 | 0.768 |
| GSM [29] | 0.782 | 0.781 | 0.578 | 0.906 | 0.910 | 0.729 |
| IW-SSIM [16] | 0.842 | 0.856 | 0.664 | 0.804 | 0.921 | 0.753 |
| MAD [8] | 0.831 | 0.829 | 0.639 | 0.950 | 0.947 | 0.796 |
| MS-SSIM [26] | 0.838 | 0.846 | 0.648 | 0.913 | 0.917 | 0.743 |
| PSNR | 0.447 | 0.489 | 0.346 | 0.853 | 0.809 | 0.599 |
| ReSIFT [53] | 0.627 | 0.632 | 0.484 | 0.884 | 0.868 | 0.695 |
| RVSIM [54] | 0.789 | 0.743 | 0.566 | 0.923 | 0.903 | 0.728 |
| SSIM [23] | 0.669 | 0.675 | 0.485 | 0.812 | 0.812 | 0.606 |
| SUMMER [55] | 0.817 | 0.823 | 0.623 | 0.826 | 0.830 | 0.658 |
| SG-ESSIM | 0.853 | 0.888 | 0.708 | 0.836 | 0.944 | 0.786 |
Table 5. Comparison of SG-ESSIM to several other state-of-the-art algorithms. Direct and weighted average PLCC, SROCC, and KROCC values are presented. Measured on KADID-10k [20], TID2013 [7], TID2008 [21], and CSIQ [8]. The highest values are typed in bold, while the second highest ones are underlined.
| FR-IQA Metric | Direct Average PLCC | Direct Average SROCC | Direct Average KROCC | Weighted Average PLCC | Weighted Average SROCC | Weighted Average KROCC |
| 2stepQA [50] | 0.776 | 0.781 | 0.587 | 0.765 | 0.768 | 0.572 |
| CSV [51] | 0.827 | 0.825 | 0.653 | 0.740 | 0.738 | 0.582 |
| DISTS [52] | 0.801 | 0.781 | 0.601 | 0.795 | 0.785 | 0.599 |
| ESSIM [19] | 0.714 | 0.857 | 0.681 | 0.673 | 0.830 | 0.647 |
| GSM [29] | 0.814 | 0.815 | 0.622 | 0.789 | 0.789 | 0.596 |
| IW-SSIM [16] | 0.815 | 0.828 | 0.635 | 0.800 | 0.780 | 0.570 |
| MAD [8] | 0.831 | 0.820 | 0.643 | 0.763 | 0.758 | 0.573 |
| MS-SSIM [26] | 0.841 | 0.842 | 0.656 | 0.821 | 0.822 | 0.633 |
| PSNR | 0.599 | 0.655 | 0.475 | 0.520 | 0.660 | 0.470 |
| ReSIFT [53] | 0.697 | 0.688 | 0.530 | 0.655 | 0.641 | 0.483 |
| RVSIM [54] | 0.801 | 0.762 | 0.589 | 0.752 | 0.725 | 0.549 |
| SSIM [23] | 0.692 | 0.694 | 0.504 | 0.668 | 0.669 | 0.485 |
| SUMMER [55] | 0.746 | 0.750 | 0.573 | 0.720 | 0.720 | 0.540 |
| SG-ESSIM | 0.827 | 0.869 | 0.695 | 0.783 | 0.843 | 0.661 |
Table 6. Comparison of TID2013’s [7] distortion types. SROCC values are given. The highest values are typed in bold, while the second highest ones are underlined.
| Distortion | 2stepQA [50] | CSV [51] | DISTS [52] | ESSIM [19] | GSM [29] | MAD [8] | MS-SSIM [26] | ReSIFT [53] | RVSIM [54] | SSIM [23] | SG-ESSIM |
| AGN | 0.817 | 0.938 | 0.845 | 0.911 | 0.899 | 0.912 | 0.624 | 0.831 | 0.886 | 0.848 | 0.936 |
| ANC | 0.590 | 0.862 | 0.786 | 0.806 | 0.823 | 0.800 | 0.387 | 0.749 | 0.836 | 0.779 | 0.855 |
| SCN | 0.860 | 0.939 | 0.859 | 0.938 | 0.927 | 0.929 | 0.683 | 0.839 | 0.868 | 0.851 | 0.935 |
| MN | 0.395 | 0.748 | 0.814 | 0.711 | 0.704 | 0.658 | 0.372 | 0.702 | 0.734 | 0.775 | 0.715 |
| HFN | 0.828 | 0.927 | 0.868 | 0.890 | 0.884 | 0.902 | 0.704 | 0.869 | 0.895 | 0.889 | 0.920 |
| IN | 0.715 | 0.848 | 0.674 | 0.825 | 0.813 | 0.743 | 0.766 | 0.824 | 0.865 | 0.810 | 0.833 |
| QN | 0.886 | 0.892 | 0.810 | 0.904 | 0.911 | 0.895 | 0.720 | 0.745 | 0.869 | 0.817 | 0.911 |
| GB | 0.853 | 0.933 | 0.926 | 0.970 | 0.954 | 0.915 | 0.762 | 0.937 | 0.970 | 0.910 | 0.969 |
| DEN | 0.900 | 0.952 | 0.899 | 0.956 | 0.955 | 0.922 | 0.819 | 0.907 | 0.926 | 0.876 | 0.963 |
| JPEG | 0.867 | 0.944 | 0.897 | 0.923 | 0.933 | 0.924 | 0.784 | 0.905 | 0.930 | 0.893 | 0.950 |
| JP2K | 0.891 | 0.966 | 0.931 | 0.946 | 0.934 | 0.929 | 0.790 | 0.928 | 0.946 | 0.806 | 0.949 |
| JGTE | 0.806 | 0.800 | 0.906 | 0.826 | 0.866 | 0.768 | 0.582 | 0.712 | 0.831 | 0.701 | 0.823 |
| J2TE | 0.854 | 0.887 | 0.865 | 0.902 | 0.893 | 0.854 | 0.742 | 0.835 | 0.882 | 0.813 | 0.899 |
| NEPN | 0.775 | 0.811 | 0.833 | 0.799 | 0.804 | 0.803 | 0.792 | 0.693 | 0.771 | 0.634 | 0.801 |
| BLOCK | 0.044 | 0.183 | 0.302 | 0.649 | 0.588 | −0.322 | 0.382 | 0.440 | 0.545 | 0.564 | 0.623 |
| MS | 0.660 | 0.654 | 0.752 | 0.712 | 0.728 | 0.708 | 0.732 | 0.418 | 0.559 | 0.738 | 0.706 |
| CC | 0.430 | 0.227 | 0.464 | 0.453 | 0.466 | 0.420 | 0.027 | −0.055 | 0.132 | 0.355 | 0.452 |
| CCS | −0.258 | 0.809 | 0.789 | −0.297 | 0.676 | −0.059 | −0.055 | −0.209 | 0.366 | 0.742 | 0.010 |
| MGN | 0.747 | 0.884 | 0.790 | 0.853 | 0.831 | 0.888 | 0.653 | 0.765 | 0.853 | 0.804 | 0.900 |
| CN | 0.858 | 0.924 | 0.907 | 0.910 | 0.902 | 0.904 | 0.596 | 0.882 | 0.914 | 0.797 | 0.916 |
| LCNI | 0.902 | 0.965 | 0.932 | 0.957 | 0.945 | 0.950 | 0.713 | 0.897 | 0.933 | 0.877 | 0.952 |
| ICQD | 0.808 | 0.919 | 0.832 | 0.904 | 0.901 | 0.867 | 0.739 | 0.770 | 0.871 | 0.820 | 0.928 |
| CA | 0.702 | 0.845 | 0.879 | 0.839 | 0.835 | 0.760 | 0.568 | 0.838 | 0.871 | 0.740 | 0.835 |
| SSR | 0.926 | 0.976 | 0.944 | 0.965 | 0.961 | 0.949 | 0.801 | 0.944 | 0.956 | 0.822 | 0.964 |
| All | 0.733 | 0.848 | 0.711 | 0.797 | 0.787 | 0.778 | 0.785 | 0.623 | 0.683 | 0.616 | 0.805 |
Table 7. Comparison of TID2008’s [21] distortion types. SROCC values are given. The highest values are typed in bold, while the second highest ones are underlined.
| Distortion | 2stepQA [50] | CSV [51] | DISTS [52] | ESSIM [19] | GSM [29] | MAD [8] | MS-SSIM [26] | ReSIFT [53] | RVSIM [54] | SSIM [23] | SG-ESSIM |
| AGN | 0.766 | 0.922 | 0.812 | 0.875 | 0.855 | 0.872 | 0.610 | 0.771 | 0.840 | 0.805 | 0.913 |
| ANC | 0.627 | 0.893 | 0.811 | 0.792 | 0.821 | 0.803 | 0.354 | 0.762 | 0.829 | 0.780 | 0.900 |
| SCN | 0.814 | 0.932 | 0.838 | 0.909 | 0.904 | 0.901 | 0.727 | 0.810 | 0.837 | 0.800 | 0.920 |
| MN | 0.450 | 0.781 | 0.830 | 0.744 | 0.736 | 0.673 | 0.304 | 0.728 | 0.760 | 0.797 | 0.825 |
| HFN | 0.818 | 0.936 | 0.870 | 0.899 | 0.889 | 0.894 | 0.749 | 0.881 | 0.886 | 0.871 | 0.921 |
| IN | 0.659 | 0.819 | 0.626 | 0.777 | 0.764 | 0.650 | 0.767 | 0.777 | 0.836 | 0.776 | 0.786 |
| QN | 0.850 | 0.894 | 0.770 | 0.884 | 0.903 | 0.851 | 0.708 | 0.730 | 0.836 | 0.784 | 0.873 |
| GB | 0.877 | 0.923 | 0.909 | 0.966 | 0.948 | 0.896 | 0.759 | 0.904 | 0.963 | 0.866 | 0.964 |
| DEN | 0.919 | 0.970 | 0.931 | 0.974 | 0.971 | 0.928 | 0.786 | 0.923 | 0.939 | 0.873 | 0.963 |
| JPEG | 0.895 | 0.948 | 0.894 | 0.938 | 0.937 | 0.931 | 0.774 | 0.914 | 0.926 | 0.880 | 0.959 |
| JP2K | 0.910 | 0.984 | 0.953 | 0.966 | 0.949 | 0.941 | 0.837 | 0.935 | 0.970 | 0.745 | 0.972 |
| JGTE | 0.851 | 0.790 | 0.907 | 0.859 | 0.871 | 0.781 | 0.606 | 0.735 | 0.860 | 0.666 | 0.855 |
| J2TE | 0.845 | 0.852 | 0.833 | 0.875 | 0.880 | 0.802 | 0.742 | 0.778 | 0.854 | 0.769 | 0.863 |
| NEPN | 0.803 | 0.752 | 0.882 | 0.742 | 0.784 | 0.801 | 0.749 | 0.761 | 0.732 | 0.588 | 0.729 |
| Block | 0.441 | 0.770 | 0.618 | 0.876 | 0.843 | −0.362 | 0.765 | 0.743 | 0.782 | 0.804 | 0.905 |
| MS | 0.655 | 0.594 | 0.681 | 0.611 | 0.638 | 0.563 | 0.711 | 0.322 | 0.525 | 0.629 | 0.683 |
| CC | 0.597 | 0.330 | 0.649 | 0.624 | 0.634 | 0.548 | 0.042 | −0.018 | 0.194 | 0.502 | 0.642 |
| All | 0.769 | 0.851 | 0.668 | 0.876 | 0.781 | 0.829 | 0.846 | 0.632 | 0.743 | 0.675 | 0.888 |
Table 8. Comparison of SROCC of each FR-IQA metrics on TID2013’s [7] distortion levels (Level 1 represents the lowest level of degradation, while Level 5 represents the highest one). The highest values are typed in bold, while the second highest ones are underlined.
| Level | 2stepQA [50] | CSV [51] | DISTS [52] | ESSIM [19] | GSM [29] | MAD [8] | MS-SSIM [26] | ReSIFT [53] | RVSIM [54] | SSIM [23] | SG-ESSIM |
| Level 1 | 0.246 | 0.424 | 0.235 | 0.388 | 0.372 | 0.388 | 0.166 | 0.181 | 0.248 | 0.204 | 0.448 |
| Level 2 | 0.394 | 0.626 | 0.440 | 0.547 | 0.512 | 0.368 | 0.049 | 0.401 | 0.430 | 0.276 | 0.569 |
| Level 3 | 0.539 | 0.635 | 0.367 | 0.638 | 0.523 | 0.442 | 0.240 | 0.415 | 0.416 | 0.084 | 0.660 |
| Level 4 | 0.571 | 0.749 | 0.606 | 0.766 | 0.669 | 0.284 | 0.172 | 0.699 | 0.702 | 0.208 | 0.787 |
| Level 5 | 0.663 | 0.787 | 0.664 | 0.875 | 0.745 | 0.308 | 0.397 | 0.788 | 0.803 | 0.202 | 0.861 |
| All | 0.733 | 0.848 | 0.711 | 0.797 | 0.787 | 0.778 | 0.785 | 0.623 | 0.683 | 0.616 | 0.805 |
Table 9. Comparison of SROCC for each FR-IQA metric on TID2008’s [21] distortion levels (Level 1 represents the lowest level of degradation, while Level 4 represents the highest one). The highest values are typed in bold, while the second highest ones are underlined.
| Level | 2stepQA [50] | CSV [51] | DISTS [52] | ESSIM [19] | GSM [29] | MAD [8] | MS-SSIM [26] | ReSIFT [53] | RVSIM [54] | SSIM [23] | SG-ESSIM |
| Level 1 | 0.470 | 0.638 | 0.566 | 0.655 | 0.639 | 0.432 | 0.067 | 0.457 | 0.634 | 0.368 | 0.691 |
| Level 2 | 0.619 | 0.683 | 0.381 | 0.773 | 0.636 | 0.520 | 0.221 | 0.437 | 0.513 | 0.105 | 0.807 |
| Level 3 | 0.573 | 0.774 | 0.581 | 0.826 | 0.677 | 0.239 | 0.059 | 0.707 | 0.761 | 0.190 | 0.849 |
| Level 4 | 0.610 | 0.829 | 0.628 | 0.905 | 0.718 | 0.232 | 0.275 | 0.788 | 0.825 | 0.241 | 0.891 |
| All | 0.769 | 0.851 | 0.668 | 0.876 | 0.781 | 0.829 | 0.846 | 0.632 | 0.743 | 0.675 | 0.888 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
