Article

An Image-Quality Assessment Algorithm for Solar Tone-Mapped Images Based on Visual Simulation

Qing Bian and Changhui Rao
1 National Laboratory on Adaptive Optics, Chengdu 610209, China
2 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1811; https://doi.org/10.3390/app16041811
Submission received: 5 January 2026 / Revised: 5 February 2026 / Accepted: 5 February 2026 / Published: 12 February 2026

Abstract

To facilitate the display of solar images, captured solar images are often subjected to tone-mapping and enhancement. Accordingly, it is necessary to assess the quality of solar images before and after tone-mapping. However, human subjective perception differs under different ambient light intensities and on different displays. Therefore, this paper proposes an image-quality assessment algorithm for solar tone-mapped images based on visual simulation. By combining a display characteristics model with a model of the human visual system (HVS), the modeled images reflect the perceptual effects of the HVS under different ambient lighting conditions and display devices. Feeding the modeled images into general image-quality assessment (IQA) metrics enables better alignment with human visual perception. The proposed approach has been validated with two metrics: the solar IQA metric T based on the image power spectrum, and the IQA metric S based on the signal detection probability. We conducted subjective quality assessment experiments in a bright indoor environment with an ambient light intensity of 400 lux. By adjusting the display brightness, the Ambient Contrast Ratio (ACR) was set to 226.69 and 17.83, respectively. When the ACR was 226.69, the Spearman Rank Correlation Coefficient (SRCC) between the metric T and the subjective scores increased by 1.79% when modeled rather than unmodeled inputs were used, and that of the metric S increased by 5.44%. At an ACR of 17.83, the SRCC of the metric T increased by 1.83%, and that of the metric S by 8.38%. We also conducted a regression test on the tone-mapping enhancement parameters using the metric S, and the results demonstrated that the images generated from the metric with modeled inputs yielded better visual effects.

1. Introduction

Solar activity exerts significant impacts on Earth's environment and human life. To observe solar activity effectively and conduct analytical investigations in solar physics, several ground-based large-aperture solar telescopes have been constructed [1,2,3]. Meanwhile, it is necessary to perform tone-mapping on the captured high dynamic range (HDR) images of solar active regions to enhance their display. During the tone-mapping and enhancement process, the assessment of solar image quality is an indispensable component.
Generally, image-quality assessment (IQA) algorithms can be categorized into two major types: subjective and objective image-quality assessment. The human visual system (HVS) is the ultimate receiver of image information, and observers' subjective scores (i.e., the Mean Opinion Score, MOS) provide the most intuitive and reliable assessment outcomes. However, subjective quality assessment is susceptible to observers' physiological states and individual preferences, so its results can vary considerably between observers. The International Telecommunication Union (ITU) has developed several standards, including the flat image-quality test specification (ITU-R BT.500-11) [4] and the stereoscopic image-quality test specification (ITU-R BT.1438) [5], which provide reference standards for subjective quality assessment and ensure its reliability. Nevertheless, subjective quality assessment with human observers still requires substantial time and faces considerable challenges in practical implementation.
Objective IQA algorithms can be mainly categorized into three types: Full-Reference, Reduced-Reference, and No-Reference image-quality assessment. Full-Reference Image-Quality Assessment (FR-IQA) algorithms primarily assess image quality by quantifying the differences between the reference image and the distorted image. Important indices such as Mean-Square Error (MSE) [6] and Peak Signal-to-Noise Ratio (PSNR) [7] are still widely utilized, but they exhibit limited consistency with human visual perception. Based on this, IQA algorithms incorporating HVS characteristics [8,9], such as Structural Similarity (SSIM) [10] and Visual Information Fidelity (VIF) [11], have gradually been developed. Reduced-Reference Image-Quality Assessment (RR-IQA) algorithms utilize only partial features of the original image as a reference benchmark. This type of algorithm is typically applied to process large volumes of data (e.g., videos) and assesses the distortion degree by calculating differences in partial features between the reference image and distorted images. No-Reference Image-Quality Assessment (NR-IQA) algorithms do not require a reference image and only assess quality based on the information of the distorted images themselves. Such algorithms are generally divided into two categories: algorithms based on Natural Scene Statistics (NSS) [12,13,14] and algorithms that assess image distortion by assessing the sharpness [15,16,17,18,19,20].
Deep learning also plays an important role in NR-IQA. In 2011, Li et al. [21] employed generalized regression neural networks (GRNNs) to establish the correlation between multiple image features and image quality. In 2014, Kang et al. [22] pioneered the introduction of convolutional neural networks (CNNs) into NR-IQA, achieving image-to-score regression mapping. In 2015, Lv et al. [23] extracted Gaussian difference features for model training. In 2017, Kim et al. [24] used local quality maps of distorted images generated by FR-IQA algorithms as training labels to train CNNs. In 2020, Ma et al. [25] utilized generative adversarial networks (GANs) to enhance the model’s capability to distinguish distorted image features. Zeng et al. [26] introduced pseudo-reference images into deep learning networks, representing a novel attempt in NR-IQA.
It is worth noting that a special category of IQA algorithms is specifically designed for evaluating the image quality before and after tone-mapping. IQA algorithms for tone-mapped natural scene images are primarily classified into two categories: full-reference and no-reference algorithms. Currently, FR-IQA algorithms primarily focus on assessing image quality, with little consideration of dynamic range variations. In 2008, Aydin et al. [27] proposed the first objective assessment algorithm dedicated to image tone-mapping. In 2012, Yeganeh and Wang [28] proposed a tone-mapped IQA algorithm (TMQI), which extracts contrast features of tone-mapped images and primarily quantifies the presence of contrast loss, amplification, or reversal. The TMQI algorithm integrates the structural similarity between the tone-mapped image and the original image, as well as the naturalness metric of the tone-mapped image, thereby laying the foundation for subsequent tone-mapping IQA studies [29,30,31,32]. No-reference tone-mapped IQA is primarily implemented by machine learning algorithms [33,34,35,36,37,38,39]. However, traditional machine learning algorithms necessitate manual feature extraction, so tone-mapping IQA algorithms based on deep learning have emerged as a research focus in the field. Such algorithms encompass transfer learning [40,41], GANs [42], etc.
Currently, relatively few IQA algorithms are dedicated to solar images. Due to the lack of ideal reference images in collected solar image datasets, FR-IQA algorithms are not applicable to solar images, and statistical analysis of information from solar reference images cannot be performed for quality assessment. Meanwhile, due to the absence of a learnable solar reference image dataset, machine learning-based approaches are also not feasible for assessment. Currently, the most widely used metric is the contrast of granulation regions, as shown in Equation (1):
$$c_g = \frac{\mathrm{std}(f_g)}{\mathrm{mean}(f_g)} \times 100\%,$$
where f_g denotes the selected granulation region, std(f_g) is its standard deviation, and mean(f_g) is its mean value. The higher the contrast, the better the quality of the solar image. However, this metric often breaks down for regions containing sunspots. At the same time, because image quality varies significantly across different regions of large field-of-view (FoV) solar images, the quality of a specific region often fails to represent the overall image quality. In 1992, Nill and Bouzas [15] proposed an objective image-quality metric based on the digital image power spectrum. First, the two-dimensional (2D) power spectrum of the image to be assessed is computed through the Fourier transform. Then, the 2D power spectrum is normalized by the zero-frequency signal and converted to polar coordinates. By averaging within each spatial frequency band, the one-dimensional (1D) power spectrum of the image is obtained. Finally, after unit conversion, the 1D power spectrum is weighted and summed with the Contrast Sensitivity Function (CSF) to obtain the final assessment metric. In 2018, Yang et al. [43] improved this algorithm by incorporating noise masking into the original framework and effectively estimating the noise from the image. This metric is applied to solar image-quality assessment.
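As an illustration, here is a minimal Python sketch of the granulation contrast of Equation (1); the granulation region f_g is assumed to be supplied as a cropped array, and the synthetic patch below merely stands in for real data:

```python
import numpy as np

def granulation_contrast(f_g: np.ndarray) -> float:
    """Equation (1): RMS contrast of a selected granulation region, in percent."""
    return float(np.std(f_g) / np.mean(f_g) * 100.0)

# Synthetic stand-in for a granulation crop (not real solar data).
patch = np.random.default_rng(0).normal(loc=1000.0, scale=45.0, size=(128, 128))
print(f"c_g = {granulation_contrast(patch):.2f}%")
```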
Nevertheless, depending on the ambient light intensity, luminance, and reflectivity of display devices, the images presented on the display and perceived by humans in different environments exhibit significant variations. Existing IQA algorithms fail to establish a connection between the HVS, display devices, and observation environments, making it difficult to truly simulate human visual perception under varying observation conditions. Therefore, IQA for solar tone-mapped images needs to address one main challenge: The assessment metric should reflect the perceptual performance of the HVS under different ambient light intensities and displays.
As a result, this paper proposes an image-quality assessment algorithm for solar tone-mapped images based on visual simulation. In this paper, the innovations lie in the following points:
1. The algorithm integrates display device models and HVS models to simulate human visual perception of displayed images under various observation environments, thereby reflecting the displayed performance under different observation scenarios and display devices. The proposed algorithm can adapt to quality assessment under various display devices and ambient light intensities.
2. When using the HVS model, we design a decay ratio by leveraging the Ambient Contrast Ratio (ACR) and the S-shaped response of the HVS. This decay ratio is then superimposed on the original mapping coefficient, making the final model more consistent with human visual perception.
3. The original TMQI metric consists of two components: structural fidelity and image naturalness. Since image naturalness is derived from the statistical analysis of 3000 natural scene images, it is not applicable to this special scenario (solar images). Thus, we extracted the effective component from the calculation of structural fidelity as the signal detection probability. We input the images before and after modeling into the signal detection probability metric S and the power-spectrum-based solar IQA metric T, and conducted subjective experiments, which verified the necessity of the display device model and the HVS model.
The overall arrangement of the article is as follows: Section 2 briefly introduces the algorithmic process for simulating human visual perception of displayed images under various observation environments. Experimental results and discussions will be presented in Section 3. Finally, Section 4 is the conclusion.

2. The Proposed Algorithm

The proposed algorithm mainly includes the following four steps:
1. Establish the display device model;
2. Calculate the mapping coefficients of the rod and cone cell model based on ambient light conditions and display device characteristics;
3. Map the display-modeled images using the rod and cone cell model;
4. Calculate IQA metrics based on the modeled images.
The overall algorithm flow is shown in Figure 1. This section explains the calculation of the display device model and the HVS model under different scenarios. Meanwhile, we also introduce the two IQA metrics used to verify the effectiveness of our modeling: the signal detection probability metric and the solar IQA metric based on the image power spectrum.

2.1. Display Device Model

For the modeling of display devices, we use the method proposed by Mantiuk et al. [44] in 2008, in which Equation (2) was proposed to model common LCD displays:
$$L_d(L) = L^{\gamma} \cdot (L_{max} - L_{black}) + L_{black} + L_{refl},$$
where γ is the gamma parameter of the display, L_max is the maximum luminance the screen can reach when a full-white signal is input, L_black is the minimum luminance the screen still emits when a full-black signal is input, and L_refl is the reflected light intensity. For non-glossy screens, the ambient light is uniformly reflected by the rough structure of the display surface, forming a low-contrast background light. Its intensity is proportional to the ambient light intensity and the reflectivity of the display screen, as shown in Equation (3):
$$L_{refl} = \frac{\rho_d \cdot L_{env}}{\pi},$$
where ρ_d is the reflectivity of the display screen, L_env is the ambient light intensity, and π is the constant that converts ambient illuminance into luminance for a diffusely reflecting surface. As the ambient light intensity increases, L_refl gradually intensifies, which significantly compresses the effective dynamic range of the display device and substantially reduces the image contrast.
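A minimal sketch of Equations (2) and (3) follows, assuming the input signal L is normalized to [0, 1]; the default parameter values are those used in Section 3.1, and the function names are ours:

```python
import numpy as np

def reflected_luminance(rho_d: float, L_env: float) -> float:
    """Equation (3): diffuse reflection of ambient light (lux -> cd/m^2)."""
    return rho_d * L_env / np.pi

def display_model(L: np.ndarray, gamma: float = 2.2, L_max: float = 540.0,
                  L_black: float = 0.18, rho_d: float = 0.0125,
                  L_env: float = 5000.0) -> np.ndarray:
    """Equation (2): luminance emitted by an LCD for a normalized signal L."""
    L_refl = reflected_luminance(rho_d, L_env)
    return L ** gamma * (L_max - L_black) + L_black + L_refl

signal = np.linspace(0.0, 1.0, 5)   # full-black to full-white input signal
print(display_model(signal))        # emitted luminance in cd/m^2
```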

2.2. Human Visual System Model Under Different Ambient Light Intensities

While Equation (2) models the physical reflection of the display under varying ambient light intensities, it does not account for how the HVS perceives these signals. Specifically, the adaptation state of the HVS shifts significantly under different lighting conditions. Ferwerda [45] adopted the contrast threshold matching algorithm proposed by Ward [46] to model the operating mechanisms of human cone cells and rod cells. As the scene adaptation luminance changes from dark to bright, this model accounts for the gradual inactivation of rod cells and the gradual activation of cone cells. The threshold-intensity models for cone cells and rod cells are shown in Equation (4) and Equation (5), respectively.
$$\log t_p(L_a) = \begin{cases} -0.72, & \log(L_a) \le -2.6 \\ \log(L_a) - 1.255, & \log(L_a) \ge 1.9 \\ \left(0.249\,\log(L_a) + 0.65\right)^{2.7} - 0.72, & \text{otherwise} \end{cases}$$
$$\log t_s(L_a) = \begin{cases} -2.86, & \log(L_a) \le -3.94 \\ \log(L_a) - 0.395, & \log(L_a) \ge -1.44 \\ \left(0.405\,\log(L_a) + 1.6\right)^{2.18} - 2.86, & \text{otherwise} \end{cases}$$
where t_p(L_a) and t_s(L_a) are the detection thresholds of cone cells and rod cells under the adaptation luminance L_a, respectively. Based on this, image mapping can be carried out for scotopic, photopic, and mesopic situations, and the calculation of the mapping coefficient is shown in Equation (6).
$$m = \frac{t(L_{da})}{t(L_{wa})},$$
where t is calculated according to Equations (4) and (5). L_da is the adaptation luminance for the display observer, and L_wa is the adaptation luminance for the world observer. Generally, L_da is taken as half the maximum luminance of the display device, and L_wa as half the maximum luminance of the image.
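The threshold-versus-intensity functions and the mapping coefficient (Equations (4)-(6)) can be sketched as follows, using the piecewise fits from Ferwerda et al. [45]; the example luminances are illustrative:

```python
import numpy as np

def log_threshold_cone(L_a: float) -> float:
    """Equation (4): log detection threshold of cone cells at adaptation luminance L_a."""
    x = np.log10(L_a)
    if x <= -2.6:
        return -0.72
    if x >= 1.9:
        return x - 1.255
    return (0.249 * x + 0.65) ** 2.7 - 0.72

def log_threshold_rod(L_a: float) -> float:
    """Equation (5): log detection threshold of rod cells (Ferwerda et al. fit)."""
    x = np.log10(L_a)
    if x <= -3.94:
        return -2.86
    if x >= -1.44:
        return x - 0.395
    return (0.405 * x + 1.6) ** 2.18 - 2.86

def mapping_coefficient(L_da: float, L_wa: float) -> float:
    """Equation (6) with the photopic threshold t_p, as used in this paper."""
    return 10 ** log_threshold_cone(L_da) / 10 ** log_threshold_cone(L_wa)

# L_da: half the display's maximum luminance; L_wa: half the image maximum
# (1500 cd/m^2 here is only an illustrative world adaptation luminance).
print(mapping_coefficient(L_da=270.0, L_wa=1500.0))
```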
However, as the ambient light intensity gradually increases, the visibility of the images on the display screen decreases. This visibility reduction can be quantified using the Ambient Contrast Ratio (ACR), which is calculated as Equation (7).
$$ACR = \frac{L_{amb\text{-}white}}{L_{amb\text{-}black}} = \frac{L_{max} + L_{refl}}{L_{black} + L_{refl}},$$
where L_amb-white is the white-state luminance of the display under ambient light, and L_amb-black is the black-state luminance of the display under ambient light. These two values can be calculated from the darkroom white-state luminance L_max and black-state luminance L_black of the display, together with the ambient light reflection L_refl. Typically, when the ACR is greater than 15, the image quality is considered good; when the ACR is less than 5, the recognizability of information displayed on the screen decreases substantially [47,48]. Accordingly, a Decay Ratio is designed based on the ACR value and modulated onto the original mapping coefficient m. The specific implementation is as follows: First, the ACR is converted to the logarithmic domain to match the perceptual characteristics of the HVS. Then, the Decay Ratio is normalized to the range [0, 1] within the ACR domain, as shown in Equation (8). Finally, the Sigmoid function is employed for smoothing, as shown in Equation (9).
$$DR = \frac{\log(ACR) - \log(ACR_{black})}{\log(ACR_{no\text{-}decay}) - \log(ACR_{black})},$$
$$DR = \frac{1}{1 + \exp\left(-10 \times (DR - 0.5)\right)},$$
where ACR_no-decay and ACR_black are taken as 20 and 1.5, respectively, so that the Decay Ratio matches human visual perception. The relationship between the Decay Ratio and the ACR is shown in Figure 2. As the ACR decreases, the Decay Ratio falls off in an S-shape, which is consistent with the characteristics of human visual perception [49].
Since contrast decay primarily occurs under high ambient light (low ACR) conditions, we strictly employ the photopic vision model (t_p for t in Equation (6)) when the ACR is less than or equal to 20. For scenarios with low ambient light (high ACR), the decay factor is not applied, preserving the original visual performance. The final mapping is given in Equation (10).
$$L_d(L_w) = (m \cdot DR) \cdot L_w,$$
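Putting Equations (7)-(10) together, the ambient-contrast branch can be sketched as follows (constants 20 and 1.5 as given above; the example reflection corresponds to ρ_d = 0.0125 under 30,000 lux, which reproduces the ACR of about 5.52 quoted in Figure 6):

```python
import numpy as np

def ambient_contrast_ratio(L_max: float, L_black: float, L_refl: float) -> float:
    """Equation (7): white-to-black luminance ratio under ambient light."""
    return (L_max + L_refl) / (L_black + L_refl)

def decay_ratio(acr: float, acr_no_decay: float = 20.0, acr_black: float = 1.5) -> float:
    """Equations (8)-(9): normalize log-ACR, then smooth with a sigmoid."""
    dr = (np.log10(acr) - np.log10(acr_black)) / \
         (np.log10(acr_no_decay) - np.log10(acr_black))
    return 1.0 / (1.0 + np.exp(-10.0 * (dr - 0.5)))

def hvs_mapping(L_w: np.ndarray, m: float, acr: float) -> np.ndarray:
    """Equation (10): apply the decayed mapping coefficient when ACR <= 20;
    otherwise the display-modeled image is kept unchanged (Algorithm 1, step 4)."""
    if acr > 20.0:
        return L_w
    return (m * decay_ratio(acr)) * L_w

L_refl = 0.0125 * 30000.0 / np.pi      # Equation (3) at 30,000 lux
acr = ambient_contrast_ratio(540.0, 0.18, L_refl)
print(acr, decay_ratio(acr))
```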
The overall flowchart of human visual simulation is shown in Figure 3.

2.3. IQA Metrics

Ultimately, we input the modeled images into two metrics for evaluation and validation: the signal detection probability metric, and the solar IQA metric based on the image power spectrum [43].
The pseudo-code of the overall proposed algorithm is shown in Algorithm 1.
Algorithm 1 An Image-Quality Assessment Algorithm for Solar Tone-Mapped Images Based on Visual Simulation
Require: Input image; ambient light intensity; characteristics of the display (γ, ρ_d, L_max, L_black)
 1: Use Equations (2) and (3) to calculate the display model L_d
 2: Use Equation (7) to calculate the Ambient Contrast Ratio (ACR)
 3: if ACR is larger than 20 then
 4:   The modeled image L_d' is the image L_d
 5: else if ACR is less than or equal to 20 then
 6:   Use Equations (4) and (6) to calculate the mapping coefficient m
 7:   Use Equations (8) and (9) to calculate the decay ratio DR
 8:   Use Equation (10) to calculate the modeled image L_d'
 9: end if
10: Use the modeled image L_d' to calculate the IQA metrics

2.3.1. The Signal Detection Probability Metric

In 2012, the TMQI algorithm proposed by Yeganeh and Wang [28] became an important foundation for the assessment of tone-mapped images. This algorithm assesses tone-mapped images through structural fidelity and statistical naturalness metrics. However, for solar images, the lack of reference images means that naturalness statistics cannot be computed. Therefore, only the structural fidelity metric is retained in this study. The local structural fidelity used for the quality assessment of tone-mapped images is defined as Equation (11).
$$S_{local}(x, y) = \frac{2\sigma'_x\sigma'_y + C_1}{{\sigma'_x}^2 + {\sigma'_y}^2 + C_1} \cdot \frac{\sigma_{xy} + C_2}{\sigma_x \sigma_y + C_2},$$
Among them, x and y correspond to the images before and after mapping, respectively. σ_x, σ_y, and σ_xy are the local standard deviations and the cross-correlation coefficient of the images before and after mapping. C_1 and C_2 are constants used to avoid outliers. σ'_x and σ'_y are the nonlinearly mapped values of the local standard deviations. This mapping is used because signals that are either imperceptible in both images or salient in both images, before and after mapping, should not be penalized.
The detectability of pixel-level signals is quantified through a nonlinear mapping based on the signal detection probability of the HVS; that is, the closer σ' is to 1, the easier the signal is to detect. The variation of the signal detection probability is described by the psychophysical function [50], whose form is shown in Equation (12):
$$p(s) = \frac{1}{\sqrt{2\pi}\,\theta_s}\int_{-\infty}^{S}\exp\!\left[-\frac{(x-\tau_s)^2}{2\theta_s^2}\right]dx,$$
where p is the detection probability, S is the amplitude of the sinusoidal signal, θ_s is the standard deviation of the normal distribution, and τ_s(f) = 1/(λA(f)) is the modulation threshold (λ is a scaling constant used to fit actual psychophysical measurements), which can be determined from the CSF A. This threshold is derived from contrast-sensitivity measurements made with pure sinusoidal stimuli, and it must be converted into a signal intensity threshold expressed in terms of the signal standard deviation. Since the signal amplitude is proportional to both the contrast and the average signal intensity, and the amplitude of a sinusoid exceeds its standard deviation by a factor of √2, the threshold defined on the signal standard deviation can be calculated as shown in Equation (13):
$$\tau_\sigma = \frac{\bar{\mu}}{\sqrt{2}\,\lambda A(f)},$$
where μ̄ is the mean value of the low dynamic range image, usually taken as 128. According to Crozier's law [51], the relationship between this threshold and the standard deviation of the detection distribution is given by Equation (14), where the coefficient k is taken as 3 to reduce false alarms.
$$\theta_\sigma(f) = \frac{\tau_\sigma(f)}{k},$$
From this, the final mapping relationship can be obtained as shown in Equation (15):
$$\sigma' = \frac{1}{\sqrt{2\pi}\,\theta_\sigma}\int_{-\infty}^{\sigma}\exp\!\left[-\frac{(x-\tau_\sigma)^2}{2\theta_\sigma^2}\right]dx,$$
This nonlinear mapping transforms the signal standard deviation σ into a saliency value σ' in the range 0 to 1. When σ ≫ τ_σ, σ' → 1 (indicating a salient signal); when σ ≪ τ_σ, σ' → 0 (indicating a non-salient signal). The intermediate region exhibits a smooth transition as σ increases. Thus, the perceptibility of image signals can be reflected by the mean value of σ'. Since the HVS perceives images locally and typically focuses on structural and contrast changes in local regions, the sliding-window operation over the image is retained in the practical implementation [28]. This lets the algorithm focus on local rather than global image regions, thereby capturing detailed spatial variations and making the calculation results more consistent with subjective visual experience. The signal detection probability metric is given by Equation (16).
$$S = \mathrm{mean}(\sigma'),$$
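A sketch of the metric S (Equations (13)-(16)) follows. The Gaussian integral of Equation (15) is evaluated as a normal CDF, and the combined CSF term λA(f) is replaced by a single illustrative constant, since its true value comes from the fitted CSF:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

def local_std(img: np.ndarray, win: int = 11) -> np.ndarray:
    """Sliding-window standard deviation via local first and second moments."""
    m1 = uniform_filter(img, win)
    m2 = uniform_filter(img * img, win)
    return np.sqrt(np.maximum(m2 - m1 * m1, 0.0))

def detection_probability_metric(img: np.ndarray, win: int = 11,
                                 lam_A: float = 60.0, k: float = 3.0,
                                 mu_bar: float = 128.0) -> float:
    """Metric S (Eq. 16): mean of the mapped local std sigma' (Eq. 15).
    lam_A stands in for lambda * A(f); 60.0 is an illustrative value only."""
    tau_sigma = mu_bar / (np.sqrt(2.0) * lam_A)   # Eq. (13)
    theta_sigma = tau_sigma / k                   # Eq. (14)
    sigma = local_std(img.astype(np.float64), win)
    sigma_prime = norm.cdf(sigma, loc=tau_sigma, scale=theta_sigma)  # Eq. (15)
    return float(np.mean(sigma_prime))

img = np.random.default_rng(1).integers(0, 256, size=(256, 256)).astype(np.float64)
print(detection_probability_metric(img))
```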

2.3.2. The Solar IQA Metric Based on the Image Power Spectrum

For solar images captured by ground-based telescopes, the primary distortion types are the effects of atmospheric turbulence and noise. Blur induced by atmospheric turbulence reduces the image power spectrum, while image noise raises it. To assess solar images effectively under noisy conditions, Yang et al. [43] proposed the improved metric T by incorporating noise masking into the original framework and effectively estimating the noise from the image. The overall process is shown in Figure 4.
The solar IQA metric T based on the image power spectrum [43] is shown in Equation (17).
$$T = \sum_{\rho = 3/(64s)}^{0.5}\sum_{\theta = 0}^{360} A^2(C, \rho)\cdot\big(P(\rho, \theta) - p_0\big),$$
where the image size is M × N, A is the CSF with the noise masking effect, and C is the conversion coefficient used to reconcile the units of the CSF curve with those of spatial frequency. Under conventional observation, C is calculated to be 39.3 pixels/degree [43]. ρ is the transformed spatial frequency, P is the power spectrum of the noise-free image, p_0 is the photon noise level of the image, and the angle θ = tan⁻¹(v/u), where v and u are the frequency-domain components after conversion.
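A simplified sketch of the metric T is given below. For self-containment, the noise-masking CSF A is replaced by the classic Mannos-Sakrison CSF, and the photon-noise level p_0 is taken as an input rather than estimated from the image, so this is not a faithful reimplementation of [43]:

```python
import numpy as np

def csf_mannos(f: np.ndarray) -> np.ndarray:
    """Mannos-Sakrison CSF, standing in for the paper's noise-masking CSF A."""
    f = np.maximum(f, 1e-6)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def power_spectrum_metric(img: np.ndarray, c: float = 39.3, p0: float = 0.0) -> float:
    """Sketch of Equation (17): CSF-weighted sum of the noise-corrected power spectrum."""
    img = img.astype(np.float64)
    P = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    P /= P.max()                                   # normalize by the zero-frequency power
    h, w = img.shape
    yy, xx = np.meshgrid(np.arange(h) - h / 2.0,
                         np.arange(w) - w / 2.0, indexing="ij")
    rho = np.sqrt(xx ** 2 + yy ** 2) / max(h, w)   # spatial frequency, cycles/pixel
    band = (rho > 0.0) & (rho <= 0.5)              # exclude DC; Nyquist limit at 0.5
    A = csf_mannos(rho[band] * c)                  # cycles/pixel -> cycles/degree via C
    return float(np.sum(A ** 2 * (P[band] - p0)))

img = np.random.default_rng(2).normal(1000.0, 50.0, size=(256, 256))
print(power_spectrum_metric(img))
```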

3. Experimental Results and Discussion

The data we used were collected on the Educational Adaptive-optics Solar Telescope (EAST) at the Shanghai Astronomy Museum [1], which is equipped with a 177-unit high-order Adaptive Optics (AO) system. Images were acquired in the high-resolution photospheric TiO (705.8 nm) waveband with a pixel resolution of 0.12″. The camera is a QHY4040 with an effective resolution of 2048 × 2048 pixels, a frame rate of 40 FPS at 16-bit, and an imaging Full Width at Half Maximum (FWHM) of 2.46 pixels at 705.8 nm.

3.1. Simulation-Modeled Experiments

We first conducted simulation-modeled experiments using general display settings. For common LCD display devices on the market, L_black = 0.18 nit, L_max = 540 nit, ρ_d = 0.0125, and γ = 2.2 are used in Equations (2) and (3). Figure 5 shows two cases of the display under different light intensities obtained by simulation. In the two sets of images, (a) is the original image, and (b)–(g) show the display as the ambient light intensity increases from 5000 lux (indoors) to 50,000 lux (clear outdoor conditions). With increasing ambient light intensity, the effective dynamic range of the image is gradually compressed, the contrast decreases progressively, and information with a lower dynamic range is lost first. To verify the generalizability of the proposed algorithm, experiments were also performed on two different solar images; the same trends of dynamic range compression and gradual loss of effective information are observed.
Based on this, we simulated the display screen as perceived by the HVS under different ambient light intensities. Overall, as the ambient light intensity increases, the screen's reflected light intensifies, leading to a substantial reduction in the screen's ACR. Consequently, the display perceived by the HVS gradually darkens, with progressive loss of effective information. Figure 6 and Figure 7 show the displayed effects of two image sets under different ambient light intensities after applying the simulated display device model and HVS model. In each figure, (a) is the original image, and (b)–(g) show the displayed effects after applying the display device model and HVS model as the ambient light intensity increases from 5000 lux to 50,000 lux. As the ambient light intensity increases (with the ACR decreasing from 27.89 to 3.711), the proposed algorithm effectively simulates human visual perception. The two rows below each set display the corresponding zoom-in patches, where the blue area is Region 1, and the red area is Region 2.
Meanwhile, we calculated the standard deviation of the two sets of modeled images as the ambient light intensity gradually increased (the ACR gradually decreased), as shown in Table 1. The standard deviation quantifies the detail richness of an image: images with a higher standard deviation typically exhibit richer detail. As Table 1 shows, the lower the ACR, the fewer details are preserved in the images.

3.2. Validation and Comparison

To verify the effectiveness of the proposed algorithm, subjective experiments are essential. We conducted all experiments on a display device with a 16-inch screen and a resolution of 2560 × 1600 (WQXGA). The device features a maximum brightness of 400 nits and an IPS panel with an anti-glare screen. The reflectivity is 0.0125, L_black is 0.18 nit, and the gamma value is 2.2, consistent with the simulation experiments. To ensure the reliability of the experiments, we conducted two rounds of subjective experiments in an indoor environment with an ambient light intensity of around 400 lux. The ACR was adjusted by reducing the display brightness from 400 nits to 30 nits, yielding ACR values of 226.69 and 17.83 for the two rounds, respectively. Following the standardized subjective evaluation process provided by the International Telecommunication Union [52], we invited a total of 16 experts to participate in the subjective experiments. Given the particularity of solar images, the invited experts all work in solar image processing, general image processing, or adaptive optics, and have good visual function. Among the subjects were two PhD holders, eight PhD candidates, and six Master's candidates; five were female and 11 were male. All subjects were aged between 23 and 30 years, with a mean age of approximately 25 years, and were in good health.
Experts were invited to score images on the fixed display, and all test images were randomly generated by varying the parameters of the image enhancement algorithm proposed by Gu et al. [33]. All images were captured by EAST at the Shanghai Astronomy Museum, as mentioned above. We selected a total of 20 solar images of good quality to form the dataset. Each subject was required to complete three experiments under each of the two conditions, with the images for each experiment randomly selected from the dataset. Because it is difficult to perform visual assessment without a reference, the Double Stimulus Continuous Quality Scale (DSCQS, ITU-R BT.500-13) was adopted. Each test round presented a pair of perfectly aligned images (consistent in dimensions and position) on the display: the original (unenhanced) image on the left and the enhanced image (the sample to be assessed) on the right. Before the experiments began, we explained the software operation and the detailed scoring rules to each subject, and we refrained from disturbing the subjects during their assessments. The scoring sheet developed to guide the subjective assessment is shown in Table 2.
To verify the effectiveness of our modeling for image-quality assessment, we recorded the signal detection probability metric S and the solar IQA metric based on the image power spectrum T separately during the experiments. The input images were the enhanced images and the enhanced and modeled images, respectively.
After obtaining all the data, we calculated the Spearman Rank Correlation Coefficient (SRCC) between the IQA metrics and the subjective scores. The SRCC quantifies the monotonic (possibly nonlinear) relationship between two variables. In Table 3 and Table 4, we present the average SRCC values of the two metrics for each subject at ACRs of 226.69 and 17.83, together with the corresponding p-values. The p-value is the statistical significance level output by the hypothesis test and is used to determine whether the correlation coefficient is statistically significant; typically, a correlation is considered significant when p < 0.05. To facilitate comparison, we present the results for both types of input images (modeled and unmodeled).
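The SRCC and its p-value can be computed directly with SciPy; the score arrays below are purely illustrative:

```python
from scipy.stats import spearmanr

# Hypothetical metric scores and matching subjective MOS values for one session.
metric_scores = [0.41, 0.55, 0.48, 0.62, 0.70, 0.58, 0.66, 0.52]
mos_scores = [4.0, 6.5, 5.0, 7.0, 8.5, 6.0, 7.5, 5.5]

srcc, p_value = spearmanr(metric_scores, mos_scores)
print(f"SRCC = {srcc:.3f}, p = {p_value:.3g}")  # significant if p < 0.05
```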
As shown in Table 3, when the ACR is 226.69, the SRCC of metric T improved by 1.79% and the SRCC of metric S improved by 5.44%. In Table 4, when the ACR is 17.83, the SRCC of metric T improved by 1.83% and the SRCC of metric S improved by 8.38%. This is consistent with our judgment. On the one hand, an ACR of 226.69 falls within an interval with relatively high visual comfort for human observation, so the relative improvement after modeling is relatively slight. When the ACR is reduced to 17.83, the effective information perceivable by human observers is compromised, thus the modeled images exert a greater influence on the subjective consistency of the metrics. On the other hand, the solar IQA method based on the image power spectrum is designed to evaluate the sharpness of solar images, whereas the IQA metric based on the signal detection probability conforms to the psychophysical function, and thus it exhibits better performance in terms of subjective consistency.
It should be noted that no decrease in SRCC was observed in the evaluation data, but there were cases where SRCC values remained identical before and after the improvement. This is because the calculation of SRCC relies solely on the rank order of the data, and the improvement did not change that rank order. The phenomenon may be attributed to the random selection of our enhancement parameters, which can produce images with only minor differences and thus identical final rank orders. In future work, we will take this into greater consideration and strive to maximize the differences between images.
Herein, we only present the mean values of the results for each subject, but we have provided the results of every individual experiment in the Supplementary Materials, which also include the statistics of the subjects’ basic information. However, for privacy considerations, we have redacted the signature of each subject.

3.3. Experiments of Enhancement Parameter Regression

To demonstrate the superiority of the proposed modeling more objectively, we conducted enhancement parameter regression experiments based on the enhancement algorithm proposed by Gu et al. [33] at an ACR of 226.69. Specifically, Gu et al. decompose images using a local edge-preserving filter, compress the dynamic range of each detail layer, and finally perform image fusion. In this study, 100 sets of fusion parameters were used to fuse the three detail layers, and the enhanced image quality was assessed using the IQA metric with and without modeling. Since the subjective evaluation showed that the signal detection probability metric S exhibits better subjective consistency, S is adopted here for image-quality assessment. Figure 8 presents the assessment results for the 100 parameter sets, where (a) is the original image, (b) is the highest-scoring image under the metric S with original inputs, and (c) is the highest-scoring image under the metric S with modeled inputs. The corresponding quantitative results are shown in Table 5.
The 16 subjects were then asked to select the image with the best subjective appearance from each group of images in Figure 8. All subjects unanimously agreed that the images enhanced with the parameters regressed from modeled inputs have better subjective quality. Although the images regressed from the original inputs have higher contrast and sharper edges, they exhibit significant distortion and even gradient inversion; this phenomenon is effectively controlled in our improved method.

4. Conclusions

This paper argued that when assessing the quality of solar images after tone-mapping, the assessment metric needs to reflect the human visual perception under different ambient light intensities and displays. According to this, we propose an image-quality assessment algorithm for solar tone-mapped images based on visual simulation. Specifically, we first model different display devices using a display device model. Subsequently, based on the visual correlation between the ACR and human visual perception, we model photopic vision using human rod and cone cell models with a decay ratio. Finally, the modeled images are input to the IQA metric to obtain the assessment score.
We conducted subjective evaluation experiments under two conditions of ACR = 226.69 and ACR = 17.83. The experimental results demonstrated that the SRCC values of the IQA metric based on the image power spectrum T increased by 1.79% and 1.83%, respectively, after the adoption of our modeled inputs. For the IQA metric based on the signal detection probability S, its SRCC values rose by 5.44% and 8.38%, respectively, with our modeled inputs applied. This result indicates that our improvement has effectively enhanced the subjective consistency of the IQA metrics. To relatively objectively demonstrate the effectiveness of our modeled inputs, we conducted three sets of enhancement parameter regression experiments using the signal detection probability metric S before and after the adoption of modeled inputs. All three experiments demonstrated that the regression results obtained with modeled inputs are more in line with observation requirements and can effectively suppress over-enhancement.
However, it is worth noting that due to limitations of the available display devices and observation environments, we were unable to further validate performance across more diverse display devices and observation scenarios; this will be addressed in future work. Additionally, we aim to integrate this research closely with image enhancement algorithms to enable adaptive selection of enhancement parameters under varying display environments. Our regular observation environment is a bright indoor office whose illumination nevertheless varies with the weather, so the ambient light intensity must be measured with professional instruments. On this basis, given the target display device, we can automatically use the display characteristics and the measured ambient light intensity to run the experiments illustrated in Figure 8 and select appropriate mapping and enhancement parameters for different conditions.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app16041811/s1, Table S1: Subjective Quality Score Results.

Author Contributions

Q.B.: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing—original draft. C.R.: Funding acquisition, Project administration, Supervision, Resources, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 12527802).

Institutional Review Board Statement

This study did not recruit external human subjects for the subjective image quality assessment experiment; all participants were members of the research team from the laboratory. The experiment was only used for the internal validation of image quality algorithms, with no personal private information of the subjects collected. No sensitive groups such as minors or special populations were involved, and the research process complied with the regulations on academic ethics and data protection.

Informed Consent Statement

The subjective image quality assessment experiment was conducted in accordance with academic research ethics guidelines. All subjects signed informed consent forms, which clarified the purpose of the evaluation and the usage of the data. Only subjective scoring data was collected, and all data were solely used for the analysis of this study without external disclosure. Subjects participated voluntarily, and the experiment adhered to the basic ethical principles of fairness and transparency.

Data Availability Statement

The data provided in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Correction Statement

This article has been republished with a minor correction to the existing affiliation information. This change does not affect the scientific content of the article.

References

1. Rao, C.; Rao, X.; Du, Z.; Bao, H.; Li, C.; Huang, J.; Guo, Y.; Zhong, L.; Lin, Q.; Ge, X.; et al. EAST-Educational Adaptive-optics Solar Telescope. Res. Astron. Astrophys. 2022, 22, 065003.
2. Zhang, L.; Bao, H.; Rao, X.; Guo, Y.; Zhong, L.; Ran, X.; Yan, N.; Yang, J.; Wang, C.; Zhou, J.; et al. Ground-layer adaptive optics for the New Vacuum Solar Telescope: Instrument description and first results. Sci. China Phys. Mech. Astron. 2023, 4, 269611.
3. Rao, C.; Zhong, L.; Guo, Y.; Li, M.; Zhang, L.; Wei, K. Astronomical adaptive optics: A review. PhotoniX 2024, 5, 16.
4. Recommendation ITU-R BT.500-13; Methodology for the Subjective Assessment of the Quality of Television Pictures. ITU: Geneva, Switzerland, 2012.
5. Recommendation ITU-R BT.1438; Subjective Assessment of Stereoscopic Television Pictures. ITU: Geneva, Switzerland, 2000.
6. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623.
7. Avcibas, I.; Sankur, B.; Sayood, K. Statistical evaluation of image quality measures. J. Electron. Imaging 2002, 11, 206–223.
8. Daly, S.J. Visible differences predictor: An algorithm for the assessment of image fidelity. In Proceedings of the Human Vision, Visual Processing, and Digital Display III; SPIE: Amsterdam, The Netherlands, 1992; Volume 1666, pp. 2–15.
9. Watson, A.B. Digital Images and Human Vision; The MIT Press: Cambridge, MA, USA, 1993; pp. 163–178.
10. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
11. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.
12. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
13. Saad, M.A.; Bovik, A.C.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352.
14. Xue, W.; Mou, X.; Zhang, L.; Bovik, A.C.; Feng, X. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process. 2014, 23, 4850–4862.
15. Nill, N.B.; Bouzas, B. Objective image quality measure derived from digital image power spectra. Opt. Eng. 1992, 31, 813–825.
16. Zhu, X.; Milanfar, P. A no-reference sharpness metric sensitive to blur and noise. In Proceedings of the 2009 International Workshop on Quality of Multimedia Experience, San Diego, CA, USA, 29–31 July 2009; IEEE: New York, NY, USA, 2009; pp. 64–69.
17. Ferzli, R.; Karam, L.J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans. Image Process. 2009, 18, 717–728.
18. Narvekar, N.D.; Karam, L.J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE Trans. Image Process. 2011, 20, 2678–2683.
19. Vu, P.V.; Chandler, D.M. A fast wavelet-based algorithm for global and local image sharpness estimation. IEEE Signal Process. Lett. 2012, 19, 423–426.
20. Hassen, R.; Wang, Z.; Salama, M.M. Image sharpness assessment based on local phase coherence. IEEE Trans. Image Process. 2013, 22, 2798–2810.
21. Li, C.; Bovik, A.C.; Wu, X. Blind image quality assessment using a general regression neural network. IEEE Trans. Neural Netw. 2011, 22, 793–799.
22. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740.
23. Lv, Y.; Jiang, G.; Yu, M.; Xu, H.; Shao, F.; Liu, S. Difference of Gaussian statistical features based blind image quality assessment: A deep learning approach. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP); IEEE: New York, NY, USA, 2015; pp. 2344–2348.
24. Kim, J.; Lee, S. Fully deep blind image quality predictor. IEEE J. Sel. Top. Signal Process. 2016, 11, 206–220.
25. Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X.P. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995.
26. Zeng, H.; Zhang, L.; Bovik, A.C. Blind image quality assessment with a probabilistic quality representation. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; IEEE: New York, NY, USA, 2018; pp. 609–613.
27. Aydin, T.O.; Mantiuk, R.; Myszkowski, K.; Seidel, H.P. Dynamic range independent image quality assessment. ACM Trans. Graph. (TOG) 2008, 27, 1–10.
28. Yeganeh, H.; Wang, Z. Objective quality assessment of tone-mapped images. IEEE Trans. Image Process. 2012, 22, 657–667.
29. Ma, K.; Yeganeh, H.; Zeng, K.; Wang, Z. High dynamic range image compression by optimizing tone mapped image quality index. IEEE Trans. Image Process. 2015, 24, 3086–3097.
30. Nafchi, H.Z.; Shahkolaei, A.; Moghaddam, R.F.; Cheriet, M. FSITM: A feature similarity index for tone-mapped images. IEEE Signal Process. Lett. 2014, 22, 1026–1029.
31. Hadizadeh, H.; Bajić, I.V. Full-reference objective quality assessment of tone-mapped images. IEEE Trans. Multimed. 2017, 20, 392–404.
32. Krasula, L.; Fliegel, K.; Le Callet, P. FFTMI: Features fusion for natural tone-mapped images quality evaluation. IEEE Trans. Multimed. 2019, 22, 2038–2047.
33. Gu, K.; Wang, S.; Zhai, G.; Ma, S.; Yang, X.; Lin, W.; Zhang, W.; Gao, W. Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure. IEEE Trans. Multimed. 2016, 18, 432–443.
34. Kundu, D.; Ghadiyaram, D.; Bovik, A.C.; Evans, B.L. No-reference quality assessment of tone-mapped HDR pictures. IEEE Trans. Image Process. 2017, 26, 2957–2971.
35. Yue, G.; Hou, C.; Gu, K.; Mao, S.; Zhang, W. Biologically inspired blind quality assessment of tone-mapped images. IEEE Trans. Ind. Electron. 2017, 65, 2525–2536.
36. Chen, P.; Li, L.; Zhang, X.; Wang, S.; Tan, A. Blind quality index for tone-mapped images based on luminance partition. Pattern Recognit. 2019, 89, 108–118.
37. Liu, X.; Fang, Y.; Du, R.; Zuo, Y.; Wen, W. Blind quality assessment for tone-mapped images based on local and global features. Inf. Sci. 2020, 528, 46–57.
38. Fang, Y.; Yan, J.; Du, R.; Zuo, Y.; Wen, W.; Zeng, Y.; Li, L. Blind quality assessment for tone-mapped images by analysis of gradient and chromatic statistics. IEEE Trans. Multimed. 2020, 23, 955–966.
39. Wang, X.; Jiang, Q.; Shao, F.; Gu, K.; Zhai, G.; Yang, X. Exploiting local degradation characteristics and global statistical properties for blind quality assessment of tone-mapped HDR images. IEEE Trans. Multimed. 2020, 23, 692–705.
40. Kumar, V.A.; Gupta, S.; Chandra, S.S.; Raman, S.; Channappayya, S.S. No-reference quality assessment of tone mapped high dynamic range (HDR) images using transfer learning. In Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, 31 May–2 June 2017; IEEE: New York, NY, USA, 2017; pp. 1–3.
41. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
42. Yang, J.; Zhou, Y.; Zhao, Y.; Wen, J. Blind quality assessment of tone-mapped images using multi-exposure sequences. J. Vis. Commun. Image Represent. 2022, 87, 103553.
43. Yang, M.; Tian, Y.; Rao, C.H. Adaptive optics-corrected solar image quality assessment based on image power spectrum and human visual system. Opt. Eng. 2018, 57, 013102.
44. Mantiuk, R.; Daly, S.; Kerofsky, L. Display adaptive tone mapping. In ACM SIGGRAPH 2008 Papers; Association for Computing Machinery: New York, NY, USA, 2008; pp. 1–10.
45. Ferwerda, J.A.; Pattanaik, S.N.; Shirley, P.; Greenberg, D.P. A model of visual adaptation for realistic image synthesis. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 4–9 August 1996; Association for Computing Machinery: New York, NY, USA, 1996; pp. 249–258.
46. Ward, G. A contrast-based scalefactor for luminance display. In Graphics Gems IV; Academic Press Professional, Inc.: Cambridge, MA, USA, 1994; pp. 415–421.
47. Dobrowolski, J.; Sullivan, B.T.; Bajcar, R. Optical interference, contrast-enhanced electroluminescent device. Appl. Opt. 1992, 31, 5988–5996.
48. Lee, J.H.; Park, K.H.; Kim, S.H.; Choi, H.C.; Kim, B.K.; Yin, Y. 5.3: Invited Paper: AH-IPS, Superb Display for Mobile Device. In Proceedings of the SID Symposium Digest of Technical Papers, Vancouver, BC, Canada, 19–24 May 2013; Wiley Online Library: Hoboken, NJ, USA, 2013; Volume 44, pp. 32–33.
49. Green, D.M.; Swets, J.A. Signal Detection Theory and Psychophysics; Wiley: New York, NY, USA, 1966; Volume 1.
50. Green, B.F. JP Guilford. Psychometric Methods. New York: McGraw-Hill, 1954, pp. ix + 597. Psychometrika 1955, 20, 163–165.
51. Crozier, W.J. On the variability of critical illumination for flicker fusion and intensity discrimination. J. Gen. Physiol. 1936, 19, 503–522.
52. ITU-R BT.500-14; Methodology for the Subjective Assessment of the Quality of Television Pictures. International Telecommunication Union: Geneva, Switzerland, 2002.
Figure 1. The overall flowchart of the proposed algorithm.
Figure 2. Graph of the relationship between the Decay Ratio and ACR.
Figure 3. The overall flowchart of human visual simulation.
Figure 4. The flowchart of Yang's solar IQA metric.
Figure 5. Two sets of simulated displayed effects using the display device model under different ambient light intensities. (a1,a2) Original images. (b1,b2) Simulated images under an ambient light intensity of 5000 lux. (c1,c2) Simulated images under an ambient light intensity of 25,000 lux. (d1,d2) Simulated images under an ambient light intensity of 30,000 lux. (e1,e2) Simulated images under an ambient light intensity of 35,000 lux. (f1,f2) Simulated images under an ambient light intensity of 40,000 lux. (g1,g2) Simulated images under an ambient light intensity of 50,000 lux.
Figure 6. Case 1: The displayed effects under different ambient light intensities after applying the simulated display device model and HVS model. (a) Original image. (b) Simulated image under ambient light intensity of 5000 lux (ACR = 27.89). (c) Simulated image under ambient light intensity of 25,000 lux (ACR = 6.417). (d) Simulated image under ambient light intensity of 30,000 lux (ACR = 5.516). (e) Simulated image under ambient light intensity of 35,000 lux (ACR = 4.871). (f) Simulated image under ambient light intensity of 40,000 lux (ACR = 4.388). (g) Simulated image under ambient light intensity of 50,000 lux (ACR = 3.711). The two rows below display the corresponding zoom-in patches, where the blue area is Region 1, and the red area is Region 2.
Figure 7. Case 2: The displayed effects under different ambient light intensities after applying the simulated display device model and HVS model. (a) Original image. (b) Simulated image under ambient light intensity of 5000 lux (ACR = 27.89). (c) Simulated image under ambient light intensity of 25,000 lux (ACR = 6.417). (d) Simulated image under ambient light intensity of 30,000 lux (ACR = 5.516). (e) Simulated image under ambient light intensity of 35,000 lux (ACR = 4.871). (f) Simulated image under ambient light intensity of 40,000 lux (ACR = 4.388). (g) Simulated image under ambient light intensity of 50,000 lux (ACR = 3.711). The two rows below display the corresponding zoom-in patches, where the blue area is Region 1, and the red area is Region 2.
Figure 8. Enhancement parameter regression results. The serial number of the enhancement parameter within the 100 sets is labeled in the top-left corner of each image. (a) The original image. (b) The image with the highest score when using the IQA metric based on the signal detection probability S with original inputs. (c) The image with the highest score when using the metric S with modeled inputs.
Table 1. The standard deviation of two sets of modeled images as the ambient light intensity gradually increased (ACR gradually decreased).

| Ambient Light Intensity (lux) | 5000 | 25,000 | 30,000 | 35,000 | 40,000 | 50,000 |
|---|---|---|---|---|---|---|
| (ACR) | (27.8910) | (6.4171) | (5.5156) | (4.8713) | (4.3880) | (3.7110) |
| Case 1 | 0.1107 | 0.0844 | 0.0663 | 0.0489 | 0.0354 | 0.0191 |
| Case 2 | 0.1130 | 0.0850 | 0.0681 | 0.0524 | 0.0391 | 0.0203 |
Table 2. The scoring sheet developed to guide subjective assessment.

| Score Range | Description (Relative to the Original Image) |
|---|---|
| 9–10 | Significant improvement: Edges are effectively enhanced, with internal details fully revealed and contrast that aligns well with human visual perception. |
| 7–8 | Noticeable improvement: Edges are effectively enhanced to a certain degree, while internal details are partially presented. |
| 5–6 | Imperceptible difference: Differences are only detectable by specialized equipment, and the image is visually consistent with the original. |
| 3–4 | Noticeable degradation: Loss of core content is observed (e.g., partial loss of solar information or contrast significantly deviating from human visual perception), leading to a poor visual experience. |
| 0–2 | Extreme degradation: Core content is unidentifiable, and the result is wholly unacceptable for visual interpretation. |
Table 3. The calculation results of the average SRCC and p-value of the IQA metrics with different inputs at an ACR of 226.69. Each cell reports SRCC (p-value).

| Subject Number | IQA Metric Based on Image Power Spectrum T | Metric T with Modeled Image | IQA Metric Based on the Detection Probability of the Signal S | Metric S with Modeled Image |
|---|---|---|---|---|
| 1 | 0.731 (7.10×10⁻³) | 0.736 (7.10×10⁻³) | 0.752 (3.57×10⁻³) | 0.788 (7.55×10⁻⁴) |
| 2 | 0.667 (6.89×10⁻³) | 0.729 (6.21×10⁻⁴) | 0.703 (1.55×10⁻³) | 0.762 (9.76×10⁻⁴) |
| 3 | 0.685 (2.13×10⁻³) | 0.696 (1.43×10⁻³) | 0.828 (7.54×10⁻⁶) | 0.842 (3.84×10⁻⁶) |
| 4 | 0.797 (2.90×10⁻⁴) | 0.800 (2.87×10⁻⁴) | 0.809 (1.91×10⁻⁴) | 0.833 (3.30×10⁻⁵) |
| 5 | 0.781 (2.58×10⁻⁴) | 0.789 (2.52×10⁻⁴) | 0.789 (2.40×10⁻⁴) | 0.822 (6.44×10⁻⁵) |
| 6 | 0.671 (1.07×10⁻²) | 0.696 (8.47×10⁻³) | 0.673 (8.29×10⁻³) | 0.771 (2.40×10⁻⁴) |
| 7 | 0.648 (3.88×10⁻³) | 0.670 (1.82×10⁻³) | 0.682 (1.69×10⁻³) | 0.795 (8.65×10⁻⁵) |
| 8 | 0.749 (3.93×10⁻³) | 0.772 (3.26×10⁻³) | 0.790 (2.06×10⁻³) | 0.861 (2.28×10⁻⁵) |
| 9 | 0.814 (8.72×10⁻⁵) | 0.816 (8.66×10⁻⁵) | 0.852 (7.53×10⁻⁶) | 0.861 (3.14×10⁻⁶) |
| 10 | 0.686 (4.88×10⁻³) | 0.688 (4.88×10⁻³) | 0.713 (3.74×10⁻³) | 0.786 (3.14×10⁻⁴) |
| 11 | 0.862 (5.13×10⁻⁶) | 0.870 (2.86×10⁻⁶) | 0.908 (1.13×10⁻⁷) | 0.917 (3.81×10⁻⁸) |
| 12 | 0.794 (1.03×10⁻³) | 0.798 (1.03×10⁻³) | 0.864 (2.73×10⁻⁶) | 0.887 (1.56×10⁻⁶) |
| 13 | 0.811 (5.32×10⁻⁵) | 0.826 (2.72×10⁻⁵) | 0.814 (2.51×10⁻⁴) | 0.827 (1.57×10⁻⁴) |
| 14 | 0.803 (4.88×10⁻⁵) | 0.811 (4.12×10⁻⁵) | 0.821 (2.82×10⁻⁵) | 0.825 (2.37×10⁻⁵) |
| 15 | 0.700 (1.18×10⁻³) | 0.707 (9.02×10⁻⁴) | 0.698 (1.38×10⁻³) | 0.743 (1.99×10⁻⁴) |
| 16 | 0.786 (2.50×10⁻⁴) | 0.792 (2.49×10⁻⁴) | 0.783 (2.15×10⁻⁴) | 0.838 (5.76×10⁻⁶) |
| Mean value | 0.749 (2.67×10⁻³) | 0.762 (1.90×10⁻³) | 0.780 (1.45×10⁻³) | 0.822 (1.80×10⁻⁴) |
Table 4. The calculation results of the average SRCC and p-value of the IQA metrics with different inputs at an ACR of 17.83. Each cell reports SRCC (p-value).

| Subject Number | IQA Metric Based on Image Power Spectrum T | Metric T with Modeled Image | IQA Metric Based on the Detection Probability of the Signal S | Metric S with Modeled Image |
|---|---|---|---|---|
| 1 | 0.605 (6.33×10⁻³) | 0.630 (4.02×10⁻³) | 0.628 (5.67×10⁻³) | 0.694 (1.50×10⁻³) |
| 2 | 0.788 (3.46×10⁻⁴) | 0.802 (2.97×10⁻⁴) | 0.827 (1.54×10⁻⁴) | 0.843 (7.98×10⁻⁵) |
| 3 | 0.566 (1.80×10⁻²) | 0.587 (1.52×10⁻²) | 0.635 (6.14×10⁻³) | 0.705 (2.21×10⁻³) |
| 4 | 0.737 (3.94×10⁻³) | 0.743 (3.50×10⁻³) | 0.735 (3.04×10⁻³) | 0.861 (1.12×10⁻⁵) |
| 5 | 0.839 (3.91×10⁻⁶) | 0.845 (2.91×10⁻⁶) | 0.828 (6.51×10⁻⁶) | 0.862 (1.59×10⁻⁶) |
| 6 | 0.532 (3.31×10⁻²) | 0.552 (2.23×10⁻²) | 0.625 (1.32×10⁻²) | 0.719 (2.46×10⁻³) |
| 7 | 0.551 (3.36×10⁻²) | 0.555 (2.92×10⁻²) | 0.665 (1.81×10⁻²) | 0.761 (1.04×10⁻³) |
| 8 | 0.704 (7.38×10⁻³) | 0.723 (6.85×10⁻³) | 0.760 (9.23×10⁻³) | 0.874 (6.27×10⁻⁶) |
| 9 | 0.666 (8.18×10⁻³) | 0.670 (8.11×10⁻³) | 0.645 (1.26×10⁻²) | 0.660 (7.71×10⁻³) |
| 10 | 0.595 (1.89×10⁻²) | 0.599 (1.67×10⁻²) | 0.605 (1.43×10⁻²) | 0.668 (5.55×10⁻³) |
| 11 | 0.854 (8.15×10⁻⁶) | 0.866 (2.92×10⁻⁶) | 0.860 (9.66×10⁻⁶) | 0.863 (7.91×10⁻⁶) |
| 12 | 0.830 (2.19×10⁻⁵) | 0.833 (1.68×10⁻⁵) | 0.825 (1.52×10⁻⁵) | 0.859 (2.08×10⁻⁶) |
| 13 | 0.743 (5.49×10⁻⁴) | 0.748 (5.26×10⁻⁴) | 0.807 (1.13×10⁻⁴) | 0.847 (1.74×10⁻⁵) |
| 14 | 0.648 (3.19×10⁻²) | 0.665 (3.18×10⁻²) | 0.673 (3.58×10⁻²) | 0.783 (4.68×10⁻⁴) |
| 15 | 0.571 (1.58×10⁻²) | 0.574 (1.36×10⁻²) | 0.610 (7.97×10⁻³) | 0.670 (2.00×10⁻³) |
| 16 | 0.715 (1.84×10⁻³) | 0.752 (6.80×10⁻⁴) | 0.778 (8.92×10⁻⁵) | 0.799 (6.62×10⁻⁵) |
| Mean value | 0.684 (1.12×10⁻²) | 0.696 (9.56×10⁻³) | 0.719 (7.90×10⁻³) | 0.779 (1.45×10⁻³) |
Table 5. The corresponding metric S scores for the image inputs before and after modeling in three sets of images.

| Serial Number | The Metric S with the Original Images Input | The Metric S with the Modeled Images Input |
|---|---|---|
| 21/100 | 0.0025 | 0.0023 |
| 99/100 | 0.0022 | 0.0027 |
| 16/100 | 0.0026 | 0.0025 |
| 90/100 | 0.0024 | 0.0029 |
| 96/100 | 0.0028 | 0.0024 |
| 52/100 | 0.0025 | 0.0028 |
