Article

Quality Assessment of Solar EUV Remote Sensing Images Using Multi-Feature Fusion

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 College of Communication Engineering, Jilin University, Nanhu Campus, Changchun 130033, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(20), 6329; https://doi.org/10.3390/s25206329
Submission received: 25 August 2025 / Revised: 16 September 2025 / Accepted: 24 September 2025 / Published: 14 October 2025

Highlights

What are the main findings?
  • A novel hybrid framework for assessing solar EUV image quality was developed, combining deep learning features from a HyperNet-based model with 22 handcrafted physical and statistical indicators.
  • The fusion of these feature types significantly improved the performance of image quality classification, achieving a high accuracy of 97.91% and an AUC of 0.9992.
What are the implications of the main findings?
  • This method provides a robust and scalable solution for the automated quality control of large-scale solar EUV observation data streams, which is crucial for space weather forecasting.
  • The research demonstrates the effectiveness of a multi-feature fusion approach for complex image quality assessment tasks, offering a new direction for similar applications in remote sensing.

Abstract

Accurate quality assessment of solar Extreme Ultraviolet (EUV) remote sensing imagery is critical for data reliability in space science and weather forecasting. This study introduces a hybrid framework that fuses deep semantic features from a HyperNet-based model with 22 handcrafted physical and statistical quality indicators to create a robust 24-dimensional feature vector. Starting from a set of top-quality images (quality class “Excellent”), we generated a dataset of 47,950 degraded, lower-quality images by simulating seven types of degradation, including defocus, blur, and noise. Experimental results show that an XGBoost classifier, when trained on these fused features, achieved superior performance with 97.91% accuracy and an AUC of 0.9992. This approach demonstrates that combining deep and handcrafted features significantly enhances the robustness of the classification and offers a scalable solution for automated quality control in solar EUV observation pipelines.

1. Introduction

Imaging in the solar Extreme Ultraviolet (EUV) band provides an indispensable observational window into the dynamic processes of the solar atmosphere, enabling critical investigations of space weather drivers such as coronal mass ejections (CMEs) and solar flares [1,2,3]. Solar EUV telescopes, which operate in the high-photon-energy EUV band and often use relatively long exposure times, are particularly susceptible to image degradations caused by factors such as scattered contamination particles, mirror surface roughness, platform instability, and increased detector noise [4]. In high-resolution quantitative analyses, such as measurements of coronal loop structures [2], these degradations constitute a critical bottleneck limiting scientific research. Next-generation solar missions, such as Solar Orbiter [5] and Parker Solar Probe [6], and spacecraft on deep-space missions (e.g., spacecraft operating near the Sun-Earth L5 point or on solar orbital trajectories) [7,8] produce enormous data volumes: their high-resolution imagers observe the Sun at high cadence, and over months to years the accumulated data easily reach terabytes to petabytes. This poses substantial challenges for analysis, making automated quality checks and efficient processing essential to keep the data reliable and scientifically useful. No-reference image quality assessment (NR-IQA) is an essential technology for prioritizing scientifically viable observations without ground-truth references.
However, the deployment of existing NR-IQA techniques in solar contexts reveals fundamental domain adaptation challenges. Statistical approaches exemplified by BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) [9]—though effective for terrestrial imagery—rely on Generalized Gaussian Distribution (GGD) modeling of locally normalized luminance coefficients. Such priors critically mismatch the radiation physics governing solar EUV imaging, where photon shot noise dominates in low-flux coronal regions and detector point spread functions introduce wavelength-specific blurring artifacts distinct from natural image degradations. Similarly, perceptually motivated methods like VSI [10] fail to capture the scientific saliency of solar features; for instance, their saliency maps often overlook off-limb coronal structures while overemphasizing photospheric granulation patterns irrelevant to coronal physics.
Meanwhile, deep learning-based alternatives [11,12,13,14,15,16], such as HyperIQA [17] and attention-based architectures [18,19], face complementary limitations. Although their content-adaptive networks provide strong nonlinear representation capabilities, GAN-based methods—e.g., the approach proposed by Jarolim et al. [20], which detects anomalies in solar H-alpha filtergrams via image reconstruction and deviation measurement—still rely heavily on large-scale annotated datasets, which are scarcely available in heliophysics.
In contrast, physics-driven quality metrics [21,22,23,24,25], such as the variance of Laplacian (VL) [25], median filter gradient similarity [21], and perceptual measures [24], are widely used to quantify image sharpness. For example, So et al. employed the VL method to assess the clarity of optical solar images [25]. These approaches offer clear physical interpretability but encounter significant challenges in EUV limb observations, where complex edge structures hinder reliable extraction.
Overall, deep learning methods exhibit superior performance but are constrained by data scarcity and their “black-box” nature, whereas purely physics-based approaches provide interpretability but often fail to capture subtle degradations characteristic of coronal imaging. Moreover, deep learning approaches require large-scale expert-labeled datasets, which are fundamentally at odds with typical solar physics practice. Consensus labeling of EUV image quality remains exceptionally challenging due to the following: (1) instrument-specific artifact profiles across different missions, (2) evolving scientific priorities for various solar features, and (3) the lack of standardized degradation metrics tailored to coronal studies. Consequently, directly transferring natural image quality assessment (QA) models to solar EUV images risks mischaracterizing physically significant distortions as negligible noise.
To address these challenges, this study aims to develop and validate a robust and automated quality assessment framework for solar EUV images. The framework is designed to combine deep semantic features with physics-driven statistical features specifically tailored for solar EUV data. In addition, it utilizes a large-scale, realistic simulation dataset based on real solar observations for model training and evaluation. An efficient machine learning classifier is identified to enable potential on-orbit deployment, and the model’s effectiveness is further verified using real on-orbit data, with results interpreted from a solar physics perspective.
To overcome these dual challenges of physical incompatibility and the scarcity of large, expertly annotated solar training datasets, we introduce a hybrid assessment framework that combines deep semantic representations with solar-specific physical diagnostics. Deep features are patterns that a neural network learns automatically from raw pixel data; they capture complex shapes, structures, and textures in solar images. Handcrafted features, in contrast, are designed manually using domain knowledge and image processing methods; they include simple measures of texture, sharpness, and noise, and directly describe physical properties of solar EUV images. Our method uses the 19.5 nm channel of the Fengyun-3E Solar X-ray and Extreme Ultraviolet Imager (X-EUVI) [26] and introduces two key innovations. First, we use a network pretrained on KonIQ-10k [27] as a feature extractor, capturing high-level structural patterns without retraining the entire network. Second, we add physics-based quality metrics that focus on active regions, measuring magnetic feature preservation and thermal signature integrity. By combining the two feature types in a single feature space, our model can perform domain-adaptive quality assessment on labeled solar images.
The objective of this study was to describe and validate a novel method for automatic classification of the quality of solar EUV images.

2. Materials and Methods

2.1. Data Acquisition from the Detector

The data used in this paper is primarily from the Solar X-ray and Extreme Ultraviolet Imager (X-EUVI) on board the Fengyun-3E (FY-3E) satellite. Since the X-EUVI began its on-orbit operation on 11 July 2021, it has successfully acquired a large volume of solar images and some solar irradiance data. The instrument observes the Sun across multiple channels, including the 19.5 nm extreme ultraviolet (EUV) band and six X-ray channels with wavelength ranges of 0.6–8.0 nm (X1), 0.6–6.0 nm (X2), 0.6–5.0 nm (X3), 0.6–2.0 nm (X4), 0.6–1.6 nm (X5), and 0.6–1.2 nm (X6). These channels probe different temperature regimes of the solar corona, providing a comprehensive view of solar activity and enabling detailed diagnostics of coronal structures and events. The key technical specifications of the back-illuminated CCD detector used by the X-EUVI instrument are summarized in Table 1. It is important to note that these specifications describe the detector itself, which is sensitive to a range of EUV wavelengths. This particular study, however, focuses exclusively on data from the 19.5 nm observation channel.
To ensure the scientific integrity and accuracy of the data, the raw detector data undergoes strict on-orbit processing before release, including geometric correction (to compensate for image deviations caused by attitude changes), noise and dark current removal (subtracting dark frames and suppressing noise), and flat-field calibration (correcting position-dependent detector response to ensure uniform image brightness). After this processing, the data can be used to generate a continuous series of full-disk solar images.
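Conceptually, the radiometric part of this pipeline reduces to a simple per-frame operation. The sketch below illustrates dark subtraction followed by flat-field division; it omits geometric correction, and the function and array names are illustrative, not the operational FY-3E ground segment.

```python
import numpy as np

def calibrate_frame(raw: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """Dark-current subtraction followed by flat-field division.

    raw, dark, and flat are 1024 x 1024 arrays; flat is the normalized,
    position-dependent detector response (mean ~ 1.0). Illustrative only.
    """
    corrected = raw.astype(np.float64) - dark   # remove dark current / bias
    flat_safe = np.where(flat > 0, flat, 1.0)   # guard against dead pixels
    return corrected / flat_safe                # equalize pixel response
```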

2.2. Dataset and ROI Selection Strategy

2.2.1. Image Degradation Simulation and Labeling

The original high-quality FY-3E images are considered “Excellent”. The degradation parameters were designed based on typical quality issues observed in FY-3E solar EUV images, including optical defocus, telescope jitter, detector noise, and local overexposure caused by solar activity. The kernel sizes, noise levels, and overexposure intensities were chosen to realistically simulate the range of conditions encountered during on-orbit observations.
To construct a training and evaluation dataset, we applied seven types of typical degradations, each with five levels (L1–L5), ranging from mild to severe. All kernel sizes and standard deviations are expressed in pixels (px). The seven degradation types are summarized in Table 2.
Through this approach, degraded images corresponding to four quality levels (“Good”, “Moderate”, “Poor”, and “Very Poor”) were generated from the original “Excellent” images, providing a rich and controllable dataset for model training and evaluation. In total, a large-scale dataset of 47,950 images was constructed. Each image was labeled according to a five-level quality standard (Excellent, Good, Moderate, Poor, Very Poor), which served as the target classes for our classification task. The dataset was then split into training and testing sets with an 80%/20% ratio. All images were collected from the 19.5 nm band of the X-EUV imager aboard the FY-3(05) satellite.
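As an illustration of how such degraded images can be generated, the sketch below applies two of the seven degradation types (Gaussian blur and additive Gaussian noise) at the five severity levels. The kernel sizes, sigmas, and noise variances are taken from Table 2; the helper names are illustrative.

```python
import numpy as np
import cv2

# Per-level parameters from Table 2: Gaussian blur (kernel size, sigma)
# and Gaussian noise variance.
GAUSS_BLUR = {1: (3, 0.5), 2: (5, 1.0), 3: (9, 2.0), 4: (13, 3.0), 5: (17, 4.0)}
GAUSS_NOISE_VAR = {1: 0.0005, 2: 0.001, 3: 0.005, 4: 0.01, 5: 0.02}

def degrade(img: np.ndarray, level: int) -> np.ndarray:
    """Apply Gaussian blur, then additive Gaussian noise, at the given level.

    img is expected in [0, 1]; only two of the seven degradation types
    listed in Table 2 are shown in this sketch.
    """
    ksize, sigma = GAUSS_BLUR[level]
    blurred = cv2.GaussianBlur(img.astype(np.float32), (ksize, ksize), sigma)
    noise = np.random.normal(0.0, np.sqrt(GAUSS_NOISE_VAR[level]), img.shape)
    return np.clip(blurred + noise, 0.0, 1.0)
```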

2.2.2. Implementation Details

The experiments were conducted on a workstation equipped with an NVIDIA RTX 4060 GPU, a CPU with sufficient computational capacity, and 8 GB of RAM. The software environment included Python 3.9 and PyTorch (version 2023.3.4).
In this experiment, we employed three classical machine learning models for a multi-class classification task: SVM, XGBoost, and Random Forest. For SVM (SVC), probability = True was set to enable probability predictions, and random_state = 42 was fixed, while all other parameters were left at their default values, including the radial basis function (RBF) kernel and unlimited iterations. For XGBoost (XGBClassifier), use_label_encoder = False, eval_metric = ‘logloss’, and random_state = 42 were specified; the learning rate was set to 0.3, the number of trees to 100, and the maximum depth to 6, with all other parameters kept at their defaults. For Random Forest (RandomForestClassifier), only random_state = 42 was set, using the default number of trees (100) and unlimited depth. The fixed random seed ensured the reproducibility of the experiments.
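These settings map directly onto scikit-learn and XGBoost constructors; a minimal sketch is given below (use_label_encoder is deprecated and ignored in recent XGBoost releases but is kept here to match the stated configuration).

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# Configurations as described above; all unspecified parameters stay at
# library defaults (RBF kernel for SVC, unlimited depth for Random Forest).
svm = SVC(probability=True, random_state=42)
xgb = XGBClassifier(use_label_encoder=False, eval_metric="logloss",
                    learning_rate=0.3, n_estimators=100, max_depth=6,
                    random_state=42)
rf = RandomForestClassifier(n_estimators=100, random_state=42)
```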
Model performance was evaluated using standard metrics including Accuracy, Precision, Recall, F1-score, and AUC (Area Under the Receiver Operating Characteristic Curve). For multi-class classification, Precision, Recall, and F1-score are calculated for each class and then averaged to provide an overall performance measure.
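These averaged metrics can be computed with scikit-learn, as in the sketch below. The paper does not state the averaging mode; weighted (support-proportional) averaging is assumed here because it makes Recall coincide with Accuracy, as observed in Table 4.

```python
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)

def evaluate(y_true, y_pred, y_proba):
    """Multi-class metrics; y_proba has shape (n_samples, 5) for the five levels."""
    acc = accuracy_score(y_true, y_pred)
    # Per-class precision/recall/F1, then support-weighted averaging
    # (the averaging mode is an assumption of this sketch).
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    # One-vs-rest multi-class AUC over predicted class probabilities.
    auc = roc_auc_score(y_true, y_proba, multi_class="ovr", average="weighted")
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "f1": f1, "auc": auc}
```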

2.2.3. ROI Selection Strategy

To focus on the most informative parts of the image, we designed an adaptive Region of Interest (ROI) selection strategy. The 1024 × 1024 solar image is divided into 64 × 64 patches. Only patches within the central solar disk are considered. A composite score balancing brightness and gradient is calculated for each patch, and the top 10 patches are selected as ROIs. The process is illustrated in Figure 1.
Brightness score. For patch k, the brightness score is defined as

$$B_k = \frac{1}{|P_k|} \sum_{(x,y) \in P_k} I(x,y)$$

where $|P_k| = 64 \times 64$ is the number of pixels in the patch and $I(x,y)$ is the pixel intensity.
Sharpness score. Sharpness is evaluated via the mean gradient magnitude:

$$S_k = \frac{1}{|P_k|} \sum_{(x,y) \in P_k} \sqrt{I_x^2(x,y) + I_y^2(x,y)}$$

where $I_x$ and $I_y$ are the horizontal and vertical gradients computed with Sobel operators.
Composite ROI score. To balance brightness and sharpness, a weighted composite score is defined:

$$R_k = \alpha \, \frac{B_k - \min(B)}{\max(B) - \min(B)} + (1 - \alpha) \, \frac{S_k - \min(S)}{\max(S) - \min(S)}$$

where α = 0.3 controls the balance between brightness and sharpness, and the $B_k$ and $S_k$ terms are min-max normalized across all patches. Finally, the top k = 10 patches with the highest $R_k$ are selected as ROIs.
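A compact implementation of this selection procedure might look as follows. The on-disk constraint is approximated by a simple brightness threshold, which is an assumption of this sketch; the actual pipeline restricts patches to the central solar disk.

```python
import numpy as np
import cv2

def select_rois(img: np.ndarray, patch: int = 64, alpha: float = 0.3, k: int = 10):
    """Rank 64x64 patches by the composite score R_k and return the top-k origins."""
    img = img.astype(np.float64)
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)

    coords, b_scores, s_scores = [], [], []
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            p = img[i:i + patch, j:j + patch]
            if p.mean() < 0.05 * img.max():   # crude off-disk rejection (assumption)
                continue
            coords.append((i, j))
            b_scores.append(p.mean())                               # brightness B_k
            s_scores.append(grad[i:i + patch, j:j + patch].mean())  # sharpness S_k

    b, s = np.asarray(b_scores), np.asarray(s_scores)
    b_n = (b - b.min()) / (np.ptp(b) + 1e-12)   # min-max normalization
    s_n = (s - s.min()) / (np.ptp(s) + 1e-12)
    r = alpha * b_n + (1 - alpha) * s_n         # composite score, alpha = 0.3
    return [coords[t] for t in np.argsort(r)[::-1][:k]]
```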

2.3. Feature Extraction and Fusion

2.3.1. Deep Learning Feature Extraction

To capture high-level semantic information, we employed the backbone network of HyperIQA [17] for deep feature extraction. Specifically, a ResNet-50 architecture [28] pretrained on the KonIQ-10k dataset [27] was used to produce a 112-dimensional global content feature vector, denoted as fc, for each EUV image. This feature vector represents structural and perceptual quality characteristics.
To reduce dimensional redundancy, Principal Component Analysis (PCA) [29] was applied to fc. The top two principal components, explaining more than 95% of the cumulative variance, were retained as the final deep features. This process is illustrated schematically in Figure 2.
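The sketch below outlines this extraction-plus-reduction step. A generic torchvision ResNet-50 with its classifier head removed stands in for the HyperIQA backbone, so the weights (KonIQ-10k pretraining) and the 112-dimensional content head are assumptions not reproduced here; the stand-in yields the 2048-dimensional pooled features instead.

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

# Stand-in backbone: the paper uses HyperIQA's ResNet-50 trained on KonIQ-10k;
# here a generic ImageNet-pretrained ResNet-50 serves as an illustrative proxy.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose pooled features instead of logits
backbone.eval()

@torch.no_grad()
def extract_deep_features(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, H, W) tensor, normalized as the backbone expects."""
    return backbone(batch)

# Reduce to the two principal components retained in the paper
# (>95% cumulative variance on their feature set).
pca = PCA(n_components=2)
# deep_2d = pca.fit_transform(extract_deep_features(images).cpu().numpy())
```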

2.3.2. Handcrafted Physical–Statistical Features

Complementary to deep features, we designed a set of 22 handcrafted features to quantify the physical and statistical quality attributes of solar EUV images. These features were derived from multiple perspectives, including brightness statistics, sharpness measures, textural properties, noise/fidelity indicators, frequency responses, and spatial descriptors. The categories and representative features are summarized in Table 3. For a complete description of the 22 handcrafted features, including formulas, physical interpretations, and references, see Appendix A.
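To make the feature design concrete, the sketch below computes three representative indicators from Table 3: Laplacian variance (sharpness), a directional MSCN product (texture), and GLCM contrast. The 7 × 7 Gaussian window and the γ stabilizer value are conventional BRISQUE-style choices, not values stated in the paper.

```python
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops

def sample_handcrafted(patch: np.ndarray) -> dict:
    """Three of the 22 indicators for a 64x64 float patch in [0, 1]."""
    patch = patch.astype(np.float64)
    # Sharpness: variance of the Laplacian.
    lap_var = cv2.Laplacian(patch, cv2.CV_64F).var()
    # Texture: MSCN coefficients with a 7x7 Gaussian window (BRISQUE-style).
    mu = cv2.GaussianBlur(patch, (7, 7), 7 / 6)
    sigma = np.sqrt(np.abs(cv2.GaussianBlur(patch * patch, (7, 7), 7 / 6) - mu * mu))
    mscn = (patch - mu) / (sigma + 1e-3)           # gamma stabilizer (assumed value)
    mscn_0 = np.mean(mscn[:, :-1] * mscn[:, 1:])   # horizontal (0 deg) pairwise product
    # Texture: GLCM contrast on the 8-bit quantized patch.
    q = (patch * 255).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return {"laplacian_var": lap_var, "mscn_0": mscn_0,
            "glcm_contrast": float(graycoprops(glcm, "contrast")[0, 0])}
```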

2.3.3. Feature Fusion and Classification

To validate the complementarity of the extracted features, three comparative experimental settings were designed:
(1) Deep features only (2-dimensional, PCA-reduced);
(2) Handcrafted features only (22-dimensional);
(3) Fused features (24-dimensional), constructed by concatenating the deep and handcrafted feature vectors.
The three feature sets were evaluated using representative machine learning classifiers: Support Vector Machine (SVM) [38], XGBoost [39], and Random Forest [40]. The complete pipeline for feature extraction, fusion, and classification is presented in Figure 3.
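A minimal sketch of this fusion-and-classification step is given below. The 80/20 split, the 24-dimensional concatenation, and the XGBoost settings follow the text; the stratified split and the variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

def run_fusion_experiment(deep_2d: np.ndarray, hand_22: np.ndarray, y: np.ndarray):
    """deep_2d: (N, 2) PCA-reduced deep features; hand_22: (N, 22) handcrafted
    features; y: integer quality labels 0..4 (Excellent .. Very Poor)."""
    fused = np.concatenate([hand_22, deep_2d], axis=1)        # 24-D fused vector
    X_tr, X_te, y_tr, y_te = train_test_split(
        fused, y, test_size=0.2, random_state=42, stratify=y)  # 80/20 split
    clf = XGBClassifier(learning_rate=0.3, n_estimators=100, max_depth=6,
                        eval_metric="logloss", random_state=42)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```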

3. Results

3.1. Performance Analysis

The performance of different feature–classifier combinations was systematically evaluated, and the results are summarized in Table 4.
The results indicate that the XGBoost classifier combined with fused features achieved the highest performance, with an accuracy of 97.91% and an AUC of 0.9992. Handcrafted features alone also yielded strong results with tree-based classifiers, achieving 97.50% accuracy using XGBoost. Deep features alone provided moderate performance, with Random Forest achieving 79.51% accuracy. SVM consistently exhibited lower performance across all feature sets.
The confusion matrices using each feature set are presented in Figure 4, Figure 5 and Figure 6.

3.2. Feature Analysis

The dominance of handcrafted physical features is unequivocal: with XGBoost, the 22 solar-specific metrics alone achieved 97.50% accuracy, underscoring their capacity to encode fundamental coronal quality determinants, from active region texture integrity to off-limb sharpness degradation. This performance confirms that solar EUV image quality can be effectively quantified through physics-driven metrics, which are designed to capture critical quality determinants such as the integrity of thermal signatures during flares and the presence of instrument-specific noise patterns in coronal holes. Although feature fusion yielded only a marginal accuracy gain (0.41%), this increment proves operationally critical. As Figure 6 verifies, fused features improved recall for the ‘Very Poor’ class (Level 5), whose images are typically considered scientifically unusable, by 0.42% (98.2% → 98.6%), directly enhancing downstream data utility for flare forecasting pipelines. This precision in identifying irrecoverably degraded images is paramount for downlink optimization (preventing bandwidth waste on unusable data), event detection (ensuring viable CME/flare analysis frames), and long-term monitoring (maintaining calibration consistency).

3.3. Ablation Study

To evaluate the contribution of different features in the FY-3E X-EUV 19.5 nm image quality assessment task, we conducted ablation experiments using the XGBoost classifier and reported the classification results for the five quality levels (L1–L5).
Deep Features (HyperIQA PCA 2D): The overall accuracy was 77.3% with an F1 score of 0.773. The per-class accuracy indicates notable confusion in the middle levels (L2–L4). For instance, approximately 14.5% of L2 samples were misclassified as L1, and about 10.8% of L4 samples were misclassified as L5, suggesting that deep features alone are insufficient to effectively distinguish intermediate quality levels.
Handcrafted Features (22D): The overall accuracy increased significantly to 97.5% with an F1 score of 0.975. All classes exhibited stable performance, with almost no confusion among L2–L4. For example, the misclassification rate of L3 was below 1%, while L4 had a small portion misclassified as L5 (~2.8%). These results indicate that the handcrafted physical–statistical features possess strong discriminative power for image quality assessment.
Fused Features (22 + 2D): The overall accuracy further improved to 97.9% with an F1 score of 0.979. The fused features achieved noticeably better classification performance for the intermediate levels. For example, misclassifications in L3–L4 were further reduced, and the misclassification rate of L5 decreased to about 1.1%. This demonstrates that combining deep and handcrafted features can effectively exploit complementary information, thereby enhancing XGBoost’s discriminative ability across all image quality levels.
In summary, the ablation study shows that handcrafted features dominate the FY-3E image quality assessment task, while the addition of deep features further improves the classification of intermediate levels, achieving more precise five-level quality prediction.

3.4. On-Orbit Measurement Verification and Solar Physics Interpretation

To validate the applicability and physical relevance of the proposed image quality assessment framework in actual space observation missions, this paper selects and analyzes image data from the 19.5 nm channel of the X-EUVI instrument aboard the Fengyun-3E (FY-3E) satellite. This section not only tests the model’s evaluation capability on real images but also explores the potential impact of image degradation on solar physics observations.

3.4.1. Normal Observation Images and Stable Physical Features

Figure 7 displays a sequence of routine observation images with no significant distortion, all of which were predicted as Grade 1 quality. Figure 8 shows the physical feature curves extracted from this image sequence, including mean brightness, contrast, Mean Subtracted Contrast Normalized (MSCN) consistency, and log-Gabor response, all of which remain relatively stable. In this type of image, active region structures are clear, and details of coronal loops and bright points are rich, making them suitable for Differential Emission Measure (DEM) temperature inversion and magnetic field evolution analysis. The image quality assessment model successfully identified these high-quality images, demonstrating its capability to preserve scientifically valid data.
From Figure 8, it can be observed that the mean brightness of the image sequence is fundamentally stable throughout the observation period with minimal fluctuation, indicating a good exposure status of the imager and no significant solar irradiation changes. The image contrast also remains at a relatively uniform level, suggesting that the image edges are sharp and there is no obvious degradation in structural acuity.
The experimental results indicate that the image quality metrics exhibit only minor fluctuations, as shown in Table 5.
Further observation reveals that although the MSCN consistency coefficient is generally stable, it exhibits slight fluctuations. This may be related to changes in local image texture density during the observation process, such as dynamic evolution of structures at the edges of active regions causing local texture variations. The log-Gabor energy shows minor fluctuations, which is presumed to be associated with short-term changes in localized bright areas (e.g., coronal bright spots, faint pre-flare structures). However, the overall trend does not show a systematic decline, indicating that the images still retain key mid-to-high frequency texture structures. The observed increasing trends in GLCM-contrast and log-Gabor energy suggest a progressive enhancement in the structural richness and high-frequency content of the image sequence, which may indicate intensified solar activity or improved imaging clarity. However, the potential influence of noise should also be considered.
In summary, all physical features of this image set demonstrate strong consistency and stability, confirming that their image quality is at an optimal level. They possess the capability to fully preserve high-value information such as solar coronal structures and active region contours, supporting their use in scientific-grade data analysis tasks like DEM temperature reconstruction and magnetic flux evolution studies.

3.4.2. Exposure Variation and Image Photometric Attenuation

Figure 9 presents a sequence of images showing brightness attenuation caused by changes in exposure parameters. The images correspond to exposure times of 2400 ms, 3600 ms (×3), and 800 ms, with predicted quality grades of 2, 2, 2, 2, and 3, respectively. Figure 10 shows the corresponding extracted physical features, where brightness and contrast decrease significantly. By comparison with GOES X-ray flux data and ruling out flare events, it is further inferred that the signal attenuation may be caused by imager aging or filter contamination. This type of degradation could affect the detection and classification of faint active regions, thereby impacting the a priori judgment of flare eruptions.

3.4.3. Field-of-View Deviation

Figure 11 showcases a set of on-orbit images where the Sun has deviated from the center of the field of view. The predicted quality grades are 1, 1, 1, 1, and 2. This phenomenon occurs because the satellite’s three-axis attitude cannot maintain a stable alignment with the Sun at certain times due to orbital constraints. The model did not significantly lower its quality score, primarily because global offset types of distortion were not introduced into the training set, and the Region of Interest (ROI) selection was focused on local areas.
In Figure 12, the supplementary features we extracted (ROI coordinate changes and MSCN fluctuations) exhibit drastic jumps within a very short period (50 s). Compared with the spatial location of the ROI and the MSCN behavior in Figure 8, this is distinct from the variation cycle of coronal activity and suggests the possible existence of platform thermal deformation or mechanical pointing anomalies.

3.5. Cross-Dataset Validation Considerations

It should be noted that, to date, there is no publicly available EUV image dataset with systematically annotated degradation levels. Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO/AIA) [41] subsets (e.g., 193 Å) mainly consist of standard-calibrated high-quality observations and lack annotations for degradation levels or quality metrics, making them unsuitable for direct cross-dataset validation. In addition, images from SDO/AIA wavelengths exhibit highly consistent instrument characteristics, spatial resolution, and preprocessing procedures. Therefore, a model trained on one subset is expected to show limited performance variation on other subsets. Based on these considerations, the experiments conducted in this study are sufficient to demonstrate the robustness and generalization capability of the proposed model.

4. Discussion

Our analysis demonstrates that the proposed hybrid feature-fusion framework provides a robust and practical approach for assessing solar EUV image quality. Handcrafted, physics-driven features achieved high accuracy, while the integration with deep semantic features offered a modest yet meaningful improvement, capturing structural patterns that may be overlooked by purely physical metrics. The choice of classifier further enhanced performance: XGBoost outperformed SVM in both accuracy and inference speed, making it particularly suitable for near real-time on-orbit deployment. On-orbit validation confirmed that the model can reliably distinguish high-quality observations from degraded images caused by operational factors.
Previous studies have explored deep learning-based methods and physics-driven quality metrics [6,33,34,35,36,37,38,39,40,41], which face limitations in data availability, interpretability, or performance on EUV limb observations. In contrast, our hybrid feature-fusion approach achieves high accuracy while maintaining physical interpretability and computational efficiency.
The proposed method works well for extracting local image features and assessing degradation, but it is not sensitive to global shifts: if an image is systematically translated or rotated, the model may not capture the overall position change, which can affect some quality metrics or physical measurements. The current method also analyzes single frames and does not fully exploit temporal information from multiple frames, which may limit its performance on fast-changing or transient events. Future work could add global alignment or shift compensation, such as image registration or Spatial Transformer Networks (STNs), and multi-frame temporal modeling with LSTMs or Transformers to analyze consecutive frames, improving robustness to global shifts and transient changes.

5. Conclusions

This study successfully developed and validated a novel framework for solar EUV image quality assessment by fusing deep semantic features with domain-specific, physical–statistical features. This hybrid approach, particularly when paired with an XGBoost classifier, achieved good performance with 97.91% accuracy. While well-designed handcrafted features are extremely powerful, their fusion with deep features provides a crucial performance boost, especially in identifying the most severely degraded images. This work provides an effective and robust solution for the automated quality classification of massive solar datasets, holding significant promise for operational space weather forecasting and future deep space missions.

Author Contributions

Conceptualization, S.D.; methodology, S.D. and L.H.; software, S.D. and S.Y.; validation, Y.W., L.S. and Y.X.; formal analysis, H.C.; investigation, K.W. and S.Y.; resources, L.H.; data curation, Y.W.; writing—original draft preparation, S.D.; writing—review and editing, L.H.; supervision, L.H. and S.X.; project administration, L.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code is available at: https://github.com/liliansnail/EUV-Image-Quality (accessed on 20 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Feature definitions for solar EUV image quality evaluation.

| Feature | Definition | Solar EUV Image Meaning |
|---|---|---|
| brightness | Mean brightness of the image block | Indicates local radiative intensity; higher brightness corresponds to heated coronal regions. |
| score | Normalized image quality score | Combines brightness and gradient, reflecting both illumination and structural activity of solar regions. |
| gradient | Average gradient magnitude of the block | Highlights edges such as coronal loop boundaries, filaments, or CME fronts. |
| laplacian | Variance of the Laplacian (focus measure) | Sensitive to small-scale variations, revealing fine coronal structures. |
| sharpness | Sharpness based on edge strength | High sharpness indicates well-resolved loop systems or active region cores. |
| mscn_0 | MSCN coefficient in 0° direction | Captures local structural correlation along the horizontal direction. |
| mscn_45 | MSCN coefficient in 45° direction | Captures local structural correlation along the diagonal direction (45°). |
| mscn_90 | MSCN coefficient in 90° direction | Captures local structural correlation along the vertical direction. |
| mscn_135 | MSCN coefficient in 135° direction | Captures local structural correlation along the diagonal direction (135°). |
| glcm_contrast | GLCM contrast feature | High contrast values reveal strong magnetic neutral lines or flare kernels. |
| glcm_homogeneity | GLCM homogeneity feature | Quiet Sun regions show high homogeneity; active regions show lower values. |
| glcm_energy | GLCM energy feature | Indicates textural regularity; high energy corresponds to repetitive loop structures. |
| log_gabor | Log-Gabor filter response | Captures multi-scale curved structures, useful for identifying CME fronts or loops. |
| MFGS | Median Filter Gradient Similarity metric | A solar-specific focus measure; evaluates clarity of coronal fine structures. |
| PSNR | Peak Signal-to-Noise Ratio | Reflects overall fidelity of the image, useful for quantifying degradation effects. |
| SSIM | Structural Similarity Index | Sensitive to luminance, contrast, and structural changes in solar features. |
| SNR | Signal-to-Noise Ratio | Indicates detectability of faint eruptions or coronal dimmings against background noise. |
| LH_entropy | Wavelet entropy of LH sub-band | Reflects complexity of horizontal structures (e.g., plasma flows). |
| HL_entropy | Wavelet entropy of HL sub-band | Reflects complexity of vertical structures (e.g., coronal loops). |
| HH_entropy | Wavelet entropy of HH sub-band | Reflects diagonal structural complexity, often linked to turbulence. |
| x | x-coordinate of the upper-left corner of the block | Preserves spatial context, showing whether the region lies near disk center or limb. |
| y | y-coordinate of the upper-left corner of the block | Preserves spatial context, showing whether the region lies near disk center or limb. |
The corresponding formulas are as follows:

$$\text{brightness} = \frac{1}{N} \sum_{i,j} I(i,j) \tag{A1}$$

$$\text{gradient} = \frac{1}{N} \sum_{i,j} \sqrt{G_x(i,j)^2 + G_y(i,j)^2} \tag{A2}$$

$$\text{score} = \alpha \cdot \text{brightness} + (1 - \alpha) \cdot \text{gradient} \tag{A3}$$

$$L = \nabla^2 I = \frac{\partial^2 I}{\partial x^2} + \frac{\partial^2 I}{\partial y^2} \tag{A4}$$

$$\text{laplacian} = \frac{1}{|M|} \sum_{(i,j) \in M} |L| \tag{A5}$$

$$T_k = G_x^2 + G_y^2, \quad \text{sharpness} = \frac{1}{K} \sum_{k=1}^{K} \mathrm{var}(T_k) \tag{A6}$$

$$\hat{I}(x,y) = \frac{I(x,y) - \mu(x,y)}{\sigma(x,y) + \gamma} \tag{A7}$$

$$\text{MSCN}_{\text{product},\theta} = \hat{I}(x,y)\,\hat{I}(x',y') \tag{A8}$$

$$\text{glcm\_contrast} = \sum_{i,j} (i - j)^2 P(i,j) \tag{A9}$$

$$\text{glcm\_homogeneity} = \sum_{i,j} \frac{P(i,j)}{1 + |i - j|} \tag{A10}$$

$$\text{glcm\_energy} = \sum_{i,j} P(i,j)^2 \tag{A11}$$

$$\text{MFGS} = 1 - \frac{\left\| G(I) - G(I_{\text{median}}) \right\|_2}{\left\| G(I) \right\|_2} \tag{A12}$$

$$\text{PSNR} = 10 \log_{10} \frac{MAX_I^2}{\text{MSE}} \tag{A13}$$

$$\text{SSIM} = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \tag{A14}$$

$$\text{SNR} = \frac{\mathrm{mean}(\text{signal})}{\mathrm{std}(\text{noise})} \tag{A15}$$

$$G(u,v) = \exp\left( -\frac{\left( \log \frac{\sqrt{u^2 + v^2}}{f_0} \right)^2}{2 (\log \sigma)^2} \right), \quad G(0,0) = 0 \tag{A16}$$

$$\text{entropy} = -\sum_i p_i \log_2 (p_i) \tag{A17}$$

In Equation (A17), $p_i$ denotes the probability distribution of wavelet coefficients. Although the mathematical expression is identical for the three entropy features, they are computed on different wavelet sub-bands (LH, HL, and HH, respectively), thereby capturing distinct directional and frequency characteristics of the solar EUV images. The spatial features x and y are simply the coordinates of the upper-left corner of the block and require no formula.
The definitions of the symbols in the above formulas are as follows:
$I(x,y)$: pixel intensity at spatial coordinate $(x,y)$;
$N$: number of pixels in the block;
$\mu(x,y)$, $\sigma(x,y)$: local mean and standard deviation of intensity in the block;
$\nabla I$: image gradient (computed with the Sobel operator);
$\nabla^2 I$: Laplacian of the image (second-order derivative); its variance is used as a sharpness measure;
$G_x$, $G_y$: horizontal and vertical image gradients;
$M$: set of pixels over which the Laplacian magnitude is averaged;
$K$: number of sub-regions over which the Tenengrad energy $T_k$ is computed;
$\hat{I}(x,y)$: normalized MSCN coefficient at pixel $(x,y)$;
$\gamma$: stabilizing constant to avoid division by zero in the MSCN computation;
$\theta$: orientation of the MSCN pairwise products (0°, 45°, 90°, 135°); each product multiplies neighboring MSCN coefficients along direction $\theta$;
$P(i,j)$: probability of the gray-level pair $(i,j)$ occurring in the GLCM;
$G(I)$: gradient magnitude used in the MFGS metric; $I_{\text{median}}$ is the median-filtered image;
$MAX_I$: maximum possible pixel value; MSE: mean squared error between the original and degraded block;
$C_1$, $C_2$: small constants to avoid division by zero;
$G(u,v)$: log-Gabor filter response; $u$, $v$: frequency-domain coordinates;
$f_0$: center frequency; $\sigma$ in Equation (A16): parameter controlling the filter bandwidth; $G(0,0) = 0$ suppresses the DC component.

References

  1. Benz, A.O. Flare observations. Living Rev. Sol. Phys. 2017, 14, 2. [Google Scholar] [CrossRef]
  2. Temmer, M. Space weather: The solar perspective. Living Rev. Sol. Phys. 2021, 18, 4. [Google Scholar] [CrossRef]
  3. Yashiro, S.; Gopalswamy, N.; Michalek, G.; St. Cyr, O.C.; Plunkett, S.P.; Rich, N.B.; Howard, R.A. A Catalog of White Light Coronal Mass Ejections Observed by the SOHO Spacecraft. J. Geophys. Res. Space Phys. 2004, 109, A07105. [Google Scholar] [CrossRef]
  4. BenMoussa, A.; Gissot, S.; Schühle, U.; Del Zanna, G.; Auchere, F.; Mekaoui, S.; Jones, A.R.; Dammasch, I.E.; Deutsch, W.; Dinesen, H.; et al. On-orbit degradation of solar instruments. Sol. Phys. 2013, 288, 389–434. [Google Scholar] [CrossRef]
  5. Müller, D.; St. Cyr, O.C.; Zouganelis, I.; Gilbert, H.R.; Marsden, R.; Nieves-Chinchilla, T.; Antonucci, E.; Auchere, F.; Berghmans, D.; Horbury, T.S.; et al. The Solar Orbiter mission—Science overview. Astron. Astrophys. 2020, 642, A1. [Google Scholar] [CrossRef]
  6. Raouafi, N.E.; Matteini, L.; Squire, J.; Badman, S.T.; Velli, M.; Klein, K.G.; Chen, C.H.K.; Whittlesey, P.L.; Laker, R.; Horbury, T.S.; et al. Parker Solar Probe: Four years of discoveries at solar cycle minimum. Space Sci. Rev. 2023, 219, 8. [Google Scholar] [CrossRef]
  7. Zhu, Z.M.; Leng, X.Y.; Guo, Y.; Li, C.; Li, Z.; Lu, X.; Huang, F.; You, W.; Deng, Y.; Su, J.; et al. Research on the principle of multi-perspective solar magnetic field measurement. Res. Astron. Astrophys. 2025, 25, 045011. [Google Scholar] [CrossRef]
  8. Deng, Y.; Zhou, G.; Dai, S.; Wang, Y.; Feng, X.; He, J.; Jiang, J.; Tian, H.; Yang, S.; Hou, J.; et al. Solar Polar Orbit Observatory. Sci. Bull. 2023, 68, 298–308. [Google Scholar] [CrossRef]
  9. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  10. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2015, 23, 4270–4281. [Google Scholar] [CrossRef] [PubMed]
  11. Yang, X.; Li, F.; Liu, H. A survey of DNN methods for blind image quality assessment. IEEE Access 2019, 7, 123788–123806. [Google Scholar] [CrossRef]
  12. Agnolucci, L.; Galteri, L.; Bertini, M.; Del Bimbo, A. ARNIQA: Learning distortion manifold for image quality assessment. In Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2024; pp. 6189–6198. [Google Scholar] [CrossRef]
  13. Chiu, T.-Y.; Zhao, Y.; Gurari, D. Assessing Image Quality Issues for Real-World Problems. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3643–3652. [Google Scholar] [CrossRef]
  14. Madhusudana, P.C.; Birkbeck, N.; Wang, Y.; Adsumilli, B.; Bovik, A.C. Image quality assessment using contrastive learning. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), Waikoloa, HI, USA, 4–8 January 2022. [Google Scholar] [CrossRef]
  15. Golestaneh, S.A.; Dadsetan, S.; Kitani, K.M. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022. [Google Scholar] [CrossRef]
  16. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740. [Google Scholar] [CrossRef]
  17. Su, S.; Yan, Q.; Zhu, Y.; Zhang, C.; Ge, S.; Sun, W.; Zhang, Y. Blindly assess image quality in the wild with a content-aware hypernetwork. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3667–3676. [Google Scholar]
  18. Wang, C.; Lv, X.; Fan, X.; Ding, W.; Jiang, X. Two-channel deep recursive multi-scale network based on multi-attention for no-reference image quality assessment. Int. J. Mach. Learn. Cybern. 2023, 14, 2421–2437. [Google Scholar] [CrossRef]
  19. Yang, S.; Wu, T.; Shi, S.; Lao, S.; Gong, Y.; Cao, M.; Wang, J.; Yang, Y. Maniqa: Multi-dimension attention network for no-reference image quality assessment. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 1191–1200. [Google Scholar] [CrossRef]
  20. Jarolim, R.; Veronig, A.M.; Pötzi, W.; Podladchikova, T. Image-quality assessment for full-disk solar observations with generative adversarial networks. Astron. Astrophys. 2020, 643, A72. [Google Scholar] [CrossRef]
  21. Deng, H.; Zhang, D.; Wang, T.; Liu, Z.; Xiang, Y.; Jin, Z.; Cao, W. Objective image-quality assessment for high-resolution photospheric images by median filter-gradient similarity. Sol. Phys. 2015, 290, 1479–1489. [Google Scholar] [CrossRef]
  22. Popowicz, A.; Radlak, K.; Bernacki, K.; Orlov, V. Review of image quality measures for solar imaging. Sol. Phys. 2017, 292, 187. [Google Scholar] [CrossRef]
  23. Denker, C.; Dineva, E.; Balthasar, H.; Verma, M.; Kuckein, C.; Diercke, A.; Manrique, S.J.G. Image quality in high-resolution and high-cadence solar imaging. Sol. Phys. 2018, 293, 44. [Google Scholar] [CrossRef]
  24. Huang, Y.; Jia, P.; Cai, D.; Cai, B. Perception evaluation: A new solar image quality metric based on the multi-fractal property of texture features. Sol. Phys. 2019, 294, 133. [Google Scholar] [CrossRef]
  25. So, C.W.; Yuen, E.L.H.; Leung, E.H.F.; Pun, J.C.S. Solar image quality assessment: A proof of concept using variance of Laplacian method and its application to optical atmospheric condition monitoring. Publ. Astron. Soc. Pac. 2024, 136, 044504. [Google Scholar] [CrossRef]
  26. Chen, B.; Ding, G.X.; He, L.P. Solar X-ray and Extreme Ultraviolet Imager (X-EUVI) loaded onto China’s Fengyun-3E satellite. Light Sci. Appl. 2022, 11, 29. [Google Scholar] [CrossRef] [PubMed]
  27. Hosu, V.; Lin, H.; Sziranyi, T.; Saupe, D. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Trans. Image Process. 2020, 29, 4041–4056. [Google Scholar] [CrossRef]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  29. Jolliffe, I. Principal component analysis. In International Encyclopedia of Statistical Science; Lovric, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1094–1096. [Google Scholar]
  30. Yeganeh, H.; Wang, Z. Objective quality assessment of tone-mapped images. IEEE Trans. Image Process. 2012, 22, 657–667. [Google Scholar] [CrossRef]
  31. Pertuz, S.; Puig, D.; Garcia, M.A. Analysis of focus measure operators for shape-from-focus. Pattern Recognit. 2013, 46, 1415–1432. [Google Scholar] [CrossRef]
  32. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  33. Deng, D.; Zhang, J.; Wang, T.; Su, J. A new algorithm of image quality assessment for photospheric images. Res. Astron. Astrophys. 2015, 15, 349–358. [Google Scholar]
  34. Krotkov, E. Focusing. Int. J. Comput. Vis. 1988, 1, 223–237. [Google Scholar] [CrossRef]
  35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  36. Kovesi, P. Image features from phase congruency. Videre: J. Comput. Vis. Res. 1999, 1, 1–26. [Google Scholar]
  37. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
  38. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  39. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
  40. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  41. Lemen, J.R.; Title, A.M.; Akin, D.J.; Boerner, P.F.; Chou, C.; Drake, J.F.; Duncan, D.W.; Edwards, C.G.; Friedlaender, F.M.; Heyman, G.F.; et al. The atmospheric imaging assembly (AIA) on the solar dynamics observatory (SDO). Sol. Phys. 2012, 275, 17–40. [Google Scholar] [CrossRef]
Figure 1. Illustration of the ROI selection, targeting active regions (ARs), limb areas, and other texture-rich zones in a 1024 × 1024 solar EUV image. The green-bordered rectangles indicate the selected ROI (Region of Interest) areas used for feature extraction.
Figure 2. Deep learning feature extraction using HyperIQA. ResNet-50 produces a global content feature vector, which is subsequently reduced by PCA.
Figure 3. Overview of the feature extraction and fusion framework. Handcrafted physical features and deep semantic features are concatenated into a fused feature vector for classification.
Figure 4. Confusion matrices for deep features only with three classifiers: SVM, XGBoost, and Random Forest (Accuracy: 77.27%).
Figure 5. Confusion matrices for handcrafted features only (22D) with three classifiers: SVM, XGBoost, and Random Forest (Accuracy: 97.50%).
Figure 6. Confusion matrices for fused features (22 handcrafted + 2 deep features) with three classifiers: SVM, XGBoost, and Random Forest (Accuracy: 97.91%).
Figure 7. A sequence of normal observation images, with a 7-s interval between each image download and an exposure time of 800 ms (a typical observational sequence).
Figure 8. Extracted results of image physical features: brightness, contrast, MSCN, and log-Gabor energy.
Figure 9. Schematic diagram of the effect of exposure time variation on image quality (the exposure times from left to right are 2400 ms, 3600 ms, 3600 ms, 3600 ms, and 800 ms, respectively; image content displays a sequence of solar images with decreasing brightness).
Figure 10. Corresponding changes in physical feature curves: decrease in brightness and contrast. The maximum fluctuations of the score, brightness, and gradient metrics reached 0.7, 0.3, and 0.5, respectively, which are significantly higher than those shown in Figure 8 (where fluctuations did not exceed 0.1). For texture features, the GLCM contrast exhibited a maximum fluctuation exceeding 90, compared to less than 80 in Figure 8. Similarly, the log-Gabor feature fluctuated by 0.016, which is substantially higher than the ≤0.005 variation observed in Figure 8.
Figure 11. A sequence of solar images under conditions of pointing offset (image content displays a sequence of solar images drifting towards the edge of the frame).
Figure 12. Drastic fluctuations in ROI coordinates and MSCN, indicating non-natural change factors. The spatial coordinates exhibited fluctuations exceeding (500, 500), and the MSCN features along three directions fluctuated by more than 100, far surpassing the values reported in Figure 8.
Table 1. Main technical specifications of the CCD on FY-3E.

| Parameter | Value |
|---|---|
| Type | Back-illuminated, frame transfer |
| Average quantum efficiency | 45% |
| Pixel resolution | 1024 × 1024 |
| Pixel size | 13 μm × 13 μm |
| Peak full well capacity | 100 ke |
| Output responsivity | 3.5 μV/e |
| Readout noise | 8 e rms (1.33 MHz) |
| Output ports | 2 |
Table 2. Degradation types and parameter settings for solar EUV images.

| Degradation Type | L1 | L2 | L3 | L4 | L5 |
|---|---|---|---|---|---|
| Defocus Blur | Radius 3 | Radius 5 | Radius 9 | Radius 13 | Radius 17 |
| Motion Blur | 3 @ 0° | 5 @ 30° | 9 @ 45° | 13 @ 60° | 17 @ 90° |
| Gaussian Blur | 3, σ = 0.5 | 5, σ = 1.0 | 9, σ = 2.0 | 13, σ = 3.0 | 17, σ = 4.0 |
| Gaussian Noise | Var 0.0005 | Var 0.001 | Var 0.005 | Var 0.01 | Var 0.02 |
| Salt–Pepper Noise | Prob 0.0005 | Prob 0.001 | Prob 0.005 | Prob 0.01 | Prob 0.02 |
| Mixed Blur + Noise * | D3 + M3 @ 0° + G3, σ0.5 + N (G0.0005/S&P0.0005) | D5 + M5 @ 30° + G5, σ1.0 + N (G0.001/S&P0.001) | D9 + M9 @ 45° + G9, σ2.0 + N (G0.005/S&P0.005) | D13 + M13 @ 60° + G13, σ3.0 + N (G0.01/S&P0.01) | D17 + M17 @ 90° + G17, σ4.0 + N (G0.02/S&P0.02) |
| Overexposure | +10 | +15 | +20 | +30 | +40 |
Notes. Radius denotes the kernel radius (pixels), σ is the standard deviation of Gaussian blur, Var represents the variance of Gaussian noise, and Prob indicates the probability of salt-and-pepper noise. Abbreviations used in mixed degradation include D for defocus blur, M for motion blur, G for Gaussian blur, and N for noise (Gaussian/Salt–Pepper).
Table 3. Categories of handcrafted features employed in solar EUV image quality assessment.

| Category | Features | References |
|---|---|---|
| Brightness | Activity score, mean intensity | [30] |
| Sharpness | Mean gradient, Laplacian, Tenengrad | [31] |
| Texture | MSCN coefficient consistency; GLCM contrast, homogeneity, energy | [9,32] |
| Noise/Fidelity | MFGS; PSNR, SSIM, SNR vs. median-filtered version | [33,34,35] |
| Frequency | Log-Gabor responses, wavelet entropy | [36] |
| Spatial | Patch coordinates | [37] |

Notes: MSCN (Mean Subtracted Contrast Normalization), GLCM (Gray-Level Co-occurrence Matrix), MFGS (Median Filter Gradient Similarity), PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index), and SNR (Signal-to-Noise Ratio) are standard image quality assessment metrics used in this study. These handcrafted features are specifically tailored to solar imaging conditions, capturing degradations induced by photon shot noise, instrument-specific blur, and the preservation of coronal structures.
Table 4. Comparative classification performance for FY-3E X-EUV 19.5 nm image quality assessment.

| Feature Type | Classifier | Accuracy | Precision | Recall | F1 Score | AUC | Training Time (s) | Prediction Time (s) |
|---|---|---|---|---|---|---|---|---|
| Deep Features (HyperIQA PCA 2D) | SVM | 0.5311 | 0.6954 | 0.5311 | 0.5320 | 0.8328 | 161.11 | 21.85 |
| | XGBoost | 0.7727 | 0.7734 | 0.7727 | 0.7730 | 0.9641 | 0.66 | 0.015 |
| | Random Forest | 0.7951 | 0.7954 | 0.7951 | 0.7952 | 0.9691 | 6.65 | 0.208 |
| Handcrafted Features (22D) | SVM | 0.5742 | 0.5988 | 0.5742 | 0.5560 | 0.8887 | 209.34 | 29.65 |
| | XGBoost | 0.9750 | 0.9751 | 0.9750 | 0.9750 | 0.9990 | 1.49 | 0.024 |
| | Random Forest | 0.9696 | 0.9697 | 0.9696 | 0.9696 | 0.9985 | 10.68 | 0.146 |
| Fused Features (22 + 2D) | SVM | 0.5737 | 0.5986 | 0.5737 | 0.5556 | 0.8885 | 214.38 | 30.93 |
| | XGBoost * | 0.9791 | 0.9792 | 0.9791 | 0.9792 | 0.9992 | 1.25 | 0.024 |
| | Random Forest | 0.9736 | 0.9736 | 0.9736 | 0.9736 | 0.9988 | 10.75 | 0.157 |

Notes. SVM (Support Vector Machine); * Among all classifiers, XGBoost combined with fused features achieved the best performance, with an accuracy of 97.91% and an AUC of 0.9992.
Table 5. Observed variations in image features in the FY-3E solar EUV dataset.

| Feature | Observed Variation |
|---|---|
| Low-level metrics (score, brightness, gradient, Laplacian, sharpness) | ≤0.1 (normalized units) |
| PSNR (Peak Signal-to-Noise Ratio) | ±3–4 dB |
| SNR (Signal-to-Noise Ratio) | ±3–4 dB |
| GLCM-based texture features | <0.01 |
| Spatial coordinates | Stable (no geometric misregistration) |
| Overall contrast (Michelson contrast) | 80 |
| Log-Gabor energy | 0.004 |
| MFGS (median filter–gradient similarity) | ≤0.025 |
| SSIM (structural similarity index) | <0.011 |
| HH entropy (high-frequency entropy) | 1.8 |
| PC1 (first principal component) | ≤0.05 |