Article

Element Evaluation and Selection for Multi-Column Redundant Long-Linear-Array Detectors Using a Modified Z-Score

1 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2 State Key Laboratory of Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 500 Yutian Road, Shanghai 200083, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(2), 224; https://doi.org/10.3390/rs18020224
Submission received: 24 November 2025 / Revised: 24 December 2025 / Accepted: 7 January 2026 / Published: 9 January 2026
(This article belongs to the Special Issue Remote Sensing Data Preprocessing and Calibration)

Highlights

What are the main findings?
  • This paper proposes a comprehensive detector evaluation method based on a modified Z-score. By employing triple metric categorization, robust normalization, introducing spectral response deviation, and utilizing interquartile range (IQR) normalization, it effectively addresses the evaluation and optimal selection challenges for multi-column redundant long-linear-array detectors.
What are the implications of the main findings?
  • Compared to traditional single-metric strategies (e.g., sensitivity-first), the optimal detectors selected by our method show significant improvement across multiple performance indicators, markedly enhancing data quality.
  • The proposed framework features low computational complexity and strong adaptability, enabling direct application to multi-column redundant long-linear-array detector payloads. This method upgrades the Best Detector Selection (BDS) process from a pre-launch one-off configuration to a sustainable optimization scheme, supporting on-orbit real-time detector selection and dynamic updates.

Abstract

New-generation geostationary meteorological satellite radiometric imagers widely employ multi-column redundant long-linear-array detectors, for which the Best Detector Selection (BDS) strategy is crucial for enhancing the quality of remote sensing data. Addressing the limitation of current BDS methods that often rely on a single metric and thus fail to fully exploit the detector’s comprehensive performance, this paper proposes a detector evaluation method based on a modified Z-score. This method systematically categorizes detector metrics into three types: positive, negative, and uniformity. It introduces, for the first time, spectral response deviation (SRD) as an effective quantitative measure for the Spectral Response Function (SRF) and employs a robust normalization strategy using the Interquartile Range (IQR) instead of standard deviation, enabling multi-dimensional detector evaluation and selection. Validation using laboratory data from the FY-4C/AGRI long-wave infrared band demonstrates that, compared to traditional single-metric optimization strategies, the best detectors selected by our method show significant improvement across multiple performance indicators, markedly enhancing both data quality and overall system performance. The proposed method features low computational complexity and strong adaptability, supporting on-orbit real-time detector optimization and dynamic updates, thereby providing reliable technical support for high-quality processing of remote sensing data from geostationary meteorological satellites.

1. Introduction

The radiometric imagers aboard geostationary meteorological satellites typically perform a line-by-line scan of the target area through the coordinated movement of east–west and north–south scanning mirrors, operating in a ‘paintbrush-like’ manner (where the scan direction may be adjusted in specific operational modes) [1,2,3,4,5,6]. To enhance observational capabilities, new-generation geostationary imagers have widely adopted large-scale long-linear-array focal plane detector structures. Notable examples include the Advanced Baseline Imager (ABI) on the US GOES-R series [1], the Advanced Himawari Imager (AHI) on Japan’s Himawari-8/9 [2,3], the Advanced Meteorological Imager (AMI) on South Korea’s GEO-KOMPSAT-2A [4], the Flexible Combined Imager (FCI) on the European MTG series [5], the Geostationary High-speed Imager (GHI) on China’s Fengyun-4B (FY-4B) satellite [6], and the Advanced Geostationary Radiation Imager (AGRI) scheduled for launch aboard the Fengyun-4C satellite. By substantially increasing the number of detection elements, long-linear arrays effectively expand the coverage area of a single scan, significantly enhancing temporal resolution, spatial resolution, and the signal-to-noise ratio (SNR), demonstrating promising application prospects.
However, constrained by current infrared detector manufacturing techniques, the emergence of non-functional detectors (blind detectors) within focal plane arrays is inevitable [7,8,9]. These detectors typically exhibit either a lack of response or excessively high noise, rendering them ineffective for signal detection. With the increasing scale of detector arrays and the substantial growth in detector count, the occurrence of blind detectors becomes unavoidable, which severely compromises the usability of the acquired data. Consequently, it is imperative to adopt effective measures to mitigate or compensate for the detrimental effects caused by these blind detectors [8,9].
To address the issue of blind detectors, long-linear-array focal plane detectors typically employ a multi-column redundant architecture. This involves arranging multiple columns (commonly four or six) of detector arrays with identical specifications in parallel, enabling simultaneous observation of the same ground target (as schematically illustrated in Figure 1). For invalid data detected by a blind detector, data from a functional detector within the same row can be selected to replace it. This approach effectively circumvents the impact of blind detectors at the hardware level.
Leveraging this redundant architecture, a single detector with optimal performance must be selected from each row prior to image synthesis to construct the final image. This process is termed “Best Detector Selection” (BDS), and the resulting index map formed by these row-wise selections is referred to as the BDS map (a schematic of the workflow is shown in Figure 2). Currently, the BDS maps for most payloads are determined during the pre-launch ground testing phase. To ensure system stability, the BDS map is generally not updated during on-orbit operation, with adjustments made only when the currently selected optimal detector exhibits anomalies.
From an ideal perspective, the optimal detector should integrate high sensitivity, high calibration accuracy, good uniformity, and long-term stability to enhance the quality of the synthesized imagery. However, a unified and universally accepted evaluation standard for the “optimal detector” has not yet been established within the industry, and a general methodology for balancing diverse metrics of different types and meanings is still lacking. Existing BDS strategies often overemphasize a single metric (e.g., sensitivity) while neglecting other critical performance parameters of the detectors [10]. For example, in the operational long-wave infrared band of the Fengyun-4 GHI payload, the BDS strategy prioritizes detectors with the highest sensitivity. Although this helps improve overall sensitivity and minimize the system’s Noise-Equivalent Differential Temperature (NEdT), it overlooks other vital metrics—such as calibration accuracy and spectral response function—thus failing to fully exploit the detector’s comprehensive potential.
Therefore, establishing a rational evaluation index system and effectively integrating multiple metrics with distinct physical meanings to achieve comprehensive detector evaluation and BDS has become a critical issue warranting in-depth research. This paper proposes a detector evaluation method based on a modified Z-score, capable of accommodating diverse types of performance indicators and facilitating multi-dimensional comprehensive optimization. This method will be applied to the infrared channel processing of the AGRI aboard China’s Fengyun-4C satellite, providing technical support for its detector optimization.

2. Materials and Methods

2.1. Detector Element Evaluation

The evaluation of detector performance must rely on multiple key metrics. Based on their physical meanings and their directional contribution to image quality, these metrics can be systematically categorized into three types:
(1)
Positive Metrics: A higher value indicates better detector performance. Examples include SNR, radiometric calibration accuracy, and linearity.
(2)
Negative Metrics: A lower value signifies better detector performance. Typical examples are NEdT and calibration bias.
(3)
Uniformity Metrics: These metrics focus on the consistency of performance across different detectors. Values closer to each other indicate better uniformity. Examples include responsivity, the Spectral Response Function (SRF), and dark current.
During the process of detector evaluation, one should avoid the over-optimization of a single metric. For instance, solely pursuing the highest sensitivity (i.e., the lowest NEdT) may introduce non-uniformity issues due to the neglect of radiometric calibration bias or disparities in the spectral response functions among detectors. This often manifests as striping noise in the imagery, severely impacting subsequent applications [11,12,13,14,15,16,17,18]. Conversely, exclusively prioritizing the highest calibration accuracy might lead to an excessive sacrifice in sensitivity, thereby hindering the detection of subtle changes in the target [19,20]. Therefore, developing an evaluation methodology that holistically balances multiple performance dimensions is crucial for achieving high-quality imaging.
In the practical implementation of the proposed method, relevant metrics can be flexibly selected for comprehensive evaluation based on specific remote sensing mission objectives and actual requirements, thereby achieving optimization of overall system performance. For the BDS procedure, the selected metrics should ideally be mutually independent and exhibit no significant correlation. To concretely illustrate the proposed methodology, this paper selects four representative metrics—sensitivity, calibration bias, the Spectral Response Function (SRF), and responsivity—as the foundation for constructing the comprehensive evaluation framework. These metrics are described as follows:
(1)
Sensitivity
Sensitivity reflects the detector’s capability to discern changes in the target signal. The detection sensitivity of an infrared detector is limited by noise, specifically referring to the random electrical noise generated by the detector and circuit components. For infrared detectors, sensitivity is typically evaluated using NEdT. A lower NEdT value indicates higher sensitivity [19].
(2)
Calibration Bias
Calibration bias reflects the accuracy of the calibration or the goodness-of-fit of the calibration model. It is generally quantified as the discrepancy between the true radiance (or brightness temperature) and the radiance retrieved through the calibration process under a specific test condition (e.g., the difference between the true brightness temperature and the calibrated brightness temperature, using a 300 K blackbody as reference). A smaller calibration bias indicates higher calibration accuracy for that specific condition. During the ground testing phase, where the true radiance is known, calibration accuracy can be assessed using the mean or root mean square of the calibration bias across multiple test conditions. However, for on-orbit calibration, obtaining true radiance values across different levels is challenging (due to difficulties in acquiring ground truth for terrestrial targets and the limited active temperature adjustment range of most onboard blackbody systems). Consequently, evaluating on-orbit calibration accuracy using the onboard blackbody often relies on the calibration bias from a single operating condition (the fixed operating temperature of the onboard blackbody) [20,21,22,23]. This study also adopts this approach to assess calibration bias. In addition, the evaluation of on-orbit calibration accuracy can also be achieved through ground calibration sites or cross-calibration with other satellite payloads; the method described in this paper is likewise applicable to such scenarios.
(3)
Spectral Response Function (SRF)
The Spectral Response Function (SRF) is a function of wavelength that describes the relative response of a detector to incident light at different wavelengths. It characterizes how the detector output varies with wavelength when subjected to incident light of unit energy or unit spectral radiance across the spectrum. The ideal SRF shape typically approximates a rectangular window. The SRF is an extremely critical parameter, directly impacting calibration accuracy, uniformity, and retrieval precision. For long-linear-array detectors, it is desirable for the SRF of each detector to be as identical as possible (i.e., having the same curve shape). Significant differences cause individual detectors to receive different effective radiances even from the same target, leading to non-uniformity and adversely affecting subsequent applications. Therefore, the selected detectors should ideally exhibit SRFs that are closely matched [24,25,26].
(4)
Responsivity
Responsivity refers to the ratio of the change in the detector’s output signal to the change in the incident target energy. Generally, it is desirable for the responsivity of different detectors to be as similar as possible. This facilitates the selection of appropriate bias settings and amplification gains for the subsequent readout circuitry, enabling the full performance potential of the detector. Furthermore, closely matched responsivity values imply that most detectors operate within the central portion of their dynamic range, thereby avoiding the larger calibration biases associated with strong non-linear effects typically found at the extremes of the dynamic range [27,28].

2.2. Blind Detector Element Identification and Removal

Blind detector identification and removal constitutes an indispensable prerequisite step for BDS. As detector performance gradually evolves over time, the instrument requires timely updates to the BDS map. Failure to accurately identify blind detectors may result in their erroneous selection as valid detectors during the update process, thereby rendering the update ineffective. Furthermore, since BDS relies on a comprehensive evaluation of the performance of all normal detectors, the construction of a rational BDS map must be founded upon the exclusion of interference from blind detectors.
Blind detector identification is a crucial preprocessing step in best detector selection. Since the performance metrics of blind detectors often exhibit significant outliers—for example, some blind elements have responsivity close to zero, or their noise and NEdT values are several orders of magnitude higher than the overall mean—evaluating the overall performance of the detector array (such as average responsivity or average sensitivity) before blind element removal yields results that lack practical meaning and comparability. Traditional blind detector detection primarily relies on a few key parameters, including detector responsivity, noise level, and noise-equivalent differential temperature. A typical identification procedure is as follows: first, elements with responsivity lower than one-quarter of the mean responsivity of the entire array are removed; second, elements whose noise amplitude exceeds twice the mean noise of the entire array are excluded; finally, elements that do not meet the preset NEdT threshold are filtered out. Through such rule-based screening, most substandard elements that fail to meet detection requirements are effectively eliminated. After removing these extreme outliers, the overall performance of the detector array is usually significantly improved.
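The rule-based screening described above can be sketched as follows. The first two thresholds (one-quarter of the mean responsivity, twice the mean noise) follow the procedure in the text; the array names and the example NEdT limit are illustrative assumptions, since the actual preset threshold is instrument-specific.

```python
import numpy as np

def identify_blind_detectors(responsivity, noise, nedt, nedt_max=0.5):
    """Flag blind detector elements with the rule-based screening above.

    responsivity, noise, nedt: 1-D arrays, one value per detector element.
    nedt_max: preset NEdT threshold in kelvin (illustrative value).
    Returns a boolean mask, True where the element is considered blind.
    """
    blind = np.zeros(responsivity.shape, dtype=bool)
    # Rule 1: responsivity below one-quarter of the array mean.
    blind |= responsivity < 0.25 * responsivity.mean()
    # Rule 2: noise amplitude above twice the array mean noise.
    blind |= noise > 2.0 * noise.mean()
    # Rule 3: NEdT exceeding the preset threshold.
    blind |= nedt > nedt_max
    return blind

# Example: four elements, the last one nearly unresponsive and noisy.
resp = np.array([1.0, 0.95, 1.05, 0.1])
noise = np.array([1.0, 1.1, 0.9, 5.0])
nedt = np.array([0.1, 0.12, 0.11, 3.0])
mask = identify_blind_detectors(resp, noise, nedt)  # → [False, False, False, True]
```

Removing the flagged elements before computing array-level statistics keeps the subsequent evaluation meaningful, as the text notes.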
However, with the continuously increasing demand for quantitative accuracy in remote sensing data, traditional methods have become inadequate in discriminative capability to meet the performance requirements of new-generation long-linear-array detectors. Modern blind detector identification should incorporate multidimensional metrics—such as calibration bias, dynamic range, spectral response function consistency, and long-term stability—to achieve more comprehensive and accurate blind element recognition [29]. The specific methodologies for new-type blind detector detection are beyond the scope of this paper and will not be detailed here. All data used in this study underwent blind element identification and removal during the preprocessing stage.

2.3. Z-Score

An ideal detector evaluation and selection methodology should possess the following characteristics: the capability to integrate diverse metrics of different types and units, support for assigning varying weights to these metrics to accommodate diverse application needs, and low computational complexity to meet the demands of operational processing and even on-orbit real-time requirements. The Z-score (also known as the standard score), as a statistical measure, provides a viable pathway for comprehensively comparing performance indicators with different physical meanings and units. The following section first briefly introduces the fundamental principle of the conventional Z-score, followed by an elaboration of the modified Z-score method proposed in this paper.
The Z-score is defined as the difference between an observed value and the mean of the entire dataset, divided by the standard deviation, resulting in a dimensionless number. Its mathematical expression is as follows:
$$z_i(X) = \frac{X_i - \mu(X)}{\sigma(X)} \tag{1}$$
In Equation (1), $X_i$ represents the measured value or performance of detector i for a specific metric, $\mu(X)$ denotes the mean value of that metric across all detectors, and $\sigma(X)$ is its standard deviation. Through Z-score transformation, metrics with different units and magnitudes are normalized to a common standard scale, thereby enabling comparability.
The magnitude of the Z-score reflects the relative position of a data point within the overall distribution: a higher Z-score indicates that the data point ranks higher among all data points; a Z-score close to zero signifies a performance level near the overall mean; while a lower Z-score corresponds to a poorer relative ranking. For positive metrics, a higher Z-score indicates better performance; conversely, for negative metrics, a lower Z-score signifies better performance.
Upon completing the Z-score normalization for different metrics, unified calculation and comparison across these metrics become feasible. To concretely illustrate the comprehensive evaluation procedure, this paper selects the Signal-to-Noise Ratio (SNR, a positive metric) and the calibration bias (Δcal, a negative metric) as exemplary indicators.
First, the Z-scores for each detector concerning SNR and Δcal are calculated separately as follows:
$$z_i(\mathrm{SNR}) = \frac{\mathrm{SNR}_i - \mu(\mathrm{SNR})}{\sigma(\mathrm{SNR})} \tag{2}$$
$$z_i(\Delta\mathrm{cal}) = \frac{\Delta\mathrm{cal}_i - \mu(\Delta\mathrm{cal})}{\sigma(\Delta\mathrm{cal})} \tag{3}$$
Based on these, the comprehensive score $Z_i$ for each detector is constructed:
$$Z_i = k(\mathrm{SNR})\, z_i(\mathrm{SNR}) - k(\Delta\mathrm{cal})\, z_i(\Delta\mathrm{cal}) \tag{4}$$
In Equation (4), k(SNR) and k(Δcal) represent the weighting coefficients for SNR and Δcal, respectively. These are non-negative real numbers that can be pre-defined according to specific mission requirements. Since SNR is a positive metric, its term in the equation carries a positive sign; conversely, the calibration bias Δcal, being a negative metric, contributes with a negative sign. A higher comprehensive score $Z_i$ indicates superior overall performance of the detector. The BDS map is subsequently constructed by selecting, for each row, the detector with the highest $Z_i$ value. Proceeding further, we expand Equation (4) to obtain the following:
$$Z_i = k(\mathrm{SNR})\frac{\mathrm{SNR}_i - \mu(\mathrm{SNR})}{\sigma(\mathrm{SNR})} - k(\Delta\mathrm{cal})\frac{\Delta\mathrm{cal}_i - \mu(\Delta\mathrm{cal})}{\sigma(\Delta\mathrm{cal})} = k(\mathrm{SNR})\frac{\mathrm{SNR}_i}{\sigma(\mathrm{SNR})} - k(\Delta\mathrm{cal})\frac{\Delta\mathrm{cal}_i}{\sigma(\Delta\mathrm{cal})} - k(\mathrm{SNR})\frac{\mu(\mathrm{SNR})}{\sigma(\mathrm{SNR})} + k(\Delta\mathrm{cal})\frac{\mu(\Delta\mathrm{cal})}{\sigma(\Delta\mathrm{cal})} \tag{5}$$
The last two terms in Equation (5) are constants and do not affect the relative ranking of the $Z_i$ scores among detectors. Therefore, to simplify the calculation, these constant terms can be discarded, yielding the following simplified expression:
$$Z_i = k(\mathrm{SNR})\frac{\mathrm{SNR}_i}{\sigma(\mathrm{SNR})} - k(\Delta\mathrm{cal})\frac{\Delta\mathrm{cal}_i}{\sigma(\Delta\mathrm{cal})} \tag{6}$$
This simplified form still adheres to the optimization criterion of maximizing $Z_i$ and does not affect the resulting BDS map.
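As a minimal sketch of the conventional Z-score selection in Equations (2)–(6), assuming each metric is stored as a (rows, columns) array over the redundant detector columns (the array names and toy values are illustrative):

```python
import numpy as np

def zscore_bds(snr, dcal, k_snr=1.0, k_dcal=1.0):
    """Per-row best-detector selection with the conventional Z-score.

    snr, dcal: (rows, cols) arrays of SNR and calibration bias for each
    detector element; cols is the number of redundant columns.
    Returns the column index of the best detector in each row (BDS map).
    """
    # Simplified score of Eq. (6): the constant mean terms are dropped
    # because they do not change the ranking of detectors.
    z = k_snr * snr / snr.std() - k_dcal * dcal / dcal.std()
    return np.argmax(z, axis=1)  # one selected column per row

# Toy example: 2 rows x 3 redundant columns.
snr = np.array([[100., 120., 110.],
                [ 90., 105., 115.]])
dcal = np.array([[0.3, 0.2, 0.6],
                 [0.5, 0.1, 0.4]])
bds_map = zscore_bds(snr, dcal)  # selects column 1 in both rows here
```

The second row illustrates the trade-off: column 2 has the highest SNR, but column 1 wins overall because of its much smaller calibration bias.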
The preceding sections describe the detector evaluation and selection methodology based on the conventional Z-score. Although the Z-score-based evaluation method is theoretically intuitive and effective, it exhibits the following three limitations when applied to practical detector evaluation:
(1)
While the Z-score can conveniently incorporate both positive and negative metrics (e.g., SNR and calibration bias), it is difficult to apply directly to uniformity metrics.
(2)
The Z-score is suitable for symmetric distributions (particularly normal distributions) but performs poorly with skewed distributions. For instance, NEdT approximately follows a chi-square distribution. Using the Z-score for normalization in such cases introduces error, as the standard deviation in the denominator becomes inflated, leading to an underweighting of that metric.
(3)
The Z-score is highly sensitive to extreme outliers. A few outliers can significantly increase the standard deviation of the dataset, thereby distorting the effectiveness of the Z-score.
To address these three limitations, this paper proposes a modified Z-score based on the framework of Equation (6).

2.4. Modified Z-Score

To address the second and third limitations of the traditional Z-score in detector evaluation, this paper employs the Interquartile Range (IQR) to replace the standard deviation as the normalization denominator. The IQR is defined as the difference between the 75th percentile (Q3) and the 25th percentile (Q1). In practical computation, the values of all detectors for a given metric are first sorted. Q1 and Q3 are then determined, and their difference yields the IQR, expressed as follows:
$$IQR = Q3 - Q1 \tag{7}$$
Consequently, Equation (1) is modified to:
$$z_i(X) = \frac{X_i - \mu(X)}{IQR(X)} \tag{8}$$
The IQR is less sensitive to extreme values and the shape of the distribution, offering enhanced robustness.
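This robustness can be seen with a small numerical illustration (the values are hypothetical): a single extreme outlier inflates the standard deviation by more than an order of magnitude, while the IQR is barely affected.

```python
import numpy as np

# Hypothetical NEdT values (K) with one extreme outlier.
nedt = np.array([0.10, 0.11, 0.12, 0.10, 0.11, 5.0])

sigma = nedt.std()                      # inflated by the outlier (~1.8)
q1, q3 = np.percentile(nedt, [25, 75])
iqr = q3 - q1                           # barely affected (~0.015)

# Dividing by sigma compresses the ordinary detectors toward zero,
# underweighting this metric; dividing by the IQR preserves their separation.
z_sigma = (nedt - nedt.mean()) / sigma
z_iqr = (nedt - nedt.mean()) / iqr
```

The same effect applies to skewed distributions such as NEdT, where the long upper tail inflates the standard deviation even without gross outliers.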
To address the first limitation—namely, the difficulty of applying the traditional Z-score to uniformity metrics—this paper proposes the following transformation strategy: The essence of a uniformity metric lies in the closeness of performance values among detectors. The further a detector’s metric value deviates from the overall expected value, the poorer its uniformity. Based on this, an intermediate quantity reflecting the degree of uniformity can be constructed by calculating the absolute deviation of each detector’s metric value $X_i$ from the expected value $E(X)$. Here, E(X) is commonly represented by the mean, and this study adopted the mean for calculation. When there are many outliers in the data, the median can be used as an alternative.
$$d_i(X) = \left|X_i - E(X)\right| \tag{9}$$
A larger value of this deviation indicates poorer detector uniformity, which aligns with the characteristics of a negative metric. Therefore, uniformity metrics can be transformed into negative metrics for processing, leading to their modified Z-score form:
$$z_i(X) = -\frac{d_i(X)}{IQR(X)} \tag{10}$$
At this point, this paper has established corresponding calculation methods using the modified Z-score for each of the three metric types, thus forming a complete comprehensive detector evaluation framework.
A.
For a positive metric, the score for detector i under this metric is denoted as $z_{mod,i}^{[+]}(X)$, calculated as shown in Equation (11):
$$z_{mod,i}^{[+]}(X) = \frac{X_i}{IQR(X)} \tag{11}$$
B.
For a negative metric, the score for detector i under this metric is denoted as $z_{mod,i}^{[-]}(X)$, calculated as shown in Equation (12):
$$z_{mod,i}^{[-]}(X) = -\frac{X_i}{IQR(X)} \tag{12}$$
C.
For a uniformity metric, the score for detector i under this metric is denoted as $z_{mod,i}^{[0]}(X)$, calculated as shown in Equation (13):
$$z_{mod,i}^{[0]}(X) = -\frac{\left|X_i - E(X)\right|}{IQR(X)} \tag{13}$$
In Equations (11)–(13), $X_i$ is the value of metric X for detector i, $IQR(X)$ is the interquartile range of this metric across all detectors, and in Equation (13), $E(X)$ is the expected value of the metric for all detectors (generally represented by the mean or median).
Based on the definitions of the modified Z-score for the different metric types above, the comprehensive modified Z-score for each detector can be calculated as follows:
$$Z_{mod,i} = \sum_{X} k(X)\, z_{mod,i}^{[s]}(X) \tag{14}$$
In Equation (14), k(X) represents the weighting coefficient for each corresponding metric, all of which are non-negative real numbers; the superscript [s] denotes the metric type, specifically: [+] for positive metrics, [−] for negative metrics, and [0] for uniformity metrics. A higher comprehensive modified score $Z_{mod,i}$ indicates superior overall performance of the detector. Accordingly, the optimal detector selection is achieved simply by choosing the detector with the highest $Z_{mod,i}$ in each row of the focal plane.
To facilitate the analysis and comparison of results, $Z_{mod,i}$ is further linearly mapped to a 0–100 scale for normalization:
$$Z_{mod,i}^{nom} = \frac{Z_{mod,i} - \min_i Z_{mod,i}}{\max_i Z_{mod,i} - \min_i Z_{mod,i}} \times 100 \tag{15}$$
With this, the complete construction of the modified Z-score evaluation model is finalized. This model systematically integrates positive, negative, and uniformity metrics, and introduces a robust normalization method based on the Interquartile Range (IQR). It thereby establishes a comprehensive quantitative framework for assessing detector performance tailored to multi-column redundant detectors. Its structure is clear and its computation efficient, laying a solid methodological foundation for subsequent experimental validation and practical engineering applications.
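The full scoring pipeline of Equations (11)–(15) can be combined into a compact routine. This is a sketch under assumed data shapes: each metric is a (rows, columns) array over the redundant columns, and the metric names, weights, and toy values are illustrative, not AGRI data.

```python
import numpy as np

def modified_z(x, kind, expected=None):
    """Modified Z-score of Eqs. (11)-(13) for one metric over all detectors.

    x: array of metric values; kind: '+', '-', or '0' (uniformity).
    expected: E(X) for uniformity metrics (defaults to the mean).
    """
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    if kind == '+':
        return x / iqr              # positive metric, Eq. (11)
    if kind == '-':
        return -x / iqr             # negative metric, Eq. (12)
    ex = x.mean() if expected is None else expected
    return -np.abs(x - ex) / iqr    # uniformity metric, Eq. (13)

def comprehensive_score(metrics):
    """Weighted sum of Eq. (14), then 0-100 mapping of Eq. (15).

    metrics: list of (values, kind, weight) tuples.
    """
    z = sum(k * modified_z(x, kind) for x, kind, k in metrics)
    return (z - z.min()) / (z.max() - z.min()) * 100.0

# Illustrative metrics for 2 rows x 3 redundant columns (equal weights).
nedt = np.array([[0.10, 0.12, 0.30], [0.11, 0.10, 0.13]])   # negative
dcal = np.array([[0.2, 0.1, 0.5], [0.3, 0.1, 0.2]])         # negative
resp = np.array([[1.0, 1.02, 0.7], [0.98, 1.0, 1.05]])      # uniformity

score = comprehensive_score([(nedt, '-', 1.0),
                             (dcal, '-', 1.0),
                             (resp, '0', 1.0)])
bds_map = np.argmax(score, axis=1)  # best detector per row
```

Because the constant terms were already dropped in Equations (11)–(13), the per-metric scores need no re-centering: the ranking within each row, and hence the BDS map, is unchanged.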

2.5. Evaluation Method for the Spectral Response Function

The Spectral Response Function (SRF) is a critical parameter for evaluating detector performance, directly impacting radiometric calibration accuracy and the effectiveness of subsequent quantitative applications. Ideally, all detectors within the same spectral band should possess identical SRFs to ensure unbiased and consistent output signals when observing a uniform target. However, due to limitations in manufacturing processes, variations in SRF among detectors are inevitable [12,13,14].
For a given detector i, the effective radiance $L_i^{eff}$ received under the condition of observing a target with spectral radiance $L_{target}(\lambda)$ can be expressed as follows:
$$L_i^{eff} = \frac{\int_{\lambda_1}^{\lambda_2} SRF_i(\lambda)\, L_{target}(\lambda)\, d\lambda}{\int_{\lambda_1}^{\lambda_2} SRF_i(\lambda)\, d\lambda} \tag{16}$$
In Equation (16), $SRF_i(\lambda)$ is the spectral response function of detector i, $\lambda_1$ and $\lambda_2$ are the cut-on and cut-off wavelengths of the spectral band, respectively, and $L_{target}(\lambda)$ is the spectral radiance function of the observed target. Even when observing an identical target, differences in the spectral response function $SRF_i(\lambda)$ lead to variations in the effective radiance $L_i^{eff}$ received by different detectors. This, in turn, causes non-uniformity and striping noise in the resulting imagery. Therefore, it is highly desirable for the spectral response functions of different detectors to be as closely matched as possible.
To quantify the inter-detector differences in SRF, this paper defines the Spectral Response Deviation (SRD) $\Delta SRF_i$ as follows:
$$\Delta SRF_i = \frac{\int_{\lambda_1}^{\lambda_2} \left| SRF_i(\lambda) - \overline{SRF}(\lambda) \right| d\lambda}{\int_{\lambda_1}^{\lambda_2} \overline{SRF}(\lambda)\, d\lambda} \times 100\% \tag{17}$$
In Equation (17), $\overline{SRF}(\lambda)$ represents the average spectral response function of all valid detectors within the spectral band, which is typically published to users as the nominal SRF for that band. $\Delta SRF_i$ quantifies the overall deviation of detector i’s SRF from the average SRF. A larger value indicates poorer spectral response consistency for the detector.
Through this transformation, the Spectral Response Function—originally a uniformity metric—is converted into a negative metric represented by SRD, which can be computed using the modified Z-score method for negative metrics outlined in Equation (12). Furthermore, during the blind detector screening phase, a threshold can be established based on SRD to identify and exclude detectors with excessive SRD as blind detectors, thereby preemptively removing those with anomalous spectral performance at an early stage.
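A numerical sketch of Equation (17), assuming the SRF curves are sampled on a common uniform wavelength grid (the function name, grid, and rectangular passbands are illustrative; a rectangle-rule integration is used to keep the example self-contained):

```python
import numpy as np

def spectral_response_deviation(srf, wavelengths):
    """SRD of Eq. (17) for each detector, in percent.

    srf: (n_detectors, n_wavelengths) relative spectral responses.
    wavelengths: common uniform wavelength grid, cut-on to cut-off.
    """
    srf_mean = srf.mean(axis=0)                # average (nominal) band SRF
    dl = wavelengths[1] - wavelengths[0]       # uniform grid spacing
    # Rectangle-rule approximation of the two integrals in Eq. (17).
    num = np.abs(srf - srf_mean).sum(axis=1) * dl
    den = srf_mean.sum() * dl
    return num / den * 100.0

# Three detectors with rectangular passbands; the third is slightly shifted.
wl = np.linspace(10.3, 11.3, 101)
ideal = ((wl > 10.5) & (wl < 11.1)).astype(float)
shifted = ((wl > 10.55) & (wl < 11.15)).astype(float)
srf = np.stack([ideal, ideal, shifted])
srd = spectral_response_deviation(srf, wl)  # third detector: largest SRD
```

The shifted detector accumulates deviation in the two non-overlapping wavelength regions, so its SRD exceeds that of the matched pair; a threshold on this value could implement the blind-screening step described above.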
In summary, the introduction of SRD as a quantitative metric not only enables an objective assessment of detector-level spectral response consistency but also provides a foundation for incorporating spectral dimension performance into the subsequent comprehensive evaluation framework. The impact of SRD on actual observational results will be further analyzed in Chapter 3 using experimental data.

3. Results

3.1. Introduction to the FY-4C AGRI

Fengyun-4C (FY-4C) is the third satellite in China’s Fengyun-4 (FY-4) series of geostationary meteorological satellites and is currently scheduled for launch. The Advanced Geostationary Radiation Imager (AGRI), serving as the primary payload aboard the FY-4 meteorological satellites, is capable of performing high-precision, multi-spectral quantitative remote sensing of Earth’s environmental parameters across terrestrial, oceanic, cloud, and atmospheric domains. It directly supports weather analysis and forecasting, climate prediction, as well as environmental and disaster monitoring. Its observational bands span the visible, near-infrared, short-wave infrared, mid-wave infrared, and long-wave infrared regions to meet the diverse requirements of various application fields. All spectral bands of the AGRI utilize large-scale long-linear-array detectors, with the infrared bands employing a four-column redundant detector configuration. The key performance requirements for the AGRI are summarized in Table 1. The “Sensitivity/SNR” column in Table 1 lists the quantitative performance requirements for each band: visible and near-infrared bands are assessed by SNR under specified reflectance conditions, while infrared bands are assessed by NEdT under specified target brightness temperatures. The symbol ρ denotes reflectance; in the context of satellite radiometric specifications it specifically refers to the top-of-atmosphere (TOA) apparent reflectance. This quantity is dimensionless and represents the ratio of reflected radiance to incident solar irradiance, with values ranging from 0% to 100%. For example, “S/N ≥ 150 (ρ = 100%)” indicates that the S/N shall be at least 150 when observing a target with a reflectance of 100%. The listed “Dynamic Range 0–100%” for the visible and NIR bands means that the sensor’s radiometric output span corresponds to radiance values equivalent to 0% through 100% apparent reflectance.

3.2. Performance Evaluation of Individual Metrics

The author team conducted comprehensive testing of the FY-4C AGRI. This section presents the performance characteristics and selection process using AGRI’s Band 19 as an example, focusing on four key metrics measured during the laboratory testing phase: NEdT, calibration bias at 300 K, responsivity, and the Spectral Response Function (SRF). All reported results are based on data from which blind detectors have been removed.

3.2.1. NEdT

Figure 3 shows the distribution of NEdT measured under 300 K blackbody conditions. This metric directly characterizes the detection sensitivity of individual detectors. The results indicate that the NEdT values of the vast majority of detectors cluster at low levels, demonstrating the excellent sensitivity of the AGRI focal plane array. A small number of detectors, however, exhibit significantly larger NEdT values that deviate substantially from the main distribution. These elements are likely candidates for exclusion during the subsequent BDS process to ensure high overall imaging sensitivity.

3.2.2. Calibration Bias

The distribution of calibration bias measured using a 300 K blackbody reference source is shown in Figure 4. This metric is key for assessing the absolute accuracy of radiometric measurements. It can be observed that the calibration bias for the vast majority of detectors is confined to low levels, validating the effectiveness and accuracy of the implemented calibration model. Simultaneously, a small number of detectors exhibit significantly larger calibration biases, deviating from the main distribution. These elements are likely candidates for exclusion during the subsequent BDS process to ensure high overall calibration accuracy.

3.2.3. Normalized Responsivity

The normalized responsivity, obtained by dividing each detector’s responsivity by the mean responsivity across the entire array, is presented in Figure 5. The ideal state of uniformity corresponds to all data points being distributed near 1.0. Measured data indicate that the normalized responsivity of most detectors clusters tightly around 1.0. However, a small number of detectors with significantly higher or lower responsivity values are still present. High consistency in responsivity is crucial for avoiding striping noise in the imagery. Selecting detectors with excessively high or low responsivity, even if they possess excellent sensitivity and calibration accuracy, can easily lead to prominent vertical striping in the final image, because signal saturation or poor calibration accuracy at the extremes of the dynamic range compromises image uniformity.
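As an illustration of the normalization just described, and of the coefficient of variation (CV) used later as the responsivity-uniformity statistic, the following sketch uses synthetic values and an illustrative 320 × 4 array size of our own choosing, not AGRI data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic responsivities for a 320 x 4 redundant array (arbitrary units).
responsivity = rng.normal(loc=1.0, scale=0.12, size=(320, 4))

# Normalize by the array-wide mean so the ideal uniform value is 1.0.
norm_resp = responsivity / responsivity.mean()

# Coefficient of variation quantifies array uniformity:
# a lower CV implies weaker residual striping after calibration.
cv = responsivity.std() / responsivity.mean()

# Flag detectors whose normalized responsivity deviates strongly from 1.0.
outliers = np.argwhere(np.abs(norm_resp - 1.0) > 3 * norm_resp.std())
```

With real test data, the flagged indices would correspond to the extreme points visible at the edges of the Figure 5 distribution.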

3.2.4. SRF and SRD

The distribution of the Spectral Response Functions (SRFs) for all detectors is shown in Figure 6. The blue-shaded area in the figure encompasses the SRFs of all detectors, while the black-shaded area covers the SRFs of 80% of the detectors. The morphology of the curves demonstrates good consistency among the SRFs of different detectors, indicating excellent spectral response uniformity across the detector array.
The distribution of the SRD for each detector is presented in Figure 7. Statistical results show that the SRD for the vast majority of detectors is below 2%. This finding further quantitatively validates the high consistency of the spectral response functions among detectors, which aligns with the visual observation from Figure 6.
To quantitatively evaluate the impact of SRD on radiometric measurement accuracy, this study used a 300 K ideal blackbody as the input radiance source. The effective radiance received by each detector was calculated based on its actual SRF, from which the brightness temperature was subsequently retrieved.
Under ideal conditions, if each detector uses its own SRF for radiance calculation and brightness temperature retrieval, the retrieved result for every detector would be exactly 300 K when observing a 300 K blackbody. However, in an engineering data-processing system, a unified and standardized nominal SRF—typically the average SRF of all valid detectors within the spectral band—is adopted as the common SRF for all detector elements in that channel. Consequently, the effective radiance received by each element (calculated based on its true SRF) is processed through this unified nominal SRF during retrieval, which introduces a discrepancy between the retrieved brightness temperature and the true brightness temperature. This bias is defined as the spectral retrieval bias (SRB), the distribution of which is shown in Figure 8. The variation in SRB across different detectors directly reflects the impact of SRF non-uniformity on the calibration results.
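The SRB computation described above can be sketched as follows. This is a minimal illustration with toy Gaussian SRFs around Band 19 (13.3 µm), not the instrument’s measured SRFs: each detector’s effective radiance from a 300 K blackbody is integrated with its own “true” SRF and then inverted to brightness temperature through the unified nominal (band-average) SRF via a lookup table.

```python
import numpy as np

C1 = 1.191042e8   # first radiation constant, W m^-2 sr^-1 um^4
C2 = 1.4387752e4  # second radiation constant, um K

def planck(wl_um, T):
    """Blackbody spectral radiance at temperature T (K); wavelength in um."""
    return C1 / (wl_um ** 5 * (np.exp(C2 / (wl_um * T)) - 1.0))

wl = np.linspace(12.8, 13.8, 500)            # um grid around Band 19
dwl = wl[1] - wl[0]
rng = np.random.default_rng(1)
centers = 13.30 + rng.normal(0.0, 0.01, 8)   # slightly shifted toy SRF centers
srfs = np.exp(-0.5 * ((wl - centers[:, None]) / 0.15) ** 2)
srfs /= srfs.sum(axis=1, keepdims=True) * dwl   # area-normalize each SRF
nominal = srfs.mean(axis=0)                  # unified nominal SRF (band average)

def retrieve_bt(L_eff, srf):
    """Invert effective radiance to brightness temperature via a lookup table."""
    T_grid = np.linspace(180.0, 320.0, 4000)
    L_grid = (srf * planck(wl[None, :], T_grid[:, None])).sum(axis=1) * dwl
    return np.interp(L_eff, L_grid, T_grid)  # L_grid is monotonic in T

# Effective radiance each detector receives from a 300 K blackbody (true SRF),
# retrieved through the common nominal SRF -> spectral retrieval bias (SRB).
L_eff = (srfs * planck(wl, 300.0)).sum(axis=1) * dwl
srb_mK = (retrieve_bt(L_eff, nominal) - 300.0) * 1e3   # bias in millikelvin
```

Retrieving each detector’s radiance through its own SRF would return 300 K exactly; the residual `srb_mK` values therefore isolate the effect of SRF non-uniformity, as in Figure 8.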
Figure 9 further illustrates the correspondence between the Spectral Response Deviation (SRD) and the Spectral Retrieval Bias (SRB). A distinct positive correlation is observed, indicating that detectors with larger SRD generally exhibit correspondingly larger SRB, which validates the effectiveness of SRD as a quantitative metric for spectral consistency. It should be noted that SRD directly measures the shape deviation between a detector’s SRF and the standard reference SRF (i.e., the average SRF); as long as the SRFs differ, their SRD values are necessarily non-zero. Theoretically, special cases may exist where two SRFs with different shapes, when convolved with a specific target spectral radiance, could yield the same effective radiance and thus the same SRB. However, the key advantage of SRD lies precisely in its target-independent nature. It provides a stable and universal measure for evaluating the intrinsic consistency of a detector’s spectral response, thereby avoiding potential misjudgment caused by accidental matching of target spectral characteristics. Therefore, SRD serves as an objective and reliable indicator for assessing spectral performance.
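The exact SRD formula is defined earlier in the paper; purely to illustrate the idea of a target-independent shape metric, one plausible realization is the RMS deviation of each peak-normalized SRF from the band-average reference, expressed in percent (toy Gaussian SRFs and all parameter values are our own assumptions):

```python
import numpy as np

wl = np.linspace(12.8, 13.8, 500)             # um grid around Band 19
rng = np.random.default_rng(2)
centers = 13.30 + rng.normal(0.0, 0.01, 8)    # toy per-detector center shifts
srfs = np.exp(-0.5 * ((wl - centers[:, None]) / 0.15) ** 2)
srfs /= srfs.max(axis=1, keepdims=True)       # peak-normalize each SRF
reference = srfs.mean(axis=0)                 # band-average reference SRF

# One plausible SRD definition: RMS shape deviation from the reference SRF,
# relative to the reference peak, in percent. It depends only on the SRFs
# themselves, never on an observed target spectrum.
srd = 100.0 * np.sqrt(np.mean((srfs - reference) ** 2, axis=1)) / reference.max()
```

Pairing each detector’s `srd` with its SRB (from the previous computation) would reproduce the positive correlation shown in Figure 9.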
It should be noted that the aforementioned analysis of SRB is based on ideal blackbody radiation. In practical remote sensing applications, the spectral radiance characteristics of different ground targets vary significantly. Particularly for actual targets with non-flat spectral emissivity characteristics, the SRB caused by SRF non-uniformity often substantially exceeds the estimates derived under ideal blackbody conditions. For example, Figure 10 shows the actual target spectral curve observed in-orbit by the Hyperspectral Infrared Atmospheric Sounder (HIRAS) aboard the FY-3F satellite. The corresponding relationship between the spectral retrieval bias (SRB) and the spectral response deviation (SRD) derived from this actual spectrum is presented in Figure 11. It can be observed that this relationship is similar in shape to the SRB-SRD relationship obtained under ideal blackbody radiation, but its magnitude significantly exceeds the estimates derived under ideal blackbody conditions. Consequently, using the retrieval bias under an ideal blackbody (or any other specific target) to assess SRF uniformity has inherent limitations, as it cannot fully capture the impact of spectral response non-uniformity on detection results across diverse real-world scenarios. Therefore, compared to target-specific retrieval bias, the SRD is independent of the observed target, providing a more universal and objective measure for evaluating SRF uniformity. It serves as a more reliable metric for assessing the spectral performance of detectors.
In summary, the test results presented in this section clearly delineate the performance profile of the FY-4C/AGRI detector array. On one hand, the detector demonstrates excellent performance in its core metrics. On the other hand, the inevitable performance dispersion among detectors and the inherent trade-offs between different metrics (such as sensitivity versus responsivity) adequately demonstrate the necessity of employing a multi-metric comprehensive evaluation method, like the one proposed in this paper, for detector optimization.

3.3. BDS Results

To evaluate the impact of different weighting configurations on detector selection performance, we systematically compare multiple weighting strategies, with the results summarized in Table 2. The table presents four distinct strategy types: individual metric optimization (achieved by setting the weight of a single metric to 1 and all others to 0), metric prioritization (assigning a weight of 3 to one specific metric and a weight of 1 to the others), balanced weighting (assigning equal weights of 1 to all metrics), and an adaptive weighting strategy based on the entropy weight method (where weights are automatically calculated by applying this method to the Z-scores of each metric). The first column of Table 2 lists the names of the strategies, followed by four columns detailing the weighting coefficients for the metrics and their corresponding performance values. These values represent the mean NEdT, mean calibration bias, coefficient of variation for responsivity, and mean SRD of the BDS map under each respective strategy.
The results demonstrate that all optimization strategies yield significant improvements in most performance metrics compared to the baseline using all detectors. Specifically, while the individual metric optimization strategy achieves the best possible value for a single targeted metric, it does so at the expense of performance in other metrics. Taking the traditional “sensitivity-first principle” as an example, although this method minimizes NEdT, it results in limited or no improvement in the other metrics. This indicates that an optimization logic focused solely on a single metric cannot fully exploit the multi-dimensional performance potential of the detectors. The metric prioritization strategy, while achieving slightly lower improvement in its emphasized metric compared to the corresponding “optimal” single-metric strategy, significantly enhances the overall performance across the other metrics. This highlights the value of multi-metric weighting for improving performance balance and validates the rationale behind the method proposed in this paper. The balanced weighting strategy, which assigns equal weight to all metrics, achieves substantial improvement across all indicators, resulting in a relatively comprehensive enhancement of overall performance. Figure 12, Figure 13, Figure 14 and Figure 15 present a comparative analysis of the optimal detectors selected by the traditional method (sensitivity-first) and the balanced approach (all weights set to 1).
Furthermore, this study introduced the entropy weight method for adaptive weight calculation. The results show that the weights assigned by the entropy method are as follows: NEdT (0.2236), calibration bias (0.0707), responsivity (0.6050), and SRD (0.1007). Although this method objectively reflects the inherent variability in the data, the resulting weight distribution diverges from the subjective importance typically assigned to each metric in engineering practice. For instance, responsivity is assigned an excessively high weight, while the weights for sensitivity and calibration accuracy are relatively low. This discrepancy may lead to selection results that deviate from practical mission requirements. Consequently, the applicability of the entropy weight method in this study is limited. Future research should focus on developing weight-determination methods that better balance objective statistical characteristics with subjective engineering priorities.
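The entropy weight method referenced above is a standard objective weighting scheme; a minimal sketch is given below, operating on stand-in Z-score columns since the paper’s actual metric matrix is not reproduced here. Metrics with greater dispersion (lower information entropy) receive larger weights, which explains why a highly variable metric such as responsivity can dominate:

```python
import numpy as np

def entropy_weights(scores):
    """Entropy weight method: rows = detectors, columns = metrics.
    Values are shifted to be strictly positive, converted to per-metric
    proportions, and metrics with lower entropy get larger weights."""
    x = scores - scores.min(axis=0) + 1e-12        # make strictly positive
    p = x / x.sum(axis=0)                          # per-metric proportions
    k = 1.0 / np.log(scores.shape[0])              # normalizes entropy to [0, 1]
    entropy = -k * (p * np.log(p)).sum(axis=0)
    d = 1.0 - entropy                              # degree of divergence
    return d / d.sum()                             # weights sum to 1

rng = np.random.default_rng(3)
z = rng.normal(size=(320, 4))   # stand-in Z-scores for 4 metrics
w = entropy_weights(z)
```

In the paper’s application the four columns would be the modified Z-scores of NEdT, calibration bias, responsivity, and SRD.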
To validate the universality of the proposed method, we performed the Best Detector Selection (BDS) process on Band 12 of AGRI, and the results are summarized in Table 3. The overall trend is consistent with the analysis for Band 19. In summary, for payloads or spectral bands with a similar multi-column redundant architecture, the proposed method demonstrates good transferability and engineering practicality.

4. Discussion

The experimental results demonstrate that the modified Z-score method proposed in this paper achieves significant comprehensive optimization in detector selection for multi-column redundant long-linear-array detectors. Compared to traditional single-metric strategies (e.g., sensitivity-first), the proposed method avoids the drawback of excessively sacrificing other performance aspects. Through the weighted fusion of multiple metrics, it simultaneously enhances sensitivity, calibration accuracy, and uniformity in a single selection process. Specifically, for Band 19 of FY-4C/AGRI, the balanced weighting configuration achieved the following improvements: the mean NEdT was reduced by 26.8% (from 106.1 mK to 77.7 mK), the mean calibration bias was reduced by 60.5% (from 102.4 mK to 40.4 mK), the coefficient of variation for responsivity was reduced by 37.1% (from 12.75% to 8.02%), and the mean SRD was reduced by 4.7% (from 1.50% to 1.43%). These quantitative improvements not only validate the effectiveness of the method but also fundamentally reduce image non-uniformity and striping noise, thereby enhancing the overall quality of remote sensing data.
From the perspective of optimization theory, the BDS problem is inherently a multi-objective decision-making task, which can be viewed as searching for the Pareto front. Traditional strategies like “sensitivity-first” or “calibration-first” reside at the extremes of this front, sacrificing comprehensiveness to pursue a single optimal metric. The core of our framework lies in providing a navigational tool along the Pareto front, allowing the selection of the best trade-off point via weight coefficients tailored to specific mission requirements (e.g., weak signal detection or high-precision calibration) [30,31,32]. The balanced weighting scheme serves as a robust default configuration when no explicit preference is specified, allowing for fine-tuning in subsequent steps. The method also enhances robustness: the introduction of IQR normalization makes the algorithm more resilient to non-normal distributions (e.g., the chi-square distribution of NEdT) and outliers, avoiding the denominator bias issue associated with the traditional Z-score.
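To make the selection pipeline concrete, the following sketch mirrors our reading of the method: IQR-normalized modified Z-scores per metric (negative-type for NEdT, calibration bias, and SRD, where smaller is better; uniformity-type for responsivity, where closeness to the array median is better), weighted fusion under the balanced configuration, and selection of the best of the four redundant columns in each row. The synthetic data, array sizes, and exact score conventions are illustrative assumptions; the formal definitions appear in the methodology section of the paper.

```python
import numpy as np

def modified_z(x, kind):
    """IQR-normalized modified Z-score over all detector elements.
    kind: 'negative'   -> smaller raw values score higher (e.g., NEdT),
          'positive'   -> larger raw values score higher,
          'uniformity' -> deviation from the array median is penalized."""
    med = np.median(x)
    iqr = np.subtract(*np.percentile(x, [75, 25]))   # q75 - q25
    z = (x - med) / iqr
    if kind == "negative":
        return -z
    if kind == "uniformity":
        return -np.abs(z)
    return z  # 'positive'

rng = np.random.default_rng(4)
rows, cols = 320, 4                                  # 320 rows x 4 redundant columns
nedt = rng.chisquare(5, (rows, cols)) * 20.0         # mK, right-skewed like NEdT
bias = np.abs(rng.normal(0, 60, (rows, cols)))       # calibration bias, mK
resp = rng.normal(1.0, 0.12, (rows, cols))           # normalized responsivity
srd = np.abs(rng.normal(1.5, 0.4, (rows, cols)))     # SRD, %

weights = {"nedt": 1.0, "bias": 1.0, "resp": 1.0, "srd": 1.0}  # balanced scheme
score = (weights["nedt"] * modified_z(nedt, "negative")
         + weights["bias"] * modified_z(bias, "negative")
         + weights["resp"] * modified_z(resp, "uniformity")
         + weights["srd"] * modified_z(srd, "negative"))

best_col = score.argmax(axis=1)   # BDS map: chosen column index for each row
```

Changing the entries of `weights` reproduces the other strategies of Table 2 (single-metric optimization, metric prioritization, or entropy-derived weights), which is exactly the “navigation along the Pareto front” described above.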
In engineering applications, this method upgrades the BDS process from a pre-launch “one-time fixed” configuration to a sustainable optimization scheme. It enables the satellite to rapidly generate BDS maps based on pre-launch ground test data or initial on-orbit test data. Furthermore, should the performance of the optimal detector degrade, the system can automatically switch to the next best detector within the same row. Going a step further, through periodic on-orbit monitoring of payload performance, this method facilitates scheduled updates of the BDS map. It provides reliable technical support for geostationary meteorological satellites (e.g., the FY-4C/AGRI infrared channels) and can be seamlessly transferred to other payloads employing multi-column long-linear-array detector structures (e.g., ABI).
The determination of metric weights is a crucial step for the effective implementation of the method. In addition to employing several manually defined weighting combinations, this study also tested an objective weight-determination algorithm—the entropy-weight method, which serves as a data-driven strategy to automatically assign weights based on the intrinsic variability of the metrics. The results show that although the entropy-weight method can reflect the statistical dispersion of normalized metrics, when certain metrics contain extreme values or exhibit high variability, the resulting weight distribution may deviate from engineering priorities (e.g., the responsivity metric may be assigned excessively high weight). Therefore, weighting coefficients should be determined by comprehensively considering both the performance characteristics of the detectors and the specific mission objectives. From the perspective of detector performance, by comparing the BDS results under a balanced weighting strategy (i.e., all weighting coefficients set to 1) with the original results of all detector elements, the improvement potential of each metric can be evaluated: metrics that show significant improvement possess higher optimization potential and can be assigned higher weights, whereas metrics with limited improvement have lower potential and should be given correspondingly lower weights. From the perspective of mission objectives, greater emphasis should be placed on calibration-related metrics for high-precision calibration tasks, whereas sensitivity metrics become more critical and should be assigned higher importance in weak-signal detection or high-sensitivity observation scenarios. It is important to emphasize that under no circumstances should a strategy that pursues the optimization of only a single metric (e.g., solely pursuing the lowest NEdT) be adopted.
The modified Z-score Best Detector Selection (BDS) framework proposed in this paper was developed and validated specifically for multi-column long-linear-array infrared payloads and can be widely applied to payloads with similar multi-column redundant architectures. Its core components—metric categorization (positive/negative/uniformity), IQR-based robust normalization, the spectral uniformity metric SRD, and weighted multi-metric fusion—are general and can be transferred to other multi-column long-linear-array imagers through appropriate metric selection and weight configuration. A primary limitation of the method lies in the fact that some key metrics cannot be directly measured on-orbit (e.g., SRF), and usually only ground-test or laboratory-calibration results are available as substitutes. Assessing the long-term stability of such proxy metrics and their influence on the algorithm output remains a significant challenge in practical application. Furthermore, the modified Z-score calculation relies on the statistical distribution of the available detector elements. When the number of elements is small, the sample size is insufficient, or the data contain highly correlated outliers, the robustness of the statistical estimation decreases, which may compromise the reliability of the results. To mitigate these issues, engineering solutions such as using aggregated multi-epoch data or adjusting weights based on engineering priors could be considered.
Future research could explore adaptive weighting mechanisms (e.g., determining weights by integrating sensitivity analysis or AI techniques) and further validate the operational applicability of the method across different payloads and observation scenarios, thereby enhancing its adaptability in complex environments.

5. Conclusions

This paper addresses the detector selection requirements for multi-column redundant long-linear-array infrared detectors by proposing a comprehensive evaluation method based on a modified Z-score, aiming to overcome the limitations of traditional single-metric selection strategies. The main conclusions and contributions of this study are as follows:
(1)
A systematic detector evaluation framework was constructed, which clearly categorizes metrics into positive, negative, and uniformity types, and introduces, for the first time, spectral response deviation (SRD) as a quantitative measure for SRF consistency, supporting multi-dimensional integration.
(2)
Corresponding modified Z-score calculation formulas were proposed for each metric category. The use of IQR normalization enhanced applicability to non-normal distributions and outliers, yielding more stable and reliable evaluation results and enabling effective fusion of metrics with differing dimensions and units into a single comprehensive score.
(3)
Validation using FY-4C/AGRI test data shows that the balanced-weight selection strategy proposed in this paper achieves simultaneous optimization across detector metrics without unduly sacrificing any single one (such as sensitivity). Its comprehensive performance is significantly superior to the traditional “sensitivity-first” strategy. Furthermore, the weighting coefficients for different metrics can be flexibly adjusted according to specific remote sensing mission requirements (e.g., high-precision calibration needs or weak-signal detection requirements), enabling customized optimal detector selection.
In summary, the modified Z-score framework proposed in this study provides a systematic, robust, and engineering-feasible solution for the comprehensive detector selection of multi-column long-linear-array detectors. This solution not only significantly enhances the initial imaging quality of geostationary meteorological satellites, but its flexibility and low computational complexity also establish a key technical foundation for achieving autonomous management and continuous performance optimization of the payload throughout its on-orbit lifecycle. With the operational application and promotion of this method on payloads such as FY-4C/AGRI, it is expected to effectively advance the development of high-frequency, high-quantitative-precision meteorological remote sensing. Future work could extend to adaptive weighting and integration with AI to further enhance the method’s universality.

Author Contributions

Conceptualization, X.J.; methodology, X.J. and X.L.; software, X.J. and X.L.; validation, X.J.; formal analysis, X.J.; investigation, X.J. and X.L.; resources, X.L. and C.H.; data curation, X.L. and C.H.; writing—original draft preparation, X.J.; writing—review and editing, X.J., X.L. and C.H.; visualization, X.J.; supervision, X.L. and C.H.; project administration, C.H.; funding acquisition, X.L. and C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the State Key Program of the National Natural Science Foundation of China (Grant No. 42330110).

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BDS: Best Detector Selection
SNR: Signal-to-Noise Ratio
NEdT: Noise-Equivalent Differential Temperature
SRF: Spectral Response Function
SRD: Spectral Response Deviation
SRB: Spectral Retrieval Bias
IQR: Interquartile Range
AGRI: Advanced Geostationary Radiation Imager
FY-4C: Fengyun-4C
ABI: Advanced Baseline Imager
AHI: Advanced Himawari Imager
AMI: Advanced Meteorological Imager
FCI: Flexible Combined Imager
GHI: Geostationary High-speed Imager
HIRAS: Hyperspectral Infrared Atmospheric Sounder

References

  1. Kalluri, S.; Alcala, C.; Carr, J.; Griffith, P.; Lebair, W.; Lindsey, D.T.; Race, R.; Wu, X.; Zierk, S. From photons to pixels: Processing data from the advanced baseline imager. Remote Sens. 2018, 10, 177. [Google Scholar] [CrossRef]
  2. Zhu, Z.; Gu, J.; Xu, B.; Shi, C. Characterization of Himawari-8/AHI to Himawari-9/AHI infrared observations continuity. Int. J. Remote Sens. 2023, 45, 121–142. [Google Scholar] [CrossRef]
  3. Zhang, B.; Ichii, K.; Li, W.; Yamamoto, Y.; Yang, W.; Sharma, R.C.; Yoshioka, H.; Obata, K.; Matsuoka, M.; Miura, T. Evaluation of Himawari-8/AHI land surface reflectance at mid-latitudes using LEO sensors with off-nadir observation. Remote Sens. Environ. 2025, 316, 114491. [Google Scholar] [CrossRef]
  4. Kim, D.; Gu, M.; Oh, T.-H.; Kim, E.-K.; Yang, H.-J. Introduction of the Advanced Meteorological Imager of Geo-Kompsat-2a: In-Orbit Tests and Performance Validation. Remote Sens. 2021, 13, 1303. [Google Scholar] [CrossRef]
  5. Mousivand, A.; Straif, C.; Burini, A.; Lekouara, M.; Debaecker, V.; Hewison, T.; Stock, S.; Bojkov, B. In-Flight Calibration of Geostationary Meteorological Imagers Using Alternative Methods: MTG-I1 FCI Case Study. Remote Sens. 2025, 17, 2369. [Google Scholar] [CrossRef]
  6. Jia, X.; Li, X.; Wang, Z.; Han, C. Detection and Correction of Crosstalk Within Channels of Long-Linear-Array Detectors. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5000616. [Google Scholar] [CrossRef]
  7. Cao, Y.; Li, Y. Strip non-uniformity correction in uncooled long-wave infrared focal plane array based on noise source characterization. Opt. Commun. 2015, 339, 236–242. [Google Scholar] [CrossRef]
  8. Liu, G.; Sun, S. Analysis and suppression method of flickering pixel noise in images of infrared linear detector. J. Infrared Millim. Waves 2018, 37, 421–432. [Google Scholar] [CrossRef]
  9. Li, J.; Xie, G.; Liu, L.; Chen, X.; Dong, W.; Lei, Y. Characteristics and causes of non-effective pixels of HgCdTe FPA. Infrared Laser Eng. 2021, 50, 48–59. [Google Scholar] [CrossRef]
  10. Li, X.; Cao, Q.; Zhou, S.; Qian, J.; Wang, B.; Zou, Y.; Wang, J.; Shen, X.; Han, C.; Wang, L.; et al. Prelaunch radiometric characterization and calibration for long wave infrared band of FY-4B GHI. Acta Opt. Sin 2023, 43, 1212005. [Google Scholar] [CrossRef]
  11. Li, B.; Zhou, Y.; Xie, D.; Zheng, L.; Wu, Y.; Yue, J.; Jiang, S. Stripe noise detection of high-resolution remote sensing images using deep learning method. Remote Sens. 2022, 14, 873. [Google Scholar] [CrossRef]
  12. Wang, Z.; Wu, X.; Yu, F.; Fulbright, J.P.; Kline, E.; Yoo, H.; Schmit, T.J.; Gunshor, M.M.; Coakley, M.; Black, M.; et al. On-orbit calibration and characterization of GOES-17 ABI IR bands under dynamic thermal condition. J. Appl. Remote Sens. 2020, 14, 034527. [Google Scholar] [CrossRef]
  13. Gunshor, M.M.; Schmit, T.J.; Pogorzala, D.; Lindstrom, S.; Nelson, J.P. GOES-R series ABI imagery artifacts. J. Appl. Remote Sens. 2020, 14, 032411. [Google Scholar] [CrossRef]
  14. Xu, N.; Wu, P.; Ma, G.; Hu, Q.; Hu, X.; Wu, R.; Wang, Y.; Xu, H.; Chen, L.; Zhang, P. In-Flight Spectral Response Function Retrieval of a Multispectral Radiometer Based on the Functional Data Analysis Technique. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5604210. [Google Scholar] [CrossRef]
  15. Zhang, M.; Carder, K.; Muller-Karger, F.; Lee, Z.; Goldgof, D. Noise reduction and atmospheric correction for coastal applications of Landsat Thematic Mapper imagery. Remote Sens. Environ. 1999, 70, 167–180. [Google Scholar] [CrossRef]
  16. Huang, L.; Gao, M.; Yuan, H.; Li, M.; Nie, T. Stripe Noise Removal Algorithm for Infrared Remote Sensing Images Based on Adaptive Weighted Variable Order Model. Remote Sens. 2024, 16, 3189. [Google Scholar] [CrossRef]
  17. Kurihara, Y. A Bispectral Approach for Destriping and Denoising the Sea Surface Temperature from SGLI Thermal Infrared Data. J. Atmos. Ocean. Technol. 2023, 40, 161–173. [Google Scholar] [CrossRef]
  18. Sun, X.; Guan, L.; Lu, S. Estimation of sea surface temperature in the Arctic based on Fengyun-3D/MERSI II data. Intell. Mar. Technol. Syst. 2025, 3, 11. [Google Scholar] [CrossRef]
  19. Schmit, T.J.; Griffith, P.; Gunshor, M.M.; Daniels, J.M.; Goodman, S.J.; Lebair, W. A closer look at the ABI on the GOES-R series. Bull. Am. Meteorol. Soc. 2017, 98, 681–698. [Google Scholar] [CrossRef]
  20. Tansock, J.J.; Bancroft, D.R.; Butler, J.J.; Cao, C.; Datla, R.V.; Fox, N.P.; Walker, J.H. Guidelines for Radiometric Calibration of Electro-Optical Instruments for Remote Sensing (NIST Handbook 157); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2015. [Google Scholar] [CrossRef]
  21. Qian, H.; Wu, X.; Yu, F.; Shao, X.; Iacovazzi, R.; Wang, Z.; Hyelim, H. Detection and characterization of striping in GOES-16 ABI VNIR/IR bands. In Proceedings of the Earth Observing Systems XXIII, San Diego, CA, USA, 19–23 August 2018; SPIE: Bellingham, WA, USA, 2018; Volume 10764, p. 107641N. [Google Scholar] [CrossRef]
  22. Yu, F.; Wu, X.; Yoo, H.; Qian, H.; Shao, X.; Wang, Z.; Iacovazzi, R. Radiometric calibration accuracy and stability of GOES-16 ABI Infrared radiance. J. Appl. Remote Sens. 2021, 15, 048504. [Google Scholar] [CrossRef]
  23. Chander, G.; Hewison, T.J.; Fox, N.; Wu, X.; Xiong, X.; Blackwell, W.J. Overview of intercalibration of satellite instruments. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1056–1080. [Google Scholar] [CrossRef]
  24. Efremova, B.; Pearlman, A.J.; Padula, F.; Wu, X. Detector level ABI spectral response function: FM4 analysis and comparison for different ABI modules. In Proceedings of the SPIE Earth Observing Systems XXI, San Diego, CA, USA, 28 August–1 September 2016; SPIE: Bellingham, WA, USA, 2016; Volume 9972, p. 99720S. [Google Scholar] [CrossRef]
  25. Padula, F.; Cao, C. Detector-level spectral characterization of the Suomi NPP Visible Infrared Imaging Radiometer Suite long-wave infrared bands M15 and M16. Appl. Opt. 2015, 54, 5109–5116. [Google Scholar] [CrossRef]
  26. Pearlman, A.J.; Padula, F.; Cao, C.; Wu, X. The GOES-R Advanced Baseline Imager: Detector spectral response effects on thermal emissive band calibration. In Proceedings of the Sensors, Systems, and Next-Generation Satellites XIX, Toulouse, France, 21–24 September 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9639, p. 963917. [Google Scholar] [CrossRef]
  27. Orżanowski, T. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays. Springer Plus 2016, 5, 1831. [Google Scholar] [CrossRef]
  28. Bianconi, S.; Mohseni, H. Recent advances in infrared imagers: Toward thermodynamic and quantum limits of photon sensitivity. Rep. Prog. Phys. 2020, 83, 044101. [Google Scholar] [CrossRef]
  29. Mudau, A.E.; Willers, C.J.; Griffith, D.; le Roux, F.P.J. Non-uniformity correction and bad pixel replacement on LWIR and MWIR images. In Proceedings of the 2011 Saudi International Electronics, Communications and Photonics Conference (SIECPC), Riyadh, Saudi Arabia, 24–26 April 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–5. [Google Scholar] [CrossRef]
  30. Li, Z.; Xu, D.; Guo, X. Multiobjective optimization-based hyperspectral unsupervised band selection for anomaly detection. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
  31. Yang, C.; Xia, Y. Interval Pareto front-based multi-objective robust optimization for sensor placement in structural modal identification. Reliab. Eng. Syst. Saf. 2024, 242, 109703. [Google Scholar] [CrossRef]
  32. Pan, B.; Shi, Z.; Xu, X. Analysis for the weakly Pareto optimum in multiobjective-based hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3729–3740. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the multi-column detector observation principle; the arrow indicates the scanning direction of the instrument, and the red dashed line marks the boundary between swaths.
Figure 2. Schematic diagram of BDS.
Figure 3. Sensitivity of all detector elements.
Figure 4. Calibration bias of all detector elements.
Figure 5. Responsivity of all detector elements.
Figure 6. SRF distribution of all detector elements.
Figure 7. SRD of all detector elements.
Figure 8. SRB of all detector elements based on an ideal blackbody.
Figure 9. Relationship between SRD and ideal blackbody-based SRB.
Figure 10. Actual target spectral curve observed in-orbit by FY-3F HIRAS.
Figure 11. Relationship between SRD and actual-target-derived SRB.
Figure 12. Sensitivity of best detector elements.
Figure 13. Calibration bias of best detector elements.
Figure 14. Responsivity of best detector elements.
Figure 14. Responsivity of best detector elements.
Remotesensing 18 00224 g014
Figure 15. SRD of best detector elements.
Figure 15. SRD of best detector elements.
Remotesensing 18 00224 g015
Table 1. Key Performance Requirements for FY-4C AGRI.

| Band | Center Wavelength (µm) | Spectral Range (µm) | Spatial Resolution (km) | FPA Configuration | Sensitivity/SNR | Dynamic Range |
|------|------------------------|---------------------|-------------------------|-------------------|-----------------|---------------|
| 1 | 0.47 | 0.45~0.49 | 0.5 | 1280 × 1 | S/N ≥ 150 (ρ = 100%); S/N ≥ 3 (ρ = 1%) | 0~100% |
| 2 | 0.525 | 0.50~0.55 | 0.5 | 1280 × 1 | S/N ≥ 200 (ρ = 100%); S/N ≥ 5 (ρ = 1%) | 0~100% |
| 3 | Pan | 0.40~0.90 | 0.25 | 2560 × 1 | S/N ≥ 200 (ρ = 100%); S/N ≥ 5 (ρ = 1%) | 0~100% |
| 4 | 0.65 | 0.55~0.75 | 0.25 | 2560 × 1 | S/N ≥ 150 (ρ = 100%); S/N ≥ 3 (ρ = 1%) | 0~100% |
| 5 | 0.65 | 0.63~0.67 | 0.5 | 1280 × 1 | S/N ≥ 200 (ρ = 100%); S/N ≥ 5 (ρ = 1%) | 0~100% |
| 6 | 0.825 | 0.75~0.90 | 0.5 | 1280 × 1 | S/N ≥ 200 (ρ = 100%); S/N ≥ 5 (ρ = 1%) | 0~100% |
| 7 | 1.379 | 1.371~1.386 | 0.5 | 1280 × 2 | S/N ≥ 70 (ρ = 60%); S/N ≥ 2 (ρ = 1%) | 0~60% |
| 8 | 1.61 | 1.58~1.64 | 0.5 | 1280 × 2 | S/N ≥ 200 (ρ = 100%); S/N ≥ 5 (ρ = 1%) | 0~100% |
| 9 | 2.225 | 2.10~2.35 | 1 | 640 × 4 | S/N ≥ 200 (ρ = 100%); S/N ≥ 5 (ρ = 1%) | 0~100% |
| 10 | 3.75 | 3.50~4.00 | 1 | 640 × 4 | ≤0.2 K (300 K); ≤2.0 K (240 K) | 220 K~400 K |
| 11 | 4.05 | 3.972~4.127 | 1 | 640 × 4 | ≤0.2 K (300 K); ≤2.0 K (270 K) | 240 K~400 K |
| 12 | 6.25 | 5.80~6.70 | 2 | 320 × 4 | 0.1 K~0.2 K (300 K); 0.4 K~0.9 K (240 K) | 200 K~300 K |
| 13 | 6.95 | 6.75~7.15 | 2 | 320 × 4 | 0.1 K~0.25 K (300 K); 0.4 K~1.2 K (240 K) | 200 K~300 K |
| 14 | 7.42 | 7.24~7.60 | 2 | 320 × 4 | 0.1 K~0.25 K (300 K); 0.4 K~1.2 K (240 K) | 200 K~320 K |
| 15 | 8.55 | 8.30~8.80 | 2 | 320 × 4 | 0.1 K~0.25 K (300 K); 0.4 K~1.2 K (240 K) | 180 K~330 K |
| 16 | 9.61 | 9.42~9.80 | 2 | 320 × 4 | 0.1 K~0.2 K (300 K); 0.4 K~0.9 K (240 K) | 180 K~330 K |
| 17 | 10.80 | 10.30~11.30 | 2 | 320 × 4 | 0.1 K~0.2 K (300 K); 0.4 K~0.9 K (240 K) | 180 K~330 K |
| 18 | 12.00 | 11.50~12.50 | 2 | 320 × 4 | 0.1 K~0.2 K (300 K); 0.4 K~0.9 K (240 K) | 180 K~330 K |
| 19 | 13.30 | 13.00~13.60 | 2 | 320 × 4 | 0.3 K~0.5 K (300 K); 0.8 K~2.0 K (240 K) | 180 K~315 K |
Table 2. Optimal Detector Element Performance under Different Weighting Coefficients of FY-4C/AGRI’s Band 19.

| Strategy | Weights (NEdT : Bias : Responsivity : SRD) | Mean NEdT (mK) | Mean Calibration Bias (mK) | CV of Responsivity (%) | Mean SRD (%) |
|----------|--------------------------------------------|----------------|----------------------------|------------------------|--------------|
| All Elements | — | 106.1 | 102.4 | 12.75 | 1.50 |
| NEdT-Optimal | 1 : 0 : 0 : 0 | 69.7 | 57.2 | 10.14 | 1.52 |
| NEdT-Prioritized | 3 : 1 : 1 : 1 | 72.1 | 45.6 | 8.16 | 1.46 |
| Calibration Bias-Optimal | 0 : 1 : 0 : 0 | 90.2 | 28.5 | 10.43 | 1.52 |
| Calibration Bias-Prioritized | 1 : 3 : 1 : 1 | 80.4 | 33.0 | 8.42 | 1.46 |
| Responsivity-Optimal | 0 : 0 : 1 : 0 | 99.3 | 85.1 | 5.35 | 1.50 |
| Responsivity-Prioritized | 1 : 1 : 3 : 1 | 82.4 | 47.4 | 6.31 | 1.45 |
| SRD-Optimal | 0 : 0 : 0 : 1 | 106.6 | 102.7 | 12.03 | 1.26 |
| SRD-Prioritized | 1 : 1 : 1 : 3 | 81.5 | 49.7 | 9.43 | 1.34 |
| Balanced | 1 : 1 : 1 : 1 | 77.7 | 40.4 | 8.02 | 1.43 |
| Entropy Weight | 0.2236 : 0.0707 : 0.6050 : 0.1007 | 82.2 | 59.5 | 5.90 | 1.46 |
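The selection strategies compared in Table 2 can be illustrated with a short sketch: each metric is normalized robustly (deviation from the median scaled by the interquartile range), the four normalized scores are combined with a weight vector, and the redundant element with the best (lowest) composite score is selected. This is a minimal illustration of the idea, not the paper’s exact implementation; the function names (`modified_z`, `composite_score`) and the four-element metric values are hypothetical, and the published method’s normalization constants and sign conventions may differ.

```python
import numpy as np

def modified_z(x):
    """IQR-based modified Z-score: (x - median) / IQR (sketch)."""
    med = np.median(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    return (x - med) / iqr if iqr > 0 else np.zeros_like(x, dtype=float)

def composite_score(metrics, weights):
    """Weighted sum of modified Z-scores. All four metrics here
    (NEdT, calibration bias, responsivity dispersion, SRD) are
    'smaller is better', so a lower composite score is better."""
    z = np.column_stack([modified_z(np.asarray(m, dtype=float)) for m in metrics])
    return z @ np.asarray(weights, dtype=float)

# Hypothetical metrics for four redundant elements at one pixel position
nedt     = [0.10, 0.12, 0.09, 0.15]   # K
bias     = [0.05, 0.02, 0.08, 0.04]   # K
resp_dev = [0.01, 0.03, 0.02, 0.05]   # relative
srd      = [0.015, 0.014, 0.016, 0.013]

# "Balanced" strategy of Table 2: equal weights 1 : 1 : 1 : 1
scores = composite_score([nedt, bias, resp_dev, srd], weights=[1, 1, 1, 1])
best = int(np.argmin(scores))  # index of the selected element
```

Changing `weights` to, e.g., `[3, 1, 1, 1]` reproduces the “NEdT-Prioritized” strategy, and one-hot weights give the single-metric “-Optimal” strategies.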
Table 3. Optimal Detector Element Performance under Different Weighting Coefficients of FY-4C/AGRI’s Band 12.

| Strategy | Weights (NEdT : Bias : Responsivity : SRD) | Mean NEdT (mK) | Mean Calibration Bias (mK) | CV of Responsivity (%) | Mean SRD (%) |
|----------|--------------------------------------------|----------------|----------------------------|------------------------|--------------|
| All Elements | — | 17.89 | 2.622 | 1.899 | 1.521 |
| NEdT-Optimal | 1 : 0 : 0 : 0 | 17.07 | 2.353 | 1.813 | 1.528 |
| NEdT-Prioritized | 3 : 1 : 1 : 1 | 17.08 | 1.525 | 1.784 | 1.516 |
| Calibration Bias-Optimal | 0 : 1 : 0 : 0 | 17.92 | 0.818 | 1.845 | 1.518 |
| Calibration Bias-Prioritized | 1 : 3 : 1 : 1 | 17.43 | 0.878 | 1.797 | 1.515 |
| Responsivity-Optimal | 0 : 0 : 1 : 0 | 17.82 | 2.415 | 1.462 | 1.521 |
| Responsivity-Prioritized | 1 : 1 : 3 : 1 | 17.25 | 1.23 | 1.662 | 1.512 |
| SRD-Optimal | 0 : 0 : 0 : 1 | 17.86 | 2.381 | 2.020 | 1.454 |
| SRD-Prioritized | 1 : 1 : 1 : 3 | 17.21 | 1.168 | 1.776 | 1.503 |
| Balanced | 1 : 1 : 1 : 1 | 17.20 | 1.146 | 1.771 | 1.513 |
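The data-driven “Entropy Weight” row of Table 2 follows the standard entropy-weight method: criteria whose values vary more across elements carry more information and receive larger weights. The sketch below illustrates that general method under stated assumptions (min-max rescaling of each criterion before computing proportions, a hypothetical `entropy_weights` function and input matrix); the paper’s exact preprocessing may differ.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method for an (n_elements, n_metrics) matrix X.
    Returns one nonnegative weight per metric, summing to 1 (sketch)."""
    # Min-max rescale each column to [0, 1] (assumed preprocessing)
    Xs = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
    # Convert each column to proportions over the n elements
    P = Xs / (Xs.sum(axis=0) + 1e-12)
    n = X.shape[0]
    # Normalized Shannon entropy per criterion; 0*log(0) is treated as 0
    with np.errstate(divide="ignore", invalid="ignore"):
        E = np.nansum(np.where(P > 0, -P * np.log(P), 0.0), axis=0) / np.log(n)
    d = 1.0 - E                    # degree of diversification
    return d / (d.sum() + 1e-12)   # normalize weights to sum to 1

# Hypothetical four-element, four-metric example (NEdT, bias, resp. dev., SRD)
X = np.array([[0.10, 0.05, 0.01, 0.015],
              [0.12, 0.02, 0.03, 0.014],
              [0.09, 0.08, 0.02, 0.016],
              [0.15, 0.04, 0.05, 0.013]])
w = entropy_weights(X)
```

With real measured metrics, `w` plays the role of the weight vector in the composite score, yielding rows like Table 2’s 0.2236 : 0.0707 : 0.6050 : 0.1007.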
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Jia, X.; Li, X.; Han, C. Element Evaluation and Selection for Multi-Column Redundant Long-Linear-Array Detectors Using a Modified Z-Score. Remote Sens. 2026, 18, 224. https://doi.org/10.3390/rs18020224
