Article

Digital Imaging Simulation and Closed-Loop Verification Model of Infrared Payloads in Space-Based Cloud–Sea Scenarios

1 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2 National Key Laboratory of Infrared Detection Technologies, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 500 Yutian Road, Shanghai 200083, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(16), 2900; https://doi.org/10.3390/rs17162900
Submission received: 1 July 2025 / Revised: 19 August 2025 / Accepted: 19 August 2025 / Published: 20 August 2025

Abstract

Driven by the rising demand for digitalization and intelligent development of infrared payloads, next-generation systems must be developed within compressed timelines. High-precision digital modeling and simulation techniques offer essential data sources but often falter in complex space-based scenarios due to the limited availability of infrared characteristic data, hindering evaluation of payload effectiveness. To address this, we propose a digital imaging simulation and verification (DISV) model for high-fidelity infrared image generation and closed-loop validation in the context of cloud–sea target detection. Based on on-orbit infrared imagery, we construct a cloud cluster database via morphological operations and generate physically consistent backgrounds through iterative optimization. The DISV model subsequently calculates scene infrared radiation, integrating radiance computations with an electron-count-based imaging model for radiance-to-grayscale conversion. Closed-loop verification via blackbody radiance inversion is performed to confirm the model’s accuracy. The mid-wave infrared (MWIR, 3–5 µm) system achieves root mean square errors (RMSEs) < 0.004, peak signal-to-noise ratios (PSNRs) > 49 dB, and a structural similarity index measure (SSIM) > 0.997. The long-wave infrared (LWIR, 8–12 µm) system yields RMSEs < 0.255, PSNRs > 47 dB, and an SSIM > 0.994. Under 20–40% cloud coverage, the target radiance inversion errors remain below 4.81% and 7.30% for the MWIR and LWIR, respectively. The DISV model enables infrared image simulation across multi-domain scenarios, offering vital support for optimizing on-orbit payload performance.

1. Introduction

With the advancement of spaceborne remote sensing technology toward greater intelligence and autonomy, the demand for digital modeling of space-based infrared payloads is increasing [1]. When constructing high-fidelity digital models, accurate detection data under complex background conditions serve as critical inputs, directly affecting both the precision and engineering applicability of simulation systems [2,3]. For airborne observation scenarios involving dynamic targets, flexible adjustment and optimization of infrared payload parameters have become essential strategies for improving detection performance [4]. Collecting richer perceptual data across multi-dimensional spaces enables the discovery of latent physical phenomena [5], provides high-quality training samples for deep learning algorithms [6], and establishes robust data foundations and repositories for downstream target recognition tasks [7].
Infrared systems achieve detection and imaging by capturing temperature differences between airborne aircraft and their background, enabling round-the-clock target monitoring. Accordingly, modeling infrared sensors and simulating infrared images is of crucial significance for system-level design and performance evaluations [8]. Traditional methods rely on physical models of radiative transfer, incorporating atmospheric backgrounds [9], target characteristics [10], and system modulation transfer functions (MTFs) [11] to provide approximations in data-limited scenarios. However, they still depend on simplified physics or empirical parameters, which limits their accuracy in complex scenes such as cloud-covered or high-altitude environments. Furthermore, their limited scalability and scene diversity constrain their effectiveness in data-driven applications.
In physics-driven models, early research on complex background modeling in infrared imaging conducted separate simulations of cloud and sea-surface backgrounds. Lashansky et al. simulated ground-based infrared cloudy sky images [12], and Dulski et al. proposed modeling infrared images of skies and clouds [13]. Wilf et al. simulated sea surface images in the infrared band [14], and Yuan et al. modeled the MWIR radiation characteristics of the sea surface using measured data [15]. Li et al. [16] simulated physical effects in the infrared imaging chain, and Li et al. [17] modeled the thermal transfer effects on aircraft target skins. Cooke et al. [18] presented a method to characterize the electro-optical response of multi-spectral thermal imagers to user-specified parameters and spectral scenes. These works demonstrate the importance of accurate material and environmental characterization. However, physics-based methods face the following two limitations: (1) a lack of robust multi-physical coupling to represent complex backgrounds such as the cloud–sea interface and (2) limited validation using real measurement data, which affects simulation credibility.
Various comprehensive scene generation platforms support infrared imaging simulation. Tools such as Vega Prime [16], CAMEO-SIM [19,20], SE-Workbench-IR [21], and ShipIR/NTCS [22] integrate targets, backgrounds, atmospheric effects, and sensor models into unified multi-band simulation workflows. Other systems, such as NVTherm [23], SYTHER [24], IGOSS [25], and SPIRITS [26], focus on sensor-level modeling, including ray tracing, image degradation, and optical transfer functions. Platforms such as IRISIM [27] and DIRSIG [28] support multi-spectral simulation and image quality assessment, while IRMA [29] emphasizes high-fidelity infrared texture synthesis and thermal radiation modeling. However, they remain limited by closed architectures, simplified background modeling, and a lack of end-to-end validation.
Data-driven methods have demonstrated significant potential in infrared image synthesis through deep learning. Deep learning-based infrared generation models fall into two categories. The first is direct infrared generation, where models trained on infrared images produce similar images based on learned distributions [30]. The second involves models that convert RGB images to infrared through cross-modal mapping. CycleGAN [31,32], Pix2Pix GAN [33,34], Attention GAN [35], and MWIRGAN [36] have proven effective in transforming visible images into infrared. ThermalGAN [37] uses two GANs to predict thermal segmentation maps and local temperature contrasts, thereby converting RGB images into long-wave infrared (LWIR) images. However, purely data-driven models often lack physical interpretability and cannot fully support high-end infrared system design or validation.
In summary, several critical challenges remain unresolved: (1) Infrared images and radiometric measurements in complex environments are difficult to simulate [38], and existing datasets lack the spatial–temporal resolution required for deep learning-based training [39,40]. (2) Most simulation models stop at entrance pupil radiance calculation [13,14,15,41,42,43,44], without incorporating sensor effects such as noise generation, radiometric calibration, and ADC quantization. (3) Existing frameworks rarely provide full-chain modeling from scene radiance to digital output, making it difficult to support detection or recognition tasks [45,46]. (4) Deep networks rarely integrate radiative transfer physics, resulting in images that are visually plausible but not radiometrically calibrated [31,32,33,34], thus limiting their use in engineering applications.
To address the reconstruction and validation of infrared detection scenarios under complex backgrounds, we propose a digital image simulation and verification (DISV) algorithm for infrared image simulation and closed-loop validation. First, we construct an on-orbit cloud database from the New Technology Satellite and Qilu Satellite-2 (QLSAT-2) [47]. An iterative optimization method generates infrared cloud mask images, enhancing the realism of cloud scene simulation. Then, using the Pierson–Moskowitz (PM) sea wave spectral model, we simulate the infrared background of the sea surface, enabling the reconstruction of cloud–sea composite backgrounds at a space-based observation scale. Subsequently, we simulate the infrared radiative characteristics of targets and superimpose them onto a complex background to produce integrated infrared radiance images. Based on this simulation framework, we establish a conversion pipeline from radiance to infrared grayscale images. Finally, using blackbody calibration principles, we apply blackbody reference data and calibration coefficients to invert the simulated grayscale images, completing closed-loop validation of the infrared radiative characteristic simulation. This method demonstrates significant advantages in modeling complex backgrounds, accurately simulating target radiation, and validating simulation results. It provides a robust foundation for performance evaluation and algorithm development in infrared imaging systems. Our main contributions are as follows.
  • A space-based, on-orbit infrared cloud cluster database is constructed using real infrared imagery from the New Technology Satellite and QLSAT-2. The database conforms to the actual distribution of clouds and supports high-credibility background modeling.
  • A complex scene model for space-based infrared detection at varying cloud coverage (CC) rates is proposed and validated through comparison with moderate-resolution atmospheric transmission (MODTRAN) simulation results. The relative error (RE) of the simulated radiance is less than 6.74% for the 3–5 µm band and 7.20% for the 8–12 µm band, demonstrating the model’s accuracy and reliability.
  • A DISV model based on a digital imaging chain is designed to transform radiance images into digital electrons and grayscale images. The model integrates multiple noise sources, enhancing its realism and applicability to actual sensor systems, and is suitable for algorithm testing and system design.
  • A closed-loop simulation validation method is developed to enable direct comparison between simulated and inverted images. Quantitative analysis shows that the root mean square error (RMSE) between these images remains below 0.255, while the peak signal-to-noise ratio (PSNR) exceeds 47 dB, and the structural similarity index measure (SSIM) surpasses 0.994, confirming the system’s superior performance over existing models. Comparative verification of the target radiance inversion yields an RE of 4.81% for the mid-wave infrared (MWIR) band and 7.30% for the LWIR band, validating the practicality and robustness of the proposed methodology.
The remainder of this article is organized as follows. Section 2 introduces the DISV model in detail. Section 3 presents the experimental results and analysis. Section 4 discusses the robustness and advantages of the method. Finally, Section 5 presents the conclusions.

2. Theoretical Modeling and Method

This study proposes the DISV model—an end-to-end framework converting infrared radiation to imagery. As illustrated in Figure 1, the model constructs space-based detection scenarios, generates infrared radiance images, converts radiance to grayscale images through electronic signal processing, and verifies the results through image/radiation inversion.

2.1. Space-Based Infrared Detection Scenario Construction

Combining on-orbit infrared cloud images, we constructed an on-orbit cloud cluster database to store cloud shapes. Image processing and data enhancement techniques were used to increase cloud data diversity and improve dataset quality, as shown in Figure 2.
Binary cloud maps were first obtained through threshold segmentation and denoted as $img$. Morphological reconstruction was used to fill inner holes in the binary cloud map, ensuring the integrity of cloud-mass areas; the structuring element used for hole filling was defined as $SE$. Morphological closing was then applied to fill small gaps at the cloud-mass edges, making the target area more coherent:
$$img_{close} = img \bullet SE = (img \oplus SE) \ominus SE$$
The morphological opening operation is used to eliminate isolated noise in images.
$$img_{open} = img_{close} \circ SE = (img_{close} \ominus SE) \oplus SE$$
Subsequently, the erosion operation is applied to remove artifacts or interferences from the edges of cloud masses.
$$img_{erode} = img_{open} \ominus SE$$
Finally, the connected region labeling algorithm was used to segment and label distinct cloud masses in the image, quantifying cloud regions to support mass segmentation and cloud cluster database construction. This study utilizes data from the QLSAT-2 and the New Technology Satellite to extract cloud information from multi-source on-orbit satellite images through morphological operations, thereby constructing an on-orbit cloud cluster database. This approach demonstrates versatility and is not dependent on specific satellite data sources.
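For illustration, this extraction pipeline can be sketched with standard morphology routines; the threshold value and the 3 × 3 structuring element below are assumptions for demonstration, not the parameters used in this study.

```python
# A minimal sketch of the cloud-mass extraction pipeline described above,
# built on scipy.ndimage. Threshold and SE size are illustrative assumptions.
import numpy as np
from scipy import ndimage

def extract_cloud_masses(ir_image: np.ndarray, threshold: float, se_size: int = 3):
    """Segment an on-orbit IR frame into labeled cloud masses."""
    se = np.ones((se_size, se_size), dtype=bool)                 # structuring element SE
    img = ir_image > threshold                                   # threshold segmentation -> binary cloud map
    img = ndimage.binary_fill_holes(img, structure=se)           # fill inner holes
    img_close = ndimage.binary_closing(img, structure=se)        # close small gaps at cloud edges
    img_open = ndimage.binary_opening(img_close, structure=se)   # remove isolated noise
    img_erode = ndimage.binary_erosion(img_open, structure=se)   # strip edge artifacts
    labels, n_masses = ndimage.label(img_erode)                  # connected-region labeling
    return labels, n_masses
```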
As shown in Algorithm 1, we generate the cloud mask image based on the cloud cluster database. The initial parameters for cloud mask simulation images, such as CC rate and image size, were set based on the cloud cluster database. A cloud mask image was then simulated. A cloud mass was randomly selected from the database, and the target region was checked for emptiness (cloud-free). If the region was cloud-free, the selected cloud mass was overlaid and aligned in shape and position with the target region to avoid interference. This process generated randomized cloud mask simulation images for research and testing. If CC did not meet the required threshold, the overlay process was repeated; otherwise, the current image was output as the final cloud mask simulation result.
Algorithm 1: Cloud Mask Generation Process
(Pseudocode rendered as an image in the original article.)
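Because the pseudocode itself is not reproduced here, the following minimal Python sketch restates the overlay loop described above; the database format (a list of binary cloud-mass arrays) is an assumed representation.

```python
# A hedged sketch of Algorithm 1: cloud masses are drawn at random from the
# database and overlaid onto cloud-free regions until the target CC is met.
import random
import numpy as np

def generate_cloud_mask(cloud_db, shape=(256, 320), target_cc=0.3, max_tries=10000):
    mask = np.zeros(shape, dtype=bool)
    n_total = mask.size
    for _ in range(max_tries):
        if mask.sum() / n_total >= target_cc:     # CC threshold met -> output mask
            break
        cloud = random.choice(cloud_db)           # random cloud mass from database
        h, w = cloud.shape
        if h > shape[0] or w > shape[1]:
            continue
        r = random.randrange(shape[0] - h + 1)    # random placement
        c = random.randrange(shape[1] - w + 1)
        region = mask[r:r + h, c:c + w]
        if not (region & cloud).any():            # target region must be cloud-free
            region |= cloud                       # overlay the aligned cloud mass
    return mask
```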
After completing the morphological construction of cloud layers, we superimpose the cloud mask image onto the sea surface generated by PM wave spectrum simulation, thereby constructing the space-based infrared detection scenario.

2.2. Calculation of Space-Based Infrared Radiation Characteristics

First, the total radiance of the space-based infrared background can be calculated as follows:
$$L_{\lambda} = \tau_{atm}\,\varepsilon(\lambda)\,\frac{M(T,\lambda)}{\pi} + L_{refl}(\lambda) + L_{path}(\lambda)$$
where $\tau_{atm}$ is the atmospheric transmittance, $T$ represents the temperature in Kelvin (K), $\varepsilon$ represents the emissivity, $\lambda$ represents the wavelength, $M(T,\lambda)$ is the spectral radiant emittance, and $L_{path}$ is the atmospheric path radiance. Both $\tau_{atm}$ and $L_{path}$ are calculated using the MODTRAN 5.0 software. The reflected solar radiation from the background is calculated as follows:
$$L_{refl} = \int \tau\, L_{sun}\cos\theta_i\,\mathrm{BRDF}(\lambda,\theta_i,\varphi_i,\theta_r,\varphi_r)\, d\omega$$
where $\tau$ is the surface reflectivity, $d\omega$ is the solid angle subtended by solar radiation reaching the detector background, and $L_{sun}$ is the solar radiance. The incidence zenith angle ($\theta_i$), reflection zenith angle (RZA) ($\theta_r$), incidence azimuth angle ($\varphi_i$), and reflection azimuth angle ($\varphi_r$) are also used in the calculation.
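For concreteness, a minimal sketch of the emission term in these equations is given below; $\tau_{atm}$ and $L_{path}$ stand in for MODTRAN 5.0 outputs, and the reflected term $L_{refl}$ is passed in directly rather than evaluated through the BRDF integral.

```python
# A sketch of the background-radiance equation L = tau*eps*M/pi + L_refl + L_path.
# planck_M is the standard Planck spectral radiant emittance; all other inputs
# are placeholders for values produced elsewhere in the pipeline.
import numpy as np

H_PLANCK, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck_M(T, lam_um):
    """Spectral radiant emittance M(T, lambda) in W·m^-2·um^-1 (lam_um in um)."""
    lam = lam_um * 1e-6
    return (2 * np.pi * H_PLANCK * C_LIGHT**2 / lam**5
            / (np.expm1(H_PLANCK * C_LIGHT / (lam * K_B * T)))) * 1e-6

def background_radiance(T, lam_um, eps, tau_atm, L_path, L_refl):
    """Total background radiance in W·m^-2·sr^-1·um^-1."""
    return tau_atm * eps * planck_M(T, lam_um) / np.pi + L_refl + L_path
```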
The infrared radiation of an aircraft target includes contributions from the engine exhaust plume and the airframe. The plume emits radiation based on its temperature, emissivity, and projected area, while the airframe’s radiation accounts for both its own thermal emission and reflected solar radiation (due to high-reflectivity coatings). The total infrared radiation of the target is the sum of these two components, following the theoretical models and parameter definitions detailed in [4].

2.3. Digital Imaging Simulation and Generation

After calculating the target radiation intensity and background radiance, we perform a digital simulation of the infrared imaging payload. To achieve high-fidelity infrared image generation, a physical conversion model is established to transform radiation signals into grayscale images. Focusing on photoelectric conversion characteristics, we developed a theoretical model framework for electron counts. (1) The target infrared signal electron count model characterizes the detector’s quantized response to target radiation; (2) the background irradiation electron count model quantifies environmental radiation interference; (3) the noise electron count model describes the detection system’s inherent noise characteristics. Through coupled multi-physics field calculations involving these models, we developed an infrared image numerical simulation method that spans radiation transmission, photoelectric conversion, and noise interference, forming a foundation for evaluating and optimizing subsequent imaging systems.
Due to infrared detection limitations and long-distance imaging, space targets typically appear as diffuse point sources characterized by a Gaussian point spread function (PSF). The energy concentration (EC) of a target, defined as the ratio of energy within a single pixel to the total energy distributed by the PSF, is another key characteristic [48]. To ensure spatial consistency, cloud clusters were extracted from on-orbit infrared images, rescaled to match the detector’s focal plane, and overlaid onto sea surface radiance simulated using the PM model, followed by spatial resampling to align with the sensor pixels. For point targets, a Gaussian PSF with an energy concentration coefficient modeled sub-pixel radiance spreading, reflecting the sensor’s optical response. Key detector and optical parameters, including focal length, entrance pupil diameter, and F-number, were incorporated into a pinhole camera model to accurately map scene radiance to pixel locations. This procedure ensured that the simulated radiance distribution aligned with the sensor’s field of view, providing reliable support for target detection and radiometric inversion.
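As a brief illustration of the EC definition, the in-pixel energy fraction of a Gaussian PSF centered on a pixel has a closed form; the σ and pixel pitch below are assumed values, not parameters from this study.

```python
# EC = in-pixel energy / total PSF energy for a centered, separable 2-D Gaussian:
# each axis integral is erf(d / (2*sqrt(2)*sigma)), and the two axes multiply.
from math import erf, sqrt

def energy_concentration(sigma: float, pixel_pitch: float) -> float:
    a = pixel_pitch / (2 * sqrt(2) * sigma)
    return erf(a) ** 2

print(energy_concentration(sigma=10e-6, pixel_pitch=30e-6))  # ~0.75
```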
The number of electrons generated by a detector element in response to the received target energy is given by
$$N_s = QE \cdot \frac{\lambda_0}{hc} \cdot \frac{I_{target}\,\pi D^2\,\tau_a\tau_o \cdot EC}{4H^2} \cdot T_{int}$$
where $h = 6.6 \times 10^{-34}\ \mathrm{J \cdot s}$ is Planck's constant and $c = 3 \times 10^{8}\ \mathrm{m/s}$ is the speed of light. Detector-specific parameters include the operational central wavelength $\lambda_0$, the integration time $T_{int}$, and the quantum efficiency $QE$, representing the photoelectron conversion rate. Transmission path components include the altitude-dependent atmospheric transmittance $\tau_a$, the optical transmittance $\tau_o$, and the effective entrance pupil diameter $D$. The target spectral radiant intensity is $I_{target}$, and $H$ is the detection range.
The number of electrons generated by the background radiation can be expressed by
$$N_{bg} = QE \cdot \frac{\lambda}{hc} \cdot \frac{\pi d_x^2\,\tau_a\tau_o}{4F_{\#}^2}\, L_{bg} \cdot T_{int}$$
where $L_{bg}$ represents the background radiance, $F_{\#}$ is the F-number, and $d_x$ represents the pixel size.
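The two electron-count equations above can be transcribed directly; the helper functions below are a sketch with illustrative arguments rather than the Table 3 parameters.

```python
# Direct transcription of the signal- and background-electron equations.
import math

H, C = 6.626e-34, 2.998e8  # Planck constant (J·s), speed of light (m/s)

def target_electrons(QE, lam0, I_target, D, tau_a, tau_o, EC, H_range, T_int):
    """N_s: signal electrons from a point target (lam0 in m, I_target in W/sr)."""
    return QE * lam0 / (H * C) * I_target * math.pi * D**2 * tau_a * tau_o * EC \
        / (4 * H_range**2) * T_int

def background_electrons(QE, lam, dx, tau_a, tau_o, F_num, L_bg, T_int):
    """N_bg: background electrons for pixel size dx (m) and radiance L_bg."""
    return QE * lam / (H * C) * math.pi * dx**2 * tau_a * tau_o \
        / (4 * F_num**2) * L_bg * T_int
```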
The total noise electrons $n_{total}$ are defined as the number of electrons accumulated on the integration capacitor by the end of the integration period, when the output signal equals the root mean square (RMS) noise voltage. These electrons are contributed by several noise sources, which are listed in Table 1. Photon noise $n_{ph}$ represents the RMS pixel noise due to signal, background, and dark current (units of e-/pixel). Temporal noise includes readout noise $n_{read}$, the RMS of pixel readout noise (units of e-/pixel). Spatial noise sources include response non-uniformity $n_{MFP}$, the RMS of pixel response non-uniformity noise (e-/pixel), and dark current non-uniformity $n_{AFP}$, the RMS of pixel dark current non-uniformity noise (e-/pixel). In subsequent sections, $N$ refers to the number of electrons in a pixel, while $n$ denotes the RMS distribution of noise per pixel [4].
Photon noise refers to fluctuations in the rate at which background photons reach the detector’s sensitive elements. It sets the fundamental limit on the noise performance of photon detectors. $n_{ph}$ represents the RMS of all photon-related noise and can be decomposed by radiation source as follows:
$$n_{ph} = \sqrt{N_s^2 + N_{bg}^2 + N_{instr}^2 + N_{dark}^2}$$
Here, $N_{instr}$ represents the optical instrument background noise, which is the sum of contributions from various components in the optical system, including the Dewar window radiation. Its mathematical expression is
$$N_{instr} = \sum_i B(T_i) \cdot d_x^2 \cdot \frac{\lambda}{hc} \cdot QE \cdot T_{int}$$
where $N_{instr,i}(T_i)$ denotes the number of near-field radiation electrons generated by optical element $i$ at its working temperature $T_i$, and $B(T_i)$ is the instrument background irradiance ($\mathrm{W/m^2}$) from those elements. $N_{dark}$ arises from the dark current of the infrared detector and is positively correlated with the integration time. It is calculated as follows:
$$N_{dark} = \frac{I_{dark} \cdot d_x^2 \cdot T_{int}}{q}$$
where $I_{dark}$ is the dark current density per unit area, in units of $\mathrm{A/cm^2}$.
Readout noise is an additive temporal noise generated in the readout circuitry of the focal plane. It primarily arises from thermal noise produced by the resistance of the reset switch during conduction. The noise voltage across the integrating capacitor $C_{int}$ is
$$v_{read} = \sqrt{\frac{R_S \cdot T_{junction}}{C_{int}}}$$
where $R_S$ is the resistance of the reset switch, $T_{junction}$ is the junction temperature of the diode, and $C_{int}$ is the integrating capacitance.
Therefore, the number of readout noise electrons is
$$n_{read} = \frac{\sqrt{R_S \cdot T_{junction} \cdot C_{int}}}{q}$$
Noise caused by response non-uniformity is classified as multiplicative noise and differs from dark current non-uniformity in that it is proportional to the exposure level. Variations in pixel quantum efficiency, uneven projection across window regions, focal plane array (FPA) structure, and wafer processing all contribute to pixel response rate non-uniformity within the FPA. The noise electrons from response non-uniformity are calculated as follows:
$$n_{rnn} = U_{rnn} \cdot N_s$$
where $U_{rnn}$ is the response non-uniformity coefficient.
Additive noise from dark current non-uniformity is independent of the exposure level and is a primary noise source in low-contrast images. The direct current through the p–n junction comprises two components: the photovoltaic diode dark current and photocurrent. Under illumination, monochromatic background radiation induces photocurrent on the photosensitive surface. In darkness, the photovoltaic diode exhibits only dark current, which becomes negligible under zero bias. The noise from dark current non-uniformity is a fixed-pattern additive noise, particularly significant in dark scenes. The number of corresponding noise electrons is calculated as follows:
$$n_{dcn} = U_{dark} \cdot N_{dark}$$
where $U_{dark}$ is the non-uniformity coefficient of the dark current.
The total number of noise electrons is
$$N_{noise} = \sqrt{n_{ph}^2 + n_{read}^2 + n_{rnn}^2 + n_{dcn}^2}$$
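A compact sketch of this noise budget, following the four noise equations above as reconstructed, is given below; all inputs are in electrons, and the coefficients are illustrative.

```python
# RSS noise budget: photon noise plus readout, response-non-uniformity,
# and dark-current-non-uniformity terms, combined in quadrature.
import math

def total_noise_electrons(N_s, N_bg, N_instr, N_dark, n_read, U_rnn, U_dark):
    n_ph = math.sqrt(N_s**2 + N_bg**2 + N_instr**2 + N_dark**2)  # photon noise
    n_rnn = U_rnn * N_s        # response non-uniformity (multiplicative)
    n_dcn = U_dark * N_dark    # dark-current non-uniformity (additive)
    return math.sqrt(n_ph**2 + n_read**2 + n_rnn**2 + n_dcn**2)
```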
The digital number (DN) of each image pixel is computed from the numbers of electrons from the target, background, and noise. The target pixel grayscale value $DN_s$ is calculated as
$$DN_s = \frac{N_s}{N_{full}} \times 2^{N_{bit}}$$
where $N_{full}$ denotes the full-well capacity of a single pixel’s integrating capacitor, $N_{bit}$ is the bit depth of the electronic quantization (typically 8 to 16 bits), and $DN_{bg}$ is the mean grayscale value of the background, given by
$$DN_{bg} = \frac{N_{bg}}{N_{full}} \times 2^{N_{bit}}$$
The total noise grayscale $DN_{noise}$ is expressed at $N_{bit}$ quantization and is modeled as Gaussian white noise. It is superimposed onto the $N \times N$ infrared image as
$$DN_{img} = DN_{bg} + DN_{noise} + DN_s$$
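A minimal sketch of this quantization chain is given below; the full-well capacity, bit depth, and target position are assumed values.

```python
# Electrons -> grayscale: scale by 2^N_bit / N_full, add Gaussian white noise,
# place the point target, and clip to the quantizer range.
import numpy as np

def radiance_to_grayscale(N_s, N_bg, n_noise_rms, N_full=1e7, N_bit=14,
                          shape=(256, 320), target_rc=(128, 160), rng=None):
    rng = rng or np.random.default_rng()
    scale = 2**N_bit / N_full                                # electrons -> DN
    DN_img = N_bg * scale + rng.normal(0.0, n_noise_rms * scale, shape)
    DN_img[target_rc] += N_s * scale                         # add target grayscale
    return np.clip(np.round(DN_img), 0, 2**N_bit - 1).astype(np.uint16)
```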
Sensors located in different columns of the infrared focal plane array use distinct readout circuits, and variations in their bias voltages produce stripe noise—manifesting as alternating bright and dark bands, typically in horizontal or vertical orientations [49]. Stripe noise is a common image artifact. We simulate non-periodic additive stripe noise using the following degradation model:
$$DN_{degradation} = DN_{img} + S$$
where S is the additive stripe component. Stripe noise is superimposed onto the image using signal flow summation, which more accurately reflects the practical non-uniformity switching conditions found in infrared payload systems.
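A sketch of this degradation model follows; the stripe density and amplitude are assumed values.

```python
# Non-periodic additive stripe noise: random columns receive random biases,
# producing the alternating bright/dark bands described above.
import numpy as np

def add_stripe_noise(dn_img, stripe_fraction=0.1, amplitude=20.0, rng=None):
    rng = rng or np.random.default_rng()
    rows, cols = dn_img.shape
    S = np.zeros_like(dn_img, dtype=float)
    for c in rng.choice(cols, size=int(stripe_fraction * cols), replace=False):
        S[:, c] = rng.uniform(-amplitude, amplitude)   # per-column bias
    return dn_img + S                                  # signal-flow summation
```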

2.4. Infrared Imaging Closed-Loop Inversion Verification

Radiometric calibration establishes the relationship between input radiation and output image grayscale values by computing system response coefficients, as illustrated in Figure 3. System calibration serves as the foundation for radiation measurement inversion in infrared radiometric systems. Target thermal radiation enters the infrared imaging optical system, reaches the FPA detector, and undergoes photoelectric conversion to generate the grayscale image. The infrared radiometric system then performs radiation inversion of target characteristics from the output grayscale using the system response parameters.
Blackbody calibration defines the quantitative relationship between the detector digital number $DN$ and radiance $L$ for radiation inversion. Grayscale responses $\{DN, L\}$ are acquired at different blackbody temperatures to establish the following calibration model:
$$L_{inversion} = \frac{DN - b}{k}$$
where k is the system gain coefficient and b is the system offset.
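In practice, $k$ and $b$ reduce to a linear least-squares fit over the blackbody measurement pairs; a minimal sketch (assuming NumPy arrays of paired $DN$ and $L$ values) is:

```python
# Fit DN = k * L + b from blackbody measurements, then invert grayscale to radiance.
import numpy as np

def fit_calibration(DN, L):
    """Least-squares fit of DN = k * L + b; returns (k, b)."""
    k, b = np.polyfit(L, DN, deg=1)
    return k, b

def invert_radiance(DN, k, b):
    return (np.asarray(DN, dtype=float) - b) / k
```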
In the target detection scenario, the difference in irradiance at the camera entrance pupil between an airborne target (after atmospheric transmission) and the background at the same location represents an inherent radiometric distinction. This difference depends solely on the detection wavelength, spatial resolution, and background clutter. The radiance at the target’s location is computed using Equation (20), and combined with Equations (6) and (7), the radiometric intensity of the inverted target is obtained.

3. Results

3.1. Simulation Results of Infrared Radiation Images in Complex Detection Scenarios

By integrating the on-orbit cloud cluster database with the cloud mask image generation algorithm, we can produce space-based cloud–sea images with CC ranging from 0% to 50%, as shown in Figure 4. The CC of these generated images is limited by the actual coverage observed in the on-orbit cloud images. To ensure the credibility of the simulations, the CC of the generated images does not exceed that of the original images.
Based on cloud imagery acquired by the Qilu-2 and New Technology satellites, our analysis revealed that CC predominantly falls within the 20–40% range. Accordingly, five experimental groups with representative CC ratios spanning this range were configured to validate the cloud–sea background radiation model. The experimental setup was as follows: geographic coordinates of 150°E, 35°N; date and time of 20 April 2019, at 12:30 PM; and solar zenith angle of 24.39°, solar azimuth angle of 198.72°, reflection azimuth angle of 18.72°, and reflection zenith angle of 0°. Cloud–sea temperatures were referenced from an inverted temperature dataset. Images were generated with a resolution of 320 × 256 pixels, and the cloud type was set as cumulus. An aircraft model was placed at a cruising altitude of 10 km, with a pitch angle of 5°, yaw angle of 20°, and roll angle of 0°. The ambient temperature was 230 K, and the aircraft speed was Mach 0.8, resulting in a calculated skin temperature of 254 K. The plume temperature was set to 700 K.
Based on Wien’s displacement law, the peak wavelength of radiation from the target falls within the MWIR and LWIR bands. Specifically, the peak wavelength for the 254 K skin lies in the LWIR band, while that for the 700 K plume falls in the MWIR band. Using these spectral characteristics, the radiation intensity in these two bands was calculated, yielding $1.3567 \times 10^{3}$ W/sr in the MWIR band and $1.3671 \times 10^{4}$ W/sr in the LWIR band [42,43,44,50]. Simulated infrared radiance images for complex background conditions were obtained and are shown in Figure 5 and Figure 6. Specifically, Figure 5 presents the MWIR radiance image (3–5 µm), and Figure 6 shows the LWIR radiance image (8–12 µm).
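As a quick numerical check of this reasoning, the peak wavelengths follow directly from the Wien constant ($b \approx 2897.8\ \mu\mathrm{m \cdot K}$):

```python
# Wien's displacement law: lambda_peak = b / T, with b = 2897.8 um·K.
WIEN_B = 2897.8  # um·K
for name, T in (("skin", 254.0), ("plume", 700.0)):
    print(f"{name}: lambda_peak = {WIEN_B / T:.2f} um")
# skin: ~11.41 um -> LWIR (8-12 um); plume: ~4.14 um -> MWIR (3-5 um)
```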
To validate the method accuracy in complex sea–cloud backgrounds, we employed MODTRAN 5.0 as the benchmark for comparative experiments across the MWIR and LWIR bands. This selection derives from MODTRAN’s distinctive capabilities, which are a high spectral resolution and the ability to demonstrate proficiency in simulating complex atmospheric dynamics [9]. These attributes directly address the requirements for precise radiative characterization in multi-band marine cloud environments. Benchmarking against this industry-standard model enhances experimental interpretability through direct comparability with established physical simulations.
Analysis of static data from cloud–sea background average radiance simulations under varying cloud coverage (CC = 20–40%) shows that the proposed approach provides stable and accurate radiance calculations. As shown in Table 2, the MWIR band results indicate a decrease in average radiance from 0.581 W·sr⁻¹·m⁻² (CC = 20%) to 0.580 W·sr⁻¹·m⁻² (CC = 40%), representing a 0.17% reduction. The simulation confirms the observation in Figure 5, where clouds exhibit lower radiance than the sea surface, appearing as “dark clouds” [43]. Additionally, the image’s total radiance decreases with increasing CC, validating the expected radiative behavior. The MODTRAN simulations yield a constant radiance of 0.568 W·sr⁻¹·m⁻² in the MWIR band. In contrast, our model achieves a highly consistent result, with an error margin below 2.43%, confirming its reliability. In the LWIR band, the results similarly reveal an inverse correlation between average radiance and CC, decreasing from 24.971 W·sr⁻¹·m⁻² at CC = 20% to 24.844 W·sr⁻¹·m⁻² at CC = 40%. Unlike MODTRAN, which does not account for CC variation, our model overcomes this limitation by integrating cloud parameters into the simulation, enabling more realistic results under dynamic background conditions. This method offers a robust framework for generating infrared radiation images in complex, multi-spectral detection scenarios with controllable CC. Leveraging real data sources and advanced algorithms, it produces accurate cloud–sea infrared background imagery applicable to a wide range of detection environments.

3.2. Simulation Results of Infrared Gray Images

For engineering applications, we defined two parameter sets for the MWIR and LWIR systems, as presented in Table 3. Using the simulated MWIR and LWIR radiance images, we performed infrared imaging simulations to generate the corresponding grayscale images.
Based on the target’s PSF, the radiance data of the target and background were processed using our proposed digital image simulation model. First, the radiance data were converted into electron counts, then quantized, and ultimately transformed into grayscale images. As shown in Figure 7, in the MWIR band the target’s radiance is significantly higher than that of the background, so the target appears as a bright spot in the image. In contrast, in the LWIR band, the aircraft target appears as a dark, spot-like feature (Figure 8), which is consistent with measured on-orbit images. The dynamic range of the simulated images in both the MWIR and LWIR systems is 72 dB. In the LWIR band the aircraft target exhibits a significantly lower brightness temperature than the background, resulting in negative contrast [51,52]. This occurs because the aircraft’s temperature is typically lower than that of the surrounding environment, a difference that becomes more pronounced at high altitudes due to ambient temperature, flight speed, and solar radiation effects. This negative contrast phenomenon is validated under cloudy background conditions [44]. The difference in brightness temperature between the target and the background is a key feature in infrared target detection. Our model provides simulated infrared images that offer robust data support for developing and evaluating infrared target detection algorithms.
As shown in Figure 9 and Figure 10, we simulated additive non-periodic streak noise. During this simulation, the positions of the streaks (i.e., contamination of specific columns or rows) and their intensity values are randomly distributed across the image. These streaks vary in brightness and cause pronounced vertical interference that significantly degrades visual image quality. The presence of streak noise conceals target features, making it difficult for detection algorithms to accurately identify targets. Such noise is typically caused by physical limitations within the sensor system. It interferes with radiometric correction, leading to persistent errors in the corrected image and negatively impacting subsequent quantitative analysis [49]. The proposed algorithm offers data support for the effective removal of streak noise in infrared images.

3.3. Closed-Loop Verification Results

3.3.1. Image Inversion Verification Results Based on Blackbody Calibration

The core technological challenge in space-based aerial target detection is achieving accurate infrared detection of moving targets within complex backgrounds. We employed a basic blackbody calibration method to determine the calibration temperature based on simulated radiance data. Initial validation confirmed that the cloud–sea background temperature range of 230–290 K is consistent with known physical principles. A calibration coefficient matrix was successfully generated through experimental procedures, as shown in Figure 11. A strong linear relationship was observed between the radiance and grayscale values, with fitted correlation coefficients exceeding 0.999 in both the MWIR and LWIR bands.
To verify the digital imaging model’s effectiveness, an inversion experiment was conducted in simulated complex detection scenarios. Grayscale images (Figure 7 and Figure 8) were used for inversion after imaging. To validate the accuracy of the background inversion, we calculated the average background value in the target region’s neighborhood and applied the calibration coefficient to reconstruct the radiance image. The inversion results, shown in Figure 12 and Figure 13, demonstrate strong consistency between the retrieved and simulated radiance. This consistency confirms that the model, which adopts a basic calibration technique, achieves good accuracy and reliability. Moreover, it can serve as a data source, supporting not only blackbody calibration but also other calibration methods. As research on calibration algorithms progresses, alternative methods may yield even better results, although the current blackbody calibration is already sufficiently effective.
Table 4 presents inversion results for cloud–sea grayscale images and radiance images under varying CC (20–40%) in both MWIR and LWIR bands. The MWIR band exhibits strong stability across cloud conditions, with a mean absolute error (MAE) consistently at 0.003 and an RMSE fixed at 0.004. The PSNR improves steadily from 49.72 dB at CC = 20% to 51.55 dB at CC = 40%, while the SSIM remains constant at 0.997, indicating excellent image fidelity [53]. The LWIR band shows slightly higher but stable error metrics, with the MAE ranging from 0.201 to 0.204 and the RMSE varying between 0.252 and 0.255 across CC conditions. The PSNR in the LWIR band fluctuates more, from 47.586 dB to 50.347 dB. Nonetheless, both spectral bands achieve high SSIM scores—0.997 for MWIR and 0.994 for LWIR—demonstrating strong structural fidelity. These comprehensive quantitative results validate the model’s robust performance in infrared image simulation under diverse CC conditions.
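For reference, the fidelity metrics reported in Table 4 can be computed as sketched below, assuming the scikit-image implementations of PSNR and SSIM and taking the data range from the simulated image:

```python
# MAE, RMSE, PSNR, and SSIM between a simulated radiance image and its inversion.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_fidelity(sim, inv):
    err = inv - sim
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err**2))
    drange = sim.max() - sim.min()
    psnr = peak_signal_noise_ratio(sim, inv, data_range=drange)
    ssim = structural_similarity(sim, inv, data_range=drange)
    return mae, rmse, psnr, ssim
```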

3.3.2. Target Radiation Intensity Inversion Verification Results

First, the system radiance response coefficient $k$ and grayscale offset $b$ are obtained through blackbody calibration. The number of target pixels $n$ and their grayscale values $DN$ are then extracted. An 11 × 11 pixel neighborhood surrounding the target is selected as the background region, and the mean grayscale of this background matrix is subtracted from the target grayscale values to obtain the differential grayscale. Using atmospheric radiative transfer software, the spectral transmittance along the detection path is calculated, enabling the total radiation intensity to be inverted to characterize the infrared radiative properties of the point target [51,52]. Inversion calculations were performed separately for the LWIR and MWIR systems. In the MWIR band, the simulated radiation intensity of the target is 1356.7 W/sr, while in the LWIR band it is 13,671 W/sr. The inversion results and corresponding error metrics are summarized in Table 5.
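A hedged sketch of this inversion procedure is given below; the pixel-footprint relation $GSD = H d_x / f$ follows from the pinhole camera model of Section 2.3 and is an assumption of this sketch rather than an equation quoted from the text:

```python
# Point-target intensity inversion: difference the target grayscale against the
# mean of its 11x11 neighborhood (so the offset b cancels), convert to radiance
# via the calibration gain k, then scale by footprint and path transmittance.
import numpy as np

def invert_target_intensity(dn_img, target_mask, k, tau_a, H_range, dx, focal):
    rows, cols = np.nonzero(target_mask)
    r0, c0 = int(rows.mean()), int(cols.mean())
    sl = (slice(max(r0 - 5, 0), r0 + 6), slice(max(c0 - 5, 0), c0 + 6))
    dn_bg = dn_img[sl][~target_mask[sl]].mean()        # 11x11 background mean
    delta_L = (dn_img[target_mask] - dn_bg).sum() / k  # differential radiance
    gsd = H_range * dx / focal                         # pixel ground footprint (m)
    return delta_L * gsd**2 / tau_a                    # target intensity in W/sr
```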
The inversion results of the target radiation intensity show high accuracy under 20–40% CC conditions. For both the MWIR and LWIR systems, the neighborhood background at the same location was used for inverting the target radiation intensity in the same scenario. Table 5 presents the inversion radiation intensity (IRI) values and their REs for the MWIR and LWIR bands under different CC conditions. The results indicate that the IRI values for MWIR fluctuate between 1291.5 and 1303.7 W/sr, with REs ranging from 3.91% to 4.81%, while the IRI values for LWIR are more variable, ranging from 12,673 to 14,600 W/sr, with REs from 4.73% to 7.30%. This variation arises from random noise introduced during infrared simulation, which interferes with inversion accuracy when the background image is selected in the target neighborhood. As the parameters in Table 3 suggest, the difference in inversion performance between the two bands stems from the imaging process: the dark current and instrumental background noise of the LWIR system are higher than those of the MWIR system, and the random noise superimposed on the images is difficult to eliminate during calibration, resulting in lower inversion accuracy for the LWIR system.
Our model adopts a system-level design philosophy, comprehensively accounting for interactions and constraints among system parameters. It systematically analyzes the effects of photon noise, dark current noise, and response non-uniformity on performance. Infrared image modeling uses electron count as the fundamental quantization metric. Starting from detector-level specifications, we analyze multiple noise sources: photon noise, instrumental noise, dark current, and electronic noise, within a top-down framework. This approach enables accurate characterization of each noise source’s contribution, guiding the calculation of required signal magnitudes and facilitating scientific allocation and optimization of noise indicators across system levels. Consequently, this method allows for effective infrared system optimization to ensure that the final design fully meets performance requirements.

4. Discussion

4.1. Sensitivity Analysis

4.1.1. The Infrared Cloud–Sea Scenario Simulation Analysis

To further evaluate the model sensitivity, we varied the RZA from 0° to 60° at 10° intervals. Analysis of static data from cloud–sea background radiation simulations under varying CC and RZA conditions (with complete 0° data presented in Table 2 and 10–60° results in Table 6) confirmed that the proposed model consistently delivers accurate and stable radiance calculations. As shown in Table 6, the mean background radiance peaked at an RZA of 20° and declined to a minimum at 60°. In contrast, MODTRAN consistently predicted higher values, with a maximum discrepancy of 6.739% observed at RZA = 20° and CC = 20%. This disparity arises because MODTRAN models cloud-covered surfaces as diffuse reflectors, whereas our model incorporates specular reflectance in the MWIR band.
Table 7 presents the LWIR band results, which show the average radiance declining from 24.945 W·sr⁻¹·m⁻² (CC = 20%, RZA = 10°) to 23.197 W·sr⁻¹·m⁻² (CC = 20%, RZA = 60°), representing a 7.01% reduction. This result is consistent with the expected decrease in radiation intensity as the viewing angle deviates from the optimal observation direction. At higher cloud coverage (CC = 40%), the image average radiance further decreases to 22.971 W·sr⁻¹·m⁻², demonstrating the compounding effect of increased cloud cover on radiance suppression [43]. Unlike MODTRAN, which assumes angular invariance, our model follows physical principles, accurately capturing the reduction in radiance with increasing angular deviation from the solar direction. Beyond numerical simulation, the model generates high-fidelity 2D textured infrared images that effectively reflect radiative characteristics across diverse observational scenarios.

4.1.2. The Infrared Target Inversion Result Analysis

To systematically evaluate the inversion performance of target radiation intensity, we designed two experimental schemes under different observation geometries: (a) in the MWIR band with a fixed pitch angle of 0° and yaw angles varying from 0° to 180° and (b) in the MWIR band with a fixed yaw angle of 0° and pitch angles ranging from −90° to 90°. Together, these two experiments enabled a comprehensive assessment of how varying observation geometries influence inversion performance. The corresponding inversion results and error analyses are presented in Figure 14.
In this study, we conducted a detailed performance evaluation of MWIR radiation inversion across varying viewing angles. As shown in Figure 14a, the radiation intensity distribution is presented for yaw angles ranging from 0° to 180° with a fixed pitch angle of 0°. Under these conditions, the inversion process produced a maximum error of 5.71%, a minimum error of 1.85%, an average error of 3.94%, and a standard deviation of 0.72%. These results indicate that, while errors appear at certain angles, the overall discrepancy between inverted and simulated values remains small, demonstrating the algorithm’s robustness and accuracy.
In contrast, Figure 14b presents the radiation intensity distribution when the yaw angle is fixed at 0° and the pitch angle varies from –90° to 90°. The inversion results in this configuration showed a maximum error of 7.235% (at 90° pitch), a minimum error of 0.13%, an average error of 3.13%, and a standard deviation of 1.25%. This suggests that under extreme pitch conditions, particularly at 90°, inversion errors increase, indicating the need for further optimization. Comparison between simulated and inverted radiation intensities confirms that the algorithm closely matches the simulated results across most angles. However, specific orientations still present higher discrepancies, highlighting areas for future improvement.
Overall, the inversion results demonstrate a consistent agreement between radiation intensity distributions and the simulated results across all tested angles, underscoring the algorithm’s high accuracy. When analyzing radiation intensity across various angles, the inversion algorithm reliably maintains performance under diverse conditions. The strong alignment between the inversion and simulation results at all angles highlights the algorithm’s excellent robustness and reliability. The proposed method can be calibrated for different detection tasks, scenarios, and systems to meet the inversion requirements of infrared payloads. In addition, it supports the development and evaluation of calibration algorithms by providing the necessary data.

4.2. Comparative Analysis of the Proposed Method

To evaluate the effectiveness of our method, we compared it with existing algorithms. Leja et al. [2] proposed a mathematical model for simulating infrared imaging systems using blackbodies at various temperatures. However, their simulated images exhibited non-uniformity noise that required additional correction. Our method incorporates non-uniformity response as a switchable mode, which is consistent with on-orbit payload design, and it allows users to enable or disable the correction function for more flexible operation. Konnik and Welsh [45] developed a high-level infrared noise simulation model, which we used to validate blackbody imaging across different temperatures. Using the MWIR system parameters from Table 3 and an infrared cloud–sea background target detection scenario, we applied this noise model for imaging, calibration, and inversion. However, their inversion errors were considerable, as shown in Figure 15. Figure 15a–e show the grayscale image produced by their model, while Figure 15f–j present the corresponding inversion result. Table 8 compares the imaging and inversion metrics under different CC scenarios, showing that our method outperforms theirs in RMSE, PSNR, and SSIM.
Compared with the widely used atmospheric radiative transfer software MODTRAN, our model demonstrates three significant advantages. First, as shown in Table 2, the average radiant value of images calculated using our method exhibits a regular decreasing trend with changes in cloud coverage, whereas MODTRAN does not incorporate CC into its parameter system, failing to capture the impact of this critical factor on radiative characteristics and thus struggling to adapt to complex scenarios with dynamically changing CC. Second, the data from Table 6 indicate that MODTRAN simplifies the cloud–sea surface as a Lambertian reflector, resulting in minimal variations in radiant values across the 0–60° range and overlooking the effect of reflection angles on radiative properties. In contrast, our method captures the physical characteristics of specular reflection in cloud–sea scenarios, exhibiting an initial increase followed by a decrease within the 10–30° range of the RZA, which better aligns with the complex radiative transfer mechanisms in real-world environments. Finally, MODTRAN can only output single-dimensional radiative numerical results, whereas our method can generate cloud–sea texture images with detailed features based on the calculated radiant values, providing richer support for intuitive analysis and application in infrared scenarios.
As shown in Table 9, compared with MODTRAN’s single-radiation calculation capability, traditional methods’ scene modeling constraints, and deep learning approaches’ strong data dependency, the proposed radiative transfer simulation framework demonstrates comprehensive superiority across four critical capabilities: integrated scene modeling, physical imaging effect simulation, precise infrared radiation modeling, and image output. However, the proposed method is constrained by the CC of on-orbit images during scene modeling, such that the CC of generated cloud scenes cannot exceed that of the on-orbit images. Nevertheless, this constraint ensures the credibility of the simulated cloud images. This framework establishes a new-generation standardized solution for infrared simulation in complex environments.
In conclusion, our method offers significant advantages. It begins with infrared imaging radiance and accounts for target and background characteristics, atmospheric transmission effects, and the digital signal chain of infrared imaging. The method converts target radiation signals into electrons with a focus on payload design. Our emphasis on simulating infrared images for wide-area, space-based detection scenarios at meter-level resolution provides technical support and imaging data for developing infrared target detection, image processing algorithms, and the study of calibration algorithms.

5. Conclusions

In this study, we developed a DISV algorithm for high-confidence simulation and closed-loop verification of infrared images in space-based cloud–sea target detection scenarios. Using on-orbit image data, we constructed a cloud cluster database through morphological operations and applied an iterative optimization algorithm to generate realistic cloud backgrounds, matching satellite observations. Based on the radiance calculations for the cloud–sea target detection scene, an electron count model was established using key infrared imaging system parameters to convert radiance data into grayscale images. A closed-loop verification method based on blackbody calibration was then applied to invert image radiance. The results show that inversion achieved an RMSE below 0.004, a PSNR above 49 dB, and an SSIM above 0.997 in the 3–5 µm band and an RMSE below 0.255, a PSNR above 47 dB, and an SSIM above 0.994 in the 8–12 µm band. Verification under CC ratios of 20–40% showed relative errors below 4.81% for the MWIR band and below 7.30% for the LWIR band. Performance evaluation across different angles confirmed that the background simulation outperformed MODTRAN and that the inversion algorithm closely matched the simulation results. These findings confirm the robustness and reliability of the model. This method enables the generation of multi-scenario, multi-type cloud–sea background radiance images and implements an end-to-end digital conversion. Closed-loop verification demonstrates high accuracy, making the method a reliable source of data for calibration inversion and target detection algorithms. Future work will expand the on-orbit cloud cluster database to support infrared cloudscape generation under more scenarios, enrich target categories, and improve the model’s adaptability to extreme environments. Modeling of physical effects during imaging, such as geometric distortion, will also be enhanced alongside spatial registration capability.

Author Contributions

Conceptualization, P.R. and W.S.; methodology, W.S. and Y.L.; software, W.S. and F.L.; validation, W.S., Y.L. and F.L.; formal analysis, W.S.; investigation, W.S.; resources, W.S.; data curation, W.S. and F.L.; writing—original draft preparation, W.S.; writing—review and editing, P.R.; visualization, W.S.; supervision, P.R.; project administration, P.R.; funding acquisition, P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 62175251) and the Innovation Project of Shanghai Institute of Technical Physics of the Chinese Academy of Sciences (No. CX–436).

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

CC    Cloud Coverage
DISV    Digital Imaging Simulation and Verification
DN    Digital Number
EC    Energy Concentration
FPA    Focal Plane Array
IRI    Inversion Radiation Intensity
LWIR    Long-Wave Infrared
MAE    Mean Absolute Error
MODTRAN    Moderate Resolution Atmospheric Transmission
MWIR    Mid-Wave Infrared
PM    Pierson–Moskowitz
PSF    Point Spread Function
PSNR    Peak Signal-to-Noise Ratio
QLSAT-2    Qilu Satellite-2
RE    Relative Error
RMSE    Root Mean Square Error
RMS    Root Mean Square
RZA    Reflection Zenith Angle
SSIM    Structural Similarity Index Measure

References

  1. Tao, F.; Zhang, H.; Zhang, C. Advancements and Challenges of Digital Twins in Industry. Nat. Comput. Sci. 2024, 4, 169–177. [Google Scholar] [CrossRef]
  2. Leja, L.; Purlans, V.; Novickis, R.; Cvetkovs, A.; Ozols, K. Mathematical Model and Synthetic Data Generation for Infra-Red Sensors. Sensors 2022, 22, 9458. [Google Scholar] [CrossRef]
  3. Liu, W.; Wu, M.; Wan, G.; Xu, M. Digital Twin of Space Environment: Development, Challenges, Applications, and Future Outlook. Remote Sens. 2024, 16, 3023. [Google Scholar] [CrossRef]
  4. Li, Y.; Rao, P.; Li, Z.; Ai, J. On-board parameter optimization for space-based infrared air vehicle detection based on ADS-B Data. Appl. Sci. 2023, 13, 6931. [Google Scholar] [CrossRef]
  5. Garnier, C.; Collorec, R.; Flifla, J.; Mouclier, C.; Rousee, F. Physically Based Infrared Sensor Effects Modeling. In Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing X, Orlando, FL, USA, 5–9 April 1999; SPIE: Bellingham, WA, USA, 1999; Volume 3701, pp. 81–94. [Google Scholar]
  6. Zhao, G.; Zhu, J.; Jiang, Q.; Feng, S.; Wang, Z. Edge Feature Enhanced Transformer Network for RGB and Infrared Image Fusion Based Object Detection. Infrared Phys. Technol. 2025, 147, 105824. [Google Scholar] [CrossRef]
  7. Yang, X.; Li, S.; Zhang, L.; Yan, B.; Meng, Z. Antiocclusion Infrared Aerial Target Recognition With Vision-Inspired Dual-Stream Graph Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar] [CrossRef]
  8. Jia, G.; Li, J.; Luo, W.; Zhao, H. Image-driven evaluation metric for a space-based infrared diurnal detection analysis for flying aircrafts. Appl. Opt. 2024, 63, 4848–4857. [Google Scholar] [CrossRef]
  9. Berk, A.; Bernstein, L.S.; Anderson, G.P.; Acharya, P.K.; Robertson, D.C.; Chetwynd, J.H.; Adler-Golden, S.M. MODTRAN Cloud and Multiple Scattering Upgrades with Application to AVIRIS. Remote Sens. Environ. 1998, 65, 367–375. [Google Scholar] [CrossRef]
  10. Li, H. Space target optical characteristic calculation model and method in the photoelectric detection target. Appl. Opt. 2016, 55, 3689–3694. [Google Scholar] [CrossRef]
  11. Liu, Z.; Mao, H.x.; Dai, Y.h.; Wu, J.l. A New Infrared Sensor Model Based on Imaging System Test Parameter. In Proceedings of the 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, Beijing, China, 20–23 August 2013; pp. 1953–1956. [Google Scholar]
  12. Lashansky, S.N.; Ben-Yosef, N.; Weitz, A. Simulation of Ground-Based Infrared Cloudy Sky Images. Opt. Eng. 1993, 32, 1290–1297. [Google Scholar] [CrossRef]
  13. Dulski, R.; Sosnowski, T.; Polakowski, H. A Method for Modelling IR Images of Sky and Clouds. Infrared Phys. Technol. 2011, 54, 53–60. [Google Scholar] [CrossRef]
  14. Wilf, I.; Manor, Y. Simulation of Sea Surface Images in the Infrared. Appl. Opt. 1984, 23, 3174–3180. [Google Scholar] [CrossRef]
  15. Yuan, H.; Wang, X.r.; Guo, B.t.; Li, K.; Zhang, W.g. Modeling of the Mid-Wave Infrared Radiation Characteristics of the Sea Surface Based on Measured Data. Infrared Phys. Technol. 2018, 93, 1–8. [Google Scholar] [CrossRef]
  16. Li, N.; Huai, W.; Wang, S.; Ren, L. A Real-Time Infrared Imaging Simulation Method with Physical Effects Modeling of Infrared Sensors. Infrared Phys. Technol. 2016, 78, 45–57. [Google Scholar] [CrossRef]
  17. Li, N.; Su, Z.; Chen, Z.; Han, D. A Real-Time Aircraft Infrared Imaging Simulation Platform. Opt.-Int. J. Light Electron Opt. 2013, 124, 2885–2893. [Google Scholar] [CrossRef]
  18. Cooke, B.; Lomheim, T.; Laubscher, B.; Rienstra, J.; Clodius, W.; Bender, S.; Weber, P.; Smith, B.; Vampola, J.; Claassen, P.; et al. Modeling the MTI Electro-Optic System Sensitivity and Resolution. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1950–1963. [Google Scholar] [CrossRef]
  19. Moorhead, I.R.; Gilmore, M.A.; Houlbrook, A.W.; Oxford, D.E.; Filbee, D.R.; Stroud, C.A.; Hutchings, G.; Kirk, A. CAMEO-SIM: A Physics-Based Broadband Scene Simulation Tool for Assessment of Camouflage, Concealment, and Deception Methodologies. Opt. Eng. 2001, 40, 1896–1905. [Google Scholar]
  20. Haynes, A.W.; Gilmore, M.A.; Filbee, D.R.; Stroud, C.A. Accurate Scene Modeling Using Synthetic Imagery. In Proceedings of the Targets and Backgrounds IX: Characterization and Representation, Orlando, FL, USA, 21–25 April 2003; SPIE: Bellingham, WA, USA, 2013; Volume 5075, pp. 85–96. [Google Scholar]
  21. Le Goff, A.; Latger, J.; Cathala, T. Evolution of SE-Workbench-EO to Generate Synthetic EO/IR Image Data Sets for Machine Learning. In Proceedings of the Automatic Target Recognition XXXII, Orlando, FL, USA, 3 April–13 June 2022; SPIE: Bellingham, WA, USA, 2022; Volume 12096, pp. 188–209. [Google Scholar]
  22. Vaitekunas, D.A.; Alexan, K.; Lawrence, O.E.; Reid, F. SHIPIR/NTCS: A Naval Ship Infrared Signature Countermeasure and Threat Engagement Simulator. In Proceedings of the Infrared Technology and Applications XXII, Orlando, FL, USA, 8–12 April 1996; SPIE: Bellingham, WA, USA, 1996; Volume 2744, pp. 411–424. [Google Scholar]
  23. Bijl, P.; Hogervorst, M.A.; Valeton, J.M. TOD, NVTherm, and TRM3 Model Calculations: A Comparison. In Proceedings of the Infrared and Passive Millimeter-Wave Imaging Systems: Design, Analysis, Modeling, and Testing, Orlando, FL, USA, 1–5 April 2002; SPIE: Bellingham, WA, USA; Volume 4719, pp. 51–62. [Google Scholar]
  24. Garnier, C.; Collorec, R.; Flifla, J.; Mouclier, C.; Rousee, F. Infrared sensor modeling for realistic thermal image synthesis. In Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, Phoenix, AZ, USA, 15–19 March 1999; Volume 6, pp. 3513–3516. [Google Scholar]
  25. Wigren, C. Model of Image Generation in Optronic (Electro-Optical) Sensor Systems (IGOSS). In Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing IX, Orlando, FL, USA, 13–17 April 1998; SPIE: Bellingham, WA, USA, 1998; Volume 3377, pp. 89–96. [Google Scholar]
26. Sullivan, S.P.; Reynolds, W.R. Validation of the Physically Reasonable Infrared Signature Model (PRISM). In Proceedings of the Infrared Systems and Components II, Orlando, FL, USA, 11–17 January 1988; SPIE: Bellingham, WA, USA, 1988; Volume 0890, pp. 104–110. [Google Scholar]
  27. Guissin, R.; Lavi, E.; Palatnik, A.; Gronau, Y.; Repasi, E.; Wittenstein, W.; Gal, R.; Ben-Ezra, M. IRISIM: Infrared Imaging Simulator. In Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVI, Orlando, FL, USA, 28 March–1 April 2005; SPIE: Bellingham, WA, USA, 2005; Volume 5784, pp. 190–200. [Google Scholar]
  28. Archer, S.; Gartley, M.; Kerekes, J.; Cosofret, B.; Giblin, J. Empirical Measurement and Model Validation of Infrared Spectra of Contaminated Surfaces. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXI, Orlando, FL, USA, 20–24 April 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9472, pp. 382–394. [Google Scholar]
  29. Savage, J.; Coker, C.; Thai, B.; Aboutalib, O.; Pau, J. Irma 5.2 Multi-Sensor Signature Prediction Model. In Proceedings of the Modeling and Simulation for Military Operations III, Orlando, FL, USA, 16–20 March 2008; SPIE: Bellingham, WA, USA, 2008; Volume 6965, pp. 95–103. [Google Scholar]
  30. Zhang, R.; Mu, C.; Xu, M.; Xu, L.; Shi, Q.; Wang, J. Synthetic IR Image Refinement Using Adversarial Learning With Bidirectional Mappings. IEEE Access 2019, 7, 153734–153750. [Google Scholar] [CrossRef]
31. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  32. Kim, J.H.; Hwang, Y. GAN-Based Synthetic Data Augmentation for Infrared Small Target Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  33. Qian, X.; Zhang, M.; Zhang, F. Sparse GANs for Thermal Infrared Image Generation From Optical Image. IEEE Access 2020, 8, 180124–180132. [Google Scholar] [CrossRef]
  34. Zhang, L.; Gonzalez-Garcia, A.; van de Weijer, J.; Danelljan, M.; Khan, F.S. Synthetic Data Generation for End-to-End Thermal Infrared Tracking. IEEE Trans. Image Process. 2019, 28, 1837–1850. [Google Scholar] [CrossRef]
35. Uddin, M.S.; Hoque, R.; Islam, K.A.; Kwan, C.; Gribben, D.; Li, J. Converting Optical Videos to Infrared Videos Using Attention GAN and Its Impact on Target Detection and Classification Performance. Remote Sens. 2021, 13, 3257. [Google Scholar] [CrossRef]
36. Uddin, M.S.; Kwan, C.; Li, J. MWIRGAN: Unsupervised Visible-to-MWIR Image Translation with Generative Adversarial Network. Electronics 2023, 12, 1039. [Google Scholar] [CrossRef]
  37. Kniaz, V.V.; Knyaz, V.A.; Hladůvka, J.; Kropatsch, W.G.; Mizginov, V. ThermalGAN: Multimodal Color-to-Thermal Image Translation for Person Re-identification in Multispectral Dataset. In Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 606–624. [Google Scholar]
  38. Lovejoy, S.; Schertzer, D. Multifractals, Cloud Radiances and Rain. J. Hydrol. 2006, 322, 59–88. [Google Scholar] [CrossRef]
  39. Baum, B.A.; Kratz, D.P.; Yang, P.; Ou, S.C.; Hu, Y.; Soulen, P.F.; Tsay, S.C. Remote Sensing of Cloud Properties Using MODIS Airborne Simulator Imagery during SUCCESS: 1. Data and Models. J. Geophys. Res. 2000, 105, 11767–11780. [Google Scholar] [CrossRef]
  40. Zhang, S.; Chen, X.; Zu, Y.; Rao, P. A Dynamic Imaging Simulation Method of Infrared Aero-Optical Effect Based on Continuously Varying Gaussian Superposition Model. Sensors 2022, 22, 1616. [Google Scholar] [CrossRef]
  41. Buehler, S.A.; Mendrok, J.; Eriksson, P.; Perrin, A.; Larsson, R.; Lemke, O. ARTS, the Atmospheric Radiative Transfer Simulator–Version 2.2. Geosci. Model Dev. 2018, 11, 1537–1556. [Google Scholar] [CrossRef]
  42. Zhang, J.; Qi, H.; Jiang, D.; Gao, B.; He, M.; Ren, Y.; Li, K. Integrated Infrared Radiation Characteristics of Aircraft Skin and the Exhaust Plume. Materials 2022, 15, 7726. [Google Scholar] [CrossRef] [PubMed]
  43. Sun, W.; Li, Y.; Li, F.; Liu, G.; Rao, P. Complex Cloud-Sea Background Simulation for Space-Based Infrared Payload Digital Twin. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 3025–3042. [Google Scholar] [CrossRef]
  44. Dong, Q.; Wang, Q.; Zhou, X.; Xiang, L.; Ma, Z.; Ni, X.; Chen, F. Simulation of aircraft skin-plume integrated infrared radiation characteristics and verification combined with SDGSAT-1 TIS data. Infrared Phys. Technol. 2025, 145, 105666. [Google Scholar] [CrossRef]
  45. Konnik, M.; Welsh, J. High-Level Numerical Simulations of Noise in CCD and CMOS Photosensors: Review and Tutorial. arXiv 2014, arXiv:1412.4031. [Google Scholar] [CrossRef]
  46. Lu, Z. A Full-Link Infrared Sensor Imaging Simulation Method. In Proceedings of the Advanced Fiber Laser Conference (AFL 2024), Changsha, China, 8–10 November 2024; SPIE: Bellingham, WA, USA, 2025; Volume 13544, pp. 251–256. [Google Scholar]
  47. Guo, L.; Rao, P.; Gao, C.; Su, Y.; Li, F.; Chen, X. Adaptive Differential Event Detection for Space-Based Infrared Aerial Targets. Remote Sens. 2025, 17, 845. [Google Scholar] [CrossRef]
  48. Zhang, Y.; Chen, X.; Rao, P.; Jia, L. Dim Moving Multi-Target Enhancement with Strong Robustness for False Enhancement. Remote Sens. 2023, 15, 4892. [Google Scholar] [CrossRef]
  49. Kim, N.; Han, S.S.; Jeong, C.S. ADOM: ADMM-Based Optimization Model for Stripe Noise Removal in Remote Sensing Image. IEEE Access 2023, 11, 106587–106606. [Google Scholar] [CrossRef]
  50. Li, N.; Lv, Z.; Wang, S.; Gong, G.; Ren, L. A Real-Time Infrared Radiation Imaging Simulation Method of Aircraft Skin with Aerodynamic Heating Effect. Infrared Phys. Technol. 2015, 71, 533–541. [Google Scholar] [CrossRef]
  51. Li, L.; Zhou, X.; Hu, Z.; Gao, L.; Li, X.; Ni, X.; Chen, F. On-orbit monitoring flying aircraft day and night based on SDGSAT-1 thermal infrared dataset. Remote Sens. Environ. 2023, 298, 113840. [Google Scholar] [CrossRef]
  52. Zhou, X.; Li, L.; Yu, J.; Gao, L.; Zhang, R.; Hu, Z.; Chen, F. Multimodal aircraft flight altitude inversion from SDGSAT-1 thermal infrared data. Remote Sens. Environ. 2024, 308, 114178. [Google Scholar] [CrossRef]
  53. Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
Figure 1. Framework of the proposed DISV model.
Figure 2. Cloud cluster database generation from on-orbit infrared images.
Figure 3. Blackbody calibration based on the DISV model.
Figure 4. Images of complex space-based cloud–sea scenes under varying cloud coverage (CC) conditions. (a) CC = 0%, (b) CC = 10%, (c) CC = 20%, (d) CC = 30%, (e) CC = 40%, and (f) CC = 50%.
Figure 5. Radiance images of complex target detection scenes under varying CC conditions in the MWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 6. Radiance images of complex target detection scenes under varying CC conditions in the LWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 7. Grayscale images of complex target detection scenes under varying CC conditions in the MWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 8. Grayscale images of complex target detection scenes under varying CC conditions in the LWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 9. Grayscale images of complex target detection scenes with stripe noise under varying CC conditions in the MWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 10. Grayscale images of complex target detection scenes with stripe noise under varying CC conditions in the LWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 11. Radiometric calibration of infrared detection systems: (a) MWIR and (b) LWIR systems.
Figure 12. Inverted radiance images of complex target detection scenes under varying CC conditions in the MWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 13. Inverted radiance images of complex target detection scenes under varying CC conditions in the LWIR band. (a) CC = 20%, (b) CC = 25%, (c) CC = 30%, (d) CC = 35%, and (e) CC = 40%.
Figure 14. Simulation of MWIR radiation inversion performance under varying observation angles.
Figure 15. Simulated images for different CC scenarios in the MWIR band (RZA = 0°): (a–e) grayscale images with CC = 20%, 25%, 30%, 35%, and 40%; (f–j) inverted radiation images with CC = 20%, 25%, 30%, 35%, and 40% [45].
Table 1. Classification of noise and its relationship with integration time.
Noise Type | Component | Relationship with T_int
Temporal noise | Photon noise | ∝ √T_int
Temporal noise | Readout noise | Irrelevant (additive)
Spatial noise | Response non-uniformity noise | ∝ T_int
Spatial noise | Dark current non-uniformity noise | Irrelevant (additive)
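To make the scaling in Table 1 concrete, the following minimal numpy sketch draws each noise term for a single pixel. All parameter values (photon flux, PRNU, readout and dark-current non-uniformity magnitudes) are illustrative placeholders, not the paper's settings; the square-root versus linear behavior follows the standard shot-noise/PRNU model.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_electrons(photon_flux, t_int, read_noise=400.0, prnu=0.01, dcnu=50.0):
    """Toy electron-count model for one pixel (illustrative parameters)."""
    signal = photon_flux * t_int               # mean signal grows linearly with T_int
    shot = rng.normal(0.0, np.sqrt(signal))    # photon noise std ~ sqrt(T_int)
    rnu = rng.normal(0.0, prnu * signal)       # response non-uniformity ~ T_int
    readout = rng.normal(0.0, read_noise)      # additive, independent of T_int
    dc_nu = rng.normal(0.0, dcnu)              # additive, independent of T_int
    return signal + shot + rnu + readout + dc_nu

for t_int in (1e-3, 5e-3, 10e-3):
    print(f"T_int = {t_int*1e3:4.1f} ms -> {pixel_electrons(5e7, t_int):.0f} e-")
```

Doubling T_int in this sketch doubles the fixed-pattern term but increases the shot-noise term only by √2, mirroring the relationships in the table.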
Table 2. Simulation results of average radiance for cloud–sea background images (W·sr⁻¹·m⁻²).
Band | Method | CC = 20% | CC = 25% | CC = 30% | CC = 35% | CC = 40%
MWIR | Ours | 0.581 | 0.581 | 0.580 | 0.580 | 0.580
MWIR | Modtran | 0.568 | 0.568 | 0.568 | 0.568 | 0.568
MWIR | RE | 2.434% | 2.346% | 2.241% | 2.233% | 2.202%
LWIR | Ours | 24.971 | 24.936 | 24.898 | 24.872 | 24.844
LWIR | Modtran | 25.766 | 25.766 | 25.766 | 25.766 | 25.766
LWIR | RE | 3.086% | 3.219% | 3.367% | 3.349% | 3.575%
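The RE rows in Table 2 (and in Tables 6 and 7 below) compare the simulated average radiance against the MODTRAN baseline. A one-line helper, under the assumed definition RE = |L_ours − L_MODTRAN| / L_MODTRAN × 100%, shows how the entries are obtained:

```python
def relative_error(ours, modtran):
    # Relative error (%) against the MODTRAN baseline (assumed definition).
    return abs(ours - modtran) / modtran * 100.0

# Rounded MWIR values at CC = 20% from Table 2:
print(f"{relative_error(0.581, 0.568):.3f}%")  # ~2.289%; the tabulated 2.434%
# is consistent with radiances carrying more decimal places than shown here.
```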
Table 3. Infrared imaging system parameters.
Parameter | System 1 | System 2
Band | 3–5 µm | 8–12 µm
Entrance pupil diameter | 0.15 m | 0.5 m
Focal length | 0.36 m | 1.0 m
F-number | 2.4 | 2.0
Detection distance | 1600 km | 2000 km
Optical efficiency | 0.7 | 0.6
Energy concentration | 0.5 | 0.5
Pixel size | 18 µm | 30 µm
Instrument background irradiance | 0.01 W/m² | 0.02 W/m²
Dark current density | 1 × 10⁻⁷ A/cm² | 1 × 10⁻⁶ A/cm²
Full-well charge | 1 Me | 10 Me
Readout noise | 400 e | 2500 e
Integration time | 10 ms | 0.5 ms
Response non-uniformity | 1% | 1%
Dark current non-uniformity | 0.5% | 1%
Quantization bits | 12 bit | 14 bit
Quantum efficiency | 0.60 | 0.60
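As a quick sanity check on Table 3, a simple pinhole model (an assumption; the paper's full imaging chain also includes optics and detector effects) relates pixel size and focal length to each system's instantaneous field of view, and the detection distance to the nadir ground sample distance:

```python
# Back-of-envelope geometry implied by Table 3 (pinhole model, illustrative only):
# IFOV = pixel size / focal length; nadir GSD = IFOV * detection distance.
systems = {
    "System 1 (MWIR)": {"pixel": 18e-6, "focal": 0.36, "pupil": 0.15, "range": 1600e3},
    "System 2 (LWIR)": {"pixel": 30e-6, "focal": 1.0, "pupil": 0.5, "range": 2000e3},
}
for name, p in systems.items():
    ifov = p["pixel"] / p["focal"]          # rad
    gsd = ifov * p["range"]                 # m at nadir
    fnum = p["focal"] / p["pupil"]          # matches the tabulated F-number
    print(f"{name}: IFOV = {ifov*1e6:.0f} urad, GSD = {gsd:.0f} m, F/{fnum:.1f}")
# System 1: 50 urad -> 80 m, F/2.4; System 2: 30 urad -> 60 m, F/2.0
```

The computed F-numbers (0.36/0.15 = 2.4 and 1.0/0.5 = 2.0) agree with the tabulated values, confirming the parameter set is internally consistent.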
Table 4. Inversion results of cloud–sea grayscale images and radiance images under varying CC (20–40%) conditions in the MWIR and LWIR bands.
Band | Indicator | CC = 20% | CC = 25% | CC = 30% | CC = 35% | CC = 40%
MWIR | MAE | 0.003 | 0.003 | 0.003 | 0.003 | 0.003
MWIR | RMSE | 0.004 | 0.004 | 0.004 | 0.004 | 0.004
MWIR | PSNR | 49.717 | 50.155 | 50.580 | 51.638 | 51.548
MWIR | SSIM | 0.997 | 0.997 | 0.997 | 0.997 | 0.997
LWIR | MAE | 0.204 | 0.202 | 0.203 | 0.202 | 0.201
LWIR | RMSE | 0.255 | 0.253 | 0.254 | 0.253 | 0.252
LWIR | PSNR | 48.943 | 48.731 | 48.813 | 47.586 | 50.347
LWIR | SSIM | 0.994 | 0.994 | 0.994 | 0.994 | 0.994
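The indicators in Table 4 (and Table 8) can be computed with standard image-quality routines; the sketch below uses scikit-image as an assumed tool choice and takes the reference image's dynamic range as data_range, since the paper's normalization is not restated here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def inversion_metrics(reference, inverted):
    """MAE, RMSE, PSNR, and SSIM between a simulated radiance image and
    its inversion (data_range taken from the reference, an assumption)."""
    err = inverted - reference
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, inverted, data_range=data_range)
    ssim = structural_similarity(reference, inverted, data_range=data_range)
    return mae, rmse, psnr, ssim

# Demo with synthetic data (placeholders for the simulated/inverted images):
ref = np.random.rand(64, 64)
inv = ref + 0.01 * np.random.randn(64, 64)
print(inversion_metrics(ref, inv))
```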
Table 5. Target inversion radiation intensity (IRI) (W/sr) and relative error (RE) (%) under different CC conditions.
Band | Parameter | CC = 20% | CC = 25% | CC = 30% | CC = 35% | CC = 40%
MWIR | IRI | 1303.7 | 1299.1 | 1291.5 | 1301.7 | 1300.4
MWIR | RE | 3.91% | 4.25% | 4.81% | 4.06% | 4.15%
LWIR | IRI | 14,600 | 14,559 | 12,673 | 13,025 | 14,527
LWIR | RE | 6.80% | 6.50% | 7.30% | 4.73% | 6.26%
Table 6. Simulation results of the average cloud–sea radiance in the MWIR band (W·sr⁻¹·m⁻²).
Scenario | Method | RZA = 10° | RZA = 20° | RZA = 30° | RZA = 40° | RZA = 50° | RZA = 60°
CC = 20% | Ours | 0.594 | 0.600 | 0.593 | 0.571 | 0.540 | 0.498
CC = 20% | Modtran | 0.566 | 0.563 | 0.556 | 0.545 | 0.529 | 0.503
CC = 20% | RE | 4.811% | 6.739% | 6.647% | 4.832% | 2.118% | 0.988%
CC = 25% | Ours | 0.592 | 0.599 | 0.591 | 0.569 | 0.539 | 0.497
CC = 25% | Modtran | 0.566 | 0.563 | 0.556 | 0.545 | 0.529 | 0.503
CC = 25% | RE | 4.598% | 6.461% | 6.282% | 4.461% | 1.799% | 1.307%
CC = 30% | Ours | 0.591 | 0.597 | 0.589 | 0.567 | 0.537 | 0.495
CC = 30% | Modtran | 0.566 | 0.563 | 0.556 | 0.545 | 0.529 | 0.503
CC = 30% | RE | 4.364% | 6.161% | 5.893% | 4.067% | 1.458% | 1.649%
CC = 35% | Ours | 0.590 | 0.596 | 0.587 | 0.566 | 0.535 | 0.494
CC = 35% | Modtran | 0.566 | 0.563 | 0.556 | 0.545 | 0.529 | 0.503
CC = 35% | RE | 4.219% | 5.945% | 5.581% | 3.745% | 1.187% | 1.931%
CC = 40% | Ours | 0.589 | 0.595 | 0.585 | 0.564 | 0.534 | 0.492
CC = 40% | Modtran | 0.566 | 0.563 | 0.556 | 0.545 | 0.529 | 0.503
CC = 40% | RE | 4.066% | 5.728% | 5.278% | 3.434% | 0.923% | 2.198%
Table 7. Simulation results of average cloud–sea radiance in the LWIR band (W·sr⁻¹·m⁻²).
Scenario | Method | RZA = 10° | RZA = 20° | RZA = 30° | RZA = 40° | RZA = 50° | RZA = 60°
CC = 20% | Ours | 24.945 | 24.863 | 24.708 | 24.447 | 24.003 | 23.197
CC = 20% | Modtran | 25.747 | 25.688 | 25.583 | 25.418 | 25.163 | 24.753
CC = 20% | RE | 3.114% | 3.214% | 3.422% | 3.820% | 4.607% | 6.287%
CC = 25% | Ours | 24.910 | 24.825 | 24.666 | 24.399 | 23.950 | 23.139
CC = 25% | Modtran | 25.747 | 25.688 | 25.583 | 25.418 | 25.163 | 24.753
CC = 25% | RE | 3.250% | 3.360% | 3.585% | 4.005% | 4.819% | 6.519%
CC = 30% | Ours | 24.871 | 24.784 | 24.620 | 24.348 | 23.892 | 23.078
CC = 30% | Modtran | 25.747 | 25.688 | 25.583 | 25.418 | 25.163 | 24.753
CC = 30% | RE | 3.401% | 3.521% | 3.763% | 4.208% | 5.048% | 6.768%
CC = 35% | Ours | 24.844 | 24.754 | 24.585 | 24.306 | 23.843 | 23.023
CC = 35% | Modtran | 25.747 | 25.688 | 25.583 | 25.418 | 25.163 | 24.753
CC = 35% | RE | 3.508% | 3.639% | 3.900% | 4.371% | 5.243% | 6.988%
CC = 40% | Ours | 24.816 | 24.723 | 24.551 | 24.266 | 23.796 | 22.971
CC = 40% | Modtran | 25.747 | 25.688 | 25.583 | 25.418 | 25.163 | 24.753
CC = 40% | RE | 3.617% | 3.758% | 4.036% | 4.531% | 5.430% | 7.198%
Table 8. Inversion results for cloud–sea grayscale and radiance images (MWIR band, CC 20–40%) obtained with the reference algorithm of [45].
Scenario | Indicator | RZA = 0° | RZA = 10° | RZA = 20° | RZA = 30° | RZA = 40° | RZA = 50° | RZA = 60°
CC = 20% | MAE | 0.0023 | 0.0024 | 0.0025 | 0.0027 | 0.0027 | 0.0027 | 0.0027
CC = 20% | RMSE | 0.0065 | 0.0070 | 0.0072 | 0.0075 | 0.0072 | 0.0064 | 0.0056
CC = 20% | PSNR | 43.676 | 43.094 | 42.824 | 42.459 | 42.802 | 43.860 | 44.967
CC = 20% | SSIM | 0.942 | 0.939 | 0.935 | 0.931 | 0.928 | 0.925 | 0.917
CC = 25% | MAE | 0.0025 | 0.0026 | 0.0027 | 0.0028 | 0.0029 | 0.0029 | 0.0028
CC = 25% | RMSE | 0.0064 | 0.0068 | 0.0070 | 0.0073 | 0.0071 | 0.0063 | 0.0055
CC = 25% | PSNR | 43.833 | 43.296 | 43.041 | 42.677 | 43.023 | 44.079 | 45.180
CC = 25% | SSIM | 0.943 | 0.941 | 0.940 | 0.936 | 0.935 | 0.932 | 0.923
CC = 30% | MAE | 0.0032 | 0.0034 | 0.0035 | 0.0036 | 0.0037 | 0.0036 | 0.0035
CC = 30% | RMSE | 0.0085 | 0.0090 | 0.0092 | 0.0095 | 0.0091 | 0.0081 | 0.0070
CC = 30% | PSNR | 41.426 | 40.926 | 40.732 | 40.477 | 40.848 | 41.863 | 43.047
CC = 30% | SSIM | 0.943 | 0.941 | 0.940 | 0.936 | 0.935 | 0.932 | 0.923
CC = 35% | MAE | 0.0032 | 0.0034 | 0.0035 | 0.0036 | 0.0037 | 0.0036 | 0.0035
CC = 35% | RMSE | 0.0079 | 0.0083 | 0.0085 | 0.0088 | 0.0085 | 0.0075 | 0.0066
CC = 35% | PSNR | 42.090 | 41.593 | 41.378 | 41.080 | 41.443 | 42.473 | 43.618
CC = 35% | SSIM | 0.927 | 0.929 | 0.927 | 0.924 | 0.923 | 0.924 | 0.918
CC = 40% | MAE | 0.0034 | 0.0036 | 0.0037 | 0.0039 | 0.0039 | 0.0039 | 0.0038
CC = 40% | RMSE | 0.0076 | 0.0081 | 0.0083 | 0.0086 | 0.0083 | 0.0073 | 0.0064
CC = 40% | PSNR | 42.334 | 41.848 | 41.626 | 41.304 | 41.654 | 42.697 | 43.832
CC = 40% | SSIM | 0.940 | 0.938 | 0.935 | 0.931 | 0.928 | 0.926 | 0.918
Table 9. Comparison with other methods.
Methods | Scene Model | Imaging Effects | IR Radiation Modeling | Output Image | Data-Free
MODTRAN [9]
Traditional methods [2,11,12,13,15,45]
Deep-learning methods [30,31,32,33,34,35,36,37]
Proposed method
✓—Supported. ✕—Not supported.