Deep Learning-Generated Nighttime Reflectance and Daytime Radiance of the Midwave Infrared Band of a Geostationary Satellite

Abstract: The midwave infrared (MWIR) band at 3.75 µm is important in many satellite remote sensing applications. This band observes reflectance during the day and radiance during the night, according to the Sun's and the Earth's effects. This study presents an algorithm to generate the unobserved nighttime reflectance and daytime radiance of the MWIR band of satellite observation by adopting the conditional generative adversarial nets (CGAN) model. We used the daytime reflectance and nighttime radiance data in the MWIR band of the meteorological imager (MI) onboard the Communication, Ocean and Meteorological Satellite (COMS), as well as in the longwave infrared (LWIR; 10.8 µm) band of the COMS/MI sensor, from 1 January to 31 December 2017. The model was trained on 1024 × 1024 pixel images, with digital numbers (DN) from 0 to 255 converted from reflectance and radiance, using a dataset of 256 images, and was validated with a dataset of 107 images. Our results show a high statistical accuracy (bias = 3.539, root-mean-square error (RMSE) = 8.924, and correlation coefficient (CC) = 0.922 for daytime reflectance; bias = 0.006, RMSE = 5.842, and CC = 0.995 for nighttime radiance) between the COMS MWIR observations and the artificial intelligence (AI)-generated MWIR outputs. Consequently, our findings, together with the real MWIR observations, could be used for the identification of fog/low cloud, fire/hot-spots, volcanic eruption/ash, snow and ice, low-level atmospheric vector winds, urban heat islands, and clouds.


Introduction
Global changes in weather and climate have been impacting human society, ecological environments, and the occurrence of natural disasters. Satellites have played a crucial role as one of the most important observation tools for short-term to long-term analysis and forecasting during the past decades.
In general, geostationary meteorological satellites use visible (VIS) and infrared (IR) bands to observe the Earth's surface and atmosphere within atmospheric windows, in which limited atmospheric absorption occurs, such as the VIS band in the 0.55 to 0.90 µm wavelength range and IR bands in the 3.5 to 4.0 µm, 10.5 to 11.5 µm, and 11.5 to 12.5 µm ranges [9]. The VIS band observes sunlight reflected from the Earth's surface and clouds, while the band near the 7.0 µm wavelength observes the radiance from water vapor (WV) in the upper and middle atmospheric layers.
In particular, the midwave IR (MWIR) band within the 3.5-4.0 µm wavelength range receives reflected energy from the Sun and radiant energy from the Earth during the day, while it receives only the radiant energy from the Earth during the night. Accordingly, the MWIR band appears overall warmer during the day than during the night, due to the additional reflected solar component. Therefore, the MWIR band has to be used differently for day and night [10]. This band is very useful in many applications, including the identification of fog and low clouds at night [11], fires and hot-spots [12], volcanic eruption and ash [13], daytime snow and ice, low-level atmospheric vector winds, and urban heat islands and clouds [5]. Recently, artificial intelligence (AI) techniques, in particular deep learning techniques such as the artificial neural network (ANN) [14], the convolutional neural network (CNN) [15], and conditional generative adversarial nets (CGAN) [16], have been developed and applied in many research fields, based on their capability to automatically learn suitable characteristics from datasets and on the availability of large numbers of datasets. In satellite remote sensing, deep learning techniques have been applied to infrared imagery [17], Synthetic Aperture Radar (SAR) imagery [18,19], nighttime VIS imagery [20], and many others [21][22][23]. Some ongoing studies aim to synthesize a virtual band using existing bands of satellites and deep learning techniques. For example, virtual shortwave IR (SWIR) bands were generated using existing VIS and near-IR (NIR) bands of the Indian Space Research Organization's (ISRO) Resourcesat-2A mission through the CNN technique [24]. Additionally, a virtual nighttime VIS band was generated using the existing longwave infrared (LWIR) band of the KMA's COMS mission and the CGAN technique [20].
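The day/night asymmetry described above can be made concrete: the daytime MWIR signal is the sum of a reflected solar term and a thermal emission term, so the reflective component can, in principle, be estimated by subtracting the Planck emission implied by a window-band brightness temperature. The sketch below illustrates this idea only; the solar irradiance value and function names are illustrative assumptions, not the retrieval used by any operational algorithm.

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
K = 1.381e-23   # Boltzmann constant [J/K]

def planck_radiance(wavelength_um, temp_k):
    """Planck spectral radiance B(lambda, T) in W m-2 sr-1 um-1."""
    lam = wavelength_um * 1e-6  # micrometres -> metres
    num = 2.0 * H * C**2 / lam**5
    den = math.exp(H * C / (lam * K * temp_k)) - 1.0
    return num / den * 1e-6     # per-metre -> per-micrometre

def mwir_reflectance(l_obs, t11_k, sun_zenith_deg, solar_irr=9.9):
    """Estimate the daytime reflective part of an observed 3.75 um radiance
    by removing the thermal emission implied by the 10.8 um brightness
    temperature t11_k. solar_irr is an *assumed* solar spectral irradiance
    near 3.75 um [W m-2 um-1]; all inputs here are illustrative."""
    l_emit = planck_radiance(3.75, t11_k)
    mu0 = math.cos(math.radians(sun_zenith_deg))
    return math.pi * (l_obs - l_emit) / (mu0 * solar_irr)
```

At night the solar term vanishes, so only the emitted radiance remains, which is why the band must be interpreted differently between day and night.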
This study proposes a method to generate the satellite-unobserved nighttime reflectance and daytime radiance in the MWIR band, using the CGAN technique and COMS satellite data. The CGAN technique is adopted because the generation of nighttime reflectance and daytime radiance in the MWIR band can be considered an adversarial task, suited to the CGAN method among other deep learning techniques. Unlike previous studies, this study focuses on the generation of the MWIR band using the CGAN technique based on the physical characteristics of the MWIR and LWIR bands. Our study will be useful for a variety of meteorological applications using the MWIR band, because of its capability to provide the MWIR band's reflectance and radiance consistently during both day and night, regardless of the band's day/night limitations.

Data
The KMA has been operating the COMS satellite at 128.2°E with a meteorological imager (MI) sensor, including one channel in the VIS spectrum (0.55-0.80 µm) with 1 km spatial resolution and four IR channels (MWIR: 3.5-4.0 µm; WV: 6.5-7.0 µm; IR1: 10.3-11.3 µm; and IR2: 11.5-12.5 µm) with 4 km spatial resolution [20,25]. The COMS/MI observes the full disk every 3 h and the Far-East Asia area every 30 min. Table 1 summarizes the characteristics of the MI sensor on COMS. In this study, we used the Far-East Asia area level 1B (L1B) image data of COMS/MI provided by the National Meteorological Satellite Center (NMSC) of the KMA [26]. We cropped the COMS/MI data to 1024 × 1024 pixels over 12 months (January to December, from 1 January 2017 to 31 December 2017) to establish the AI-generated COMS images for training, validation, and test data.
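The cropping and DN conversion described above (and in the abstract's 0-255 scaling) can be sketched as follows; the paper does not state its exact scaling constants, so a simple min-max rescale is assumed here, and the array is synthetic.

```python
import numpy as np

def to_dn(image, vmin=None, vmax=None):
    """Linearly rescale a physical image (reflectance or radiance) to
    8-bit digital numbers (DN) in 0..255, as used for CGAN training.
    Min-max scaling is an assumption; the paper's constants are not given."""
    vmin = np.min(image) if vmin is None else vmin
    vmax = np.max(image) if vmax is None else vmax
    scaled = (image - vmin) / (vmax - vmin)
    return np.clip(np.round(scaled * 255.0), 0, 255).astype(np.uint8)

def crop_patch(image, row0=0, col0=0, size=1024):
    """Crop a size x size patch, e.g. the 1024 x 1024 COMS/MI sub-scene."""
    return image[row0:row0 + size, col0:col0 + size]

# Example: a synthetic 2048 x 2048 radiance field standing in for a scene
field = np.random.default_rng(0).uniform(40.0, 120.0, (2048, 2048))
patch_dn = to_dn(crop_patch(field, 100, 200))
```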

CGAN
The generative adversarial nets (GAN) method [27], with a generative model (G) and a discriminative model (D) [28], has been successfully applied in computer vision and image processing [29]. In the GAN method, the generative model produces virtual output images by training from the real input images, and the discriminative model distinguishes the virtual output images from the real input images. The GAN model is obtained through adversarial feedback to minimize the following loss (LOSS_GAN) [20,27]:

LOSS_GAN = E_{A,B}[\log D(A, B)] + E_{A,C}[\log(1 - D(A, G(A, C)))]

where LOSS_GAN is the GAN loss function, the term E_{A,B}[\log D(A, B)] drives the discriminator to maximize the probability assigned to the training data, and the term E_{A,C}[\log(1 - D(A, G(A, C)))] drives it to minimize the probability assigned to data sampled from the generator G. A and B are the real input and real output images, respectively, C is the random noise dataset, and G(A, C) is the generated image. The log function is adopted to relax the insufficient gradient at the beginning of training [27]. The CGAN loss function (LOSS_CGAN) consists of the GAN loss function and the CNN loss function, as follows [29]:

LOSS_CGAN = LOSS_GAN + LOSS_CNN

where the CNN loss (LOSS_CNN) originates from the CNN method, in the form of:

LOSS_CNN = E_{A,B,C}[\lVert B - G(A, C) \rVert_1]

LOSS_CNN is the reconstruction loss that minimizes the difference between the real image (B) and the virtual image (G(A, C)).
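The loss terms above can be computed directly from discriminator outputs and image arrays. The following minimal NumPy sketch evaluates them for illustration (no weighting factor between the adversarial and reconstruction terms is applied, matching the unweighted sum stated in the text; function names are ours).

```python
import numpy as np

def gan_loss(d_real, d_fake, eps=1e-8):
    """LOSS_GAN = E[log D(A,B)] + E[log(1 - D(A,G(A,C)))].
    d_real / d_fake are discriminator outputs in (0, 1)."""
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

def cnn_loss(real_b, fake_b):
    """LOSS_CNN: L1 reconstruction loss between real and generated images."""
    return np.mean(np.abs(real_b - fake_b))

def cgan_loss(d_real, d_fake, real_b, fake_b):
    """LOSS_CGAN: sum of the adversarial and reconstruction terms."""
    return gan_loss(d_real, d_fake) + cnn_loss(real_b, fake_b)
```

A confident discriminator (d_real near 1, d_fake near 0) pushes LOSS_GAN toward its maximum of 0, while a perfect generator drives LOSS_CNN to 0.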
Mathematically, G is expressed as [30]:

B' = G(a_i, c_j),  a_i ∈ A,  c_j ∈ C

where G is a function that generates a virtual image (B') using a real input sample image (a_i) from the real input image dataset (A) and a sample noise (c_j) from the noise dataset (C). Here, i denotes the sample index, i = 1 to N, where N is the total number of samples in dataset A, and j denotes the noise sample index, j = 1 to K, where K is the total number of noise samples in dataset C. Additionally, D is an image scaling function [30]:

D: B' → P(B'|B)

where P(B'|B) is the conditional probability between the real output (B) and the generated output (B'), ranging from 0 to 1.
This study adopted Pix2Pix [16,31] for our CGAN model development, because Pix2Pix has the advantage of not requiring noise as an input for learning the CGAN loss and CNN loss [32].

Band Selection for CGAN
In this study, we selected one of the COMS IR bands among IR1, IR2, and WV as a pair for the MWIR band in the CGAN model, because these bands have no sunlight dependence, while the VIS band is not available at night. Figure 2 shows the daytime images of MWIR reflectance, IR1 radiance, IR2 radiance, and WV radiance from the COMS/MI observation for 1 May 2017, 04:00 UTC (13:00 KST, daytime). Figure 3 shows the nighttime images for 1 May 2017, 16:00 UTC (2 May 2017, 01:00 KST, nighttime). The IR1, IR2, and WV bands show no dependence on sunlight, unlike the MWIR band.
Table 3 summarizes the daytime CCs between the MWIR image and the other IR band images from 15 January, 04:00 UTC to 15 December, 04:00 UTC. Table 4 summarizes the nighttime CCs between the MWIR image and the other IR band images from 15 January, 16:00 UTC to 15 December, 16:00 UTC. The CC values among the MWIR, IR1, IR2, and WV bands are generally higher for radiance images, i.e., in nighttime rather than daytime observations. In both cases, the IR1 band shows a higher correlation coefficient than the other bands, with the CC values of the IR2 band slightly lower than those of the IR1 band. The WV band shows the highest CC during the daytime periods of winter. From the CC comparison between the MWIR band and the IR1, IR2, and WV bands, we chose IR1 as the pair for the MWIR band, since it had the highest correlation among the three. Therefore, we used pairs of COMS IR1 radiance images and MWIR reflectance images, corresponding to A and B, respectively, in daytime, and pairs of COMS IR1 radiance images and MWIR radiance images, corresponding to A and B, respectively, in nighttime. It is notable that combinations of the multiple bands IR1, IR2, and WV as a pair for the MWIR band are not included in this study.
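The band-pair selection above reduces to computing a Pearson correlation coefficient between co-located band images and keeping the best-correlated candidate. A minimal sketch, using synthetic arrays in place of the COMS scenes (the 'IR1' array is deliberately built to track the MWIR field closely):

```python
import numpy as np

def band_cc(img_a, img_b):
    """Pearson correlation coefficient between two co-located band images."""
    return np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]

def select_pair_band(mwir, candidates):
    """Return the name of the candidate band best correlated with MWIR,
    together with all scores."""
    scores = {name: band_cc(mwir, img) for name, img in candidates.items()}
    return max(scores, key=scores.get), scores

# Synthetic illustration: three candidate bands with varying noise levels
rng = np.random.default_rng(1)
mwir = rng.normal(size=(64, 64))
bands = {
    "IR1": mwir + 0.1 * rng.normal(size=(64, 64)),
    "IR2": mwir + 0.5 * rng.normal(size=(64, 64)),
    "WV":  rng.normal(size=(64, 64)),
}
best, scores = select_pair_band(mwir, bands)
```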

Implementation
In this study, we implemented Pix2Pix to process the pairs of daytime MWIR reflectance and IR1 radiance image datasets, and of nighttime MWIR radiance and IR1 radiance image datasets, with 8-bit data, to obtain our CGAN model. Our experiments were implemented in TensorFlow with Python 3.5.4 under Linux Ubuntu 16.04.5, CUDA 9.0, and cuDNN 7.4.1.5, with four NVIDIA Titan Xp D5 GPUs and an Intel Xeon CPU.
In order to train the daytime and nighttime cases, the input patches were cropped to a size of 1024 × 1024 pixels, forming datasets of 256 images each. The datasets were COMS/MI MWIR reflectance and IR1 radiance images at every 04:00 UTC (13:00 KST) for daytime, and COMS/MI MWIR radiance and IR1 radiance images at every 16:00 UTC (01:00 KST) for nighttime, from the first day to 70% of the days of each month in 2017. Thus, the CGAN generative model (G) received 1024 × 1024 pixel images with a batch of 256 MWIR daytime reflectance or nighttime radiance images, while the CGAN discriminative model (D) received 1024 × 1024 pixel images with a batch of 256 IR1 daytime or nighttime radiance images.
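The per-month date split (roughly the first 70% of days for training, day 23 onward for validation/test) can be sketched as below. The exact rounding used in the paper is not stated, and the resulting counts differ slightly from the reported 256/107 images, so floor rounding here is an assumption.

```python
import calendar

def split_days(year, train_frac=0.7, test_start_day=23):
    """Per-month split: the first ~70% of each month's days for training,
    and day 23 to the end of the month for validation/test.
    Floor rounding of the 70% cutoff is assumed."""
    train, test = [], []
    for month in range(1, 13):
        ndays = calendar.monthrange(year, month)[1]
        n_train = int(ndays * train_frac)  # floor
        train += [(month, d) for d in range(1, n_train + 1)]
        test += [(month, d) for d in range(test_start_day, ndays + 1)]
    return train, test

train_days, test_days = split_days(2017)
```

With floor rounding the two sets never overlap, since 70% of even a 31-day month ends on day 21, before the test window opens on day 23.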
For test and validation, the datasets were COMS/MI MWIR and IR1 images at every 04:00 UTC (13:00 KST) or every 16:00 UTC (01:00 KST) from the 23rd day to the last day of each month in 2017. Pix2Pix used 1024 × 1024 pixel images with a batch of 107 pairs of MWIR reflectance (or radiance) and IR1 radiance images for daytime and nighttime, respectively. Table 5 summarizes the datasets used for the construction of the CGAN model.

Figure 4 shows the outline of our CGAN model for test, validation, and application. IR1 and MWIR are the daytime or nighttime images observed in the COMS IR1 and MWIR bands, respectively. AI-MWIR indicates the CGAN-generated daytime or nighttime images for the COMS MWIR band (C), i.e., the AI-generated daytime radiance or nighttime reflectance images produced by our model from real IR1 daytime or nighttime images (A), including IR1 images which were not used for training and validation. During this process, the G of our model was trained to minimize the mean error between the MWIR reflectance during daytime (or MWIR radiance during nighttime) and the AI-generated MWIR (AI-MWIR) reflectance during daytime (or AI-MWIR radiance during nighttime), and to reproduce the true data distribution of MWIR reflectance (or radiance) images from the corresponding IR1 radiance images (A). The D of our model was trained to distinguish the real MWIR data from the AI-generated AI-MWIR data.

Figure 5 shows the results of our model. Figure 5a,b shows a COMS-observed real daytime reflectance at COMS 3.75 µm and the AI-generated reflectance on 25 January, 04:00 UTC, respectively. Figure 5c,d shows a COMS-observed real nighttime radiance at COMS 3.75 µm and the AI-generated radiance on 25 January, 16:00 UTC, respectively. Figure 5e,f shows the difference maps between the COMS-observed real daytime reflectance and the AI-generated reflectance on 25 January, 04:00 UTC, and between the COMS-observed real nighttime radiance and the AI-generated radiance on 25 January, 16:00 UTC, respectively.

The real COMS reflectance image and the AI-generated reflectance image show a good agreement, except for some areas of high clouds which appear white in color, while the radiance images from the real COMS observation and the AI-generated image show an even better agreement than the reflectance images. Our model shows better performance in radiance than in reflectance due to the different correlations between the 3.75 µm band and the IR1 band during day and night. Figure 6a shows a scatterplot between the COMS radiance and the AI-generated radiance on 25 January, 16:00 UTC (nighttime) at the COMS 3.75 µm band. The bias, root-mean-square error (RMSE), and correlation coefficient (CC) are −2.97, 7.17, and 0.98, respectively. Figure 6b shows a scatterplot between the COMS real reflectance and the AI-generated reflectance for 25 January, 04:00 UTC (daytime) at the COMS 3.75 µm band. The bias, RMSE, and CC are −4.59, 8.94, and 0.82, respectively. This result also indicates that our model performs better in generating radiance than reflectance, because of the higher correlation between MWIR radiance and IR1 radiance than between MWIR reflectance and IR1 radiance.
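The bias, RMSE, and CC reported throughout can be computed as below. The sign convention (generated minus observed) is an assumption, as the paper does not state it.

```python
import numpy as np

def validation_stats(observed, generated):
    """Bias, RMSE, and correlation coefficient between a COMS-observed
    image and the corresponding AI-generated image (DN or physical units).
    Bias is computed as generated minus observed (assumed convention)."""
    obs = np.asarray(observed, dtype=float).ravel()
    gen = np.asarray(generated, dtype=float).ravel()
    bias = np.mean(gen - obs)
    rmse = np.sqrt(np.mean((gen - obs) ** 2))
    cc = np.corrcoef(obs, gen)[0, 1]
    return bias, rmse, cc
```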

Results
Figure 6. Scatterplots and statistics of (a) AI-generated reflectance on 25 January, 04:00 UTC (daytime) and (b) AI-generated radiance on 25 January, 16:00 UTC (nighttime).

Figure 7 shows the time series of the CC, bias, and RMSE between the COMS 3.75 µm band observation and the AI-generated COMS 3.75 µm band for daytime reflectance (Figure 7a) and nighttime radiance (Figure 7b), respectively, at every 25th day of each month from 1 January to 31 December 2017.

Figure 7. Time series of the CC, bias, and RMSE between the COMS observation and (a) the AI-generated reflectance, and (b) the AI-generated radiance, at every 25th day of each month from January to December 2017.

Figure 8 shows the time series of the real COMS MWIR data, the AI-generated COMS MWIR reflectance, and the AI-generated COMS MWIR radiance during twilight, between 15 January 2017, 22:00 UTC and 16 January 2017, 01:00 UTC. We identified that the AI-generated MWIR nighttime reflectance images are consistent with the characteristics of the real MWIR daytime reflectance, and likewise that the AI-generated MWIR daytime radiance images share the same features as the real MWIR nighttime radiance. The AI-generated results complement the absence of the real COMS MWIR observations.

Discussion
Our results show that our model generates the MWIR band's radiance relatively better than its reflectance. This is explained by the differences in the physical characteristics of the VIS and IR bands, and of the MWIR and IR bands, i.e., the differences between solar effects and the effects of the Earth's emission on the MWIR and IR bands. In our study, the statistical results for the reflectance of the MWIR band during the daytime, including the CC, bias, and RMSE, were similar to the results of previous studies [20], suggesting the stability of the CGAN-based model.
The COMS LWIR (IR1) band was chosen as the counterpart band paired with the MWIR band in our CGAN-based model. The performance of the IR1 band shows a seasonal dependence on cloud patterns. Our CGAN model could have produced better results if the WV band had been used during the daytime in winter. In satellite remote sensing, combinations of IR bands are used, such as the IR1 and IR2 bands to distinguish clouds from dust, and the WV and IR1 bands to distinguish convective and mixed clouds. In this regard, the weakness of our CGAN-based model, which uses a single IR band and therefore shows a seasonal dependence, could be overcome by using multiple bands and training them simultaneously as pairs for the MWIR band. The effects of multiple bands on the CGAN-based model will be investigated in the future. As illustrated in Figure 7, the RMSE shows slight monthly variation compared to the CC and the bias.

Summary and Concluding Remarks
Meteorological geostationary satellites have been playing an important role in the monitoring and forecasting of weather, such as clouds, precipitation, and aerosols, and of natural disasters such as typhoons and floods. Many of the advanced geostationary meteorological satellites, such as GOES-16, Himawari-8/9, and GK-2A, have been operating with multiple spectral bands from the VIS to IR range. In particular, the MWIR band with a 3.5-4.0 µm wavelength, which depends on the Sun's reflection and the Earth's emission, has been used for detecting wildfires, fog, and low clouds because of its sensitivity to temperature variations.

This study proposes a novel method to generate the non-existent MWIR nighttime reflectance and daytime radiance using the CGAN technique with the COMS data over 12 months in 2017. The IR1 band was selected as the best pair for the MWIR band for CGAN training and validation from the statistical analysis. The combination of multiple IR bands as a pair for the MWIR band was not considered in this study. For training, the input datasets of COMS MWIR reflectance and IR1 radiance images for daytime (every 04:00 UTC (13:00 KST)), and of COMS MWIR radiance and IR1 radiance images for nighttime (every 16:00 UTC (01:00 KST)), were cropped to a size of 1024 × 1024 pixels and were grouped into datasets of 256 images for CGAN model training and 107 for validation. Accordingly, our model shows excellent statistical results, with bias = 3.539, RMSE = 8.924, and CC = 0.922 for the COMS MWIR daytime reflectance, and bias = 0.006, RMSE = 5.842, and CC = 0.995 for the COMS MWIR nighttime radiance. Consequently, we applied our model to generate the COMS MWIR reflectance during night and radiance during day, which are not observed in the real COMS MWIR band. Our results show qualitatively the same characteristics of the atmosphere and surface as the real COMS MWIR daytime reflectance and nighttime radiance.
Thus, our study will be helpful for forecasters in analyzing a variety of meteorological applications and weather phenomena, such as fog, clouds, precipitation, and typhoons.