Article

Deep Learning-Generated Nighttime Reflectance and Daytime Radiance of the Midwave Infrared Band of a Geostationary Satellite

Department of Environment, Energy, and Geoinformatics, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(22), 2713; https://doi.org/10.3390/rs11222713
Submission received: 6 November 2019 / Accepted: 18 November 2019 / Published: 19 November 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

The midwave infrared (MWIR) band at 3.75 μm is important for many applications in satellite remote sensing. This band observes reflectance during the day and radiance at night, according to the relative contributions of solar reflection and the Earth’s emission. This study presents an algorithm to generate the nighttime reflectance and daytime radiance in the MWIR band, which are not observed by the satellite, by adopting the conditional generative adversarial nets (CGAN) model. We used the daytime reflectance and nighttime radiance data in the MWIR band of the Meteorological Imager (MI) onboard the Communication, Ocean and Meteorological Satellite (COMS), as well as in the longwave infrared (LWIR; 10.8 μm) band of the COMS/MI sensor, from 1 January to 31 December 2017. The model was trained on 1024 × 1024 pixel images, with reflectance and radiance converted to digital numbers (DN) from 0 to 255, using a dataset of 256 images, and validated with a dataset of 107 images. Our results show high statistical accuracy (bias = 3.539, root-mean-square error (RMSE) = 8.924, and correlation coefficient (CC) = 0.922 for daytime reflectance; bias = 0.006, RMSE = 5.842, and CC = 0.995 for nighttime radiance) between the COMS MWIR observations and the artificial intelligence (AI)-generated MWIR outputs. Consequently, our AI-generated outputs, complementing the real MWIR observations, could be used for the identification of fog/low cloud, fire/hot-spots, volcanic eruptions/ash, snow and ice, low-level atmospheric vector winds, urban heat islands, and clouds.


1. Introduction

Global changes in weather and climate have been influencing human social life, ecological environments, and the occurrence of natural disasters. Satellites have played a crucial role as one of the most important observation tools for short-term to long-term analysis and forecasting during the past decades.
In particular, geostationary meteorological satellites play important roles in nowcasting and short-term weather analysis of convective clouds [1], and in providing information on natural disasters such as typhoons, floods, and heavy rainfall [2,3,4]. Recently, advanced geostationary satellites with 16 spectral bands from the visible (VIS) to the infrared (IR), a spatial resolution of 2 km, and a temporal resolution of 10 min have been developed, such as the Geostationary Operational Environmental Satellite (GOES)-16 [5], Himawari-8 [6], Fengyun-4A (FY-4A) [7], and GEO-KOMPSAT-2A (GK-2A) [8].
In general, geostationary meteorological satellites use VIS and IR bands to observe the Earth’s surface and atmosphere within atmospheric windows, in which limited atmospheric absorption occurs, such as the VIS band at 0.55–0.90 μm and the IR bands at 3.5–4.0 μm, 10.5–11.5 μm, and 11.5–12.5 μm [9]. The VIS band observes sunlight reflected from the Earth’s surface and atmosphere. The IR bands at 10.5–11.5 μm and 11.5–12.5 μm mainly observe the radiance emitted from the Earth’s surface and atmosphere. The IR band at 6.5–7.0 μm observes the radiance from water vapor (WV) in the upper and middle atmospheric layers.
In particular, the midwave IR (MWIR) band at 3.5–4.0 μm receives both reflected energy from the Sun and radiant energy from the Earth during the day, whereas it receives only the radiant energy from the Earth during the night. Accordingly, the MWIR band appears overall warmer during the day than at night, due to the additional reflected solar component. Therefore, the MWIR band has to be used differently for day and night [10]. This band is very useful in many applications, including the identification of fog and low clouds at night [11], fire and hot-spots [12], volcanic eruptions and ash [13], daytime snow and ice, low-level atmospheric vector winds, and urban heat islands and clouds [5]. Figure 1 shows examples of daytime (2017.05.01 04:00 UTC (13:00 Korean Standard Time (KST))) reflectance and nighttime (2017.05.01 16:00 UTC (2017.05.02 01:00 KST)) radiance at the 3.75 μm band of the Communication, Ocean and Meteorological Satellite (COMS) of the Korea Meteorological Administration (KMA).
Recently, artificial intelligence (AI) techniques, in particular deep learning techniques such as the artificial neural network (ANN) [14], the convolutional neural network (CNN) [15], and conditional generative adversarial nets (CGAN) [16], have been developed and applied in many research fields, owing to their capability to automatically learn suitable features from data and to the availability of large datasets. In satellite remote sensing, deep learning techniques have been applied to infrared imagery [17], Synthetic Aperture Radar (SAR) imagery [18,19], nighttime VIS imagery [20], and many others [21,22,23]. Some ongoing studies aim to synthesize a virtual band from the existing bands of satellites using deep learning techniques. For example, virtual shortwave IR (SWIR) bands were generated from the existing VIS and near-IR (NIR) bands of the Indian Space Research Organization’s (ISRO) Resourcesat-2A mission through the CNN technique [24]. Additionally, a virtual nighttime VIS band was generated from the existing longwave infrared (LWIR) band of the KMA’s COMS mission using the CGAN technique [20].
This study proposes a method to generate the nighttime reflectance and daytime radiance in the MWIR band, which are not observed by the satellite, using the CGAN technique and COMS satellite data. The CGAN technique is adopted because, among deep learning techniques, the generation of nighttime reflectance and daytime radiance in the MWIR band can be considered an adversarial task solvable by the CGAN method. Unlike previous studies, this study focuses on the generation of the MWIR band using the CGAN technique based on the physical characteristics of the MWIR and LWIR bands. Our study will be useful for a variety of meteorological applications using the MWIR band, because it can provide the MWIR reflectance and radiance consistently during both day and night, regardless of the band’s observational limitations.

2. Data

The KMA has been operating the COMS satellite at 128.2°E with the Meteorological Imager (MI) sensor, which includes one channel in the VIS spectrum (0.55–0.80 μm) with 1 km spatial resolution and four IR channels (MWIR: 3.5–4.0 μm; WV: 6.5–7.0 μm; IR1: 10.3–11.3 μm; and IR2: 11.5–12.5 μm) with 4 km spatial resolution [20,25]. The COMS/MI observes the full disk every 3 h and the Far-East Asia area every 30 min. Table 1 summarizes the characteristics of the MI sensor on COMS.
In this study, we used the Far-East Asia area level 1B (L1B) image data of COMS/MI provided by the National Meteorological Satellite Center (NMSC) of the KMA [26]. We cropped the COMS/MI data to 1024 × 1024 pixels over 12 months (1 January 2017 to 31 December 2017) to establish the training, validation, and test datasets for the AI-generated COMS images.

3. Method

3.1. CGAN

The generative adversarial nets (GAN) method [27], with a generative model (G) and a discriminative model (D) [28], has been successfully applied in computer vision and image processing [29]. In the GAN method, the generative model produces virtual output images by training from the real input images, and the discriminative model distinguishes the virtual output images from the real input images. The GAN model is obtained through adversarial feedback to minimize the following loss ($LOSS_{GAN}$) [20,27]:
$$LOSS_{GAN}(G, D) = \mathbb{E}_{A,B}\left[\log D(A, B)\right] + \mathbb{E}_{A}\left[\log\left(1 - D(A, G(A, C))\right)\right]$$
where $LOSS_{GAN}$ is the GAN loss function, $\mathbb{E}_{A,B}\left[\log D(A, B)\right]$ is the discriminator term to maximize the probability of the training data, and $\mathbb{E}_{A}\left[\log\left(1 - D(A, G(A, C))\right)\right]$ is the discriminator term to minimize the probability of the data sampled from the generator $G$. $A$ and $B$ are the real input and real output images, respectively, $C$ is the random noise dataset, and $G(A, C)$ is the generated image. The log function is adopted to relax the insufficient gradient at the beginning of the training [27].
The CGAN loss function ($LOSS_{CGAN}$) consists of the GAN loss function and the CNN loss function, as follows [29]:
$$LOSS_{CGAN} = \min_{G} \max_{D} LOSS_{GAN}(G, D) + LOSS_{CNN}(G)$$
where the CNN loss ($LOSS_{CNN}$) originated from the CNN method in the form of:
$$LOSS_{CNN}(G) = \mathbb{E}_{A,B}\left[\lVert B - G(A, C) \rVert\right]$$
The $LOSS_{CNN}$ is the reconstruction loss that minimizes the difference between the real image $B$ and the virtual image $G(A, C)$.
Mathematically, $G$ is expressed as [30]:
$$G : (a_i, c_j) \in A \times C \rightarrow B' = G(A, C)$$
where $G$ is a function to generate a virtual image $B'$ using a real input sample image $a_i$ in the real input image dataset $A$ and a noise sample $c_j$ in the noise dataset $C$. Here, $i$ denotes the sample index ($i = 1$ to $N$, where $N$ is the total number of samples in $A$), and $j$ denotes the noise-sample index ($j = 1$ to $K$, where $K$ is the total number of noise samples in $C$).
Additionally, $D$ is an image scaling function [30]:
$$D : P(B \mid B') \rightarrow [0, 1]$$
where $P(B \mid B')$ is the conditional probability between the real output $B$ and the generated output $B'$, ranging from 0 to 1, with $P(B \mid B') = 1$ when $G(A, C) = B$.
This study adopted Pix2Pix [16,31] for our CGAN model development, because Pix2Pix has the advantage of not requiring noise as an input when learning the CGAN and CNN losses [32].
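To make the loss formulation above concrete, the following is a minimal sketch, in TensorFlow 2, of how Pix2Pix-style generator and discriminator losses could be written. It is not the authors' implementation; the function names, the use of binary cross-entropy for the log terms, and the unweighted sum of the adversarial and reconstruction terms are our assumptions.

```python
import tensorflow as tf

# Minimal sketch (not the authors' code) of the GAN and CNN (reconstruction)
# losses described above. Variable and function names are our own.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(disc_real_output, disc_generated_output):
    # E_{A,B}[log D(A, B)]: real (IR1, MWIR) pairs should be scored as real (1)
    real_loss = bce(tf.ones_like(disc_real_output), disc_real_output)
    # E_A[log(1 - D(A, G(A, C)))]: generated pairs should be scored as fake (0)
    fake_loss = bce(tf.zeros_like(disc_generated_output), disc_generated_output)
    return real_loss + fake_loss

def generator_loss(disc_generated_output, gen_output, target):
    # Adversarial term: the generator tries to have its output scored as real
    gan_loss = bce(tf.ones_like(disc_generated_output), disc_generated_output)
    # Reconstruction (CNN) term: mean absolute difference between B and G(A, C)
    recon_loss = tf.reduce_mean(tf.abs(tf.cast(target, tf.float32) - gen_output))
    return gan_loss + recon_loss
```

In the reference Pix2Pix implementation, the reconstruction (L1) term is typically multiplied by a large weight (e.g., 100) before being added to the adversarial term; that weighting is omitted here to mirror the unweighted sum in the CGAN loss above.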

3.2. Band Selection for CGAN

In this study, we selected one of the COMS IR bands (IR1, IR2, or WV) to pair with the MWIR band for the CGAN model, because these bands have no sunlight dependence, whereas the VIS band is not available at night. Figure 2 shows the daytime images of MWIR reflectance, IR1 radiance, IR2 radiance, and WV radiance from the COMS/MI observation on 1 May 2017, 04:00 UTC (13:00 KST, daytime). Figure 3 shows the corresponding nighttime images on 1 May 2017, 16:00 UTC (2 May 2017, 01:00 KST, nighttime). The IR1, IR2, and WV bands show no dependence on sunlight, unlike the MWIR band. Table 2 summarizes the correlation coefficients (CC) between the MWIR image and the other IR band images shown in Figure 2 and Figure 3 for 1 May 2017. In particular, the IR1 and IR2 bands show higher CCs at nighttime than in daytime. The IR1 band shows the highest CC with the MWIR band during both day and night.
Table 3 summarizes the daytime CCs between the MWIR image and the other IR band images at 04:00 UTC on the 15th day of each month from January to December, and Table 4 summarizes the corresponding nighttime CCs at 16:00 UTC. The CCs between the MWIR band and the IR1, IR2, and WV bands are generally higher for the nighttime radiance images than for the daytime observations. In both cases, the IR1 band shows the highest CC among the three bands, and the CC values of the IR2 band are only slightly lower than those of the IR1 band. The WV band shows the highest CC during the daytime periods in winter. From this comparison, we chose the IR1 band as the pair for the MWIR band, since it had the highest correlation among the IR1, IR2, and WV bands. Therefore, we used pairs of COMS IR1 radiance images and MWIR reflectance images, corresponding to A and B, respectively, in daytime, and pairs of COMS IR1 radiance images and MWIR radiance images, corresponding to A and B, respectively, in nighttime. It is notable that combinations of the IR1, IR2, and WV bands as a multi-band pair with the MWIR band are not included in this study.
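For reference, the band-to-band CC used in Tables 2–4 can be computed directly from the image arrays. The sketch below, with hypothetical array names, shows one way to do this in Python with NumPy.

```python
import numpy as np

# Minimal sketch (assumed helper, not from the paper) of the band-selection
# statistic: the Pearson correlation coefficient between the MWIR image and a
# candidate IR band image, both given as 1024 x 1024 DN arrays.
def band_correlation(mwir_img: np.ndarray, ir_img: np.ndarray) -> float:
    # Flatten to 1-D vectors and compute the Pearson CC
    return float(np.corrcoef(mwir_img.ravel(), ir_img.ravel())[0, 1])

# Example usage with random placeholder arrays standing in for COMS/MI images
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mwir = rng.integers(0, 256, size=(1024, 1024)).astype(np.float64)
    ir1 = rng.integers(0, 256, size=(1024, 1024)).astype(np.float64)
    print(f"CC(MWIR, IR1) = {band_correlation(mwir, ir1):.4f}")
```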

3.3. Implementation

In this study, we implemented Pix2Pix to process the pairs of daytime MWIR reflectance and IR1 radiance image datasets, and nighttime MWIR radiance and IR1 radiance image datasets, with 8-bit depth to obtain our CGAN model. Our experiment was implemented in TensorFlow with Python 3.5.4 under Linux Ubuntu 16.04.5, CUDA 9.0, and cuDNN 7.4.1.5, on a system with four NVIDIA Titan-XP D5 GPUs and an Intel Xeon CPU.
For our model construction, we established the datasets of COMS/MI MWIR and IR1 images at every 04:00 UTC (daytime, 13:00 KST) and 16:00 UTC (nighttime, 01:00 KST) from 1 January 2017 to 31 December 2017.
To train the daytime and nighttime cases, the input patches were cropped to 1024 × 1024 pixels, forming datasets of 256 images. The daytime datasets consisted of COMS/MI MWIR reflectance and IR1 radiance images at 04:00 UTC (13:00 KST), and the nighttime datasets consisted of COMS/MI MWIR radiance and IR1 radiance images at 16:00 UTC (01:00 KST), taken from the first day of each month up to approximately 70% of the days in that month in 2017. Thus, the CGAN generative model (G) was trained on 256 MWIR daytime reflectance or nighttime radiance images of 1024 × 1024 pixels, while the CGAN discriminative model (D) was trained on 256 IR1 daytime or nighttime radiance images of 1024 × 1024 pixels.
For test and validation, the datasets consisted of COMS/MI MWIR and IR1 images at 04:00 UTC (13:00 KST) or 16:00 UTC (01:00 KST) from the 23rd day to the last day of each month in 2017. Pix2Pix used 107 pairs of 1024 × 1024 pixel MWIR reflectance (or radiance) and IR1 radiance images for daytime and nighttime, respectively. Table 5 summarizes the datasets used for the construction of the CGAN model.
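The following sketch illustrates how one daytime (or nighttime) training pair could be prepared from a COMS/MI scene: the physical values are linearly rescaled to 8-bit DN (0–255) and cropped to 1024 × 1024 pixels. The helper names, the center-crop choice, and the value ranges are illustrative assumptions, not details from the paper.

```python
import numpy as np

def to_dn(field: np.ndarray, vmin: float, vmax: float) -> np.ndarray:
    """Linearly rescale reflectance or radiance to DN in [0, 255]."""
    scaled = (np.clip(field, vmin, vmax) - vmin) / (vmax - vmin)
    return (scaled * 255.0).astype(np.uint8)

def crop_center(img: np.ndarray, size: int = 1024) -> np.ndarray:
    """Crop a size x size patch from the center of a COMS/MI scene."""
    r0 = (img.shape[0] - size) // 2
    c0 = (img.shape[1] - size) // 2
    return img[r0:r0 + size, c0:c0 + size]

def make_pair(ir1_radiance: np.ndarray, mwir_field: np.ndarray,
              ir1_range=(0.0, 150.0), mwir_range=(0.0, 2.0)):
    """Build one (input A, target B) pair: IR1 radiance -> MWIR reflectance or
    radiance. The value ranges here are illustrative assumptions only."""
    a = crop_center(to_dn(ir1_radiance, *ir1_range))
    b = crop_center(to_dn(mwir_field, *mwir_range))
    return a, b
```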
Figure 4 shows the outline of our CGAN model for test, validation, and application. IR1 and MWIR denote the daytime or nighttime images observed in the COMS IR1 and MWIR bands, respectively. AI-MWIR denotes the CGAN-generated daytime or nighttime images for the COMS MWIR band (C), generated from real IR1 daytime or nighttime images (A). In the application stage, IR1 denotes other daytime or nighttime IR1 images that were not used for training and validation, and AI-MWIR denotes the AI-generated daytime radiance or nighttime reflectance images produced from them by our model.
During this process, the G of our model was trained to minimize the mean error between the MWIR reflectance during daytime (or MWIR radiance during nighttime) and the AI-generated MWIR (AI-MWIR) reflectance during daytime (or AI-MWIR radiance during nighttime), and to reproduce the true data distribution of the MWIR reflectance (or radiance) images from the corresponding IR1 radiance images (A). The D of our model was trained to distinguish the real MWIR data from the AI-generated AI-MWIR data.
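A minimal sketch of one adversarial training step consistent with this description is given below, reusing the generator_loss and discriminator_loss helpers from the sketch in Section 3.1. The generator, discriminator, and optimizer objects are assumed to be Keras models and optimizers; this is our assumption rather than the authors' setup.

```python
import tensorflow as tf

# Minimal sketch (not the authors' code) of one training step: G maps an IR1
# image batch A to AI-MWIR images, and D scores (A, B) pairs as real or fake.
@tf.function
def train_step(generator, discriminator, gen_opt, disc_opt, ir1_batch, mwir_batch):
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        ai_mwir = generator(ir1_batch, training=True)                      # G(A)
        disc_real = discriminator([ir1_batch, mwir_batch], training=True)  # D(A, B)
        disc_fake = discriminator([ir1_batch, ai_mwir], training=True)     # D(A, G(A))
        g_loss = generator_loss(disc_fake, ai_mwir, mwir_batch)            # adversarial + reconstruction
        d_loss = discriminator_loss(disc_real, disc_fake)                  # real vs. generated
    gen_grads = gen_tape.gradient(g_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(d_loss, discriminator.trainable_variables)
    gen_opt.apply_gradients(zip(gen_grads, generator.trainable_variables))
    disc_opt.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return g_loss, d_loss
```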

4. Results

Figure 5 shows the results of our model. Figure 5a,b shows the COMS-observed real daytime reflectance at the COMS 3.75 μm band and the AI-generated reflectance on 25 January, 04:00 UTC, respectively. Figure 5c,d shows the COMS-observed real nighttime radiance at the COMS 3.75 μm band and the AI-generated radiance on 25 January, 16:00 UTC, respectively. Figure 5e,f shows the difference maps between the real and AI-generated daytime reflectance, and between the real and AI-generated nighttime radiance, respectively. The real COMS reflectance image and the AI-generated reflectance image show good agreement, except for some areas of high clouds, which appear white, while the real and AI-generated radiance images show an even better agreement. Our model performs better for radiance than for reflectance due to the different correlations between the 3.75 μm band and the IR1 band during day and night.
Figure 6a shows a scatterplot between the COMS radiance and the AI-generated radiance on 25 January, 16:00 UTC (nighttime) at the COMS 3.75 μm band. The bias, root-mean-square error (RMSE), and correlation coefficient (CC) are −2.97, 7.17, and 0.98, respectively. Figure 6b shows a scatterplot between the COMS real reflectance and the AI-generated reflectance on 25 January, 04:00 UTC (daytime) at the COMS 3.75 μm band. The bias, RMSE, and CC are −4.59, 8.94, and 0.82, respectively. This result also indicates that our model performs better in generating radiance than reflectance, because of the higher correlation between MWIR radiance and IR1 radiance than between MWIR reflectance and IR1 radiance.
Figure 7 shows the time series of the CC, bias, and RMSE between the COMS 3.75 μm band observation and the AI-generated COMS 3.75 μm band for daytime reflectance (Figure 7a) and nighttime radiance (Figure 7b), respectively, on the 25th day of each month from January to December 2017.
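For reference, the bias, RMSE, and CC reported in Figures 6 and 7 can be computed from the paired images as in the following sketch; the function name is hypothetical, and the sign convention for the bias (generated minus observed) is our assumption.

```python
import numpy as np

# Minimal sketch (assumed helper, not from the paper) of the evaluation
# statistics: bias, RMSE, and CC between a COMS-observed MWIR image and the
# corresponding AI-generated image, both as DN arrays of the same shape.
def evaluate(observed: np.ndarray, generated: np.ndarray) -> dict:
    obs = observed.astype(np.float64).ravel()
    gen = generated.astype(np.float64).ravel()
    diff = gen - obs
    return {
        "bias": float(diff.mean()),                  # mean of (generated - observed)
        "rmse": float(np.sqrt(np.mean(diff ** 2))),  # root-mean-square error
        "cc": float(np.corrcoef(obs, gen)[0, 1]),    # Pearson correlation coefficient
    }
```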
Figure 8 shows the time series of the real COMS MWIR data, the AI-generated COMS MWIR reflectance, and the AI-generated COMS MWIR radiance during twilight, between 15 January 2017, 22:00 UTC and 16 January 2017, 01:00 UTC. We identified that the AI-generated MWIR nighttime reflectance images are consistent with the characteristics of the real MWIR daytime reflectance, and that the AI-generated MWIR daytime radiance images share the same features as the real MWIR nighttime radiance. The AI-generated results thus complement the absence of the real COMS MWIR observations.

5. Discussion

Our results show that the MWIR radiance is generated with relatively better accuracy than the MWIR reflectance. This can be explained by the differences in the physical characteristics of the VIS and IR bands, and of the MWIR and IR bands, i.e., the differences between the solar reflection effects and the Earth’s emission effects on the MWIR and IR bands. In our study, the statistical results for the daytime MWIR reflectance, including the CC, bias, and RMSE, were similar to the results of a previous study [20], suggesting the stability of the CGAN-based model.
The COMS LWIR (IR1) band was chosen as the counterpart band paired with the MWIR band in our CGAN-based model. The performance of the IR1 band shows a seasonal dependence on cloud patterns. Our CGAN model could have produced better results if the WV band had been used during daytime in winter. In satellite remote sensing, combinations of IR bands are commonly used, such as the IR1 and IR2 bands to distinguish clouds from dust, and the WV and IR1 bands to distinguish convective and mixed clouds. In this regard, the weakness of our CGAN-based model, which uses a single IR band and therefore shows a seasonal dependence, could be overcome by using multiple bands and training them simultaneously as pairs with the MWIR band. The effects of multiple bands on the CGAN-based model will be investigated in the future. As illustrated in Figure 7, the RMSE shows only slight monthly variation compared to the CC and the bias.

6. Summary and Concluding Remarks

Geostationary meteorological satellites have been playing an important role in the monitoring and forecasting of weather phenomena, such as clouds, precipitation, and aerosols, and of natural disasters, such as typhoons and floods. Many advanced geostationary meteorological satellites, such as GOES-16, Himawari-8/9, and GK-2A, have been operating with spectral bands spanning the VIS to IR regions, including the five primary bands. In particular, the MWIR band at 3.5–4.0 μm, which depends on both solar reflection and the Earth’s emission, has been used for detecting wildfires, fog, and low clouds because of its sensitivity to temperature variations. This study proposes a novel method to generate the unobserved MWIR nighttime reflectance and daytime radiance using the CGAN technique with COMS data over the 12 months of 2017. The IR1 band was selected from the statistical analysis as the best pair for the MWIR band for CGAN training and validation. The combination of multiple IR bands as a pair with the MWIR band was not considered in this study. For training, the input datasets of COMS MWIR reflectance and IR1 radiance images for daytime (04:00 UTC (13:00 KST)), and COMS MWIR radiance and IR1 radiance images for nighttime (16:00 UTC (01:00 KST)), were cropped to 1024 × 1024 pixels and grouped into datasets of 256 images for CGAN model training and 107 images for validation. Our model shows excellent statistical results, with bias = 3.539, RMSE = 8.924, and CC = 0.922 for the COMS MWIR daytime reflectance, and bias = 0.006, RMSE = 5.842, and CC = 0.995 for the COMS MWIR nighttime radiance. Consequently, we applied our model to generate the COMS MWIR reflectance during night and radiance during day, which are not observed in the real COMS MWIR band. Our results show qualitatively the same characteristics of the atmosphere and surface as in the real COMS MWIR daytime reflectance and nighttime radiance. Thus, our study will be helpful for forecasters in analyzing a variety of meteorological applications and weather phenomena, such as fog, clouds, precipitation, and typhoons.

Author Contributions

Conceptualization, S.H.; methodology, Y.K. and S.H.; software, Y.K.; validation, Y.K. and S.H.; formal analysis, Y.K. and S.H.; investigation, Y.K. and S.H.; resources, S.H.; data curation, Y.K. and S.H.; writing—original draft preparation, S.H. and Y.K.; writing—review and editing, S.H.; visualization, Y.K.; supervision, S.H.; project administration, S.H.; funding acquisition, S.H.

Funding

This research was supported by the KMA Research and Development Program [Grant KMI2018-05710].

Acknowledgments

The authors thank the anonymous reviewers for constructive and helpful comments on the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Escrig, H.; Batlles, F.J.; Alonso, J.; Baena, F.M.; Bosch, J.L.; Salbidegoitia, I.B.; Burgaleta, J.I. Cloud detection, classification and motion estimation using geostationary satellite imagery for cloud cover forecast. Energy 2013, 55, 853–859. [Google Scholar] [CrossRef]
  2. Purdom, J.; Menzel, P. Evolution of Satellite Observation in the United States and Their Use in Meteorology. In Historical Essays on Meteorology 1919–1995; Fleming, J.R., Ed.; American Meteorological Society: Boston, MA, USA, 1996; pp. 99–156. [Google Scholar]
  3. Schmetz, J.; Pili, P.; Tjemkes, S.; Just, D.; Kerkmann, J.; Rota, S.; Ratier, R. Supplement to an introduction to Meteosat Second Generation (MSG). Bull. Am. Meteorol. Soc. 2002, 83, 991. [Google Scholar] [CrossRef]
  4. Yusuf, A.A.; Francisco, H. Climate Change Vulnerability Mapping for Southeast Asia. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore with CIDA, IDRC and SIDA. 2009. Available online: https://www.idrc.ca/sites/default/files/sp/Documents%20EN/climate-change-vulnerability-mapping-sa.pdf (accessed on 23 September 2019).
  5. Schmit, T.J.; Gunshor, M.M.; Menzel, W.P.; Gurka, J.J.; Li, J.; Bachmeier, A.S. Introducing the next-generation advanced baseline imager on GOES-R. Bull. Am. Meteorol. Soc. 2005, 86, 1079–1096. [Google Scholar] [CrossRef]
  6. Bessho, K.; Date, K.; Hayashi, M.; Ikeda, A.; Imai, T.; Inoue, H.; Kumagai, Y.; Miyakawa, T.; Murata, H.; Ohno, T.; et al. An introduction to Himawari-8/9—Japan’s new-generation geostationary meteorological satellites. J. Meteor. Soc. Jpn. 2016, 94, 151–183. [Google Scholar] [CrossRef]
  7. Yang, J.; Zhang, Z.; Wei, C.; Lu, F.; Guo, Q. Introducing the new generation of Chinese geostationary weather satellites, Fengyun-4. Bull. Am. Meteorol. Soc. 2017, 98, 1637–1658. [Google Scholar] [CrossRef]
  8. Lyu, S. Satellite Programs and Applications of KMA: Current and Future. In Proceedings of the 6th Asia/Oceania Meteorological Satellite Users’ Conference, Tokyo, Japan, 9–13 November 2015. [Google Scholar]
  9. Cooperative Institute for Research in the Atmosphere (CIRA). Introduction to GOES-8. Available online: http://rammb.cira.colostate.edu/training/tutorials/goes_8_original/default.asp (accessed on 20 May 2019).
  10. Lee, J.-R.; Chung, C.-Y.; Ou, M.-L. Fog detection using geostationary satellite data: Temporally continuous algorithm. Asia-Pac. J. Atmos. Sci. 2011, 47, 113–122. [Google Scholar] [CrossRef]
  11. Ellrod, G.P.; Achutuni, R.V.; Daniels, J.M.; Prins, E.M.; Nelson, J.P., III. An assessment of GOES-8 imager data quality. Bull. Am. Meteor. Soc. 1998, 79, 2509–2526. [Google Scholar] [CrossRef]
  12. Prins, E.M.; Feltz, J.M.; Menzel, W.P.; Ward, D.E. An overview of GOES-8 diurnal fire and smoke results for SCAR-B and 1995 fire season in South America. J. Geophys. Res. 1998, 103, 31821–31835. [Google Scholar] [CrossRef]
  13. Ellrod, G.P. Loss of the 12.0 μm “split window” band on GOES-M: Impacts on volcanic ash detection. Preprints. In Proceedings of the 11th Conference on Satellite Meteorology and Oceanography, Madison, WI, USA, 15–18 October 2001. CD-ROM, PI.15. [Google Scholar]
  14. Hassoun, M.H. Fundamentals of Artificial Neural Networks, 1st ed.; MIT Press: Cambridge, MA, USA, 1995; p. 48. [Google Scholar]
  15. Liang, M.; Hu, X. Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  16. Isola, P.; Zhu, J.Y.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  17. Li, Y.; Zhang, Y. Robust infrared small target detection using local steering kernel reconstruction. Pattern Recognit. 2018, 77, 113–125. [Google Scholar] [CrossRef]
  18. Tan, Y.; Li, Q.; Li, Y.; Tian, J. Aircraft detection in high-resolution SAR images based on a gradient textural saliency map. Sensors 2015, 15, 23071–23094. [Google Scholar] [CrossRef] [PubMed]
  19. He, W.; Yokoya, N. Multi-temporal Sentinel-1 and -2 Data fusion for optical image simulation. ISPRS Int. J. Geo-Inf. 2018, 7, 389. [Google Scholar] [CrossRef]
  20. Kim, K.; Kim, J.-H.; Moon, Y.-J.; Park, E.; Shin, G.; Kim, T.; Kim, Y.; Hong, S. Nighttime reflectance generation in the visible band of satellites. Remote Sens. 2019, 11, 2087. [Google Scholar] [CrossRef]
  21. Zhang, L.; Zhang, L.; Du, B. Deep Learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  22. Wu, Z.; Chen, X.; Gao, Y.; Li, Y. Rapid target detection in high resolution remote sensing images using Yolo model. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2018, 42, 1915–1920. [Google Scholar] [CrossRef]
  23. Zhen, Y.; Liu, H.; Li, J.; Hu, C.; Pan, J. Remote sensing image object recognition based on convolutional neural network. In Proceedings of the First International Conference on Electronics Instrumentation Information Systems (EIIS), Harbin, China, 3–5 June 2017. [Google Scholar]
  24. Rout, L.; Bhateja, Y.; Garg, A.; Mishra, I.; Moorthi, S.M.; Dhar, D. DeepSWIR: A deep learning based approach for the synthesis of short-wave infrared band using multi-sensor concurrent datasets. arXiv 2019, arXiv:1905.02749. [Google Scholar]
  25. Woo, H.-J.; Park, K.-A.; Li, X.; Lee, E.-Y. Sea surface temperature retrieval from the first Korean geostationary satellite COMS data: Validation and error assessment. Remote Sens. 2018, 10, 1916. [Google Scholar] [CrossRef]
  26. National Meteorological Satellite Center (NMSC). Available online: http://nmsc.kma.go.kr (accessed on 8 March 2019).
  27. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. arXiv 2014, arXiv:1406.2661. [Google Scholar]
  28. Nguyen, V.; Vicente, T.F.Y.; Zhao, M.; Hoai, M.; Samaras, D. Shadow detection with conditional generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
  29. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  30. Kiran, B.R.; Thomas, D.M.; Parakkal, R. An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos. J. Imaging 2018, 4, 36. [Google Scholar] [CrossRef] [Green Version]
  31. Lin, Y.-C. Pix2pix-Tensorflow. 2017. Available online: https://github.com/yenchenlin/pix2pix-tensorflow (accessed on 20 March 2019).
  32. Michelsanti, D.; Tan, Z.H. Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification. In Proceedings of the INTERSPEECH 2017, Stockholm, Sweden, 20–24 August 2017. [Google Scholar]
Figure 1. Examples of (a) reflectance (daytime, 2017.05.01 04:00 UTC (13:00 Korean Standard Time (KST))) and (b) radiance (night time, 2017.05.01 16:00 UTC (2017.05.02 01:00 KST)) at the 3.75 μm band of the Communication, Ocean and Meteorological Satellite (COMS).
Figure 2. (a) Midwave infrared (MWIR) reflectance, (b) water vapor (WV) radiance, (c) IR1 radiance, and (d) IR2 radiance observed from COMS/MI on 1 May 2017, 04:00 UTC (13:00 KST, daytime).
Figure 3. Radiances at (a) MWIR, (b) WV, (c) IR1, and (d) IR2 bands of COMS/MI on 1 May 2017, 16:00 UTC (May 2, 2017, 01:00 KST, nighttime).
Figure 4. CGAN-based model structure in this study.
Figure 5. (a) Real COMS MWIR reflectance and (b) artificial intelligence (AI)-generated COMS MWIR reflectance on 25 January, 04:00 UTC (daytime) with 40,000 iterations, (c) real COMS MWIR radiance, and (d) AI-generated COMS MWIR radiance on 25 January, 16:00 UTC (nighttime). (e) Difference between (a) and (b), and (f) difference between (c) and (d) are shown, respectively.
Figure 6. Scatterplots and statistics of (a) AI-generated reflectance on 25 January, 04:00 UTC (daytime) and (b) AI-generated radiance on 25 January, 16:00 UTC (nighttime).
Figure 7. Time series of the CC, bias, and RMSE between the COMS observation and (a) AI-generated reflectance, and (b) AI-generated radiance at every 25th day in each month from January to December 2017.
Figure 8. (a) Time series of the real COMS MWIR reflectance, (b) the AI-generated COMS MWIR reflectance, and (c) the AI-generated COMS MWIR radiance during the twilight. The time period was January 15, 2017, 22:00 UTC to January 16, 2017, 01:00 UTC.
Table 1. Spectral channel characteristics of the Meteorological Imager (MI) on COMS.
Channel | Wavelength (μm) | Bandwidth (μm) | Observation (day/night) | Spatial Resolution (km)
1. VIS | 0.675 | 0.55–0.80 | Reflectance/X | 1
2. MWIR | 3.75 | 3.5–4.0 | Reflectance/Radiance | 4
3. WV | 6.75 | 6.5–7.0 | Radiance/Radiance | 4
4. IR1 | 10.8 | 10.3–11.3 | Radiance/Radiance | 4
5. IR2 | 12.0 | 11.5–12.5 | Radiance/Radiance | 4
Table 2. The correlation coefficients between the COMS MWIR and the WV, IR1, IR2 channels.
Cases | IR1 | IR2 | WV
2017.05.01 04:00 UTC (Daytime) | 0.6343 | 0.6152 | 0.4588
2017.05.01 16:00 UTC (Nighttime) | 0.9730 | 0.9604 | 0.6794
Table 3. Daytime correlation coefficients among the COMS MWIR, WV, IR1, and IR2 bands.
Cases | IR1 | IR2 | WV
2017.01.15. 04:00 UTC | 0.6497 | 0.6469 | 0.7544
2017.02.15. 04:00 UTC | 0.4676 | 0.4635 | 0.5929
2017.03.15. 04:00 UTC | 0.4619 | 0.4644 | 0.5607
2017.04.15. 04:00 UTC | 0.5289 | 0.5108 | 0.3981
2017.05.15. 04:00 UTC | 0.6806 | 0.6751 | 0.531
2017.06.15. 04:00 UTC | 0.7918 | 0.7751 | 0.5345
2017.07.15. 04:00 UTC | 0.6753 | 0.6568 | 0.5662
2017.08.15. 04:00 UTC | 0.5434 | 0.5344 | 0.4758
2017.09.15. 04:00 UTC | 0.6367 | 0.6238 | 0.6607
2017.10.15. 04:00 UTC | 0.6395 | 0.6442 | 0.6571
2017.11.15. 04:00 UTC | 0.5555 | 0.5477 | 0.6593
2017.12.15. 04:00 UTC | 0.7044 | 0.7003 | 0.7797
Table 4. Nighttime correlation coefficients among the COMS MWIR, WV, IR1, and IR2 bands.
Cases | IR1 | IR2 | WV
2017.01.15. 16:00 UTC | 0.9879 | 0.9828 | 0.7798
2017.02.15. 16:00 UTC | 0.9919 | 0.9881 | 0.7701
2017.03.15. 16:00 UTC | 0.9777 | 0.9707 | 0.7264
2017.04.15. 16:00 UTC | 0.9753 | 0.9668 | 0.8422
2017.05.15. 16:00 UTC | 0.9491 | 0.9262 | 0.6814
2017.06.15. 16:00 UTC | 0.9539 | 0.9286 | 0.7195
2017.07.15. 16:00 UTC | 0.9555 | 0.9291 | 0.7343
2017.08.15. 16:00 UTC | 0.9707 | 0.9538 | 0.6571
2017.09.15. 16:00 UTC | 0.9508 | 0.9238 | 0.7048
2017.10.15. 16:00 UTC | 0.9579 | 0.9413 | 0.6369
2017.11.15. 16:00 UTC | 0.9906 | 0.9846 | 0.8462
2017.12.15. 16:00 UTC | 0.9717 | 0.9585 | 0.7306
Table 5. Datasets for the conditional generative adversarial nets (CGAN) model (2017): Time = 04:00 UTC (daytime)/16:00 UTC (nighttime).
Month | Training | Test (Validation)
January | 1–22 | 23–31
February | 1–18 | 19–28
March | 1–22 | 23–31
April | 1–21 | 22–30
May | 1–22 | 23–31
June | 1–21 | 22–30
July | 1–22 | 23–31
August | 1–22 | 23–31
September | 1–21 | 22–30
October | 1–22 | 23–31
November | 1–21 | 22–30
December | 1–22 | 23–31
