A Cloud Classification Method Based on a Convolutional Neural Network for FY-4A Satellites
Abstract
1. Introduction
2. Dataset
2.1. FY-4A Dataset
2.2. Himawari-8 and CloudSat
2.3. Data Preprocessing
3. Method
3.1. CLP-CNN
- To obtain better feature extraction capabilities, the two-dimensional convolutions in the U-Net network are replaced with the residual blocks of Res-Net [35,36]. Experiments have shown that deeper convolutional networks give better results, but greater depth also brings the problems of vanishing and exploding gradients; He et al. proposed the Res-Net architecture, whose residual structure largely solves these problems;
- To better exploit the information in satellite observation images, the CLP-CNN network adds an attention mechanism to enhance its classification ability. Xu et al. proposed an attention mechanism that allows a network to assign higher weights to the regions it deems important [37]. Hu et al. proposed SE-Net, which learns the relationships between image channels, a capability well suited to multichannel satellite observation data [38]. Woo et al. proposed the convolutional block attention module (CBAM), which applies attention along both the channel and spatial dimensions and is of great value for cloud classification tasks [39];
- To better integrate features at different levels, information at different scales is fused by the atrous spatial pyramid pooling (ASPP) module [40], which exploits the correlation between a pixel and its surrounding pixels;
- To avoid the information loss caused by pooling during downsampling, the CLP-CNN network replaces the pooling layers with convolutional layers with a stride of 2, which leads to finer classification results. These building blocks are sketched in the code below.
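The following is an illustrative sketch, not the authors' released implementation: minimal PyTorch versions of the building blocks listed above (a residual block with stride-2 convolutional downsampling, a CBAM-style channel/spatial attention module, and an ASPP fusion module). All layer widths and the 11-channel input are assumptions made for readability.

```python
# Minimal sketches of the CLP-CNN building blocks (assumed layer sizes).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Residual block; a stride-2 convolution replaces pooling for downsampling."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, 1, 1, bias=False),
            nn.BatchNorm2d(c_out))
        # 1x1 projection keeps the skip connection shape-compatible
        self.skip = (nn.Identity() if stride == 1 and c_in == c_out else
                     nn.Conv2d(c_in, c_out, 1, stride, bias=False))

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))


class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al. [39])."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # channel attention from global average- and max-pooled descriptors
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # spatial attention from per-pixel mean and max over channels
        sa = torch.sigmoid(self.spatial(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions, then fusion."""
    def __init__(self, c_in, c_out, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r) for r in rates])
        self.project = nn.Conv2d(c_out * len(rates), c_out, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 11, 256, 256)        # e.g., an 11-channel input patch
    x = ResidualBlock(11, 64, stride=2)(x)  # downsample with a stride-2 convolution
    x = CBAM(64)(x)                         # re-weight channels and locations
    x = ASPP(64, 64)(x)                     # multi-scale context fusion
    print(x.shape)                          # torch.Size([1, 64, 128, 128])
```

Note that the stride-2 projection on the skip path keeps the residual addition shape-compatible, which is what allows convolutional downsampling to replace pooling without breaking the identity connection.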
3.2. Training
- The input of all 14 observation channels, comprising 6 visible (VIS) and 8 infrared (IR) channels;
- The input of all 8 IR observation channels;
- The input of a subset of the VIS and IR channel data. This plan selects the observation channels most strongly correlated with cloud type: 3 VIS channels and 8 IR channels, namely channels 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 (a channel-selection sketch follows this list).
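As an illustration of how the three channel-combination plans translate into network inputs, the short snippet below stacks the selected AGRI channels into an input array; the array name, grid size, and min-max normalization are assumptions for illustration, not the authors' preprocessing code.

```python
# Hypothetical assembly of the three channel-combination plans from an FY-4A L1 array.
import numpy as np

# l1 holds all 14 AGRI channels on a common grid: (channel, height, width)
l1 = np.random.rand(14, 256, 256).astype(np.float32)

PLANS = {
    "plan1": list(range(1, 15)),                     # all 6 VIS + 8 IR channels
    "plan2": list(range(7, 15)),                     # the 8 IR channels only
    "plan3": [2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]  # 3 VIS + 8 IR channels
}

def build_input(l1_all, plan):
    """Stack the selected channels (1-based numbering) into a network input."""
    idx = [c - 1 for c in PLANS[plan]]
    x = l1_all[idx]
    # per-channel min-max normalization (an assumption; reflectance and brightness
    # temperature channels would normally be scaled separately)
    mins = x.min(axis=(1, 2), keepdims=True)
    maxs = x.max(axis=(1, 2), keepdims=True)
    return (x - mins) / (maxs - mins + 1e-6)

print(build_input(l1, "plan3").shape)  # (11, 256, 256)
```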
4. Result Evaluation
4.1. Evaluation of CLP-CNN with Different Channel Combination Plans
4.2. Comparison of the Results of Different Cloud Classification Models
4.3. Seasonal Performance Evaluation
4.4. CloudSat
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Baker, M. Cloud microphysics and climate. Science 1997, 276, 1072–1078.
- Min, M.; Wang, P.; Campbell, J.R.; Zong, X.; Li, Y. Midlatitude cirrus cloud radiative forcing over China. J. Geophys. Res. Atmos. 2010, 115, 1408–1429.
- Liu, Y.; Xia, J.; Shi, C.-X.; Hong, Y. An improved cloud classification algorithm for China’s FY-2C multi-channel images using artificial neural network. Sensors 2009, 9, 5558–5579.
- Stubenrauch, C.J.; Rossow, W.B.; Kinne, S.; Ackerman, S.; Cesana, G.; Chepfer, H.; Di Girolamo, L.; Getzewich, B.; Guignard, A.; Heidinger, A. Assessment of global cloud datasets from satellites: Project and database initiated by the GEWEX radiation panel. Bull. Am. Meteorol. Soc. 2013, 94, 1031–1049.
- Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204.
- Huertas-Tato, J.; Rodríguez-Benítez, F.J.; Arbizu-Barrena, C.; Aler-Mur, R.; Galvan-Leon, I.; Pozo-Vázquez, D. Automatic Cloud-Type Classification Based on the Combined Use of a Sky Camera and a Ceilometer. J. Geophys. Res. Atmos. 2017, 122, 11045–11061.
- Min, M.; Bai, C.; Guo, J.; Sun, F.; Liu, C.; Wang, F.; Xu, H.; Tang, S.; Li, B.; Di, D.; et al. Estimating Summertime Precipitation from Himawari-8 and Global Forecast System Based on Machine Learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2557–2570.
- Rossow, W.B.; Schiffer, R.A. ISCCP cloud data products. Bull. Am. Meteorol. Soc. 1991, 72, 2–20.
- Hahn, C.J.; Rossow, W.B.; Warren, S.G. ISCCP cloud properties associated with standard cloud types identified in individual surface observations. J. Clim. 2001, 14, 11–28.
- Mouri, K.; Izumi, T.; Suzue, H.; Yoshida, R. Algorithm Theoretical Basis Document of cloud type/phase product. Meteorol. Satell. Cent. Tech. Note 2016, 61, 19–31.
- Suzue, H.; Imai, T.; Mouri, K. High-resolution cloud analysis information derived from Himawari-8 data. Meteorol. Satell. Cent. Tech. Note 2016, 61, 43–51.
- Ackerman, S.A.; Strabala, K.I.; Menzel, W.P.; Frey, R.A.; Moeller, C.C.; Gumley, L.E. Discriminating clear sky from clouds with MODIS. J. Geophys. Res. Atmos. 1998, 103, 32141–32157.
- Purbantoro, B.; Aminuddin, J.; Manago, N.; Toyoshima, K.; Lagrosas, N.; Sumantyo, J.T.S.; Kuze, H. Comparison of Cloud Type Classification with Split Window Algorithm Based on Different Infrared Band Combinations of Himawari-8 Satellite. Adv. Remote Sens. 2018, 7, 218–234.
- Poulsen, C.; Egede, U.; Robbins, D.; Sandeford, B.; Tazi, K.; Zhu, T. Evaluation and comparison of a machine learning cloud identification algorithm for the SLSTR in polar regions. Remote Sens. Environ. 2020, 248, 111999.
- Zhang, C.; Yu, F.; Wang, C.; Yang, J. Three-dimensional extension of the unit-feature spatial classification method for cloud type. Adv. Atmos. Sci. 2011, 28, 601–611.
- Gao, T.; Zhao, S.; Chen, F.; Sun, X.; Liu, L. Cloud Classification Based on Structure Features of Infrared Images. J. Atmos. Ocean. Technol. 2011, 28, 410–417.
- Heinle, A.; Macke, A.; Srivastav, A. Automatic cloud classification of whole sky images. Atmos. Meas. Tech. 2010, 3, 557–567.
- Gómez-Chova, L.; Camps-Valls, G.; Bruzzone, L.; Calpe-Maravilla, J. Mean map kernel methods for semisupervised cloud classification. IEEE Trans. Geosci. Remote Sens. 2009, 48, 207–220.
- Taravat, A.; Del Frate, F.; Cornaro, C.; Vergari, S. Neural networks and support vector machine algorithms for automatic cloud classification of whole-sky ground-based images. IEEE Geosci. Remote Sens. Lett. 2014, 12, 666–670.
- Afzali Gorooh, V.; Kalia, S.; Nguyen, P.; Hsu, K.-l.; Sorooshian, S.; Ganguly, S.; Nemani, R. Deep Neural Network Cloud-Type Classification (DeepCTC) Model and Its Application in Evaluating PERSIANN-CCS. Remote Sens. 2020, 12, 316.
- Zhang, J.; Liu, P.; Zhang, F.; Song, Q. CloudNet: Ground-based cloud classification with deep convolutional neural network. Geophys. Res. Lett. 2018, 45, 8665–8672.
- Zhang, Y.; Cai, P.; Tao, R.; Wang, J.; Tian, W. Cloud Detection for Remote Sensing Images Using Improved U-Net. Bull. Surv. Mapp. 2020, 3, 17–20.
- Chai, D.; Newsam, S.; Zhang, H.K.; Qiu, Y.; Huang, J. Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks. Remote Sens. Environ. 2019, 225, 307–316.
- Drönner, J.; Korfhage, N.; Egli, S.; Mühling, M.; Thies, B.; Bendix, J.; Freisleben, B.; Seeger, B. Fast Cloud Segmentation Using Convolutional Neural Networks. Remote Sens. 2018, 10, 1782.
- Guo, Q.; Lu, F.; Wei, C.; Zhang, Z.; Yang, J. Introducing the New Generation of Chinese Geostationary Weather Satellites, Fengyun-4. Bull. Am. Meteorol. Soc. 2017, 98, 1637–1658.
- Bessho, K.; Date, K.; Hayashi, M.; Ikeda, A.; Imai, T.; Inoue, H.; Kumagai, Y.; Miyakawa, T.; Murata, H.; Ohno, T. An introduction to Himawari-8/9—Japan’s new-generation geostationary meteorological satellites. J. Meteorol. Soc. Jpn. Ser. II 2016, 94, 151–183.
- Ishida, H.; Nakajima, T.Y. Development of an unbiased cloud detection algorithm for a spaceborne multispectral imager. J. Geophys. Res. 2009, 114, 141–157.
- Letu, H.; Nagao, T.M.; Nakajima, T.Y.; Riedi, J.; Ishimoto, H.; Baran, A.J.; Shang, H.; Sekiguchi, M.; Kikuchi, M. Ice Cloud Properties from Himawari-8/AHI Next-Generation Geostationary Satellite: Capability of the AHI to Monitor the DC Cloud Generation Process. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3229–3239.
- Min, M.; Wang, P.; Campbell, J.R.; Zong, X.; Xia, J. Cirrus cloud macrophysical and optical properties over North China from CALIOP measurements. Adv. Atmos. Sci. 2011, 28, 653–664.
- Sassen, K.; Wang, Z.; Liu, D. Global distribution of cirrus clouds from CloudSat/Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) measurements. J. Geophys. Res. 2008, 113, D8.
- Powell, K.A.; Hu, Y.; Omar, A.; Vaughan, M.A.; Winker, D.M.; Liu, Z.; Hunt, W.H.; Young, S.A. Overview of the CALIPSO Mission and CALIOP Data Processing Algorithms. J. Atmos. Ocean. Technol. 2009, 26, 2310–2323.
- Vane, D.; Stephens, G.L. The CloudSat Mission and the A-Train: A Revolutionary Approach to Observing Earth’s Atmosphere. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; pp. 1–5.
- Wang, Z.; Sassen, K. Level 2 cloud scenario classification product process description and interface control document. CloudSat Proj. NASA Earth Syst. Sci. Pathfind. Mission 2007, 5, 50.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Lecture Notes in Computer Science; Springer Science + Business Media: Berlin, Germany, 2015; pp. 234–241.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 630–645.
- Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A.; Salakhudinov, R.; Zemel, R.; Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2048–2057.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
- Woo, S.H.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the Computer Vision—ECCV 2018, PT VII, Munich, Germany, 8–14 September 2018; pp. 3–19.
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Spectral Coverage | Central Wavelength | Spectral Bandwidth | Spatial Resolution | Main Applications
---|---|---|---|---
VIS/NIR | 0.47 µm | 0.45–0.49 µm | 1 km | Aerosol, visibility
VIS/NIR | 0.65 µm | 0.55–0.75 µm | 0.5 km | Fog, clouds
VIS/NIR | 0.825 µm | 0.75–0.90 µm | 1 km | Aerosol, vegetation
Shortwave IR | 1.375 µm | 1.36–1.39 µm | 2 km | Cirrus
Shortwave IR | 1.61 µm | 1.58–1.64 µm | 2 km | Cloud, snow
Shortwave IR | 2.25 µm | 2.1–2.35 µm | 2 km | Cloud phase, aerosol, vegetation
Midwave IR | 3.75 µm | 3.5–4.0 µm | 2 km | Clouds, fire, moisture, snow
Midwave IR | 3.75 µm | 3.5–4.0 µm | 4 km | Land surface
Water vapor | 6.25 µm | 5.8–6.7 µm | 4 km | Upper-level WV
Water vapor | 7.1 µm | 6.9–7.3 µm | 4 km | Midlevel WV
Longwave IR | 8.5 µm | 8.0–9.0 µm | 4 km | Volcanic ash, cloud-top phase
Longwave IR | 10.7 µm | 10.3–11.3 µm | 4 km | SST, LST
Longwave IR | 12.0 µm | 11.5–12.5 µm | 4 km | Clouds, low-level WV
Longwave IR | 13.5 µm | 13.2–13.8 µm | 4 km | Clouds, air temperature
Dataset | FY-4A L1 | Himawari-8 Cloud Classification Products | CloudSat 2B-CLDCLASS
---|---|---|---
Training dataset | 2018 and 2020 | 2018 and 2020 | Not available
Testing dataset | 2019 | 2019 | Not used
Evaluation dataset | 2019 | 2019 | 2019
Type | Plan 1 | Plan 2 | Plan 3 |
---|---|---|---|
Clear | 0.751 | 0.735 | 0.745 |
Ci | 0.501 | 0.448 | 0.490 |
Cs | 0.709 | 0.609 | 0.687 |
Dc | 0.721 | 0.568 | 0.701 |
Ac | 0.305 | 0.272 | 0.304 |
As | 0.505 | 0.415 | 0.486 |
Ns | 0.601 | 0.484 | 0.591 |
Cu | 0.364 | 0.346 | 0.369 |
Sc | 0.493 | 0.439 | 0.487 |
St | 0.460 | 0.315 | 0.461 |
Acc. | 0.768 | 0.727 | 0.759 |
Type | CS-CNN | U-Net++ | U-Net + CBAM | CLP-CNN |
---|---|---|---|---|
Clear | 0.741 | 0.734 | 0.745 | 0.751 |
Ci (Cirrus) | 0.461 | 0.447 | 0.480 | 0.501
Cs (Cirrostratus) | 0.674 | 0.665 | 0.683 | 0.709
Dc (Deep convection) | 0.682 | 0.677 | 0.692 | 0.721
Ac (Altocumulus) | 0.273 | 0.257 | 0.293 | 0.305
As (Altostratus) | 0.467 | 0.460 | 0.486 | 0.505
Ns (Nimbostratus) | 0.574 | 0.567 | 0.588 | 0.601
Cu (Cumulus) | 0.337 | 0.331 | 0.364 | 0.364
Sc (Stratocumulus) | 0.470 | 0.467 | 0.485 | 0.493
St (Stratus) | 0.429 | 0.420 | 0.441 | 0.460
Acc. | 0.747 | 0.743 | 0.756 | 0.768 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).