ShuffleCloudNet: A Lightweight Composite Neural Network-Based Method for Cloud Computation in Remote-Sensing Images
Abstract
1. Introduction
- Unlike traditional CNN methods, an efficient and lightweight network architecture for remote-sensing image cloud-amount computation is constructed, referred to as ShuffleCloudNet, which obtains better cloud-amount results with fewer model parameters (a minimal backbone sketch follows this list).
- To address the diversity of clouds and the complexity of backgrounds, a multi-scale confidence fusion (MSCF) module is proposed to improve cloud-amount computation across different scales, backgrounds, and cloud types. MSCF fuses multi-scale confidences into the final cloud-amount result, following a voting principle modeled on the operational practice of multi-person cloud-amount interpretation.
- A bag-of-words (BW) loss function is designed to optimize the training process and better distinguish complex cloud edges, small clouds, and cloud-like backgrounds.
- To verify the effectiveness of the proposed model, a cloud-amount calculation dataset, CTI_RSCloud, is built from GF-series and ZY-series satellite images. It includes thumbnail images, browse images, and cloud-amount annotations of remote-sensing images. The data are open-sourced at https://pan.baidu.com/s/1qgvc5j2dxCDl2ei05khxzg (accessed on 1 September 2022).
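As a rough illustration of the lightweight-classifier idea, here is a minimal sketch, assuming a PyTorch/torchvision environment, of a ShuffleNetV2 backbone re-headed for the paper's 11 cloud-amount levels. The class name and the plain re-heading are ours; the authors' full model additionally uses the composite L/S branches, MSCF, and the BW loss described below.

```python
import torch
import torch.nn as nn
from torchvision import models

class CloudAmountNet(nn.Module):
    """Hypothetical sketch: a ShuffleNetV2 classifier re-headed for the
    11 cloud-amount levels (0%, 10%, ..., 100%). Not the authors' exact
    architecture, which adds multi-scale branches and confidence fusion."""

    def __init__(self, num_levels: int = 11):
        super().__init__()
        self.backbone = models.shufflenet_v2_x1_0(weights=None)
        # Replace the 1000-class ImageNet head with an 11-level head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_levels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # logits over the 11 cloud-amount levels
```

At 1.0× width, such a backbone stays in the few-megabyte parameter range, which is the regime the comparison tables below target.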
2. Related Work
2.1. CNN-Based Method for Cloud Detection and Cloud-Amount Calculation
2.2. Lightweight Convolutional Neural Networks
2.3. Complex Neural Networks
3. Proposed Method
3.1. Model Description
3.2. ShuffleCloudNet
3.3. Multi-Scale Confidence Fusion (MSCF) Module
3.3.1. Multi-Scale Classification Information
3.3.2. Rules for Fusion Evaluation
3.3.3. Probability Distribution Prediction
- Find the cloud-amount class with the highest probability in the classification output, denoted Cn, with corresponding probability Pn, where 0 ≤ n ≤ 10 is a natural number;
- Determine the selection number M of adjacent cloud-amount levels around the highest cloud-amount level;
- Normalize the probabilities of the selected levels, P′i = Pi / Σj Pj, where the sum runs over the highest level and its M adjacent levels;
- The final cloud amount is the probability-weighted average of the selected levels, CA = Σi P′i · Li, where Li ∈ {0, 10, …, 100} is the cloud-amount level of the i-th selected class (a code sketch of these steps follows this list).
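A minimal NumPy sketch of these four steps, under our reading of the (partially garbled) original in which the M adjacent levels are split symmetrically, M/2 on each side of the arg-max level; the paper's exact windowing convention may differ:

```python
import numpy as np

def fuse_cloud_amount(probs: np.ndarray, m: int = 2) -> float:
    """Fuse an 11-class probability vector (P0..P10 for levels 0, 10, ..., 100)
    into a single cloud amount, following the four steps above."""
    levels = np.arange(11) * 10                    # cloud-amount levels 0..100
    n = int(np.argmax(probs))                      # step 1: arg-max level index
    half = m // 2                                  # assumed: M/2 neighbors per side
    lo, hi = max(0, n - half), min(10, n + half)   # step 2: adjacent window
    window = probs[lo:hi + 1]
    weights = window / window.sum()                # step 3: renormalize in window
    return float(weights @ levels[lo:hi + 1])      # step 4: weighted cloud amount
```

With M = 2, the best setting in Section 4.4's fusion table, only the arg-max level and its immediate neighbors contribute to the final value; M = 0 reduces to plain arg-max classification.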
4. Experimental Results
4.1. Experimental Data
- CTI_RSCloud: This mainly includes 1829 browse images of remote-sensing scenes and an equal number of thumbnail images, mainly from the GF-1, GF-2, and ZY series of satellites.
- GF-1 WFV: This was created by the SENDIMAGE lab. The dataset contains 108 complete scenes at level 2A, where all masks are labeled with cloud, cloud shadow, clear sky, and no-value pixels. The dataset has a spatial resolution of 16 m and consists of one NIR band and three visible bands. Since our study concentrates on detecting cloud pixels, all other mask classes are treated as background (i.e., non-cloud pixels). Specifically, the cloud shadow, clear sky, and no-value pixels in the original masks are set to 0, and the cloud pixels are set to 1 (a binarization sketch follows this list).
- AIR-CD: This is based on GF-2 high-resolution imagery and includes 34 complete scenes collected by the GF-2 satellite over different regions of China between February 2017 and November 2017. AIR-CD is one of the earliest publicly available cloud-detection datasets built from GF-2 imagery. In addition, the scenes in AIR-CD are more complex than in previous datasets, making high-accuracy cloud detection and cloud-amount calculation more challenging. The dataset contains various land-cover types, including urban, wilderness, snow, and forest. It consists of near-infrared and visible bands with a spatial resolution of 4 m and a scene size of 7300 × 6908 pixels. Considering the radiation difference between the PMS1 and PMS2 sensors of the GF-2 imaging system, scenes from both sensors are used as experimental data to ensure the generalization ability of the model.
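As referenced in the GF-1 WFV item above, the mask simplification can be sketched in one line; `cloud_label` is a hypothetical placeholder for the dataset's integer code for cloud pixels, which should be checked against the actual GF-1 WFV label encoding:

```python
import numpy as np

def binarize_cloud_mask(mask: np.ndarray, cloud_label: int) -> np.ndarray:
    """Collapse a multi-class mask (cloud, cloud shadow, clear sky, no-value)
    to binary: cloud pixels -> 1, everything else -> 0."""
    return (mask == cloud_label).astype(np.uint8)
```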
4.2. Data Preprocessing
1. CTI_RSCloud: The main elements of preprocessing are data filtering, cloud-amount level classification, cloud-amount judgment rules, and sample annotation:
   - Selection of remote-sensing images
   - Cloud-amount level division (a quantization sketch follows this list)
   - Cloud-amount interpretation rules
2. GF-1 WFV and AIR-CD
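For the cloud-amount level division step, one plausible quantization rule is rounding fractional coverage to the nearest of the 11 levels; this is our assumption for illustration, since the paper's interpretation rules (e.g., multi-person voting) may assign levels differently:

```python
def coverage_to_level(cloud_fraction: float) -> int:
    """Assumed rule: map fractional cloud coverage in [0, 1] to the nearest
    of the 11 levels 0, 10, ..., 100 used throughout the paper."""
    clipped = max(0.0, min(1.0, cloud_fraction))
    return int(round(clipped * 10)) * 10
```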
4.3. Implementation Details
4.3.1. Evaluation Metrics
4.3.2. Comparative Models
4.3.3. Experimental Setup
4.4. Ablation Experiments
1. Effect of ShuffleCloudNet-L and ShuffleCloudNet-S
2. Effect of MSCF
3. Effect of BW loss
4.5. Comparison with State-of-the-Art Methods
1. Quantitative analysis
2. Qualitative analysis
4.6. Limitations
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Zhang, Y.; Rossow, W.B.; Lacis, A.A.; Oinas, V.; Mishchenko, M.I. Calculation of radiative fluxes from the surface to top of atmosphere based on ISCCP and other global data sets: Refinements of the radiative transfer model and the input data. J. Geophys. Res. Atmos. 2004, 109, D19105.
2. Lu, D. Detection and substitution of clouds/hazes and their cast shadows on IKONOS images. Int. J. Remote Sens. 2007, 28, 4027–4035.
3. Mackie, S.; Merchant, C.J.; Embury, O.; Francis, P. Generalized Bayesian cloud detection for satellite imagery. Part 1: Technique and validation for night-time imagery over land and sea. Int. J. Remote Sens. 2010, 31, 2595–2621.
4. He, Q.-J. A daytime cloud detection algorithm for FY-3A/VIRR data. Int. J. Remote Sens. 2011, 32, 6811–6822.
5. Marais, I.V.Z.; Preez, J.A.D.; Steyn, W.H. An optimal image transform for threshold-based cloud detection using heteroscedastic discriminant analysis. Int. J. Remote Sens. 2011, 32, 1713–1729.
6. Han, Y.; Kim, B.; Kim, Y.; Lee, W.H. Automatic cloud detection for high spatial resolution multi-temporal images. Remote Sens. Lett. 2014, 5, 601–608.
7. Lin, C.H.; Lin, B.Y.; Lee, K.Y.; Chen, Y.C. Radiometric normalization and cloud detection of optical satellite images using invariant pixels. ISPRS J. Photogramm. Remote Sens. 2015, 106, 107–117.
8. Fisher, A. Cloud and Cloud-Shadow Detection in SPOT5 HRG Imagery with Automated Morphological Feature Extraction. Remote Sens. 2014, 6, 776–800.
9. Bai, T.; Li, D.; Sun, K.; Chen, Y.; Li, W. Cloud Detection for High-Resolution Satellite Imagery Using Machine Learning and Multi-Feature Fusion. Remote Sens. 2016, 8, 715.
10. Wieland, M.; Li, Y.; Martinis, S. Multi-sensor cloud and cloud shadow segmentation with a convolutional neural network. Remote Sens. Environ. 2019, 230, 111203.
11. Tian, M.; Chen, H.; Liu, G. Cloud Detection and Classification for S-NPP FSR CRIS Data Using Supervised Machine Learning. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019.
12. Ji, S.; Dai, P.; Lu, M.; Zhang, Y. Simultaneous Cloud Detection and Removal From Bitemporal Remote Sensing Images Using Cascade Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 732–748.
13. Sudhakaran, S.; Lanz, O. Convolutional Long Short-Term Memory Networks for Recognizing First Person Interactions. In Proceedings of the IEEE International Conference on Computer Vision Workshop, Venice, Italy, 22–29 October 2017.
14. Jeppesen, J.H.; Jacobsen, R.H.; Inceoglu, F.; Toftegaard, T.S. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sens. Environ. 2019, 229, 247–259.
15. Xie, W.; Yang, J.; Li, Y.; Lei, J.; Zhong, J.; Li, J. Discriminative feature learning constrained unsupervised network for cloud detection in remote sensing imagery. Remote Sens. 2020, 12, 456.
16. Dai, P.; Ji, S.; Zhang, Y. Gated convolutional networks for cloud removal from bi-temporal remote sensing images. Remote Sens. 2020, 12, 3427.
17. Li, X.; Zheng, H.; Han, C.; Zheng, W.; Chen, H.; Jing, Y.; Dong, K. SFRS-Net: A cloud-detection method based on deep convolutional neural networks for GF-1 remote-sensing images. Remote Sens. 2021, 13, 2910.
18. Ma, N.; Sun, L.; Zhou, C.; He, Y. Cloud detection algorithm for multi-satellite remote sensing imagery based on a spectral library and 1D convolutional neural network. Remote Sens. 2021, 13, 3319.
19. Xie, F.; Shi, M.; Shi, Z.; Yin, J.; Zhao, D. Multilevel Cloud Detection in Remote Sensing Images Based on Deep Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3631–3640.
20. Li, Z.; Shen, H.; Cheng, Q.; Liu, Y.; You, S.; He, Z. Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors. ISPRS J. Photogramm. Remote Sens. 2019, 150, 197–212.
21. Shao, Z.; Pan, Y.; Diao, C.; Cai, J. Cloud Detection in Remote Sensing Images Based on Multiscale Features-Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4062–4076.
22. Yang, J.; Guo, J.; Yue, H.; Liu, Z.; Hu, H.; Li, K. CDnet: CNN-Based Cloud Detection for Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6195–6211.
23. Mohajerani, S.; Krammer, T.A.; Saeedi, P. Cloud Detection Algorithm for Remote Sensing Images Using Fully Convolutional Neural Networks. In Proceedings of the 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP), Vancouver, BC, Canada, 29–31 August 2018.
24. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
25. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
26. Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 1314–1324.
27. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
28. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
29. Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2018; pp. 122–138.
30. Xia, Y.; Wang, J. A dual neural network for kinematic control of redundant robot manipulators. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2001, 31, 147–154.
31. Zhang, Y.; Wang, J. Obstacle avoidance for kinematically redundant manipulators using a dual neural network. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 752–759.
32. Liu, S.; Wang, J. A simplified dual neural network for quadratic programming with its KWTA application. IEEE Trans. Neural Netw. 2006, 17, 1500–1510.
33. Chen, H.; Wu, L.; Dou, Q.; Qin, J.; Li, S.; Cheng, J.-Z.; Ni, D.; Heng, P.-A. Ultrasound standard plane detection using a composite neural network framework. IEEE Trans. Cybern. 2017, 47, 1576–1586.
34. Zhu, F.; Ma, Z.; Li, X.; Chen, G.; Chien, J.-T.; Xue, J.-H.; Guo, J. Image-text dual neural network with decision strategy for small-sample image classification. Neurocomputing 2019, 328, 182–188.
35. Tian, G.; Liu, J.; Yang, W. A dual neural network for object detection in UAV images. Neurocomputing 2021, 443, 292–301.
36. Li, Y.; Geng, T.; Li, A.; Yu, H. BCNN: Binary complex neural network. Microprocess. Microsyst. 2021, 87, 104359.
37. Li, Z.; Shen, H.; Li, H.; Xia, G.; Gamba, P.; Zhang, L. Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery. Remote Sens. Environ. 2017, 191, 342–358.
38. He, Q.; Sun, X.; Yan, Z.; Fu, K. DABNet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
39. Yan, Z.; Yan, M.; Sun, H.; Fu, K.; Hong, J.; Sun, J.; Zhang, Y.; Sun, X. Cloud and cloud shadow detection using multilevel feature fused segmentation network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1600–1604.
Cloud-Amount Level | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 |
---|---|---|---|---|---|---|---|---|---|---|---|
Probability | P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 |
Cloud-Amount Level | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | Total | Accuracy |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Total Number | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 55 | 100% |
M = 0 Correct Number | 5 | 4 | 4 | 3 | 4 | 4 | 3 | 3 | 3 | 3 | 4 | 40 | 72.73% |
M = 2 Correct Number | 5 | 4 | 4 | 4 | 4 | 5 | 4 | 4 | 5 | 5 | 5 | 49 | 89.09% |
M = 4 Correct Number | 4 | 4 | 4 | 3 | 3 | 3 | 3 | 2 | 3 | 4 | 3 | 36 | 65.45% |
M = 6 Correct Number | 4 | 4 | 4 | 3 | 3 | 4 | 3 | 3 | 3 | 4 | 3 | 38 | 69.09% |
M = 8 Correct Number | 5 | 4 | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | 38 | 69.09% |
Method | Precision (Top 3) (%) | Precision (Top 5) (%) | Params (MB) |
---|---|---|---|
ShuffleNetV2 [29] | 71.20 | 82.85 | 5.28 |
ShuffleCloudNet-L | 76.70 | 88.02 | 5.6 |
ShuffleCloudNet-S | 70.23 | 79.11 | 5.6 |
ShuffleCloudNet | 82.85 | 92.56 | 12.2 |
Method | Precision (Top 3) (%) | Precision (Top 5) (%) | Params (MB) |
---|---|---|---|
Baseline | 77.67 | 89.97 | 11.2 |
+MSCF | 82.85 | 92.56 | 12.2 |
Method | Precision (Top 3) (%) | Precision (Top 5) (%) | Params (MB) |
---|---|---|---|
BCE Loss | 77.99 | 87.70 | 12.2 |
BW Loss | 82.85 | 92.56 | 12.2 |
Method | CTI_RSCloud Precision (Top 3) (%) | CTI_RSCloud Precision (Top 5) (%) | AIR-CD Precision (Top 3) (%) | AIR-CD Precision (Top 5) (%) | GF-1 WFV Precision (Top 3) (%) | GF-1 WFV Precision (Top 5) (%) | Params (MB) | Inf Time (ms)
---|---|---|---|---|---|---|---|---|
SqueezeNet [27] | 73.46 | 84.47 | 82.35 | 94.12 | 67.92 | 79.25 | 4.8 | 1.3 |
MobileNetV2 [25] | 72.17 | 85.76 | 85.29 | 91.18 | 67.92 | 77.36 | 3.5 | 1.5 |
ShuffleNetV2 [29] | 77.67 | 89.03 | 88.24 | 97.06 | 63.21 | 76.42 | 5.2 | 1.5 |
MFFSNet backbone [39] | 81.88 | 90.61 | 94.12 | 100.0 | 71.70 | 82.08 | 63.1 | 215.3 |
MSCFF backbone [20] | 80.91 | 89.32 | 94.12 | 100.0 | 69.92 | 80.19 | 42 | 158.6 |
CDNet backbone [22] | 81.88 | 90.61 | 94.12 | 100.0 | 71.70 | 82.08 | 45.6 | 145.0 |
ShuffleCloudNet | 82.85 | 92.56 | 97.06 | 100.00 | 72.74 | 83.02 | 12.2 | 1.5 |