Multi-View Ground-Based Cloud Recognition by Transferring Deep Visual Information
Abstract

Since cloud images captured from different views exhibit extreme variations, multi-view ground-based cloud recognition is a challenging task. In this paper, we present a study of view shift in this field, focusing both on designing a proper feature representation and on learning distance metrics from sample pairs. Correspondingly, we propose transfer deep local binary patterns (TDLBP) and weighted metric learning (WML). On the one hand, to cope with view shift, i.e., variations in illumination, location, resolution, and occlusion, we first train a convolutional neural network (CNN) on cloud images, then extract local features from part summing maps (PSMs) derived from the feature maps, and finally maximize the occurrences of regions to obtain the final feature representation. On the other hand, the number of cloud images in each category varies greatly, leading to unbalanced similar pairs; hence, we propose a weighted strategy for metric learning. We validate the proposed method on three cloud datasets (MOC_e, IAP_e, and CAMS_e) collected by different meteorological organizations in China, and the experimental results show the effectiveness of the proposed method.
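The two ingredients of the abstract can be illustrated with a minimal sketch. Note this is not the authors' TDLBP/WML implementation: the function names (`lbp_codes`, `lbp_histogram`, `pair_weights`) and the specific choices (basic 8-neighbour LBP codes on a single 2D map standing in for a part summing map, and similar-pair weights inversely proportional to class size) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lbp_codes(fmap):
    """Basic 8-neighbour local binary pattern codes for a 2D map.

    Each interior pixel is compared with its 8 neighbours; neighbours
    greater than or equal to the centre contribute one bit to an 8-bit code.
    (Illustrative stand-in for encoding a CNN part summing map.)
    """
    h, w = fmap.shape
    center = fmap[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = fmap[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(fmap, bins=256):
    """Normalised histogram of LBP codes: a simple local-texture descriptor."""
    hist = np.bincount(lbp_codes(fmap).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

def pair_weights(labels):
    """Weight each sample inversely to its class size, so that similar pairs
    from large classes do not dominate metric learning (illustrative
    weighting, in the spirit of handling unbalanced similar pairs)."""
    labels = np.asarray(labels)
    counts = {c: int((labels == c).sum()) for c in np.unique(labels)}
    return np.array([1.0 / counts[c] for c in labels])
```

For example, `pair_weights([0, 0, 0, 1])` down-weights the three samples of the large class to 1/3 each while the singleton class keeps weight 1, so each class contributes comparably when pairs are formed.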
Cite This Article
Zhang, Z.; Li, D.; Liu, S.; Xiao, B.; Cao, X. Multi-View Ground-Based Cloud Recognition by Transferring Deep Visual Information. Appl. Sci. 2018, 8, 748.