Learning a Dilated Residual Network for SAR Image Despeckling
Abstract

In this paper, to move beyond the limits of traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a deep learning approach that learns a non-linear end-to-end mapping between noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is built on dilated convolutions, which enlarge the receptive field while keeping the filter size and layer depth unchanged, yielding a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to preserve image details and alleviate the vanishing-gradient problem. In both quantitative and visual assessments, the proposed method outperforms traditional and state-of-the-art despeckling methods, especially under strong speckle noise.
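The key property claimed for dilated convolutions is that the receptive field grows with the dilation rate while the kernel size and layer count stay fixed. A minimal sketch of this arithmetic, assuming stride-1 3×3 convolutions and a hypothetical rise-then-fall dilation schedule of the kind used in dilated despeckling networks (the exact rates here are illustrative, not taken from the paper):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each layer with dilation d and kernel k widens the receptive
    field by (k - 1) * d pixels.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Hypothetical schedule: dilation rates rise then fall across 7 layers.
dilated = receptive_field(3, [1, 2, 3, 4, 3, 2, 1])
# Same depth and kernel size, but ordinary (dilation-1) convolutions.
plain = receptive_field(3, [1] * 7)
print(dilated, plain)  # 33 15
```

With the same seven 3×3 layers, the dilated stack covers a 33×33 neighborhood versus 15×15 for plain convolutions, which is why dilation can stay lightweight while still capturing the wide spatial context needed to separate speckle from structure.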
Zhang, Q.; Yuan, Q.; Li, J.; Yang, Z.; Ma, X. Learning a Dilated Residual Network for SAR Image Despeckling. Remote Sens. 2018, 10, 196.