
New Deep Learning Paradigms for Multisource Remote Sensing Data Fusion and Classification

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 4657

Special Issue Editors

Guest Editor
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen 518172, China
Interests: AI internet of things; machine learning; satellite remote sensing

Guest Editor
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Kowloon TU428, Hong Kong
Interests: remote sensing; computer vision; deep learning

Guest Editor
Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
Interests: computer vision; pattern recognition; machine learning; deep learning; sensor reconfiguration; anomaly detection

Guest Editor
School of Remote Sensing and Geomatics Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: deep learning; computer vision; remote sensing; semantic segmentation; transformer

Special Issue Information

Dear Colleagues,

Leveraging multisource remote sensing images for earth mapping and monitoring has drawn significant attention in large-scale geoscience applications. Numerous methods have been developed for data fusion and classification based on novel deep learning models trained in a fully supervised manner. Although deep learning has shown dominance in multisource image fusion and classification, it still encounters several issues in practice, such as limited labeled samples for model training, weak representational capability for multisource data from heterogeneous domains, and performance degradation in cross-domain tasks. New learning paradigms have emerged to address these problems and to promote multimodal collaboration and cross-modal analysis in remote sensing, including self-supervised, weakly supervised, transfer, and federated learning. These paradigms significantly improve the generalization and robustness of deep learning models and open up new possibilities and challenges for novel training and optimization algorithms in remote sensing.

This Special Issue aims to highlight innovative research on novel deep learning paradigms for multisource remote sensing data fusion and classification. Topics range from multisource data integration to downstream remote sensing applications. Submissions presenting advanced deep models with new training and optimization strategies for remote sensing applications are welcome.

Articles may address, but are not limited to, the following topics:

  • Novel fusion strategies for multisource remote sensing data;
  • Weakly and self-supervised deep learning in remote sensing image classification;
  • Remote sensing data fusion with generative models;
  • Transfer learning in remote sensing image classification and segmentation;
  • Cross-modal analysis in remote sensing;
  • Land-cover/use mapping using multisource remote sensing data;
  • Multisource remote sensing applications.

Dr. Man On Pun
Dr. Xiaokang Zhang
Dr. Claudio Piciarelli
Dr. Libo Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multi-modal fusion
  • cross-modal analysis
  • deep transfer learning
  • domain adaptation
  • semi-supervised learning
  • active learning
  • federated learning
  • weakly supervised learning
  • self-supervised learning
  • few-shot learning
  • unsupervised representation learning
  • adversarial training
  • generative models
  • semantic segmentation
  • scene classification
  • land cover mapping
  • change detection
  • pansharpening

Published Papers (4 papers)


Research

21 pages, 60743 KiB  
Article
Deep Learning Hyperspectral Pansharpening on Large-Scale PRISMA Dataset
by Simone Zini, Mirko Paolo Barbato, Flavio Piccoli and Paolo Napoletano
Remote Sens. 2024, 16(12), 2079; https://doi.org/10.3390/rs16122079 - 8 Jun 2024
Cited by 1 | Viewed by 1047
Abstract
Hyperspectral pansharpening is crucial for improving the usability of images in various applications. However, it remains underexplored due to a scarcity of data. The primary goal of pansharpening is to enhance the spatial resolution of hyperspectral images by reconstructing missing spectral information without compromising consistency with the original data. This paper addresses the data gap by presenting a new hyperspectral dataset specifically designed for pansharpening and by evaluating several deep learning strategies on it. The new dataset has two crucial features that make it invaluable for deep learning hyperspectral pansharpening research: (1) it presents the highest cardinality of images in the state of the art, making it the first statistically relevant dataset for hyperspectral pansharpening evaluation, and (2) it includes a wide variety of scenes, ensuring robust generalization capabilities for various approaches. The data, collected by the ASI PRISMA satellite, cover about 262,200 km2, and their heterogeneity is ensured by a random sampling of the Earth’s surface. The analysis of the deep learning methods consists of adapting these approaches to the PRISMA hyperspectral data and evaluating their performance, quantitatively and qualitatively, in this new scenario. The investigation included two settings: Reduced Resolution (RR) to evaluate the techniques in a controlled environment, and Full Resolution (FR) for a real-world evaluation. For completeness, machine-learning-free approaches were also included in both scenarios. Our comprehensive analysis reveals that data-driven neural network methods significantly outperform traditional approaches, demonstrating superior adaptability and performance in hyperspectral pansharpening under both RR and FR protocols. Full article
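The Reduced Resolution protocol mentioned in the abstract can be illustrated with a minimal numpy sketch: degrade the hyperspectral cube by a known factor, apply a pansharpening step (here a naive nearest-neighbour upsampler stands in for a trained model — a deliberate placeholder, not the paper's method), and score the result against the original full-resolution cube, which serves as ground truth. The degradation here is plain block averaging; real protocols typically apply an MTF-matched filter first.

```python
import numpy as np

def downsample(cube, r):
    """Block-average an (H, W, B) cube by factor r (simplified Wald-style degradation)."""
    H, W, B = cube.shape
    return cube[:H - H % r, :W - W % r].reshape(H // r, r, W // r, r, B).mean(axis=(1, 3))

def upsample(cube, r):
    """Naive nearest-neighbour upsampling standing in for a pansharpening model."""
    return cube.repeat(r, axis=0).repeat(r, axis=1)

def rmse(a, b):
    """Root-mean-square error between two cubes of equal shape."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Reduced-Resolution evaluation: degrade, "sharpen", score against the original.
rng = np.random.default_rng(0)
hs = rng.random((32, 32, 8))      # synthetic hyperspectral cube (H, W, bands)
r = 4                             # resolution ratio
lr = downsample(hs, r)            # simulated low-resolution input
sr = upsample(lr, r)              # model output (placeholder)
print(rmse(sr, hs))               # lower is better; the original cube is the reference
```

Swapping the placeholder `upsample` for a learned model is the only change needed to score a real method under this protocol.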

19 pages, 11704 KiB  
Article
A Method for Underwater Acoustic Target Recognition Based on the Delay-Doppler Joint Feature
by Libin Du, Zhengkai Wang, Zhichao Lv, Dongyue Han, Lei Wang, Fei Yu and Qing Lan
Remote Sens. 2024, 16(11), 2005; https://doi.org/10.3390/rs16112005 - 2 Jun 2024
Viewed by 601
Abstract
To overcome the limitations of identifying complex underwater acoustic targets from a single Time–Frequency (TF) signal feature, this paper designs a method that recognizes underwater targets based on a Delay-Doppler joint feature. First, the method uses the symplectic finite Fourier transform (SFFT) to extract Delay-Doppler features of underwater acoustic signals, analyzes the Time–Frequency features in parallel, and combines the Delay-Doppler (DD) and Time–Frequency features into a joint feature (TF-DD). Three types of convolutional neural networks are used to verify that TF-DD effectively improves target recognition accuracy. Second, the paper designs a target recognition model (TF-DD-CNN) that takes the joint feature as input, which simplifies the network's overall structure and improves training efficiency. The study employs ship-radiated noise to validate the efficacy of TF-DD-CNN for target identification. The results demonstrate that the joint feature and the TF-DD-CNN model can proficiently detect ships, and the model notably enhances detection precision. Full article
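A minimal sketch of the joint-feature construction described above, under two labeled assumptions: the spectrogram uses a plain Hann-windowed FFT, and the SFFT follows the OTFS-style convention (FFT over the time axis, inverse FFT over the frequency axis). Window sizes, hop, and the stacking layout are illustrative choices, not the paper's settings.

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    """Magnitude spectrogram: the Time-Frequency (TF) feature."""
    frames = np.stack([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))        # (time, freq)

def sfft(tf_grid):
    """SFFT sketch (OTFS-style convention): FFT over time, inverse FFT over
    frequency, mapping the TF grid to the Delay-Doppler domain."""
    return np.fft.ifft(np.fft.fft(tf_grid, axis=0), axis=1)

rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 0.05 * np.arange(1024)) + 0.1 * rng.standard_normal(1024)

tf = stft_mag(signal)                 # Time-Frequency feature
dd = np.abs(sfft(tf))                 # Delay-Doppler feature
joint = np.stack([tf, dd])            # 2-channel joint (TF-DD) input for a CNN
print(joint.shape)
```

Stacking the two features as channels lets a standard 2-D CNN consume them directly, which matches the joint-input idea behind TF-DD-CNN.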

22 pages, 10413 KiB  
Article
Bridging Domains and Resolutions: Deep Learning-Based Land Cover Mapping without Matched Labels
by Shuyi Cao, Yubin Tang, Enping Yan, Jiawei Jiang and Dengkui Mo
Remote Sens. 2024, 16(8), 1449; https://doi.org/10.3390/rs16081449 - 19 Apr 2024
Viewed by 926
Abstract
High-resolution land cover mapping is crucial in various disciplines but is often hindered by the lack of accurately matched labels. Our study introduces an innovative deep learning methodology for effective land cover mapping, independent of matched labels. The approach comprises three main components: (1) An advanced fully convolutional neural network, augmented with super-resolution features, to refine labels; (2) The application of an instance-batch normalization network (IBN), leveraging these enhanced labels from the source domain, to generate 2-m resolution land cover maps for test sites in the target domain; (3) Noise assessment tests to evaluate the impact of varying noise levels on the model’s mapping accuracy using external labels. The model achieved an overall accuracy of 83.40% in the target domain using endogenous super-resolution labels. In contrast, employing exogenous, high-precision labels from the National Land Cover Database in the source domain led to a notable accuracy increase of 2.55%, reaching 85.48%. This improvement highlights the model’s enhanced generalizability and performance during domain shifts, attributed significantly to the IBN layer. Our findings reveal that, despite the absence of native high-precision labels, the utilization of high-quality external labels can substantially benefit the development of precise land cover mapping, underscoring their potential in scenarios with unmatched labels. Full article
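The IBN layer credited above with the model's robustness under domain shift can be sketched in a few lines: following the common IBN-a design, the first half of the channels is instance-normalized (suppressing per-image style and domain statistics) while the rest is batch-normalized (preserving content discrimination). This is a generic numpy illustration of the layer type, not the paper's implementation; affine scale/shift parameters are omitted.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) slice over its spatial dimensions."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    """Normalize each channel over batch and spatial dimensions."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def ibn(x, split=0.5):
    """IBN-a style layer: instance-norm the first `split` fraction of channels
    (style/domain invariance), batch-norm the remainder (content)."""
    c = int(x.shape[1] * split)
    return np.concatenate([instance_norm(x[:, :c]), batch_norm(x[:, c:])], axis=1)

x = np.random.default_rng(2).standard_normal((4, 8, 16, 16))   # (N, C, H, W)
y = ibn(x)
print(y.shape)
```

Because the instance-normalized channels discard per-image statistics, features built on them transfer better when the source and target domains have different imaging conditions — the mechanism behind the accuracy gain reported under domain shift.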

20 pages, 1863 KiB  
Article
Denoising Diffusion Probabilistic Model with Adversarial Learning for Remote Sensing Super-Resolution
by Jialu Sui, Qianqian Wu and Man-On Pun
Remote Sens. 2024, 16(7), 1219; https://doi.org/10.3390/rs16071219 - 30 Mar 2024
Viewed by 1174
Abstract
Single Image Super-Resolution (SISR) for image enhancement enables the generation of high spatial resolution in Remote Sensing (RS) images without incurring additional costs. This approach offers a practical solution to obtain high-resolution RS images, addressing challenges posed by the expense of acquisition equipment and unpredictable weather conditions. To address the over-smoothing of the previous SISR models, the diffusion model has been incorporated into RS SISR to generate Super-Resolution (SR) images with enhanced textural details. In this paper, we propose a Diffusion model with Adversarial Learning Strategy (DiffALS) to refine the generative capability of the diffusion model. DiffALS integrates an additional Noise Discriminator (ND) into the training process, employing an adversarial learning strategy on the data distribution learning. This ND guides noise prediction by considering the general correspondence between the noisy image in each step, thereby enhancing the diversity of generated data and the detailed texture prediction of the diffusion model. Furthermore, considering that the diffusion model may exhibit suboptimal performance on traditional pixel-level metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), we showcase the effectiveness of DiffALS through downstream semantic segmentation applications. Extensive experiments demonstrate that the proposed model achieves remarkable accuracy and notable visual enhancements. Compared to other state-of-the-art methods, our model establishes an improvement of 189 for Fréchet Inception Distance (FID) and 0.002 for Learned Perceptual Image Patch Similarity (LPIPS) in a SR dataset, namely Alsat, and achieves improvements of 0.4%, 0.3%, and 0.2% for F1 score, MIoU, and Accuracy, respectively, in a segmentation dataset, namely Vaihingen. Full article
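The adversarial learning strategy described above can be sketched at the loss level: the generator (denoiser) minimizes the standard DDPM noise-matching MSE plus an adversarial term that asks a Noise Discriminator to rate its predicted noise as real, while the discriminator learns to separate true from predicted noise. Everything below is a toy stand-in — the discriminator is a fixed sigmoid score rather than a trained network, and the weighting `lam` is an assumption, not a value from the paper.

```python
import numpy as np

def bce(p, label, eps=1e-12):
    """Binary cross-entropy for discriminator probabilities in (0, 1)."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(label * np.log(p) + (1 - label) * np.log(1 - p)).mean())

def discriminator(noise):
    """Toy noise discriminator: fixed sigmoid score (a real ND is a trained net)."""
    return 1.0 / (1.0 + np.exp(-noise.mean(axis=1)))

# Placeholder tensors standing in for network outputs at one diffusion step.
rng = np.random.default_rng(3)
true_noise = rng.standard_normal((4, 64))                        # epsilon used to corrupt x_0
pred_noise = true_noise + 0.1 * rng.standard_normal((4, 64))     # denoiser estimate

# Generator objective: DDPM noise-matching MSE + adversarial term that asks
# the discriminator to rate predicted noise as "real" (label 1).
mse = float(((pred_noise - true_noise) ** 2).mean())
lam = 0.01                                                       # assumed weighting
gen_loss = mse + lam * bce(discriminator(pred_noise), 1.0)

# Discriminator objective: true noise is "real" (1), predicted noise is "fake" (0).
disc_loss = bce(discriminator(true_noise), 1.0) + bce(discriminator(pred_noise), 0.0)
print(gen_loss, disc_loss)
```

The adversarial term pushes predicted noise toward the distribution of true noise rather than only toward its pointwise values, which is the claimed source of the improved texture diversity.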
