

Collaborative Learning for Multimodal Remote Sensing Analysis: Methods, Techniques and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 28 May 2025

Special Issue Editors


Guest Editor
Key Laboratory of Collaborative Intelligence Systems of Ministry of Education, School of Electronic Engineering, Xidian University, Xi’an 710071, China
Interests: artificial intelligence (in particular, machine learning, multiagent systems and their applications); formal methods (in particular, machine-learning-based model checking)

Guest Editor
DHI, Agern Alle 5, 2970 Hørsholm, Denmark
Interests: environmental change monitoring; machine learning/deep learning; geospatial artificial intelligence

Guest Editor
School of Electronic Engineering, Key Laboratory of Collaborative Intelligence Systems of Ministry of Education, Xidian University, Xi’an 710071, China
Interests: artificial intelligence; evolutionary computation; metaheuristic learning; memetic computing; image processing; hyperspectral image interpretation; change detection

Guest Editor
Key Laboratory of Collaborative Intelligence Systems, Ministry of Education, Xidian University, Xi'an, China
Interests: self-supervised learning; hyperspectral imaging; attention mechanism; spatial resolution; feature extraction

Special Issue Information

Dear Colleagues,

Collaborative Learning for Multimodal Remote Sensing Analysis is a rapidly evolving field that integrates diverse sensing modalities to enhance the understanding and interpretation of complex environmental and urban phenomena. This interdisciplinary approach leverages the strengths of various sensors, including optical, infrared, LiDAR, and radar, to provide a comprehensive view of the observed scenes. Collaborative analysis of multimodal data is crucial for improving the accuracy and reliability of remote sensing applications, such as environmental monitoring, disaster response, and urban planning.

In this context, the collaborative learning paradigm plays a pivotal role in fusing data from different sources, overcoming the limitations of individual sensors, and extracting more meaningful insights. It involves the development of algorithms that can effectively combine information from multiple modalities, leading to enhanced feature representation and improved decision-making processes.
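To make the fusion paradigm concrete, the sketch below shows one of the simplest collaborative arrangements: modality-specific encoders whose embeddings are combined before a shared decision head. This is a minimal illustrative example in PyTorch, not a method endorsed or required by this Special Issue; the channel counts, encoder layouts, and class count are placeholder assumptions, and contributed methods will typically involve far richer mechanisms (cross-attention, alignment losses, handling of missing modalities).

```python
# Minimal late-fusion sketch (illustrative only): two modality-specific
# encoders (e.g., optical and SAR patches) produce embeddings that are
# concatenated and passed through a shared classification head.
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    def __init__(self, optical_channels=4, sar_channels=2, embed_dim=64, num_classes=10):
        super().__init__()
        # One lightweight CNN encoder per modality (hypothetical architectures).
        self.optical_encoder = nn.Sequential(
            nn.Conv2d(optical_channels, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.sar_encoder = nn.Sequential(
            nn.Conv2d(sar_channels, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fusion head operates on the concatenated per-modality embeddings.
        self.head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, optical, sar):
        f_opt = self.optical_encoder(optical).flatten(1)  # (B, embed_dim)
        f_sar = self.sar_encoder(sar).flatten(1)          # (B, embed_dim)
        fused = torch.cat([f_opt, f_sar], dim=1)          # joint representation
        return self.head(fused)


# Example forward pass on random tensors standing in for co-registered patches.
model = LateFusionClassifier()
logits = model(torch.randn(8, 4, 32, 32), torch.randn(8, 2, 32, 32))
print(logits.shape)  # torch.Size([8, 10])
```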

This Special Issue is designed to bring together cutting-edge research on collaborative learning methods, techniques, and applications in the domain of multimodal remote sensing analysis. We seek contributions that showcase innovative approaches and significant advancements in this area. Topics of interest include, but are not limited to, the following:

  • Collaborative learning frameworks for multimodal data integration;
  • Theoretical foundations and algorithms for collaborative learning in remote sensing;
  • Applications of collaborative learning in environmental monitoring and ecological studies;
  • Collaborative learning for urban analytics and smart city development;
  • Multimodal data fusion techniques for disaster detection and management;
  • Collaborative learning for agricultural and forestry monitoring;
  • Advanced signal processing and feature extraction methods for collaborative learning;
  • Performance metrics and evaluation protocols for collaborative learning in remote sensing.

We encourage the submission of research articles, review articles, and short communications that contribute to the advancement of collaborative learning in multimodal remote sensing analysis. Your contributions will not only enrich the body of knowledge in this field but also foster interdisciplinary collaboration and innovation.

Dr. Mingyang Zhang
Dr. Puzhao Zhang
Dr. Xiangming Jiang
Dr. Fenlong Jiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • collaborative learning
  • remote sensing analysis
  • computational intelligence
  • regular optimization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

22 pages, 1613 KiB  
Article
I-PAttnGAN: An Image-Assisted Point Cloud Generation Method Based on Attention Generative Adversarial Network
by Wenwen Li, Yaxing Chen, Qianyue Fan, Meng Yang, Bin Guo and Zhiwen Yu
Remote Sens. 2025, 17(1), 153; https://doi.org/10.3390/rs17010153 - 4 Jan 2025
Abstract
The key to building a 3D point cloud map is to ensure the consistency and accuracy of point cloud data. However, the hardware limitations of LiDAR lead to a sparse and uneven distribution of point cloud data in the edge region, which brings many challenges to 3D map construction, such as low registration accuracy and high construction errors in the sparse regions. To solve these problems, this paper proposes the I-PAttnGAN network to generate point clouds with image-assisted approaches, which aims to improve the density and uniformity of sparse regions and enhance the representation ability of point cloud data in sparse edge regions for distant objects. I-PAttnGAN uses the normalized flow model to extract the point cloud attention weights dynamically and then integrates the point cloud weights into image features to learn the transformation relationship between the weighted image features and the point cloud distribution, so as to realize the adaptive generation of the point cloud density and resolution. Extensive experiments are conducted on ShapeNet and nuScenes datasets. The results show that I-PAttnGAN significantly improves the performance of generating high-quality, dense point clouds in low-density regions compared with existing methods: the Chamfer distance value is reduced by about 2 times, the Earth Mover’s distance value is increased by 1.3 times, and the F1 value is increased by about 1.5 times. In addition, the effectiveness of the newly added modules is verified by ablation experiments, and the experimental results show that these modules play a key role in the generation process. Overall, the proposed model shows significant advantages in terms of accuracy and efficiency, especially in generating complete spatial point clouds.
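For readers unfamiliar with the quality metrics cited in the abstract, the snippet below gives the standard symmetric Chamfer distance between two point clouds. It is an illustrative NumPy implementation of the textbook definition, not the authors' evaluation code; conventions (squared vs. unsquared distances, averaging, normalization) vary between papers, so reported values are comparable only under a fixed protocol.

```python
# Illustrative symmetric Chamfer distance between two 3D point sets.
# Generic definition only; not the evaluation code used in the paper.
import numpy as np


def chamfer_distance(p, q):
    """p: (N, 3) array, q: (M, 3) array of 3D points."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Mean squared distance from each point to its nearest neighbour in the other set.
    p_to_q = d2.min(axis=1).mean()
    q_to_p = d2.min(axis=0).mean()
    return p_to_q + q_to_p


# Example: a dense cloud versus a sparse, slightly noisy subsample of it.
rng = np.random.default_rng(0)
dense = rng.normal(size=(2048, 3))
sparse = dense[::8] + rng.normal(scale=0.01, size=(256, 3))
print(chamfer_distance(dense, sparse))
```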