
Deep Learning Applications of 3D Reconstruction and Visualization from Remote Sensing Imagery

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 6191

Special Issue Editors


Guest Editor
German Aerospace Center (DLR), Institute of Optical Sensor Systems, Rutherfordstr. 2, D-12489 Berlin, Germany
Interests: artificial intelligence as a counterpart to traditional photogrammetric approaches; remote sensing in search and rescue operations; calibration and validation of heterogeneous sensor concepts and their accuracy/quality

Guest Editor
Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
Interests: geometric and radiometric sensors; sensor fusion; calibration of imageries; signal/image processing; mission planning; navigation and position/orientation; machine learning; simultaneous localization and mapping; regulations and economic impact; agriculture; geosciences; urban area; architecture; monitoring/change detection; education; unmanned aerial vehicles (UAV)

Special Issue Information

Dear Colleagues,

Deep learning applications arise and thrive in various fields, including education, healthcare, marketing and advertising, cybersecurity, and natural language processing. The number of applications, new approaches, and network architectures has grown especially rapidly in remote sensing, where related research ranges from automation, enhanced spatial understanding, disaster management, and robotics to fundamental research.

Although some algorithms and approaches have been known for decades, exciting new methods are constantly emerging and being developed from novel combinations of them.

This Special Issue aims to cover recent advancements in deep learning methods in the field of 3D reconstruction and geo-visualization. Both original research and review articles are welcome. Topics include, but are not limited to, the following:

  •     Multi-spectral and hyperspectral remote sensing;
  •     Lidar and laser scanning;
  •     Geometric reconstruction;
  •     Physical modeling and signatures;
  •     Change detection;
  •     Image processing and pattern recognition;
  •     Remote sensing applications.

Prof. Dr. Henry Meißner
Prof. Dr. Francesco Nex
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • 3D reconstruction
  • visualization
  • disaster management
  • enhanced spatial understanding
  • algorithms

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


24 pages, 6629 KiB  
Article
UnDER: Unsupervised Dense Point Cloud Extraction Routine for UAV Imagery Using Deep Learning
by John Ray Bergado and Francesco Nex
Remote Sens. 2025, 17(1), 24; https://doi.org/10.3390/rs17010024 - 25 Dec 2024
Viewed by 749
Abstract
Extraction of dense 3D geographic information from ultra-high-resolution unmanned aerial vehicle (UAV) imagery unlocks a great number of mapping and monitoring applications. This is facilitated by a step called dense image matching, which tries to find pixels corresponding to the same object within overlapping images captured by the UAV from different locations. Recent developments in deep learning utilize deep convolutional networks to perform this dense pixel correspondence task. A common theme in these developments is to train the network in a supervised setting using available dense 3D reference datasets. However, in this work we propose a novel unsupervised dense point cloud extraction routine for UAV imagery, called UnDER. We propose a novel disparity-shifting procedure to enable the use of a stereo matching network pretrained on an entirely different typology of image data in the disparity-estimation step of UnDER. Unlike previously proposed disparity-shifting techniques for forming cost volumes, the goal of our procedure was to address the domain shift between the images that the network was pretrained on and the UAV images, by using prior information from the UAV image acquisition. We also developed a procedure for occlusion masking based on disparity consistency checking that uses the disparity image space rather than the object space proposed in a standard 3D reconstruction routine for UAV data. Our benchmarking results demonstrated significant improvements in quantitative performance, reducing the mean cloud-to-cloud distance by approximately 1.8 times the ground sampling distance (GSD) compared to other methods.
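The left-right disparity consistency check that underlies this kind of occlusion masking can be sketched in a few lines of NumPy. The function name, array layout, and tolerance below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, tol=1.0):
    """Left-right disparity consistency check in disparity image space.

    A pixel in the left image is flagged as occluded when its disparity
    disagrees by more than `tol` pixels with the disparity of its
    correspondence in the right image, or when that correspondence falls
    outside the right image.
    """
    h, w = disp_left.shape
    xs = np.tile(np.arange(w), (h, 1))                # column index grid
    x_right = np.round(xs - disp_left).astype(int)    # matched column in right image
    in_bounds = (x_right >= 0) & (x_right < w)
    ys = np.repeat(np.arange(h)[:, None], w, axis=1)
    d_right = disp_right[ys, np.clip(x_right, 0, w - 1)]
    # Occluded: correspondence out of bounds, or the two estimates disagree.
    return ~in_bounds | (np.abs(disp_left - d_right) > tol)
```

Pixels flagged by such a mask are discarded before the disparities are triangulated into a point cloud.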

18 pages, 5411 KiB  
Article
Leveraging Neural Radiance Fields for Large-Scale 3D Reconstruction from Aerial Imagery
by Max Hermann, Hyovin Kwak, Boitumelo Ruf and Martin Weinmann
Remote Sens. 2024, 16(24), 4655; https://doi.org/10.3390/rs16244655 - 12 Dec 2024
Viewed by 1497
Abstract
Conventional photogrammetric approaches struggle with low-texture, reflective, and transparent regions, so this study explores the application of Neural Radiance Fields (NeRFs), which have recently shown very impressive results in these areas, for large-scale 3D reconstruction of outdoor scenes. We evaluate three approaches: Mega-NeRF, Block-NeRF, and Direct Voxel Grid Optimization, focusing on their accuracy and completeness compared to ground truth point clouds. In addition, we analyze the effects of using multiple sub-modules, estimating the visibility by an additional neural network, and varying the density threshold for the extraction of the point cloud. For performance evaluation, we use benchmark datasets that correspond to the setting of standard flight campaigns and therefore typically have a nadir camera perspective and relatively little image overlap, which can be challenging for NeRF-based approaches that are typically trained with significantly more images and varying camera angles. We show that despite lower quality compared to classic photogrammetric approaches, NeRF-based reconstructions provide visually convincing results in challenging areas. Furthermore, our study shows that, in particular, increasing the number of sub-modules and predicting the visibility using an additional neural network improve the quality of the resulting reconstructions significantly.
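Accuracy and completeness against a ground-truth point cloud are commonly defined as the fraction of points lying within a distance threshold of the other cloud. The sketch below uses brute-force nearest neighbours and an illustrative threshold; it is a generic version of such metrics, not the paper's exact evaluation protocol.

```python
import numpy as np

def accuracy_completeness(recon, gt, tau=0.1):
    """Accuracy: fraction of reconstructed points within tau of the ground truth.
    Completeness: fraction of ground-truth points within tau of the reconstruction.

    Brute-force pairwise distances; fine for small clouds, use a KD-tree
    for realistic point counts. `recon` and `gt` are (N, 3) arrays.
    """
    d_rg = np.linalg.norm(recon[:, None, :] - gt[None, :, :], axis=-1).min(axis=1)
    d_gr = np.linalg.norm(gt[:, None, :] - recon[None, :, :], axis=-1).min(axis=1)
    return float((d_rg <= tau).mean()), float((d_gr <= tau).mean())
```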

19 pages, 2903 KiB  
Article
STs-NeRF: Novel View Synthesis of Space Targets Based on Improved Neural Radiance Fields
by Kaidi Ma, Peixun Liu, Haijiang Sun and Jiawei Teng
Remote Sens. 2024, 16(13), 2327; https://doi.org/10.3390/rs16132327 - 26 Jun 2024
Viewed by 1895
Abstract
Since the Neural Radiance Field (NeRF) was first proposed, a large number of studies dedicated to it have emerged. These methods achieved very good results in their respective contexts, but they are not sufficiently practical for our project. If we want to obtain novel images of satellites photographed in space by another satellite, we must face problems like inaccurate camera focal lengths and poor image texture. There are also some small structures on satellites that NeRF-like algorithms cannot render well. In these cases, the NeRF's performance cannot sufficiently meet the project's needs. In fact, the images rendered by the NeRF will have many incomplete structures, while the MipNeRF will blur the edges of the structures on the satellite and produce unrealistic colors. In response to these problems, we propose STs-NeRF, which improves the quality of the novel-view images through an encoding module and a new network structure. We found a method for calculating poses that is suitable for our dataset and that enhances the network's input learning effect by recoding the sampling points and viewing directions through a dynamic encoding (DE) module. Then, we input them into our layer-by-layer normalized multi-layer perceptron (LLNMLP). By simultaneously inputting points and directions into the network, we avoid the mutual influence between light rays, and through layer-by-layer normalization, we ease the model's overfitting from a training perspective. Since the real images should not be made public, we created a synthetic dataset and conducted a series of experiments. The experiments showed that our method achieves the best results in reconstructing captured satellite images, compared with the NeRF, the MipNeRF, the NeuS, and the NeRF2Mesh, and improves the Peak Signal-to-Noise Ratio (PSNR) by 19%. We also tested on public datasets, and our NeRF can still render acceptable images on datasets with better textures.
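The PSNR metric reported above is defined from the mean squared error between a rendered image and a reference image. A minimal implementation, assuming images scaled to [0, 1]:

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between a rendered and a reference image."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher values indicate renderings closer to the reference; identical images give infinite PSNR.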

Other


13 pages, 8320 KiB  
Technical Note
Unmanned Aerial Vehicle-Neural Radiance Field (UAV-NeRF): Learning Multiview Drone Three-Dimensional Reconstruction with Neural Radiance Field
by Li Li, Yongsheng Zhang, Zhipeng Jiang, Ziquan Wang, Lei Zhang and Han Gao
Remote Sens. 2024, 16(22), 4168; https://doi.org/10.3390/rs16224168 - 8 Nov 2024
Viewed by 1333
Abstract
In traditional 3D reconstruction using UAV images, only radiance information, which is treated as a geometric constraint, is used in feature matching, allowing for the restoration of the scene’s structure. After introducing radiance supervision, NeRF can adjust the geometry in the fixed-ray direction, resulting in a smaller search space and higher robustness. Considering the lack of NeRF construction methods for aerial scenarios, we propose a new NeRF point sampling method, which is generated using a UAV imaging model, compatible with a global geographic coordinate system, and suitable for a UAV view. We found that NeRF is optimized entirely based on the radiance while ignoring the direct geometry constraint. Therefore, we designed a radiance correction strategy that considers the incidence angle. Our method can complete point sampling in a UAV imaging scene, as well as simultaneously perform digital surface model construction and ground radiance information recovery. When tested on self-acquired datasets, the NeRF variant proposed in this paper achieved better reconstruction accuracy than the original NeRF-based methods. It also reached a level of precision comparable to that of traditional photogrammetry methods, and it is capable of outputting a surface albedo that includes shadow information.
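An incidence-angle radiance correction of the kind described can be sketched under a Lambertian assumption: the observed radiance is divided by the cosine of the angle between the surface normal and the illumination direction. The function name and the clipping threshold below are illustrative, not the paper's exact strategy.

```python
import numpy as np

def correct_radiance(radiance, normals, sun_dir):
    """Cosine (incidence-angle) correction of observed radiance.

    `radiance` is a (N,) array of per-point observations, `normals` an
    (N, 3) array of unit surface normals, and `sun_dir` the illumination
    direction. Assumes a Lambertian surface.
    """
    sun = sun_dir / np.linalg.norm(sun_dir)   # normalise illumination direction
    cos_i = normals @ sun                     # per-point incidence cosine
    cos_i = np.clip(cos_i, 1e-3, None)        # avoid blow-up at grazing angles
    return radiance / cos_i
```

Points lit head-on are left unchanged, while obliquely lit points are boosted to recover an illumination-independent albedo-like quantity.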
