Intelligent Processing and Application of UAV Remote Sensing Image Data

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 15 March 2025

Special Issue Editor


Prof. Dr. Haigang Sui
Guest Editor
The State Key Laboratory of Information Engineering in Surveying Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: remote sensing intelligent interpretation; GIS theory and application; unmanned autonomous aerial vehicles; multi-sensor integration; spatio-temporal big data analysis

Special Issue Information

Dear Colleagues,

Unmanned aerial vehicles (UAVs) carrying different remote sensing payloads have been widely used in agricultural monitoring, disaster emergency response, urban management, military applications, and other fields. UAVs represent a valid alternative or complementary solution to satellite platforms, especially for extremely high-resolution acquisitions over small or inaccessible areas, and they are not limited by revisit cycles. However, the processing, fusion, and comprehensive application of massive volumes of UAV remote sensing data have emerged as some of the most important issues in the community.

This Special Issue aims to collect new developments and methodologies, best practices, and applications of UAVs in the intelligent processing and application of remote sensing image data. Topics of interest include, but are not limited to, the following:

  • Fine 3D reconstruction of buildings/structures
  • Autonomous 3D reconstruction of indoor/underground environments (shopping malls, train stations, underground car parks, catacombs, karst caves, etc.)
  • UAV online target detection and tracking
  • Intelligent interpretation of UAV video/image (image classification, feature extraction, target detection, change detection, biophysical parameter estimation, etc.)
  • Other on-board sensor data processing (multispectral, hyperspectral, thermal, lidar, SAR, gas or radioactivity sensors, etc.)
  • Data fusion: integration of UAV imagery with satellite, aerial or terrestrial data, integration of heterogeneous data captured by UAVs
  • Online and real-time processing; collaborative UAVs and UAV fleets applied to remote sensing
  • Applications (urban monitoring, precision farming, forestry, disaster prevention, assessment and monitoring, search and rescue, security, archaeology, industrial plant inspection, etc.)
  • Any use of UAVs related to remote sensing

You may choose our Joint Special Issue in Remote Sensing.

Prof. Dr. Haigang Sui
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent processing and application
  • UAV 3D reconstruction in indoor/underground scenes
  • target detection and tracking
  • intelligent interpretation of UAV video/image
  • online and real-time processing
  • collaborative UAVs and UAV swarms

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

19 pages, 30513 KiB  
Article
From Detection to Action: A Multimodal AI Framework for Traffic Incident Response
by Afaq Ahmed, Muhammad Farhan, Hassan Eesaar, Kil To Chong and Hilal Tayara
Drones 2024, 8(12), 741; https://doi.org/10.3390/drones8120741 - 9 Dec 2024
Abstract
With the rising incidence of traffic accidents and growing environmental concerns, the demand for advanced systems to ensure traffic and environmental safety has become increasingly urgent. This paper introduces an automated highway safety management framework that integrates computer vision and natural language processing for real-time monitoring, analysis, and reporting of traffic incidents. The system not only identifies accidents but also aids in coordinating emergency responses, such as dispatching ambulances, fire services, and police, while simultaneously managing traffic flow. The approach begins with the creation of a diverse highway accident dataset, combining public datasets with drone and CCTV footage. YOLOv11s is retrained on this dataset to enable real-time detection of critical traffic elements and anomalies, such as collisions and fires. A vision–language model (VLM), Moondream2, is employed to generate detailed scene descriptions, which are further refined by a large language model (LLM), GPT 4-Turbo, to produce concise incident reports and actionable suggestions. These reports are automatically sent to relevant authorities, ensuring prompt and effective response. The system’s effectiveness is validated through the analysis of diverse accident videos and zero-shot simulation testing within the Webots environment. The results highlight the potential of combining drone and CCTV imagery with AI-driven methodologies to improve traffic management and enhance public safety. Future work will include refining detection models, expanding dataset diversity, and deploying the framework in real-world scenarios using live drone and CCTV feeds. This study lays the groundwork for scalable and reliable solutions to address critical traffic safety challenges. Full article

20 pages, 4992 KiB  
Article
Fully Automatic Geometric Registration Framework of UAV Imagery Based on Online Map Services and POS
by Pengfei Li, Yu Zhang, Yepei Chen, Ting Bai, Kaimin Sun, Haigang Sui and Yang Wu
Drones 2024, 8(12), 723; https://doi.org/10.3390/drones8120723 - 30 Nov 2024
Abstract
Unmanned aerial vehicle (UAV) remote sensing has found extensive applications in various fields due to its ability to quickly provide remote sensing imagery, and the rapid, even automated, geometric registration of these images is an important component of their time efficiency. While current geometric registration methods based on image matching are well developed, there is still room for improvement in terms of time efficiency due to the presence of the following factors: (1) difficulty in accessing historical reference images and (2) inconsistencies in data sources, scales, and orientations between UAV imagery and reference images, which leads to unreliable matching. To further improve the time efficiency of UAV remote sensing, this study proposes a fully automatic geometric registration framework. The workflow features the following aspects: (1) automatic reference image acquisition by using online map services; (2) automatic ground range and resolution estimation using positional and orientation system (POS) data; (3) automatic orientation alignment using POS data. Experimental validation demonstrates that the proposed framework is able to carry out the fully automatic geometric registration of UAV imagery, thus improving the time efficiency of UAV remote sensing. Full article
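The POS-based ground-range and resolution estimation the framework depends on can be illustrated with standard nadir-imaging geometry. This is a hypothetical sketch: the function name and camera parameters are illustrative examples, not the authors' implementation.

```python
def ground_footprint(height_m, focal_mm, pixel_um, img_w_px, img_h_px):
    """Estimate ground sampling distance (m/px) and ground coverage (m) of a
    nadir UAV image from POS-reported flight height and camera intrinsics."""
    gsd = (pixel_um * 1e-6) * height_m / (focal_mm * 1e-3)  # metres per pixel
    return gsd, gsd * img_w_px, gsd * img_h_px

# e.g. 100 m flight height, 8.8 mm focal length, 2.4 um pixels, 5472x3648 image:
# roughly 2.7 cm/px over a footprint of about 149 m x 99 m
gsd, ground_w, ground_h = ground_footprint(100.0, 8.8, 2.4, 5472, 3648)
```

Together with the POS yaw angle for orientation alignment, estimates like these let a reference tile of matching extent and scale be requested automatically from an online map service before matching.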

19 pages, 7665 KiB  
Article
Chestnut Burr Segmentation for Yield Estimation Using UAV-Based Imagery and Deep Learning
by Gabriel A. Carneiro, Joaquim Santos, Joaquim J. Sousa, António Cunha and Luís Pádua
Drones 2024, 8(10), 541; https://doi.org/10.3390/drones8100541 - 1 Oct 2024
Abstract
Precision agriculture (PA) has advanced agricultural practices, offering new opportunities for crop management and yield optimization. The use of unmanned aerial vehicles (UAVs) in PA enables high-resolution data acquisition, which has been adopted across different agricultural sectors. However, its application for decision support in chestnut plantations remains under-represented. This study presents the initial development of a methodology for segmenting chestnut burrs from UAV-based imagery to estimate its productivity in point cloud data. Deep learning (DL) architectures, including U-Net, LinkNet, and PSPNet, were employed for chestnut burr segmentation in UAV images captured at a 30 m flight height, with YOLOv8m trained for comparison. Two datasets were used for training and to evaluate the models: one newly introduced in this study and an existing dataset. U-Net demonstrated the best performance, achieving an F1-score of 0.56 and a counting accuracy of 0.71 on the proposed dataset, using a combination of both datasets during training. The primary challenge encountered was that burrs often tend to grow in clusters, leading to unified regions in segmentation, making object detection potentially more suitable for counting. Nevertheless, the results show that DL architectures can generate masks for point cloud segmentation, supporting precise chestnut tree production estimation in future studies. Full article
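The cluster-merging issue noted in the abstract is easy to see with a toy connected-component count over a binary segmentation mask. This is an illustrative sketch, not the study's counting procedure:

```python
def count_regions(mask):
    """Count 4-connected foreground regions in a binary mask with flood fill.
    Touching burrs merge into a single region, which is why segmentation-based
    counting can undershoot the true number of burrs."""
    mask = [row[:] for row in mask]  # work on a copy
    h, w = len(mask), len(mask[0])
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:  # flood-fill everything connected to (r, c)
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x]:
                        mask[y][x] = 0
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Two isolated burrs plus a touching pair: four burrs yield only three regions.
mask = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 1, 1],  # this block touches the one above -> merged
    [1, 1, 0, 0, 0, 0],
]
```

An object detector that predicts one box per burr avoids this merging, which is consistent with the abstract's suggestion that detection may be more suitable for counting.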

15 pages, 2235 KiB  
Article
Unmanned Aircraft Systems in Road Assessment: A Novel Approach to the Pavement Condition Index and VIZIR Methodologies
by Diana Marcela Ortega Rengifo, Jose Capa Salinas, Javier Alexander Perez Caicedo and Manuel Alejandro Rojas Manzano
Drones 2024, 8(3), 99; https://doi.org/10.3390/drones8030099 - 14 Mar 2024
Abstract
This paper presents an innovative approach to road assessment, focusing on enhancing the Pavement Condition Index (PCI) and Visión Inspection de Zones et Itinéraires Á Risque (VIZIR) methodologies by integrating Unmanned Aircraft System (UAS) technology. The research was conducted in an urban setting, utilizing a UAS to capture high-resolution imagery, which was subsequently processed to generate detailed orthomosaics of road surfaces. This study critically analyzed the discrepancies between traditional field measurements and UAS-derived data in pavement condition assessment. The study findings demonstrate that photogrammetry-derived data from UAS offer at least similar or, in some cases, improved information on the collection of a comprehensive state of roadways, particularly in local and collector roads. Furthermore, this study proposed key modifications to the existing methodologies, including dividing the road network into segments for more precise and relevant data collection. These enhancements aim to address the limitations of current practices in capturing the diverse and dynamic conditions of urban infrastructure. Integrating UAS technology improves the measurement of pavement condition assessments and offers a more efficient, cost-effective, and scalable approach to urban infrastructure management. The implications of this study are significant for urban planners and policymakers, providing a robust framework for future infrastructure assessment and maintenance strategies. Full article

23 pages, 5781 KiB  
Article
Multi-Level Hazard Detection Using a UAV-Mounted Multi-Sensor for Levee Inspection
by Shan Su, Li Yan, Hong Xie, Changjun Chen, Xiong Zhang, Lyuzhou Gao and Rongling Zhang
Drones 2024, 8(3), 90; https://doi.org/10.3390/drones8030090 - 6 Mar 2024
Cited by 3
Abstract
This paper introduces a developed multi-sensor integrated system comprising a thermal infrared camera, an RGB camera, and a LiDAR sensor, mounted on a lightweight unmanned aerial vehicle (UAV). This system is applied to the inspection tasks of levee engineering, enabling the real-time, rapid, all-day, all-round, and non-contact acquisition of multi-source data for levee structures and their surrounding environments. Our aim is to address the inefficiencies, high costs, limited data diversity, and potential safety hazards associated with traditional methods, particularly concerning the structural safety of dam bodies. In the preprocessing stage of multi-source data, techniques such as thermal infrared data enhancement and multi-source data alignment are employed to enhance data quality and consistency. Subsequently, a multi-level approach to detecting and screening suspected risk areas is implemented, facilitating the rapid localization of potential hazard zones and assisting in assessing the urgency of addressing these concerns. The reliability of the developed multi-sensor equipment and the multi-level suspected hazard detection algorithm is validated through on-site levee engineering inspections conducted during flood disasters. The application reliably detects and locates suspected hazards, significantly reducing the time and resource costs associated with levee inspections. Moreover, it mitigates safety risks for personnel engaged in levee inspections. Therefore, this method provides reliable data support and technical services for levee inspection, hazard identification, flood control, and disaster reduction. Full article

20 pages, 7223 KiB  
Article
KDP-Net: An Efficient Semantic Segmentation Network for Emergency Landing of Unmanned Aerial Vehicles
by Zhiqi Zhang, Yifan Zhang, Shao Xiang and Lu Wei
Drones 2024, 8(2), 46; https://doi.org/10.3390/drones8020046 - 1 Feb 2024
Abstract
As the application of UAVs becomes more and more widespread, accidents such as accidental injuries to personnel, property damage, and loss and destruction of UAVs due to accidental UAV crashes also occur in daily use scenarios. To reduce the occurrence of such accidents, UAVs need to have the ability to autonomously choose a safe area to land in an accidental situation, and the key lies in realizing on-board real-time semantic segmentation processing. In this paper, we propose an efficient semantic segmentation method called KDP-Net for characteristics such as large feature scale changes and high real-time processing requirements during the emergency landing process. The proposed KDP module can effectively improve the accuracy and performance of the semantic segmentation backbone network; the proposed Bilateral Segmentation Network improves the extraction accuracy and processing speed of important feature categories in the training phase; and the proposed edge extraction module improves the classification accuracy of fine features. The experimental results on the UDD6 and SDD show that the processing speed of this method reaches 85.25 fps and 108.11 fps while the mIoU reaches 76.9% and 67.14%, respectively. The processing speed reaches 53.72 fps and 38.79 fps when measured on Jetson Orin, which can meet the requirements of airborne real-time segmentation for emergency landing. Full article

18 pages, 6870 KiB  
Article
Measuring Surface Deformation of Asphalt Pavement via Airborne LiDAR: A Pilot Study
by Junqing Zhu, Yingda Gao, Siqi Huang, Tianxiang Bu and Shun Jiang
Drones 2023, 7(9), 570; https://doi.org/10.3390/drones7090570 - 5 Sep 2023
Cited by 2
Abstract
Measuring the surface deformation of asphalt pavement and acquiring the rutting condition is of great importance to transportation agencies. This paper proposes a rutting measuring method based on an unmanned aerial vehicle (UAV) mounted with Light Detection and Ranging (LiDAR). Firstly, an airborne LiDAR system is assembled and the data acquisition method is presented. Then, the method for point cloud processing and rut depth computation is presented and the results of field testing are discussed. Thirdly, to investigate error factors, the laser footprint positioning model is established and sensitivity analysis is conducted. Factors including flight height, LiDAR instantaneous angle, and ground inclination angle are discussed. The model was then implemented to obtain the virtual rut depth and to verify the accuracy of the field test results. The main conclusions include that the measurement error increases with the flight height, instantaneous angle, and angular resolution of the LiDAR. The inclination angle of the pavement surface has an adverse impact on the measuring accuracy. The field test results show that the assembled airborne LiDAR system is more accurate when the rut depth is significant. The findings of this study pave the way for future exploration of rutting measurement with airborne LiDAR. Full article
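The rut depth computation step can be illustrated with a simple simulated-straightedge calculation over a transverse elevation profile. This sketch assumes a profile already extracted from the point cloud and is not the paper's exact algorithm:

```python
def rut_depth(profile_mm):
    """Simulated straightedge: span a line between the highest point on each
    half of the transverse profile and return the largest gap beneath it."""
    n = len(profile_mm)
    i = max(range(n // 2), key=lambda k: profile_mm[k])     # left shoulder high point
    j = max(range(n // 2, n), key=lambda k: profile_mm[k])  # right shoulder high point
    depth = 0.0
    for k in range(i, j + 1):
        # straightedge elevation interpolated between the two shoulder points
        line = profile_mm[i] + (profile_mm[j] - profile_mm[i]) * (k - i) / (j - i)
        depth = max(depth, line - profile_mm[k])
    return depth

# transverse profile (mm) with a 10 mm rut between level shoulders
profile = [0, 0, -4, -8, -10, -8, -4, 0, 0]
```

For this profile the function returns 10.0 mm; denser point cloud profiles simply add more samples between the shoulders.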

21 pages, 4973 KiB  
Article
Fast Opium Poppy Detection in Unmanned Aerial Vehicle (UAV) Imagery Based on Deep Neural Network
by Zhiqi Zhang, Wendi Xia, Guangqi Xie and Shao Xiang
Drones 2023, 7(9), 559; https://doi.org/10.3390/drones7090559 - 30 Aug 2023
Cited by 3
Abstract
Opium poppy is a medicinal plant, and its cultivation is illegal without legal approval in China. Unmanned aerial vehicle (UAV) is an effective tool for monitoring illegal poppy cultivation. However, targets often appear occluded and confused, and it is difficult for existing detectors to accurately detect poppies. To address this problem, we propose an opium poppy detection network, YOLOHLA, for UAV remote sensing images. Specifically, we propose a new attention module that uses two branches to extract features at different scales. To enhance generalization capabilities, we introduce a learning strategy that involves iterative learning, where challenging samples are identified and the model’s representation capacity is enhanced using prior knowledge. Furthermore, we propose a lightweight model (YOLOHLA-tiny) using YOLOHLA based on structured model pruning, which can be better deployed on low-power embedded platforms. To evaluate the detection performance of the proposed method, we collect a UAV remote sensing image poppy dataset. The experimental results show that the proposed YOLOHLA model achieves better detection performance and faster execution speed than existing models. Our method achieves a mean average precision (mAP) of 88.2% and an F1 score of 85.5% for opium poppy detection. The proposed lightweight model achieves an inference speed of 172 frames per second (FPS) on embedded platforms. The experimental results showcase the practical applicability of the proposed poppy object detection method for real-time detection of poppy targets on UAV platforms. Full article

26 pages, 7261 KiB  
Article
Multi-UAV Cooperative and Continuous Path Planning for High-Resolution 3D Scene Reconstruction
by Haigang Sui, Hao Zhang, Guohua Gou, Xuanhao Wang, Sheng Wang, Fei Li and Junyi Liu
Drones 2023, 7(9), 544; https://doi.org/10.3390/drones7090544 - 22 Aug 2023
Cited by 4
Abstract
Unmanned aerial vehicles (UAVs) are extensively employed for urban image captures and the reconstruction of large-scale 3D models due to their affordability and versatility. However, most commercial flight software lack support for the adaptive capture of multi-view images. Furthermore, the limited performance and battery capacity of a single UAV hinder efficient image capturing of large-scale scenes. To address these challenges, this paper presents a novel method for multi-UAV continuous trajectory planning aimed at the image captures and reconstructions of a scene. Our primary contribution lies in the development of a path planning framework rooted in task and search principles. Within this framework, we initially ascertain optimal task locations for capturing images by assessing scene reconstructability, thereby enhancing the overall quality of reconstructions. Furthermore, we curtail energy costs of trajectories by allocating task sequences, characterized by minimal corners and lengths, among multiple UAVs. Ultimately, we integrate considerations of energy costs, safety, and reconstructability into a unified optimization process, facilitating the search for optimal paths for multiple UAVs. Empirical evaluations demonstrate the efficacy of our approach in facilitating collaborative full-scene image captures by multiple UAVs, achieving low energy costs while attaining high-quality 3D reconstructions. Full article

18 pages, 15194 KiB  
Article
DFA-Net: Multi-Scale Dense Feature-Aware Network via Integrated Attention for Unmanned Aerial Vehicle Infrared and Visible Image Fusion
by Sen Shen, Di Li, Liye Mei, Chuan Xu, Zhaoyi Ye, Qi Zhang, Bo Hong, Wei Yang and Ying Wang
Drones 2023, 7(8), 517; https://doi.org/10.3390/drones7080517 - 6 Aug 2023
Cited by 4
Abstract
Fusing infrared and visible images taken by an unmanned aerial vehicle (UAV) is a challenging task, since infrared images distinguish the target from the background by the difference in infrared radiation, while the low resolution also produces a less pronounced effect. Conversely, the visible light spectrum has a high spatial resolution and rich texture; however, it is easily affected by harsh weather conditions like low light. Therefore, the fusion of infrared and visible light has the potential to provide complementary advantages. In this paper, we propose a multi-scale dense feature-aware network via integrated attention for infrared and visible image fusion, namely DFA-Net. Firstly, we construct a dual-channel encoder to extract the deep features of infrared and visible images. Secondly, we adopt a nested decoder to adequately integrate the features of various scales of the encoder so as to realize the multi-scale feature representation of visible image detail texture and infrared image salient target. Then, we present a feature-aware network via integrated attention to further fuse the feature information of different scales, which can focus on specific advantage features of infrared and visible images. Finally, we use unsupervised gradient estimation and intensity loss to learn significant fusion features of infrared and visible images. In addition, our proposed DFA-Net approach addresses the challenges of fusing infrared and visible images captured by a UAV. The results show that DFA-Net achieved excellent image fusion performance in nine quantitative evaluation indexes under a low-light environment. Full article
