Intelligent Image Processing and Sensing for Drones, 2nd Edition

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 18 April 2025 | Viewed by 10406

Special Issue Editor

Special Issue Information

Dear Colleagues,

Drones, or unmanned aerial vehicles (UAVs), are widely applied across a variety of fields; their ability to hover and to maneuver under operator control or a programmed flight path makes them valuable tools. By capturing images from various angles and heights, drones can obtain cost-effective aerial views covering arbitrary areas. Applications include agricultural and environmental monitoring, industrial and infrastructure inspection, and security and surveillance.

A wide range of imaging sensors can be mounted on a drone; in addition to visible-light cameras, these include infrared thermal and multispectral imagers. LiDAR and SAR are active sensors that can also be carried. These mobile aerial sensors provide a new perspective for research and development across various domains. However, drones' unique sensing environments and limited onboard resources often pose challenges. The information these sensors acquire is of tremendous value, yet intelligent analysis is necessary to make the best use of it.

This Special Issue focuses on a wide range of intelligent processing of images, signals, and sensor data acquired by drones. The objectives of such processing range from the refinement of raw data to the extraction and processing of feature attributes and the symbolic representation or visualization of the real world. These can be achieved through image/signal processing and machine/deep learning techniques. The latest technological developments will be disseminated in this Special Issue; researchers and investigators are invited to contribute original research or review articles.

Prof. Dr. Seokwon Yeom
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visible/infrared thermal/multispectral image analysis
  • LiDAR/SAR with a drone
  • security and surveillance
  • monitoring and inspection
  • object detection, classification, and tracking
  • segmentation and feature extraction
  • image registration and visualization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the journal website.

Related Special Issue

Published Papers (7 papers)


Research


20 pages, 10897 KiB  
Article
A Multimodal Image Registration Method for UAV Visual Navigation Based on Feature Fusion and Transformers
by Ruofei He, Shuangxing Long, Wei Sun and Hongjuan Liu
Drones 2024, 8(11), 651; https://doi.org/10.3390/drones8110651 - 7 Nov 2024
Viewed by 663
Abstract
Using images captured by drone cameras and comparing them with known Google satellite maps to obtain the drone's current location is an important approach to UAV navigation in GPS-denied environments. However, due to inherent modality differences and significant geometric deformations, cross-modal image registration is challenging. This paper proposes a CNN-Transformer hybrid network model for feature detection and feature matching. ResNet50 is used as the backbone network for feature extraction. An improved feature fusion module fuses feature maps from different levels, and a Transformer encoder–decoder structure then performs feature matching to obtain preliminary correspondences. Finally, a geometric outlier removal method (GSM) eliminates mismatched points based on the geometric similarity of inliers, yielding more robust correspondences. Qualitative and quantitative experiments were conducted on multimodal image datasets captured by UAVs; the correct matching rate was improved by 52%, 21%, and 15%, respectively, and the error was reduced by 36% compared to the 3MRS algorithm. A total of 56 experiments were conducted in actual scenarios, with a localization success rate of 91.1% and a UAV positioning RMSE of 4.6 m.
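The outlier-removal idea — keeping only correspondences that are geometrically consistent with the inlier majority — can be illustrated with a toy sketch. This is not the paper's GSM implementation; the median-scale test, tolerance, and support threshold below are illustrative assumptions. Pairwise-distance ratios are rotation-invariant, so the test survives a relative rotation between the drone and satellite views.

```python
import numpy as np

def filter_matches(src_pts, dst_pts, tol=0.1, min_support=0.6):
    """Keep correspondences whose cross-image pairwise-distance ratios
    agree with a common (median) scale estimate."""
    src, dst = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    n = len(src)
    # Pairwise distances within each point set.
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=-1)
    ratio = d_dst / (d_src + 1e-9)
    off_diag = ~np.eye(n, dtype=bool)
    scale = np.median(ratio[off_diag])           # robust global scale
    agree = np.abs(ratio - scale) < tol * scale  # per-pair consistency
    # Fraction of other matches each match is consistent with.
    support = agree.sum(axis=1) / (n - 1)
    return support >= min_support
```

A match whose distances to most other matches disagree with the global scale is flagged as an outlier, without fitting an explicit transformation model.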
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

25 pages, 23247 KiB  
Article
Infrared and Visible Camera Integration for Detection and Tracking of Small UAVs: Systematic Evaluation
by Ana Pereira, Stephen Warwick, Alexandra Moutinho and Afzal Suleman
Drones 2024, 8(11), 650; https://doi.org/10.3390/drones8110650 - 6 Nov 2024
Viewed by 895
Abstract
Given the recent proliferation of Unmanned Aerial Systems (UASs) and the consequent importance of counter-UASs, this project aims to perform the detection and tracking of small non-cooperative UASs using Electro-optical (EO) and Infrared (IR) sensors. Two data integration techniques, at the decision and pixel levels, are compared with the use of each sensor independently to evaluate the system robustness in different operational conditions. The data are submitted to a YOLOv7 detector merged with a ByteTrack tracker. For training and validation, additional efforts are made towards creating datasets of spatially and temporally aligned EO and IR annotated Unmanned Aerial Vehicle (UAV) frames and videos. These consist of the acquisition of real data captured from a workstation on the ground, followed by image calibration, image alignment, the application of bias-removal techniques, and data augmentation methods to artificially create images. The performance of the detector across datasets shows an average precision of 88.4%, a recall of 85.4%, and an mAP@0.5 of 88.5%. Tests conducted on the decision-level fusion architecture demonstrate notable gains in recall and precision, although at the expense of lower frame rates. Precision, recall, and frame rate are not improved by the pixel-level fusion design.
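The decision-level fusion idea — merging overlapping EO and IR detections and boosting confidence when both modalities agree — can be sketched as follows. The box format, IoU threshold, and noisy-OR score rule are illustrative assumptions, not the paper's implementation.

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(eo_dets, ir_dets, iou_thr=0.5):
    """Decision-level fusion of (box, score) detections from spatially
    aligned EO and IR frames: cross-modal pairs that overlap are merged
    with a noisy-OR score; unmatched detections pass through."""
    fused, used = [], set()
    for box_e, s_e in eo_dets:
        match, best = None, iou_thr
        for j, (box_i, _) in enumerate(ir_dets):
            if j not in used and iou(box_e, box_i) >= best:
                match, best = j, iou(box_e, box_i)
        if match is not None:
            used.add(match)
            s_i = ir_dets[match][1]
            # Agreement across modalities raises confidence (noisy-OR).
            fused.append((box_e, 1 - (1 - s_e) * (1 - s_i)))
        else:
            fused.append((box_e, s_e))
    fused += [d for j, d in enumerate(ir_dets) if j not in used]
    return fused
```

Passing every detection through both detectors before fusing is what costs frame rate relative to a single-sensor pipeline, which matches the trade-off reported above.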
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

23 pages, 5508 KiB  
Article
YOLO-DroneMS: Multi-Scale Object Detection Network for Unmanned Aerial Vehicle (UAV) Images
by Xueqiang Zhao and Yangbo Chen
Drones 2024, 8(11), 609; https://doi.org/10.3390/drones8110609 - 24 Oct 2024
Viewed by 1334
Abstract
In recent years, research on Unmanned Aerial Vehicles (UAVs) has developed rapidly. Compared to traditional remote-sensing images, UAV images exhibit complex backgrounds, high resolution, and large differences in object scales. UAV object detection is therefore an essential yet challenging task. This paper proposes a multi-scale object detection network, namely YOLO-DroneMS (You Only Look Once for Drone Multi-Scale Object), for UAV images. Targeting the pivotal connection between the backbone and neck, the Large Separable Kernel Attention (LSKA) mechanism is adopted within the Spatial Pyramid Pooling-Fast (SPPF) block, where weighted processing of multi-scale feature maps is performed to focus on salient features. Attentional Scale Sequence Fusion with DySample (ASF-DySample) is introduced to perform attention scale sequence fusion and dynamic upsampling while conserving resources. Then, the faster cross-stage partial network bottleneck with two convolutions (C2f) in the backbone is optimized using the Inverted Residual Mobile Block and Dilated Reparam Block (iRMB-DRB), which balances the advantages of dynamic global modeling and static local information fusion. This optimization effectively increases the model's receptive field, enhancing its capability for downstream tasks. By replacing the original CIoU with WIoUv3, the model prioritizes anchor boxes of superior quality, dynamically adjusting weights to enhance detection performance for small objects. Experimental findings on the VisDrone2019 dataset demonstrate that, at an Intersection over Union (IoU) threshold of 0.5, YOLO-DroneMS achieves a 3.6% increase in mAP@50 compared to the YOLOv8n model. Moreover, YOLO-DroneMS improves detection speed, increasing the frame rate from 78.7 to 83.3 frames per second (FPS). The enhanced model supports diverse target scales and achieves high recognition rates, making it well suited for drone-based object detection, particularly in scenarios involving multiple object clusters.
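The SPPF pooling pattern mentioned above can be sketched in a few lines of NumPy. This is a toy version of the pooling core only: the real block wraps the pooling in 1×1 convolutions, and the paper's LSKA attention would sit on top of it.

```python
import numpy as np

def maxpool_same(x, k=5):
    """Stride-1 k x k max pooling with 'same' padding on an (H, W, C) map."""
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)), constant_values=-np.inf)
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k), axis=(0, 1))
    return win.max(axis=(-2, -1))

def sppf(x, k=5):
    """SPPF-style pooling: three successive k x k max pools, with the
    input and every pooled map concatenated along the channel axis, so
    one small kernel applied serially covers growing receptive fields."""
    p1 = maxpool_same(x, k)
    p2 = maxpool_same(p1, k)
    p3 = maxpool_same(p2, k)
    return np.concatenate([x, p1, p2, p3], axis=-1)
```

Chaining three 5×5 pools is equivalent to pooling at 5×5, 9×9, and 13×13 windows, which is how SPPF captures multi-scale context cheaply.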
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

14 pages, 2202 KiB  
Article
HSP-YOLOv8: UAV Aerial Photography Small Target Detection Algorithm
by Heng Zhang, Wei Sun, Changhao Sun, Ruofei He and Yumeng Zhang
Drones 2024, 8(9), 453; https://doi.org/10.3390/drones8090453 - 2 Sep 2024
Cited by 2 | Viewed by 1609
Abstract
To address the large numbers of small objects and the issues of occlusion and clustering in UAV aerial photography, which can lead to false positives and missed detections, we propose an improved small object detection algorithm for UAV aerial scenarios called YOLOv8 with a tiny prediction head and Space-to-Depth Convolution (HSP-YOLOv8). Firstly, a tiny prediction head dedicated to small targets is added to provide higher-resolution feature maps, enabling better predictions. Secondly, we designed the Space-to-Depth Convolution (SPD-Conv) module to mitigate the loss of small-target feature information and enhance the robustness of the feature representation. Lastly, soft non-maximum suppression (Soft-NMS) is used in the post-processing stage to improve accuracy by significantly reducing false positives in the detection results. In experiments on the VisDrone2019 dataset, the improved algorithm increased mAP@0.5 and mAP@0.5:0.95 by 11% and 9.8%, respectively, compared to the baseline YOLOv8s model.
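Soft-NMS, the post-processing step named above, is a standard algorithm and can be sketched directly: instead of discarding boxes that overlap the current top detection, their scores are decayed by a Gaussian of the overlap, so heavily overlapped true positives in dense clusters can survive. The sigma and threshold defaults below are illustrative.

```python
import numpy as np

def _iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.001):
    """Gaussian Soft-NMS: decay the scores of boxes overlapping the
    current top detection by exp(-IoU^2 / sigma) instead of dropping
    them outright. Returns kept indices in selection order."""
    scores = list(map(float, scores))
    idxs = list(range(len(boxes)))
    keep = []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thr:
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            scores[i] *= np.exp(-_iou(boxes[best], boxes[i]) ** 2 / sigma)
    return keep
```

With hard NMS the second box below would simply be deleted; under Soft-NMS it is demoted but retained, which is why the method reduces missed detections in crowded scenes.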
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

19 pages, 4736 KiB  
Article
An Improved YOLOv7 Model for Surface Damage Detection on Wind Turbine Blades Based on Low-Quality UAV Images
by Yongkang Liao, Mingyang Lv, Mingyong Huang, Mingwei Qu, Kehan Zou, Lei Chen and Liang Feng
Drones 2024, 8(9), 436; https://doi.org/10.3390/drones8090436 - 27 Aug 2024
Viewed by 820
Abstract
Efficient damage detection for the wind turbine blade (WTB), the core component of a wind turbine, is very important to wind power. In this paper, an improved YOLOv7 model is designed to enhance surface damage detection on WTBs from low-quality unmanned aerial vehicle (UAV) images. (1) An efficient channel attention (ECA) module is embedded, making the network more sensitive to damage and decreasing the false and missed detections caused by low-quality images. (2) A DownSampling module is introduced to retain key feature information, enhancing the detection speed and accuracy otherwise restricted by low-quality images with large amounts of redundant information. (3) The Multiple-attribute Intersection over Union (MIoU) is applied to improve the inaccurate detection location and size of the damage region. (4) The dynamic group convolution shuffle transformer (DGST) is developed to improve the ability to comprehensively capture contours, textures, and potential damage information. Compared with YOLOv7, YOLOv8l, YOLOv9e, and YOLOv10x, the experimental results show that the improved YOLOv7 achieves the best overall detection performance considering detection accuracy, detection speed, and robustness.
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

21 pages, 7702 KiB  
Article
PHSI-RTDETR: A Lightweight Infrared Small Target Detection Algorithm Based on UAV Aerial Photography
by Sen Wang, Huiping Jiang, Zhongjie Li, Jixiang Yang, Xuan Ma, Jiamin Chen and Xingqun Tang
Drones 2024, 8(6), 240; https://doi.org/10.3390/drones8060240 - 3 Jun 2024
Cited by 4 | Viewed by 3280
Abstract
To address the low model accuracy caused by complex ground environments and uneven target scales, as well as the high computational complexity, in unmanned aerial vehicle (UAV) aerial infrared image target detection, this study proposes a lightweight UAV aerial infrared small target detection algorithm called PHSI-RTDETR. Initially, an improved backbone feature extraction network is designed using the lightweight RPConv-Block module proposed in this paper, which effectively captures small target features, significantly reducing the model complexity and computational burden while improving accuracy. Subsequently, the HiLo attention mechanism is combined with an intra-scale feature interaction module to form an AIFI-HiLo module, which is integrated into a hybrid encoder to enhance the model's focus on dense targets, reducing the rates of missed and false detections. Moreover, the slimneck-SSFF architecture is introduced as the cross-scale feature fusion architecture of the model, utilizing GSConv and VoVGSCSP modules to enhance adaptability to infrared targets of various scales, producing more semantic information while reducing network computations. Finally, the original GIoU loss is replaced with the Inner-GIoU loss, which uses a scaling factor to control auxiliary bounding boxes, speeding up convergence and improving detection accuracy for small targets. The experimental results show that, compared to RT-DETR, PHSI-RTDETR reduces model parameters by 30.55% and floating-point operations by 17.10%, while detection precision and speed increase by 3.81% and 13.39%, respectively, and mAP@50 reaches an impressive 82.58%, demonstrating the great potential of this model for drone infrared small target detection.
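GIoU, the base of the Inner-GIoU loss mentioned above, can be computed directly. The `inner_giou` variant below, which evaluates GIoU on auxiliary boxes shrunk by a scale ratio about each box centre, is only a rough sketch of the Inner-IoU scaling idea; the paper's exact formulation may differ.

```python
def giou(a, b):
    """Generalized IoU for (x1, y1, x2, y2) boxes: IoU minus the fraction
    of the smallest enclosing box left uncovered by the union. In (-1, 1];
    unlike plain IoU, it stays informative for disjoint boxes."""
    inter = max(0.0, min(a[2], b[2]) - max(a[0], b[0])) * \
            max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    # Smallest axis-aligned box enclosing both inputs.
    c_area = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return inter / union - (c_area - union) / c_area

def inner_giou(a, b, ratio=0.75):
    """GIoU on auxiliary boxes scaled by `ratio` about each box centre --
    a hedged stand-in for the Inner-IoU scaling-factor idea."""
    def shrink(r):
        cx, cy = (r[0] + r[2]) / 2, (r[1] + r[3]) / 2
        w, h = (r[2] - r[0]) * ratio, (r[3] - r[1]) * ratio
        return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
    return giou(shrink(a), shrink(b))
```

Shrinking the auxiliary boxes sharpens the gradient for near-matching predictions, which is the intuition behind using a scale ratio to accelerate convergence on small targets.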
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

Review


22 pages, 22148 KiB  
Review
Research Progress on Power Visual Detection of Overhead Line Bolt Defects Based on UAV Images
by Xinlan Deng, Min He, Jingwen Zheng, Liang Qin and Kaipei Liu
Drones 2024, 8(9), 442; https://doi.org/10.3390/drones8090442 - 29 Aug 2024
Viewed by 718
Abstract
In natural environments, the connecting bolts of overhead lines and power towers are prone to loosening and going missing, posing potential risks to the safe and stable operation of the power system. This paper reviews the challenges in bolt defect detection using power vision technology, with a particular focus on unmanned aerial vehicle (UAV) images. UAV images offer a cost-effective and flexible solution for detecting bolt defects. However, challenges remain, including missed detection due to the small size of bolts, false detection caused by dense and occluded bolts, and underfitting resulting from imbalanced bolt defect datasets. To address these issues, this paper summarizes solutions that leverage deep learning algorithms. An experimental analysis is conducted on a dataset derived from UAV inspections, comparing the detection characteristics and visualizing the results of various algorithms. The paper also discusses future trends in the application of UAV-based power vision technology for bolt defect detection, providing insights for the advancement of intelligent power inspection.
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)
