The Application of Image Processing and Signal Processing Techniques in Unmanned Aerial Vehicles

A special issue of Drones (ISSN 2504-446X). This special issue belongs to the section "Drone Design and Development".

Deadline for manuscript submissions: 26 October 2024 | Viewed by 6096

Special Issue Editors


Prof. Dr. Wen Yang
Guest Editor
School of Electronic Information, Wuhan University, Wuhan 430072, China
Interests: object detection and recognition; multimodal image registration; cross-modal geo-localization

Dr. Huai Yu
Guest Editor
School of Electronic Information, Wuhan University, Wuhan 430072, China
Interests: visual SLAM; sensor fusion; line segment detection; camera localization

Dr. Jinyong Chen
Guest Editor
CETC Key Laboratory of Aerospace Information Applications, Shijiazhuang 050081, China
Interests: remote sensing image interpretation; space-ground systems

Dr. Gang Wang
Guest Editor
CETC Key Laboratory of Aerospace Information Applications, Shijiazhuang 050081, China
Interests: object detection and recognition; remote sensing image scene understanding

Special Issue Information

Dear Colleagues,

We are pleased to invite you to submit manuscripts to the MDPI Drones Special Issue on “The Application of Image Processing and Signal Processing Techniques in Unmanned Aerial Vehicles”.

Image and signal processing on unmanned aerial vehicles (UAVs) is becoming increasingly important due to the wide range of potential applications, such as surveillance, mapping, environmental monitoring, and disaster response. The cameras, LiDAR, inertial sensors, and other advanced sensors mounted on UAVs can capture data from more flexible viewpoints and with a wider field of view than ground platforms. With the development of artificial intelligence and computer vision, the sensor data from UAVs can also be used for object recognition, tracking, 3D reconstruction, SLAM, and other tasks, which is particularly useful for high-level drone development and application.

This Special Issue is dedicated to collecting and promoting the latest work on image processing and signal processing techniques for drone platforms. Applications include, but are not limited to, drone photography, object detection and tracking, 3D reconstruction, drone localization, path planning, navigation, and drone autonomy, all of which are important topics for the development of drones and their application in everyday life.

This Special Issue welcomes original research articles and reviews on the following themes:

  • Remote sensing on drone platforms;
  • Aerial image processing;
  • Deep learning on drone images and signals;
  • Computer vision on aerial images;
  • Localization and path planning for UAVs.

We look forward to receiving your original research articles and reviews.

Prof. Dr. Wen Yang
Dr. Huai Yu
Dr. Jinyong Chen
Dr. Gang Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • drone-view object detection and recognition
  • drone-view object tracking
  • UAV sensor fusion
  • depth estimation for drones
  • UAV SLAM
  • drone path planning and navigation
  • aerial 3D reconstruction
  • aerial optical/thermal/event image processing
  • aerial image deblur and enhancement
  • drone-view powerline detection
  • drone swarm

Published Papers (4 papers)


Research

26 pages, 18371 KiB  
Article
MFEFNet: A Multi-Scale Feature Information Extraction and Fusion Network for Multi-Scale Object Detection in UAV Aerial Images
by Liming Zhou, Shuai Zhao, Ziye Wan, Yang Liu, Yadi Wang and Xianyu Zuo
Drones 2024, 8(5), 186; https://doi.org/10.3390/drones8050186 - 8 May 2024
Viewed by 447
Abstract
Unmanned aerial vehicles (UAVs) are now widely used in many fields. Due to the randomness of UAV flight height and shooting angle, UAV images usually have the following characteristics: many small objects, large changes in object scale, and complex backgrounds. Object detection in UAV aerial images is therefore a very challenging task. To address these challenges, this paper proposes a novel UAV image object detection method based on global feature aggregation and context feature extraction, named the multi-scale feature information extraction and fusion network (MFEFNet). Specifically, first, to extract the feature information of objects more effectively from complex backgrounds, we propose an efficient spatial information extraction (SIEM) module, which combines residual connections to build long-distance feature dependencies and effectively extracts the most useful feature information by building contextual feature relations around objects. Secondly, to improve feature fusion efficiency and reduce the burden of redundant feature fusion networks, we propose a global aggregation progressive feature fusion network (GAFN). This network adopts a three-level adaptive feature fusion method, which can adaptively fuse multi-scale features according to the importance of different feature layers and reduce unnecessary intermediate redundant features by utilizing the adaptive feature fusion module (AFFM). Furthermore, we use the MPDIoU loss function as the bounding-box regression loss function, which not only enhances model robustness to noise but also simplifies the calculation process and improves the final detection efficiency. Finally, the proposed MFEFNet was tested on the VisDrone and UAVDT datasets, and the mAP0.5 value increased by 2.7% and 2.2%, respectively.
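The MPDIoU loss mentioned in the abstract penalizes the squared distances between corresponding corners of the predicted and ground-truth boxes, normalized by the squared image diagonal. The sketch below is a minimal plain-Python illustration following the published MPDIoU formulation, not the authors' exact implementation; box format (x1, y1, x2, y2) and scalar inputs are assumptions for readability.

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """MPDIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # standard IoU from the intersection and union areas
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared top-left and bottom-right corner distances,
    # normalized by the squared image diagonal
    diag2 = img_w ** 2 + img_h ** 2
    d1 = ((px1 - gx1) ** 2 + (py1 - gy1) ** 2) / diag2
    d2 = ((px2 - gx2) ** 2 + (py2 - gy2) ** 2) / diag2
    return 1.0 - (iou - d1 - d2)
```

For perfectly overlapping boxes the loss is 0; for disjoint boxes it exceeds 1, with the corner-distance terms providing a gradient even when the IoU is zero.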

22 pages, 5892 KiB  
Article
SiamMAN: Siamese Multi-Phase Aware Network for Real-Time Unmanned Aerial Vehicle Tracking
by Faxue Liu, Xuan Wang, Qiqi Chen, Jinghong Liu and Chenglong Liu
Drones 2023, 7(12), 707; https://doi.org/10.3390/drones7120707 - 13 Dec 2023
Cited by 1 | Viewed by 1580
Abstract
In this paper, we address aerial tracking tasks by designing multi-phase aware networks to obtain rich long-range dependencies. Existing methods are prone to tracking drift in scenarios with high demand for multi-layer long-range feature dependencies, such as viewpoint changes caused by the UAV shooting perspective, low resolution, etc. In contrast to previous works that only used multi-scale feature fusion to obtain contextual information, we designed a new architecture that adapts to the characteristics of different levels of features in challenging scenarios and adaptively integrates regional features with the corresponding global dependency information. Specifically, for the proposed tracker (SiamMAN), we first propose a two-stage aware neck (TAN), in which a cascaded splitting encoder (CSE) first obtains the distributed long-range relevance among the sub-branches by splitting the feature channels, and a multi-level contextual decoder (MCD) then achieves further global dependency fusion. Finally, we design the response map context encoder (RCE), which utilizes long-range contextual information in backpropagation to accomplish pixel-level updating of the deeper features and better balance semantic and spatial information. Several experiments on well-known tracking benchmarks illustrate that the proposed method outperforms SOTA trackers, which results from the effective utilization of the proposed multi-phase aware network for different levels of features.

19 pages, 7693 KiB  
Article
UAV Localization in Low-Altitude GNSS-Denied Environments Based on POI and Store Signage Text Matching in UAV Images
by Yu Liu, Jing Bai, Gang Wang, Xiaobo Wu, Fangde Sun, Zhengqiang Guo and Hujun Geng
Drones 2023, 7(7), 451; https://doi.org/10.3390/drones7070451 - 6 Jul 2023
Cited by 1 | Viewed by 1749
Abstract
Localization is the most important basic information for unmanned aerial vehicles (UAVs) during their missions. Currently, most UAVs use GNSS to calculate their own position. However, when faced with complex electromagnetic interference or multipath effects within cities, GNSS signals can be interfered with, resulting in reduced positioning accuracy or even complete unavailability. To avoid this situation, this paper proposes an autonomous UAV localization method for low-altitude urban scenarios based on POI and store signage text matching (LPS) in UAV images. The text on store signage is first extracted from the UAV images and then matched against the names in the POI data. The scene location of the UAV image is then determined jointly from multiple POIs, and multiple corner points of the store signage in a single image are used as control points to solve for the UAV position. As verified on real flight data, our method achieves stable autonomous UAV localization with a positioning error of around 13 m without knowing the exact initial position of the UAV at take-off. The positioning performance is better than that of ORB-SLAM2 in long-distance flight, and the positioning error is not affected by text recognition accuracy and does not accumulate with flight time or distance. Combined with an inertial navigation system, it may be able to maintain high-accuracy positioning for UAVs over long periods and can serve as an alternative to GNSS in ultra-low-altitude urban environments.
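The core of the pipeline described above is matching OCR-extracted signage strings against POI names and combining the matched POI coordinates into a scene position. The sketch below illustrates that matching step only, using fuzzy string similarity from the standard library; the POI names, coordinates, similarity threshold, and coordinate-averaging fusion are all illustrative assumptions, not the paper's actual matching algorithm or control-point solver.

```python
from difflib import SequenceMatcher

# hypothetical POI database: name -> (lat, lon)
POI_DB = {
    "Starlight Cafe": (30.531, 114.357),
    "Green Grocer": (30.532, 114.359),
    "City Books": (30.530, 114.358),
}

def match_poi(signage_text, db, threshold=0.6):
    """Return the best-matching (name, coords) for one OCR'd string, or None.

    Fuzzy matching tolerates OCR errors in the recognized signage text.
    """
    best, best_score = None, threshold
    for name, coords in db.items():
        score = SequenceMatcher(None, signage_text.lower(), name.lower()).ratio()
        if score > best_score:
            best, best_score = (name, coords), score
    return best

def localize(ocr_strings, db):
    """Average the coordinates of all matched POIs as a coarse scene position."""
    matches = [m for s in ocr_strings if (m := match_poi(s, db)) is not None]
    if not matches:
        return None
    lats = [c[0] for _, c in matches]
    lons = [c[1] for _, c in matches]
    return (sum(lats) / len(lats), sum(lons) / len(lons))
```

Using several signs jointly, as the paper does, makes the estimate robust to a single misrecognized or mismatched sign; the paper additionally refines this coarse scene position using signage corner points as control points.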

17 pages, 22213 KiB  
Article
Multi-Branch Parallel Networks for Object Detection in High-Resolution UAV Remote Sensing Images
by Qihong Wu, Bin Zhang, Chang Guo and Lei Wang
Drones 2023, 7(7), 439; https://doi.org/10.3390/drones7070439 - 2 Jul 2023
Cited by 3 | Viewed by 1187
Abstract
Uncrewed Aerial Vehicles (UAVs) are instrumental in advancing the field of remote sensing. Nevertheless, the complexity of the background and the dense distribution of objects both present considerable challenges for object detection in UAV remote sensing images. This paper proposes a Multi-Branch Parallel Network (MBPN) based on the ViTDet (Visual Transformer for Object Detection) model, which aims to improve object detection accuracy in UAV remote sensing images. Initially, the discriminative ability of the input feature map of the Feature Pyramid Network (FPN) is improved by incorporating the Receptive Field Enhancement (RFE) and Convolutional Self-Attention (CSA) modules. Subsequently, to mitigate the loss of semantic information, the sampling process of the FPN is replaced by Multi-Branch Upsampling (MBUS) and Multi-Branch Downsampling (MBDS) modules. Lastly, a Feature-Concatenating Fusion (FCF) module is employed to merge feature maps of varying levels, thereby addressing the issue of semantic misalignment. This paper evaluates the performance of the proposed model on both a custom UAV-captured WCH dataset and the publicly available NWPU VHR10 dataset. The experimental results demonstrate that the proposed model achieves an increase in APL of 2.4% and 0.7% on the WCH and NWPU VHR10 datasets, respectively, compared to the baseline model ViTDet-B.
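The multi-branch sampling idea above — running several parallel resampling paths over the same feature map and fusing the results — can be illustrated with a toy NumPy sketch. The choice of branches (nearest-neighbor and bilinear upsampling) and the simple averaging fusion are assumptions for illustration; the paper's MBUS module uses learned branches and a learned fusion, not this fixed scheme.

```python
import numpy as np

def upsample_nearest(x, scale=2):
    """Branch 1: repeat each pixel along both spatial axes."""
    return x.repeat(scale, axis=0).repeat(scale, axis=1)

def upsample_bilinear(x, scale=2):
    """Branch 2: bilinear interpolation (align_corners=False convention)."""
    h, w = x.shape
    ys = np.clip((np.arange(h * scale) + 0.5) / scale - 0.5, 0, h - 1)
    xs = np.clip((np.arange(w * scale) + 0.5) / scale - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def multi_branch_upsample(x, scale=2):
    """Fuse the parallel branches; in practice this would be a learned 1x1 conv."""
    return 0.5 * (upsample_nearest(x, scale) + upsample_bilinear(x, scale))
```

Running multiple resampling paths in parallel lets sharp (nearest) and smooth (bilinear) versions of the feature map coexist, which is one way to preserve semantic detail that a single sampling operator would discard.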
