Vision-Based Autonomous Unmanned Systems: Challenges and Approaches

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Aerospace Science and Engineering".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 4301

Special Issue Editors


  • College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
    Interests: unmanned aerial vehicles; computer vision; autonomous unmanned systems; navigation technology
  • Department of Computing Science, University of Aberdeen, Aberdeen AB24 3FX, UK
    Interests: aerial visual perception; UAV-based remote sensing; machine learning for UAVs
  • College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
    Interests: fault detection and diagnosis based on artificial intelligence

Special Issue Information

Dear Colleagues,

Various types of unmanned systems are emerging, including unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned underwater vehicles (UUVs), and unmanned surface vehicles (USVs), and improving their autonomy is one of the most significant trends in their development. Just as humans assimilate information from the surrounding environment mainly through visual perception, the perceptual information required by autonomous unmanned systems can be acquired using visual sensors, so research on vision-based autonomous unmanned systems is attracting increasing attention. Unlike the signals provided by most other sensors, however, visual signals contain the relevant information in a highly indirect manner; sophisticated machine vision and image understanding techniques are therefore necessary to extract the information that unmanned systems require. Consequently, autonomous unmanned systems constantly place new demands on vision technologies, and vision approaches face new challenges in these applications. This Special Issue presents new ideas and experimental results in visual sensing and autonomous unmanned systems, from design, theory, and system integration to practical applications.

Topics related to intelligent visual sensing and autonomous unmanned systems include, but are not limited to: object detection and recognition; single- and multi-object tracking; multi-source information fusion for sensing; visual measurement; vision-based depth estimation; visual behavior understanding; and vision-based unmanned system applications in fields such as transportation, agriculture, and public security. Vision-based navigation and path planning, vision-based sense and avoid, unmanned system modeling and simulation, and artificial intelligence and its applications in unmanned systems are also topics of interest.

Prof. Dr. Meng Ding
Prof. Dr. Mou Chen
Dr. Dewei Yi
Dr. Jiayu Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent visual sensing
  • vision-based guidance, navigation, and control
  • vision-based sense and avoid
  • multi-source information fusion for sensing
  • visual measurement and depth estimation
  • visual behavior understanding
  • vision-based unmanned system modeling and simulation
  • safety analysis and evaluation for vision-based unmanned systems

Published Papers (3 papers)

Research

15 pages, 3079 KiB  
Article
Depth Completion in Autonomous Driving: Adaptive Spatial Feature Fusion and Semi-Quantitative Visualization
by Hantao Wang, Ente Guo, Feng Chen and Pingping Chen
Appl. Sci. 2023, 13(17), 9804; https://doi.org/10.3390/app13179804 - 30 Aug 2023
Cited by 2 | Viewed by 722
Abstract
The safety of autonomous driving is closely linked to accurate depth perception, and with the continuous development of autonomous driving, depth completion has become one of the crucial methods in this field. However, current depth completion methods perform poorly on small objects. To solve this problem, this paper proposes an end-to-end depth completion architecture with an adaptive spatial feature fusion encoder–decoder (ASFF-ED) module. Built on the network architecture proposed in the paper, the module adaptively extracts depth information with different weights on the specified feature maps, which effectively addresses the insufficient depth accuracy on small objects. The paper also proposes a semi-quantitative depth map visualization method that displays depth information more intuitively and offers stronger quantitative analysis and horizontal comparison abilities than currently available visualization methods. Ablation and comparison experiments show that the proposed method achieves a lower root-mean-squared error (RMSE) and better small-object detection performance on the KITTI dataset.
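
As a reading aid, the general fusion idea, per-pixel learned weighting of parallel feature maps, can be sketched in a few lines of PyTorch. The shapes, names, and two-branch setup below are illustrative assumptions, not the authors' ASFF-ED implementation.

```python
# Minimal sketch of adaptive spatial feature fusion (illustrative, not ASFF-ED):
# a 1x1 conv predicts one weight per branch at every pixel, and the branches
# are blended by a per-pixel softmax-weighted sum.
import torch
import torch.nn as nn

class AdaptiveSpatialFusion(nn.Module):
    def __init__(self, channels: int, num_branches: int = 2):
        super().__init__()
        self.weight_pred = nn.Conv2d(channels * num_branches, num_branches, kernel_size=1)

    def forward(self, feats):  # feats: list of (B, C, H, W) tensors
        weights = torch.softmax(self.weight_pred(torch.cat(feats, dim=1)), dim=1)
        # weights: (B, num_branches, H, W); blend the branches pixel by pixel
        return sum(w.unsqueeze(1) * f for w, f in zip(weights.unbind(dim=1), feats))

# Hypothetical usage: fuse an RGB-guided branch with a sparse-depth branch
rgb_feat, depth_feat = torch.randn(1, 64, 88, 304), torch.randn(1, 64, 88, 304)
fused = AdaptiveSpatialFusion(64)([rgb_feat, depth_feat])  # (1, 64, 88, 304)
```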
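
The semi-quantitative visualization can be approximated as follows: instead of normalizing each depth map by its own minimum and maximum (which makes colors incomparable between images), depths are mapped to fixed absolute bins so the same color always denotes the same range. The bin edges and function name here are assumptions for illustration, not the paper's scheme.

```python
# Sketch of a semi-quantitative depth visualization: fixed absolute bins make
# colors directly comparable across frames and across methods (assumed scheme).
import numpy as np
import matplotlib.pyplot as plt

def show_depth(depth_m: np.ndarray, bins=(0, 5, 10, 20, 40, 80)):
    binned = np.digitize(depth_m, bins)  # same depth -> same bin, in every image
    plt.imshow(binned, cmap="viridis", vmin=0, vmax=len(bins))
    plt.colorbar(label="depth bin index (edges in m)")
    plt.axis("off")
    plt.show()
```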

20 pages, 6632 KiB  
Article
Cross-Modal Images Matching Based Enhancement to MEMS INS for UAV Navigation in GNSS Denied Environments
by Songlai Han, Mingcun Zhao, Kai Wang, Jing Dong and Ang Su
Appl. Sci. 2023, 13(14), 8238; https://doi.org/10.3390/app13148238 - 16 Jul 2023
Viewed by 1013
Abstract
A new cross-modal image matching method is proposed to address the difficulty of navigating unmanned aerial vehicles (UAVs) in GNSS-denied and nighttime environments. In this algorithm, an infrared or visible image is matched against a satellite visible image in two steps: coarse matching and fine alignment. Based on dense structural features, the coarse matching stage provides position updates at more than 10 Hz with little computation. Based on an end-to-end matching network, the fine alignment stage aligns the multi-sensor image with the satellite image even under interference. To obtain position and heading information with higher accuracy, the visual matching results are fused with inertial information, which restrains the divergence of the inertial navigation position error. Experiments show that the method offers strong anti-interference ability, high reliability, and low hardware requirements, and it is expected to find application in unmanned navigation.
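
To make the coarse-matching step concrete, here is a hedged OpenCV sketch of one common cross-modal strategy: match edge (structural) maps rather than raw intensities, since structure survives the infrared/visible appearance gap. The Canny/NCC combination is an assumption for illustration, not the paper's dense-structure-feature algorithm.

```python
# Illustrative coarse cross-modal matching: compare edge maps, which suppress
# modality-specific appearance, via normalized cross-correlation (assumed method).
import cv2
import numpy as np

def coarse_match(uav_img: np.ndarray, sat_img: np.ndarray):
    # uav_img, sat_img: 8-bit grayscale; satellite map must be larger than the UAV view
    uav_edges = cv2.Canny(uav_img, 50, 150).astype(np.float32)
    sat_edges = cv2.Canny(sat_img, 50, 150).astype(np.float32)
    # Slide the UAV edge template over the satellite edge map
    score = cv2.matchTemplate(sat_edges, uav_edges, cv2.TM_CCORR_NORMED)
    _, best, _, (x, y) = cv2.minMaxLoc(score)
    return (x, y), best  # top-left offset of the best match and its score
```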
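
The fusion step can be illustrated with a one-dimensional toy filter: inertial dead reckoning drifts without bound, while occasional absolute fixes from image matching pull the estimate back. The scalar gain and 1-D state are deliberate simplifications, not the paper's filter design.

```python
# Toy 1-D vision/INS fusion (illustrative only): INS prediction plus an
# occasional correction toward the absolute position fix from image matching.
def fuse_step(x_est: float, v: float, dt: float, x_vision=None, k: float = 0.2):
    x_pred = x_est + v * dt                   # inertial dead reckoning (drifts)
    if x_vision is None:                      # no image match this cycle
        return x_pred
    return x_pred + k * (x_vision - x_pred)   # pull back toward the visual fix
```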

20 pages, 2466 KiB  
Article
SDebrisNet: A Spatial–Temporal Saliency Network for Space Debris Detection
by Jiang Tao, Yunfeng Cao and Meng Ding
Appl. Sci. 2023, 13(8), 4955; https://doi.org/10.3390/app13084955 - 14 Apr 2023
Cited by 5 | Viewed by 1895
Abstract
The rapidly growing number of space activities is generating a large amount of space debris, which greatly threatens the safety of space operations, so space-based space debris surveillance is crucial for the early avoidance of spacecraft emergencies. With progress in computer vision technology, space debris detection using optical sensors has become a promising solution. However, detecting space debris at far ranges is challenging due to its small imaged size and unknown movement characteristics. This paper proposes a space debris saliency detection algorithm called SDebrisNet, which uses a convolutional neural network (CNN) to exploit both spatial and temporal information from sequential video images in order to detect small, moving space debris. First, considering the limited resources of space-based computational platforms, a MobileNet-based feature extraction structure makes the overall model lightweight; in particular, an enhanced spatial feature module strengthens the spatial details of small objects. Second, a constrained self-attention (CSA) module based on attention mechanisms learns spatiotemporal relations from the sequential images. Finally, a space debris dataset was constructed for algorithm evaluation. The experimental results demonstrate that the proposed method is robust for detecting moving space debris with a low signal-to-noise ratio in video. Compared with the NODAMI method, SDebrisNet shows improvements of 3.5% and 1.7% in detection probability and false alarm rate, respectively.
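
The lightweight-backbone choice rests on MobileNet's depthwise-separable convolution, which factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 channel mixer, cutting computation by roughly k² for a k x k kernel. Below is a minimal PyTorch sketch of that generic building block, not the paper's exact structure.

```python
# Depthwise-separable convolution, the MobileNet-style building block that keeps
# a backbone light enough for a constrained space-based computational platform.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # Depthwise: one k x k filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        # Pointwise: 1x1 conv mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```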
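
The temporal side can be sketched with plain scaled dot-product attention applied across a short frame window at every spatial location; this is the generic mechanism of which a constrained self-attention module is a specialized, cheaper variant. The code below is generic attention under assumed shapes, not CSA itself.

```python
# Generic temporal self-attention over T consecutive frames: at each pixel,
# frames attend to each other, letting a faint moving target accumulate
# evidence that no single frame provides (illustrative, not the paper's CSA).
import torch

def temporal_attention(feats: torch.Tensor) -> torch.Tensor:
    # feats: (T, C, H, W) features of T consecutive frames
    T, C, H, W = feats.shape
    tokens = feats.permute(2, 3, 0, 1).reshape(H * W, T, C)  # one token per frame per pixel
    attn = torch.softmax(tokens @ tokens.transpose(1, 2) / C ** 0.5, dim=-1)  # (H*W, T, T)
    out = attn @ tokens                                      # temporally mixed features
    return out.reshape(H, W, T, C).permute(2, 3, 0, 1)       # back to (T, C, H, W)
```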
