Topic Editors

Dr. Jinchao Chen
School of Computer Science, Northwestern Polytechnical University, Xi’an 710060, China
Prof. Dr. Chao Chen
College of Computer Science, Chongqing University, Chongqing 400044, China
Prof. Dr. Yingjie Zhang
School of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, China

Advanced Technologies and Applications for Unmanned Systems

Abstract submission deadline: 31 March 2026
Manuscript submission deadline: 31 July 2026

Topic Information

Dear Colleagues,

At present, unmanned vehicles (e.g., robots, self-driving cars, unmanned surface vehicles, unmanned aerial vehicles, drones, etc.) and intelligent systems are some of the most exciting and innovative technologies in the field of transportation. Unmanned vehicles and systems can share their detected information (e.g., physical surroundings, collision events, threat messages, etc.) with others via various communication systems (e.g., aircraft addressing and reporting systems, vehicular ad hoc networks, long-term evolution, 4G/5G mobile networks, etc.) for cooperation and coordination. Unmanned vehicles and systems play an increasingly important role in the national economy and human social life in many fields, such as traffic monitoring, disaster relief, anti-terror campaigns, and target acquisition. Compared with manned vehicles, unmanned vehicles can free humans from dull, dirty, and dangerous tasks and run facilities efficiently. With the advances in various computing models and control strategies, a growing number of researchers and practitioners have started to focus on the key technologies and intelligent applications of unmanned systems. Moreover, with the integration of artificial intelligence, machine learning, data mining, signal processing, and other technologies, many intelligent applications of autonomous unmanned vehicles are fast-growing and widely applied. Advances in unmanned vehicles and unmanned systems affect every part of life, business, industry, and education, and have become an important driver of benefits for many companies and organizations. This Topic aims to bring together world-class researchers to present state-of-the-art research achievements and advances that contribute to unmanned systems in terms of advanced technologies and intelligent applications for self-driving cars, unmanned surface vehicles, unmanned aerial vehicles, etc. Review articles are also encouraged.

Potential topics include, but are not limited to, the following:

  • Knowledge-based AI for vehicle perception, control, and decision-making;
  • Artificial intelligence applications for autonomous vehicles;
  • Data science in autonomous vehicle systems;
  • Localization, mapping, and semantic segmentation for autonomous vehicles;
  • Collaborative perception and control of vehicle swarms;
  • High-safety and high-reliability communication networks;
  • Simulation and verification of autonomous vehicle systems;
  • Task allocation and resource scheduling for multi-agent systems;
  • Fault detection and diagnosis for autonomous vehicles;
  • Human–robot interaction for autonomous robots/vehicles;
  • Motion drive and teleoperation control for autonomous vehicles.

Dr. Jinchao Chen
Prof. Dr. Chao Chen
Prof. Dr. Yingjie Zhang
Topic Editors

Keywords

  • unmanned vehicles
  • intelligent systems
  • artificial intelligence
  • communication networks
  • human–robot interaction

Participating Journals

  • Drones — Impact Factor 4.4; CiteScore 5.6; launched 2017; first decision (median) 19.2 days; APC CHF 2600
  • Electronics — Impact Factor 2.6; CiteScore 5.3; launched 2012; first decision (median) 16.4 days; APC CHF 2400
  • Eng — Impact Factor –; CiteScore 2.1; launched 2020; first decision (median) 21.5 days; APC CHF 1200
  • Journal of Marine Science and Engineering — Impact Factor 2.7; CiteScore 4.4; launched 2013; first decision (median) 16.4 days; APC CHF 2600
  • Robotics — Impact Factor 2.9; CiteScore 6.7; launched 2012; first decision (median) 21 days; APC CHF 1800
  • Sensors — Impact Factor 3.4; CiteScore 7.3; launched 2001; first decision (median) 18.6 days; APC CHF 2600
  • Vehicles — Impact Factor 2.4; CiteScore 4.1; launched 2019; first decision (median) 19.9 days; APC CHF 1600

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (8 papers)

25 pages, 8829 KiB  
Article
Novel Surveillance View: A Novel Benchmark and View-Optimized Framework for Pedestrian Detection from UAV Perspectives
by Chenglizhao Chen, Shengran Gao, Hongjuan Pei, Ning Chen, Lei Shi and Peiying Zhang
Sensors 2025, 25(3), 772; https://doi.org/10.3390/s25030772 - 27 Jan 2025
Abstract
To address the issues of insufficient samples, limited scene diversity, missing perspectives, and low resolution in existing UAV-based pedestrian detection datasets, this paper proposes a novel UAV-based pedestrian detection benchmark dataset named the Novel Surveillance View (NSV). This dataset encompasses diverse scenes and pedestrian information captured from multiple perspectives, and introduces an innovative data mining approach that leverages tracking and optical flow information. This approach significantly improves data acquisition efficiency while ensuring annotation quality. Furthermore, an improved pedestrian detection method is proposed to overcome the performance degradation caused by significant perspective changes in top-down UAV views. Firstly, the View-Agnostic Decomposition (VAD) module decouples features into perspective-dependent and perspective-independent branches to enhance the model’s generalization ability to perspective variations. Secondly, the Deformable Conv-BN-SiLU (DCBS) module dynamically adjusts the receptive field shape to better adapt to the geometric deformations of pedestrians. Finally, the Context-Aware Pyramid Spatial Attention (CPSA) module integrates multi-scale features with attention mechanisms to address the challenge of drastic target scale variations. The experimental results demonstrate that the proposed method improves the mean Average Precision (mAP) by 9% on the NSV dataset, thereby validating that the approach effectively enhances pedestrian detection accuracy from UAV perspectives by optimizing perspective features.
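
As a concrete point of reference for the feature-decoupling idea, the sketch below shows a generic two-branch feature decomposition in PyTorch; the class name TwoBranchDecomposition, the layer choices, and the channel sizes are illustrative assumptions, not the authors’ VAD module.

```python
# Minimal sketch (not the authors' code): a two-branch feature decomposition
# in the spirit of a view-dependent/view-independent split. All layer sizes,
# names, and the fusion choice are illustrative assumptions.
import torch
import torch.nn as nn

class TwoBranchDecomposition(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Branch intended to capture view-dependent cues (e.g., top-down geometry).
        self.view_dependent = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        # Branch intended to capture view-independent appearance cues.
        self.view_independent = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        # Simple 1x1 fusion of the two branches back to the original width.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dep = self.view_dependent(x)
        ind = self.view_independent(x)
        return self.fuse(torch.cat([dep, ind], dim=1))

if __name__ == "__main__":
    feats = torch.randn(2, 64, 80, 80)            # a dummy feature map
    print(TwoBranchDecomposition()(feats).shape)  # torch.Size([2, 64, 80, 80])
```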

20 pages, 3563 KiB  
Article
EDANet: Efficient Dynamic Alignment of Small Target Detection Algorithm
by Gaofeng Zhu, Fenghua Zhu, Zhixue Wang, Shengli Yang and Zheng Li
Electronics 2025, 14(2), 242; https://doi.org/10.3390/electronics14020242 - 8 Jan 2025
Abstract
Unmanned aerial vehicles (UAVs) integrated with computer vision technology have emerged as an effective method for information acquisition in various applications. However, due to the small proportion of target pixels and susceptibility to background interference in multi-angle UAV imaging, missed detections and false results frequently occur. To address this issue, a small target detection algorithm, EDANet, is proposed based on YOLOv8. First, the backbone network is replaced by EfficientNet, which can dynamically explore the network size and the image resolution using a scaling factor. Second, the EC2f feature extraction module is designed to achieve unique coding in different directions through parallel branches. The position information is effectively embedded in the channel attention to enhance the spatial representation ability of features. To mitigate the low utilization of small target pixels, we introduce the DTADH detection module, which facilitates feature fusion via a feature-sharing interactive network. Simultaneously, a task alignment predictor assigns classification and localization tasks. In this way, not only is feature utilization optimized, but also the number of parameters is reduced. Finally, leveraging logic and feature knowledge distillation, we employ binary probability mapping of soft labels and a soft label weighting strategy to enhance the algorithm’s learning capabilities in target classification and localization. Experimental validation on the UAV aerial dataset VisDrone2019 demonstrates that EDANet outperforms existing methods, reducing GFLOPs by 39.3% and improving mAP by 4.6%.
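
As background for the distillation component, here is a minimal sketch of temperature-scaled soft-label logit distillation in PyTorch; the function name, temperature, and tensor shapes are assumptions, and the paper’s binary probability mapping and soft-label weighting strategy are not reproduced.

```python
# Minimal sketch (assumption, not the paper's exact formulation): soft-label
# logit distillation with a temperature, as commonly used when transferring
# classification knowledge from a teacher model to a student.
import torch
import torch.nn.functional as F

def soft_label_distillation(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between temperature-softened teacher and student logits."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

if __name__ == "__main__":
    s = torch.randn(8, 10)   # student logits for 8 samples, 10 classes
    t_ = torch.randn(8, 10)  # teacher logits
    print(soft_label_distillation(s, t_).item())
```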

21 pages, 1176 KiB  
Article
A Sparse Feature-Based Mixed Signal Frequencies Detecting for Unmanned Aerial Vehicle Communications
by Yang Wang, Yongxin Feng, Fan Zhou, Xi Chen, Jian Wang and Peiying Zhang
Drones 2025, 9(1), 34; https://doi.org/10.3390/drones9010034 - 6 Jan 2025
Abstract
As drone technology develops rapidly and many users emerge in airspace networks, various forms of interference have caused the wireless spectrum to exhibit a dense, diverse, and dynamic trend. This increases the probability of spectrum conflicts among users and seriously impacts the quality and transmission rate of communication. How to effectively improve the detection accuracy of each frequency component in electromagnetic space mixed signals and avoid spectrum conflicts has become one of the crucial issues currently faced by unmanned aerial vehicle (UAV) communication technologies. However, the existing methods overlook the mutual interference among the component signals as well as the noise during the frequency detection process, which affects their detection performance. In this paper, we propose a mixed-signal frequency detection method based on the reconstruction of sparse feature signals. Without information such as frequency range, bandwidth, and the number of components, it can utilize the autoencoder network to learn the sparse features of each component signal in the high-dimensional frequency domain space and construct a nonlinear reconstruction function to reconstruct each component signal in the mixed signal, thereby realizing the separation of signals. On this basis, complex dilated convolution and deconvolution are used successively to perform feature extraction on the separated signals, which enhances the receptive field and frequency resolution ability of the network for signals, reduces the interference between noise and different component signals, and realizes the accurate estimation of the number of components and carrier frequencies. The simulation results show that when SNR 6 dB, the detection accuracy of the number of component signals is greater than 96.3%. The detection error and detection accuracy of component frequencies are less than 3.19% and greater than 90.7%, respectively.
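
For context only, the snippet below shows a classical FFT peak-picking baseline for estimating the carrier frequencies in a noisy mixed signal; it is not the paper’s sparse-feature reconstruction method, and all signal parameters and thresholds are made-up illustrative values.

```python
# Context sketch, not the paper's method: a classical FFT peak-picking baseline
# for estimating the carrier frequencies present in a noisy mixed signal.
import numpy as np

fs = 8000.0                        # sampling rate in Hz (arbitrary)
t = np.arange(0, 0.5, 1.0 / fs)    # 0.5 s of samples
freqs_true = [440.0, 1200.0, 2600.0]
signal = sum(np.sin(2 * np.pi * f * t) for f in freqs_true)
signal += 0.5 * np.random.randn(t.size)           # additive noise

spectrum = np.abs(np.fft.rfft(signal))
freq_axis = np.fft.rfftfreq(t.size, d=1.0 / fs)

# Keep bins that rise well above the noise floor, then merge nearby bins.
threshold = 6.0 * np.median(spectrum)
candidate_bins = np.where(spectrum > threshold)[0]
estimates = []
for b in candidate_bins:
    if not estimates or freq_axis[b] - estimates[-1] > 20.0:
        estimates.append(freq_axis[b])
print("estimated carrier frequencies (Hz):", [round(f, 1) for f in estimates])
```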

29 pages, 25966 KiB  
Article
Visual–Inertial Autonomous UAV Navigation in Complex Illumination and Highly Cluttered Under-Canopy Environments
by Leyang Zhao, Weixi Wang, Qizhi He, Li Yan and Xiaoming Li
Drones 2025, 9(1), 27; https://doi.org/10.3390/drones9010027 - 2 Jan 2025
Abstract
The under-canopy environment, which is inherently inaccessible to humans, necessitates the use of unmanned aerial vehicles (UAVs) for data collection. The implementation of UAV autonomous navigation in such environments faces challenges, including dense obstacles, GNSS signal interference, and varying lighting conditions. This paper introduces a UAV autonomous navigation method specifically designed for under-canopy environments. Initially, image enhancement techniques are integrated with neural network-based visual feature extraction. Subsequently, the method employs a high-dimensional error-state optimizer coupled with a low-dimensional height filter to achieve high-precision localization of the UAV in under-canopy environments. Furthermore, it proposes a boundary sampling autonomous exploration algorithm and an advanced Rapidly-exploring Random Tree (RRT) path planning algorithm. The objective is to enhance the reliability and safety of UAV operations beneath the forest canopy, thereby establishing a technical foundation for surveying vertically stratified natural resources.
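
Because the planner builds on an RRT variant, a minimal textbook 2-D RRT is sketched below for orientation; the obstacle model, bounds, and step size are toy assumptions, and the paper’s under-canopy planner is considerably more involved.

```python
# Illustrative sketch only: a minimal 2-D RRT planner. Obstacles, step size,
# and bounds are made-up toy values, not the paper's advanced RRT variant.
import math
import random

def collision_free(p, obstacles):
    """Point is valid if it lies outside every circular obstacle (cx, cy, r)."""
    return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)

def rrt(start, goal, obstacles, bounds=(0, 10), step=0.5, iters=2000, goal_tol=0.5):
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        # Extend from the nearest existing node toward the random sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        new = (nodes[i][0] + step * (sample[0] - nodes[i][0]) / d,
               nodes[i][1] + step * (sample[1] - nodes[i][1]) / d)
        if not collision_free(new, obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk the tree back to the start to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

if __name__ == "__main__":
    print(rrt((1, 1), (9, 9), obstacles=[(5, 5, 1.5)]))
```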

17 pages, 4021 KiB  
Article
Neural Network Identification-Based Model Predictive Heading Control for Wave Gliders
by Peng Jin, Baolin Zhang and Yun Zhang
J. Mar. Sci. Eng. 2024, 12(12), 2279; https://doi.org/10.3390/jmse12122279 - 11 Dec 2024
Abstract
This paper deals with the neural network identification-based model predictive heading control problem for a wave glider. First, based on a kinematic model of the wave glider subjected to external disturbance and system uncertainty, a state space model of the wave glider is established. Then, a neural network identification-based model predictive heading controller (NNI-MPHC) is designed for the wave glider. The heading controller mainly includes three components: a model predictive controller, a neural network-based model identifier, and a linear reduced-order extended state observer. Finally, a design algorithm of the NNI-MPHC is presented. The algorithm is demonstrated through simulation, where the results show the following: (i) The designed NNI-MPHC is remarkably capable of guaranteeing the tracking performance of the wave glider. (ii) Comparing the NNI-MPHC with existing heading controllers, the former is better than the latter in terms of tracking accuracy, rapidity, and robustness to model uncertainty and/or external disturbances.
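
To make the receding-horizon idea concrete, the sketch below implements an unconstrained MPC step for a simple first-order heading model solved by batch least squares; the model coefficients, horizon, and weights are placeholder assumptions and do not reflect the NNI-MPHC design.

```python
# Minimal sketch under simplifying assumptions (not the paper's NNI-MPHC):
# an unconstrained receding-horizon (MPC-style) controller for a first-order
# heading model psi_{k+1} = a*psi_k + b*u_k, solved in batch form.
import numpy as np

def mpc_step(psi, ref, a=0.95, b=0.1, horizon=10, u_weight=0.05):
    """Return the first control move of the optimal sequence for heading tracking."""
    # Prediction matrices: psi_pred = F*psi + G*u, where u is the move sequence.
    F = np.array([a ** (i + 1) for i in range(horizon)])
    G = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Quadratic cost ||psi_pred - ref||^2 + u_weight*||u||^2 -> regularized least squares.
    A = np.vstack([G, np.sqrt(u_weight) * np.eye(horizon)])
    y = np.concatenate([np.full(horizon, ref) - F * psi, np.zeros(horizon)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]

if __name__ == "__main__":
    psi, ref = 0.0, 1.0              # current heading and reference (radians)
    for _ in range(30):
        u = mpc_step(psi, ref)
        psi = 0.95 * psi + 0.1 * u   # simulate the same simple model
        print(round(psi, 3))
```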

21 pages, 11232 KiB  
Article
Deep Learning-Based Docking Scheme for Autonomous Underwater Vehicles with an Omnidirectional Rotating Optical Beacon
by Yiyang Li, Kai Sun, Zekai Han and Jichao Lang
Drones 2024, 8(12), 697; https://doi.org/10.3390/drones8120697 - 21 Nov 2024
Abstract
Visual recognition and localization of underwater optical beacons are critical for AUV docking, but traditional beacons are limited by fixed directionality and light attenuation in water. To extend the range of optical docking, this study designs a novel omnidirectional rotating optical beacon that provides 360-degree light coverage over 45 m, improving beacon detection probability through synchronized scanning. Addressing the challenges of light centroid detection, we introduce a parallel deep learning detection algorithm based on an improved YOLOv8-pose model. Initially, an underwater optical beacon dataset encompassing various light patterns was constructed. Subsequently, the network was optimized by incorporating a small detection head, implementing dynamic convolution and receptive-field attention convolution for single-stage multi-scale localization. A post-processing method based on keypoint joint IoU matching was proposed to filter redundant detections. The algorithm achieved 93.9% AP at 36.5 FPS, with at least a 5.8% increase in detection accuracy over existing methods. Moreover, a light-source-based measurement method was developed to accurately detect the beacon’s orientation. Experimental results indicate that this scheme can achieve high-precision omnidirectional guidance with azimuth and pose estimation errors of -4.54° and 3.09°, respectively, providing a reliable solution for long-range and large-scale optical docking.
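
As a reference point for the redundancy-filtering step, the sketch below implements plain IoU-based non-maximum suppression; the paper’s post-processing additionally matches keypoints jointly with IoU, which is not reproduced here, and the example boxes and threshold are arbitrary.

```python
# Context sketch, not the paper's exact post-processing: plain IoU-based
# non-maximum suppression, the generic mechanism for filtering redundant detections.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, then drop remaining boxes that overlap it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

if __name__ == "__main__":
    boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
    scores = [0.9, 0.8, 0.7]
    print(nms(boxes, scores))   # -> [0, 2]
```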

12 pages, 768 KiB  
Article
Three-Dimensional CKANs: UUV Noncooperative Target State Estimation Approach Based on 3D Convolutional Kolmogorov–Arnold Networks
by Changjian Lin, Dan Yu and Shibo Lin
J. Mar. Sci. Eng. 2024, 12(11), 2040; https://doi.org/10.3390/jmse12112040 - 11 Nov 2024
Abstract
Accurate and stable estimation of the position and trajectory of noncooperative targets is crucial for the safe navigation and operation of sonar-equipped underwater unmanned vehicles (UUVs). However, the uncertainty associated with sonar observations and the unpredictability of noncooperative target movements often undermine the stability of traditional Bayesian methods. This paper presents an innovative approach for noncooperative target state estimation utilizing 3D Convolutional Kolmogorov–Arnold Networks (3DCKANs). By establishing a non-Markovian model that characterizes state estimation of UUV noncooperative targets under uncertain observations, we leverage historical data to construct 3D Convolutional Kolmogorov–Arnold Networks. This network learns the patterns of sonar observations and target state transitions from a substantial offline dataset, allowing it to approximate the posterior probability distribution derived from past observations effectively. Additionally, a sliding window technique is integrated into the convolutional neural network to enhance the estimator’s fault tolerance with respect to observation data in both temporal and spatial dimensions, particularly when posterior probabilities are unknown. The incorporation of the Kolmogorov–Arnold representation within the convolutional layers enhances the network’s capacity for nonlinear expression and adaptability in processing spatial information. Finally, we present statistical experiments and simulation cases to validate the accuracy and stability of the proposed method.
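
To make the sliding-window idea concrete, the toy model below feeds a window of past sonar frames into a small 3-D convolutional estimator in PyTorch; the shapes, output dimension, and layers are assumptions, and no Kolmogorov–Arnold layers are included.

```python
# Illustrative sketch (assumed shapes, not the paper's network): a sliding
# window of past sonar "frames" fed into a small 3-D convolutional model so
# the estimator can exploit temporal as well as spatial context.
import torch
import torch.nn as nn

class WindowedConv3dEstimator(nn.Module):
    def __init__(self, window: int = 8):
        super().__init__()
        self.window = window
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),  # convolve over (time, x, y)
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(8, 4)   # e.g., target position and velocity (px, py, vx, vy)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, H, W) -> add a channel axis for Conv3d.
        x = frames.unsqueeze(1)
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    history = torch.randn(2, 8, 32, 32)               # two windows of 8 past sonar frames
    print(WindowedConv3dEstimator()(history).shape)   # torch.Size([2, 4])
```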

20 pages, 3310 KiB  
Article
Categorical-Parallel Adversarial Defense for Perception Models on Single-Board Embedded Unmanned Vehicles
by Yilan Li, Xing Fan, Shiqi Sun, Yantao Lu and Ning Liu
Drones 2024, 8(9), 438; https://doi.org/10.3390/drones8090438 - 28 Aug 2024
Abstract
Significant advancements in robustness against input perturbations have been realized for deep neural networks (DNNs) through the application of adversarial training techniques. However, implementing these methods for perception tasks in unmanned vehicles, such as object detection and semantic segmentation, particularly on real-time single-board computing devices, encounters two primary challenges: the time-intensive nature of training large-scale models and performance degradation due to weight quantization in real-time deployments. To address these challenges, we propose Ca-PAT, an efficient and effective adversarial training framework designed to mitigate perturbations. Ca-PAT represents a novel approach by integrating quantization effects into adversarial defense strategies specifically for unmanned vehicle perception models on single-board computing platforms. Notably, Ca-PAT introduces an innovative categorical-parallel adversarial training mechanism for efficient defense in large-scale models, coupled with an alternate-direction optimization framework to minimize the adverse impacts of weight quantization. We conducted extensive experiments on various perception tasks using the Imagenet-te dataset and data collected from physical unmanned vehicle platforms. The results demonstrate that the Ca-PAT defense framework significantly outperforms state-of-the-art baselines, achieving substantial improvements in robustness across a range of perturbation scenarios.
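
For background, the snippet below shows a single FGSM-style adversarial training step, the basic mechanism that adversarial-training defenses build on; the model, epsilon, and data are placeholders, and Ca-PAT’s categorical-parallel scheme and quantization handling are not shown.

```python
# Generic sketch, not Ca-PAT itself: one FGSM-style adversarial training step.
# The model, epsilon, and data below are placeholders for illustration only.
import torch
import torch.nn as nn

def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
    loss_fn = nn.CrossEntropyLoss()
    # Craft an FGSM perturbation from the gradient of the loss w.r.t. the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()
    # Train on the perturbed batch.
    optimizer.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
    print(adversarial_training_step(model, x, y, opt))
```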
