Perception, Decision-Making and Control of Intelligent Unmanned System

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 20 April 2025 | Viewed by 25883

Special Issue Editors


Guest Editor
College of Aeronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Interests: dynamics and control; guidance navigation and control; intelligent and autonomous systems

Guest Editor
School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
Interests: perception and decision-making

Guest Editor
1. College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2. Shanghai Key Laboratory of Aerospace Intelligent Control Technology, Shanghai 200233, China
Interests: intelligent perception

Special Issue Information

Dear Colleagues,

In recent years, the field of unmanned systems has witnessed remarkable advances, leading to the emergence of intelligent unmanned systems capable of perceiving their environment, making decisions, and controlling their actions autonomously. These systems have transformed industries including transportation, agriculture, surveillance, and exploration, where they enhance efficiency, reduce human error, and perform tasks that are too dangerous or tedious for humans. Unmanned Aerial Vehicles (UAVs), commonly known as drones, have benefited particularly from the integration of intelligent capabilities, which has enabled a wide range of applications such as aerial photography and videography, precision agriculture, search and rescue operations, infrastructure inspection, and environmental monitoring.

At the core of these intelligent unmanned systems lie three fundamental components: perception, decision making, and control. Perception involves gathering and interpreting information from the environment using sensors such as cameras, lidar, radar, and other specialized devices. This information is processed to create a representation of the system's surroundings, enabling it to understand and interact with the world. In the decision-making stage, the system uses this representation to select a course of action. Decisions range from simple tasks, such as obstacle avoidance, to complex ones, such as route planning, target identification, or collaborative decision making in a multi-agent environment. Finally, the control component executes the decisions made during the decision-making phase.

The integration of perception, decision making, and control in intelligent unmanned systems is a highly interdisciplinary field, drawing upon expertise from various domains such as artificial intelligence, robotics, computer vision, machine learning, and control theory. Researchers and engineers in this field strive to develop algorithms, architectures, and techniques that enable unmanned systems to operate autonomously, adapt to dynamic environments, and interact safely and effectively with humans and other agents. The challenges in developing perception, decision-making, and control capabilities for intelligent unmanned systems are numerous. These include the need for robust and accurate perception algorithms, efficient and scalable decision-making frameworks, real-time control strategies, and the ability to handle uncertainty and unexpected situations.

In conclusion, the field of perception, decision making, and control of intelligent unmanned systems holds immense potential for transforming various industries and enabling a wide range of applications. As technology continues to advance, the development of robust and reliable algorithms and systems will pave the way for the deployment of intelligent unmanned systems that can operate autonomously, adapt to changing environments, and collaborate with humans and other agents effectively.

As technology continues to advance, we can expect further innovations in areas such as swarm intelligence, collaborative decision making, and enhanced autonomy, opening up new possibilities for UAVs in various industries and domains.

This Special Issue focuses on the design of intelligent drone systems, including research into control systems, artificial intelligence, decision-making, and UAV modelling and simulation. It aims to provide a venue for drone research on artificial intelligence, unmanned systems, robotics, automation, intelligent systems, and related areas. All papers will be published in an open access format following peer review.

Both research papers and overview papers are welcome. Topics of interest include (but are not limited to) the following:

  • Intelligent unmanned systems;
  • Artificial intelligence;
  • Robotics and automation;
  • Machine learning;
  • Safe learning;
  • Formation control;
  • Group formation tracking;
  • Bipartite cooperative control;
  • SLAM (simultaneous localization and mapping);
  • Collaborative perception and positioning;
  • Decision making;
  • Drones;
  • Path planning;
  • Image fusion;
  • Feature fusion;
  • Sensor fusion;
  • Scene understanding.

We are pleased to invite you to submit manuscripts to this MDPI Drones Special Issue on “Perception, Decision-Making and Control of Intelligent Unmanned System”.

Prof. Dr. Shuang Li
Dr. Chengchao Bai
Dr. Jinzhen Mu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


18 pages, 9378 KiB  
Article
Multi-Rotor Drone-Based Thermal Target Tracking with Track Segment Association for Search and Rescue Missions
by Seokwon Yeom
Drones 2024, 8(11), 689; https://doi.org/10.3390/drones8110689 - 19 Nov 2024
Viewed by 641
Abstract
Multi-rotor drones have expanded their range of applications, one of which being search and rescue (SAR) missions using infrared thermal imaging. This paper addresses thermal target tracking with track segment association (TSA) for SAR missions. Three types of associations including TSA are developed with an interacting multiple model (IMM) approach. During multiple-target tracking, tracks are initialized, maintained, and terminated. There are three different associations in track maintenance: measurement–track association, track–track association for tracks that exist at the same time (track association and fusion), and track–track association for tracks that exist at separate times (TSA). Measurement–track association selects the statistically nearest measurement and updates the track with the measurement through the IMM filter. Track association and fusion fuses redundant tracks for the same target that are spatially separated. TSA connects tracks that have become broken and separated over time. This process is accomplished through the selection of candidate track pairs, backward IMM filtering, association testing, and an assignment rule. In the experiments, a drone was equipped with an infrared thermal imaging camera, and two thermal videos were captured of three people in a non-visible environment. These three hikers were located close together and occluded by each other or other obstacles in the mountains. The drone was allowed to move arbitrarily. The tracking results were evaluated by the average total track life, average mean track life, and average track purity. The track segment association improved the average mean track life of each video by 99.8% and 250%, respectively.
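The measurement–track association step described in the abstract can be made concrete with a minimal sketch. This is not the paper's code: it uses a scalar constant-position Kalman filter in place of the IMM filter, and the gate threshold, noise values, and function names are assumptions for illustration.

```python
# Illustrative sketch: gated nearest-neighbour measurement-track association
# with a scalar Kalman filter standing in for the paper's IMM filter.
import math

def kalman_update(x, P, z, R=1.0, Q=0.01):
    """One predict/update step for a scalar state (random-walk motion model)."""
    P = P + Q                      # predict
    S = P + R                      # innovation covariance
    K = P / S                      # Kalman gain
    return x + K * (z - x), (1 - K) * P, S

def associate(track_x, track_P, measurements, gate=9.21):
    """Pick the statistically nearest measurement inside a chi-square gate."""
    best, best_d2 = None, gate
    S = track_P + 1.0              # innovation covariance with R = 1
    for z in measurements:
        d2 = (z - track_x) ** 2 / S   # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = z, d2
    return best

x, P = 0.0, 1.0
# Each frame contains the true-target measurement plus one clutter point;
# clutter falls outside the gate and is rejected.
for zs in [[0.1, 5.0], [0.3, -4.0], [0.2, 7.0]]:
    z = associate(x, P, zs)
    if z is not None:
        x, P, _ = kalman_update(x, P, z)
print(round(x, 2))
```

The same gating logic underlies TSA as well, except that there the association test is applied between a backward-filtered track segment and a candidate earlier segment rather than between a track and a raw measurement.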

27 pages, 1679 KiB  
Article
T–S Fuzzy Observer-Based Output Feedback Lateral Control of UGVs Using a Disturbance Observer
by Seunghoon Lee, Sounghwan Hwang and Han Sol Kim
Drones 2024, 8(11), 685; https://doi.org/10.3390/drones8110685 - 19 Nov 2024
Viewed by 528
Abstract
This paper introduces a novel observer-based fuzzy tracking controller that integrates disturbance estimation to improve state estimation and path tracking in the lateral control systems of Unmanned Ground Vehicles (UGVs). The design of the controller is based on linear matrix inequality (LMI) conditions derived from a Takagi–Sugeno fuzzy model and a relaxation technique that incorporates additional null terms. The state observer is developed to estimate both the vehicle’s state and external disturbances, such as road curvature. By incorporating the disturbance observer, the proposed approach effectively mitigates performance degradation caused by discrepancies between the system and observer dynamics. The simulation results, conducted in MATLAB and a commercial autonomous driving simulator, demonstrate that the proposed control method substantially enhances state estimation accuracy and improves the robustness of path tracking under varying conditions.

26 pages, 3132 KiB  
Article
A Novel Fuzzy Image-Based UAV Landing Using RGBD Data and Visual SLAM
by Shayan Sepahvand, Niloufar Amiri, Houman Masnavi, Iraj Mantegh and Farrokh Janabi-Sharifi
Drones 2024, 8(10), 594; https://doi.org/10.3390/drones8100594 - 18 Oct 2024
Viewed by 945
Abstract
In this work, an innovative perception-guided approach is proposed for landing zone detection and realization of Unmanned Aerial Vehicles (UAVs) operating in unstructured environments ridden with obstacles. To accommodate secure landing, two well-established tools, namely fuzzy systems and visual Simultaneous Localization and Mapping (vSLAM), are implemented into the landing pipeline. Firstly, colored images and point clouds acquired by a visual sensory device are processed to serve as characterizing maps that acquire information about flatness, steepness, inclination, and depth variation. By leveraging these images, a novel fuzzy map infers the areas for risk-free landing on which the UAV can safely land. Subsequently, the vSLAM system is employed to estimate the platform’s pose and an additional set of point clouds. The vSLAM point clouds presented in the corresponding keyframe are projected back onto the image plane on which a threshold fuzzy landing score map is applied. In other words, this binary image serves as a mask for the re-projected vSLAM world points to identify the best subset for landing. Once these image points are identified, their corresponding world points are located, and among them, the center of the cluster with the largest area is chosen as the point to land. Depending on the UAV’s size, four synthesis points are added to the vSLAM point cloud to execute the image-based visual servoing landing using image moment features. The effectiveness of the landing package is assessed through the ROS Gazebo simulation environment, where comparisons are made with a state-of-the-art landing site detection method.

21 pages, 36173 KiB  
Article
Multi-Robot Path Planning Algorithm for Collaborative Mapping under Communication Constraints
by Chengyu Zhou, Junxiang Li, Meiping Shi and Tao Wu
Drones 2024, 8(9), 493; https://doi.org/10.3390/drones8090493 - 17 Sep 2024
Viewed by 877
Abstract
In extensive outdoor collaborative exploration tasks, multiple robots require efficient path planning methods to ensure rapid and comprehensive map construction. However, current collaborative mapping algorithms often integrate poorly with path planning, especially under limited communication conditions. Such conditions can complicate data exchange, leading to inefficiencies and missed areas in real-world environments. This paper introduces a path planning approach specifically designed for distributed collaborative mapping tasks, aimed at enhancing map completeness, mapping efficiency, and communication robustness under communication constraints. We frame the entire task as a k-Chinese Postman Problem (k-CPP) and optimize it using a genetic algorithm (GA). This method fully leverages topology maps to efficiently plan subpaths for multiple robots, ensuring thorough coverage of the mapping area without the need for prior navigation maps. Additionally, we incorporate communication constraints into our path planning to ensure stable data exchange among robots in environments with only short-range communication capabilities. Field experiment results highlight the superior performance of our method in terms of stability, efficiency, and robust inter-robot communication.
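The load-balancing core of a GA-optimized k-CPP can be sketched in a few lines. This is our own simplified illustration, not the paper's algorithm: real k-CPP routes must respect graph connectivity, whereas here each required edge is simply assigned to one of k robots so that the longest workload (makespan) is minimized; the edge lengths and GA parameters are invented.

```python
# Simplified GA sketch: assign required edges to k robots to minimise makespan.
import random

random.seed(0)
EDGES = [4, 8, 1, 7, 3, 6, 2, 5, 9, 4]   # lengths of edges that must be traversed
K = 3                                     # number of robots

def makespan(assign):
    """Longest per-robot workload for a given edge-to-robot assignment."""
    loads = [0] * K
    for edge, robot in zip(EDGES, assign):
        loads[robot] += edge
    return max(loads)

def evolve(pop_size=40, generations=120, mutation=0.1):
    pop = [[random.randrange(K) for _ in EDGES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(EDGES))   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):             # per-gene mutation
                if random.random() < mutation:
                    child[i] = random.randrange(K)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(makespan(best))   # total edge length is 49, so the best possible makespan is 17
```

In the full k-CPP formulation the fitness function would additionally penalize disconnected subroutes and, as in the paper, violations of the short-range communication constraint.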

26 pages, 9195 KiB  
Article
Deep Reinforcement Learning-Driven Collaborative Rounding-Up for Multiple Unmanned Aerial Vehicles in Obstacle Environments
by Zipeng Zhao, Yu Wan and Yong Chen
Drones 2024, 8(9), 464; https://doi.org/10.3390/drones8090464 - 6 Sep 2024
Viewed by 1509
Abstract
With the rapid advancement of UAV technology, the utilization of multi-UAV cooperative operations has become increasingly prevalent in various domains, including military and civilian applications. However, achieving efficient coordinated rounding-up of multiple UAVs remains a challenging problem. This paper addresses the issue of collaborative drone hunting by proposing a decision-making control model based on deep reinforcement learning. Additionally, a shared experience data pool is established to facilitate communication between drones. Each drone possesses independent decision-making and control capabilities while also considering the presence of other drones in the environment to collaboratively accomplish obstacle avoidance and rounding-up tasks. Furthermore, we redefine and design the reward function of reinforcement learning to achieve precise control of drone swarms in diverse environments. Simulation experiments demonstrate the feasibility of the proposed method, showcasing its successful completion of obstacle avoidance, tracking, and rounding-up tasks in an obstacle environment.
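The shared experience data pool mentioned in the abstract is structurally a replay buffer written to by several agents. The sketch below is our own assumption about such a structure, not the authors' implementation; class and field names are invented.

```python
# Minimal sketch of a shared experience pool: several agents push transitions
# into one replay buffer, and each samples mini-batches from the joint pool.
import random
from collections import deque

class SharedReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions drop out first

    def push(self, agent_id, state, action, reward, next_state, done):
        self.buffer.append((agent_id, state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

pool = SharedReplayBuffer()
for agent in range(3):                 # three drones writing to one pool
    for step in range(5):
        pool.push(agent, (step,), 0, -1.0, (step + 1,), False)

batch = pool.sample(8)
print(len(batch), len({t[0] for t in batch}) > 1)   # batch mixes several agents
```

Because every drone trains on transitions generated by the whole team, each agent indirectly observes the others' behaviour, which is the communication mechanism the abstract describes.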

16 pages, 4801 KiB  
Article
A Cooperative Decision-Making Approach Based on a Soar Cognitive Architecture for Multi-Unmanned Vehicles
by Lin Ding, Yong Tang, Tao Wang, Tianle Xie, Peihao Huang and Bingsan Yang
Drones 2024, 8(4), 155; https://doi.org/10.3390/drones8040155 - 18 Apr 2024
Cited by 1 | Viewed by 1815
Abstract
Multi-unmanned systems have demonstrated significant applications across various fields under complex or extreme operating environments. In order to make such systems highly efficient and reliable, cooperative decision-making methods have been utilized as a critical technology for successful future applications. However, current multi-agent decision-making algorithms pose many challenges, including difficulties understanding human decision processes, poor time efficiency, and reduced interpretability. Thus, a real-time online collaborative decision-making model simulating human cognition is presented in this paper to solve those problems under unknown, complex, and dynamic environments. The provided model based on the Soar cognitive architecture aims to establish domain knowledge and simulate the process of human cooperation and adversarial cognition, fostering an understanding of the environment and tasks to generate real-time adversarial decisions for multi-unmanned systems. This paper devised intricate forest environments to evaluate the collaborative capabilities of agents and their proficiency in implementing various tactical strategies while assessing the effectiveness, reliability, and real-time action of the proposed model. The results reveal significant advantages for the agents in adversarial experiments, demonstrating strong capabilities in understanding the environment and collaborating effectively. Additionally, decision-making occurs in milliseconds, with time consumption decreasing as experience accumulates, mirroring the growth pattern of human decision-making.

26 pages, 4764 KiB  
Article
The Situation Assessment of UAVs Based on an Improved Whale Optimization Bayesian Network Parameter-Learning Algorithm
by Weinan Li, Weiguo Zhang, Baoning Liu and Yicong Guo
Drones 2023, 7(11), 655; https://doi.org/10.3390/drones7110655 - 1 Nov 2023
Cited by 4 | Viewed by 1940
Abstract
To realize unmanned aerial vehicle (UAV) situation assessment, a Bayesian network (BN) for situation assessment is established. Aimed at the problem that the parameters of the BN are difficult to obtain, an improved whale optimization algorithm based on prior parameter intervals (IWOA-PPI) for parameter learning is proposed. Firstly, according to the dependencies between the situation and its related factors, the structure of the BN is established. Secondly, in order to fully mine the prior knowledge of parameters, the parameter constraints are transformed into parameter prior intervals using Monte Carlo sampling and interval transformation formulas. Thirdly, a variable encircling factor and a nonlinear convergence factor are proposed. The former and the latter enhance the local and global search capabilities of the whale optimization algorithm (WOA), respectively. Finally, a simulated annealing strategy incorporating Levy flight is introduced to enable the WOA to jump out of the local optimum. In the experiment for the standard BNs, five parameter-learning algorithms are applied, and the results prove that the IWOA-PPI is not only effective but also the most accurate. In the experiment for the situation BN, the situations of the assumed mission scenario are evaluated, and the results show that the situation assessment method proposed in this article is correct and feasible. Full article
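A toy version of the underlying whale optimization algorithm makes the convergence-factor idea concrete. This sketch minimizes the sphere function; the nonlinear factor a(t) = 2(1 − (t/T)²) is an assumed form for illustration only, and the paper's exact factor, variable encircling factor, and Levy-flight annealing are not reproduced.

```python
# Toy WOA on the sphere function, with an assumed nonlinear convergence factor.
import math
import random

random.seed(1)
DIM, POP, ITERS = 5, 20, 200
sphere = lambda x: sum(v * v for v in x)    # objective: minimum 0 at the origin

whales = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
best = min(whales, key=sphere)

for t in range(ITERS):
    a = 2 * (1 - (t / ITERS) ** 2)          # nonlinear convergence factor (assumed)
    for i, w in enumerate(whales):
        r, p = random.random(), random.random()
        A, C = 2 * a * r - a, 2 * random.random()
        if p < 0.5:
            # encircle the best whale when |A| < 1, else explore a random whale
            ref = best if abs(A) < 1 else random.choice(whales)
            new = [ref[d] - A * abs(C * ref[d] - w[d]) for d in range(DIM)]
        else:                               # spiral update around the best whale
            l = random.uniform(-1, 1)
            new = [abs(best[d] - w[d]) * math.exp(l) * math.cos(2 * math.pi * l)
                   + best[d] for d in range(DIM)]
        whales[i] = new
        if sphere(new) < sphere(best):
            best = list(new)

print(round(sphere(best), 6))
```

In IWOA-PPI the search would additionally be clipped to the prior parameter intervals derived from the constraints, so every candidate remains a valid BN parameter vector.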

Review


30 pages, 2317 KiB  
Review
Artificial Intelligence Applied to Drone Control: A State of the Art
by Daniel Caballero-Martin, Jose Manuel Lopez-Guede, Julian Estevez and Manuel Graña
Drones 2024, 8(7), 296; https://doi.org/10.3390/drones8070296 - 3 Jul 2024
Cited by 6 | Viewed by 15656
Abstract
The integration of Artificial Intelligence (AI) tools and techniques has provided a significant advance in drone technology. Besides the military applications, drones are being increasingly used for logistics and cargo transportation, agriculture, construction, security and surveillance, exploration, and mobile wireless communication. The synergy between drones and AI has led to notable progress in the autonomy of drones, which have become capable of completing complex missions without direct human supervision. This study of the state of the art examines the impact of AI on improving drone autonomous behavior, covering from automation to complex real-time decision making. The paper provides detailed examples of the latest developments and applications. Ethical and regulatory challenges are also considered for the future evolution of this field of research, because drones with AI have the potential to greatly change our socioeconomic landscape.
