Distributed Control, Optimization, and Game of UAV Swarm Systems

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (25 July 2024) | Viewed by 22598

Special Issue Editors


Guest Editor
Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
Interests: distributed control; formation control; intelligent control

Guest Editor
School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
Interests: cooperative guidance; flight control

Guest Editor
Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
Interests: distributed optimization; networked game; cooperative control

Special Issue Information

Dear Colleagues,

UAV swarm systems, also known as multi-UAV systems, consist of multiple UAVs interacting with their neighbors and have broad potential applications in areas such as intelligent transportation, disaster rescue, and cooperative detection. Distributed control, optimization, and game problems for UAV swarm systems have become a hot research topic in many scientific communities, especially the control and robotics communities. The main challenge is to design controllers or protocols that rely only on relative information exchanged between neighbors. The distributed approach is promising because the resulting emergent behavior offers low cost, high scalability and flexibility, strong robustness, and easy maintenance. Motivated by these facts, more and more researchers are devoting themselves to obtaining sound results on this topic.

The goal of this Special Issue is to collect papers (original research articles and review papers) that offer insights into distributed control, optimization, and games of UAV swarm systems. The journal and the Special Issue do not consider manuscripts related to military operations/applications or containing any explicit reference to military organizations.

This Special Issue welcomes manuscripts that address the following themes:

  • Distributed control
  • Formation control
  • Distributed optimization
  • Intelligent motion planning
  • Game of UAV swarm systems
  • Distributed Nash equilibrium seeking

We look forward to receiving your original research articles and reviews.

Dr. Yongzhao Hua
Dr. Jianglong Yu
Dr. Chao Sun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV swarm systems
  • distributed control
  • formation control
  • distributed optimization
  • intelligent motion planning
  • swarm game
  • distributed Nash equilibrium seeking

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (14 papers)


Research

20 pages, 3247 KiB  
Article
A Butterfly Algorithm That Combines Chaos Mapping and Fused Particle Swarm Optimization for UAV Path Planning
by Linlin Wang, Xin Zhang, Huilong Zheng, Chuanyun Wang, Qian Gao, Tong Zhang, Zhongyi Li and Jing Shao
Drones 2024, 8(10), 576; https://doi.org/10.3390/drones8100576 - 11 Oct 2024
Viewed by 787
Abstract
Effective path planning is essential for autonomous drone flight to enhance task efficiency. Many researchers have applied swarm intelligence algorithms to drone path planning. For instance, the traditional Butterfly Optimization Algorithm (BOA) has been used for this purpose. However, traditional BOA faces challenges such as slow convergence and susceptibility to being trapped in local optima. An Improved Butterfly Optimization Algorithm (IBOA) has been developed to identify optimal routes to address these limitations. Initially, ICMIC mapping is utilized to establish the butterfly community, enhancing the initial population’s diversity and preventing premature algorithm convergence. Following this, a population reset strategy is introduced, replacing weaker individuals over a specified number of iterations while maintaining a constant population size. This strategy enhances the algorithm’s ability to avoid local optima and increases its robustness. Additionally, characteristics of the Particle Swarm Optimization (PSO) algorithm are integrated to enhance the butterfly’s location update mechanism, accelerating the algorithm’s convergence rate. To evaluate the performance of the IBOA algorithm, this study designed a CEC2020 function test experiment and compared it with several swarm intelligence algorithms. The results showed that IBOA achieved the best performance in 70% of the function tests, outperforming 75% of the other algorithms. In the path planning experiments within a simulated environment, IBOA quickly converged to the optimal path, and the paths it planned were the shortest and safest compared to those generated by other algorithms. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
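
The chaotic initialization step mentioned in the abstract can be illustrated with a short sketch. The snippet below seeds a search population with the ICMIC (infinite collapse) chaotic map and rescales it to the search bounds; the map parameter a, the population size, and the bounds are illustrative assumptions rather than values from the paper.

    import numpy as np

    def icmic_init(pop_size, dim, lower, upper, a=2.0, seed=0):
        """Seed a search population with the ICMIC chaotic map x_{k+1} = sin(a / x_k).

        Illustrative sketch: the map parameter a, the population size, and the
        bounds are assumptions, not values taken from the paper.
        """
        rng = np.random.default_rng(seed)
        x = rng.uniform(0.1, 0.9, size=(pop_size, dim))   # non-zero initial seeds
        for _ in range(50):                               # iterate into the chaotic regime
            x = np.where(np.abs(x) < 1e-12, 1e-12, x)     # keep the map well defined
            x = np.sin(a / x)
        # Rescale the chaotic values from [-1, 1] to the search bounds.
        return lower + (x + 1.0) / 2.0 * (upper - lower)

    population = icmic_init(pop_size=30, dim=3,
                            lower=np.array([0.0, 0.0, 0.0]),
                            upper=np.array([100.0, 100.0, 50.0]))
    print(population.shape)   # (30, 3)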

36 pages, 24832 KiB  
Article
Intelligent Swarm: Concept, Design and Validation of Self-Organized UAVs Based on Leader–Followers Paradigm for Autonomous Mission Planning
by Wilfried Yves Hamilton Adoni, Junaidh Shaik Fareedh, Sandra Lorenz, Richard Gloaguen, Yuleika Madriz, Aastha Singh and Thomas D. Kühne
Drones 2024, 8(10), 575; https://doi.org/10.3390/drones8100575 - 11 Oct 2024
Viewed by 2860
Abstract
Unmanned Aerial Vehicles (UAVs), commonly known as drones, are omnipresent and have grown in popularity due to their wide potential use in many civilian sectors. Equipped with sophisticated sensors and communication devices, drones can potentially form a multi-UAV system, also called an autonomous swarm, in which UAVs work together with little or no operator control. Depending on the complexity of the mission and the coverage area, swarm operations require important considerations regarding the intelligence and self-organization of the UAVs. Factors including the types of drones, the communication protocol and architecture, task planning, consensus control, and many other swarm mobility considerations must be investigated. While several papers highlight the use cases for UAV swarms, there is a lack of research that addresses in depth the challenges posed by deploying an intelligent UAV swarm. Against this backdrop, we propose a computational framework for a self-organized swarm performing autonomous and collaborative missions. The proposed approach is based on the Leader–Followers paradigm, which involves the distribution of ROS nodes among follower UAVs, while leaders perform supervision. Additionally, we have integrated background services that autonomously manage the complexities relating to task coordination, control policy, and failure management. In comparison with several research efforts, the proposed multi-UAV system is more autonomous and resilient since it can recover swiftly from system failure. It is also reliable and has been deployed on real UAVs for outdoor survey missions. This validates the applicability of the theoretical underpinnings of the proposed swarming concept. Experimental tests carried out as part of an area coverage mission with 6 quadcopters (2 leaders and 4 followers) reveal that the proposed swarming concept is very promising and inspiring for aerial vehicle technology. Compared with the conventional planning approach, the results are highly satisfactory, highlighting a significant gain in flight time and enabling missions to be accomplished rapidly while optimizing energy consumption. This makes it possible to explore large areas without frequent downtime to recharge the batteries. This manuscript has the potential to be extremely useful for future research into the application of unmanned swarms for autonomous missions. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)

21 pages, 1507 KiB  
Article
An Adaptive Task Planning Method for UAVC Task Layer: DSTCA
by Ting Duan, Qun Li, Xin Zhou and Xiaobo Li
Drones 2024, 8(10), 553; https://doi.org/10.3390/drones8100553 - 6 Oct 2024
Viewed by 473
Abstract
With the rapid development of digital intelligence, drones can provide many conveniences in daily life, especially when executing rescue missions in special areas. In remote areas, however, communication coverage cannot be guaranteed. Therefore, to improve the online adaptability of the task chain in task planning for systems with complex structures, a distributed source-task-capability allocation (DSTCA) problem is constructed. A task chain coordination mechanism is first proposed, and a DSTCA architecture based on this mechanism is built to achieve online adaptability of the swarm. Because existing algorithms cannot realize this idea, a DSTCA-CBBA algorithm based on CNP is proposed. Three indicators, namely efficiency change, agent score, and time, are evaluated through specific cases. In response to sudden node changes in the task chain, a maximum spanning tree algorithm is used to reconstruct the task chain in a short time, so that the mission assigned to each drone entity can still be completed. The experimental results also demonstrate the effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)

19 pages, 2002 KiB  
Article
Collision-Free Path Planning for Multiple Drones Based on Safe Reinforcement Learning
by Hong Chen, Dan Huang, Chenggang Wang, Lu Ding, Lei Song and Hongtao Liu
Drones 2024, 8(9), 481; https://doi.org/10.3390/drones8090481 - 12 Sep 2024
Viewed by 1119
Abstract
Reinforcement learning (RL) has been shown to be effective in path planning. However, it usually requires exploring a sufficient number of state–action pairs, some of which may be unsafe when deployed in practical obstacle environments. To this end, this paper proposes an end-to-end planning method based on a model-free RL framework with optimization, which can achieve better learning performance with a safety guarantee. Firstly, for second-order drone systems, a differentiable high-order control barrier function (HOCBF) is introduced to ensure that the output of the planning algorithm falls within a safe range. Then, a safety layer based on the HOCBF is proposed, which projects RL actions into a feasible solution set to guarantee safe exploration. Finally, we conducted a simulation for drone obstacle avoidance and validated the proposed method in the simulation environment. The experimental results demonstrate a significant enhancement over the baseline approach. Specifically, the proposed method achieved a substantial reduction in the average cumulative number of collisions per drone during training compared to the baseline. Additionally, in the testing phase, the proposed method realized a 43% improvement in the task success rate relative to MADDPG. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
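
The safety layer described in the abstract, which projects a learned action into the set permitted by a high-order control barrier function, can be sketched for the simplest case. The example below assumes a planar double-integrator drone, a single circular obstacle, and illustrative gains k1 and k2; it is a generic one-constraint projection, not the authors' implementation.

    import numpy as np

    def hocbf_project(u_rl, p, v, p_obs, r, k1=2.0, k2=1.0):
        """Project an RL acceleration command onto the HOCBF-safe half-space.

        Double-integrator model: p_dot = v, v_dot = u.
        Barrier: h(x) = ||p - p_obs||^2 - r^2  (h >= 0 means outside the obstacle).
        Second-order CBF condition: h_ddot + k1*h_dot + k2*h >= 0, which is linear
        in u, so the closest safe action has a closed form (illustrative gains).
        """
        d = p - p_obs
        h = d @ d - r**2
        h_dot = 2.0 * d @ v
        # h_ddot = 2*||v||^2 + 2*d@u  ->  constraint  a@u + b >= 0
        a = 2.0 * d
        b = 2.0 * v @ v + k1 * h_dot + k2 * h
        if a @ u_rl + b >= 0.0:
            return u_rl                      # RL action is already safe
        # Minimum-norm correction onto the boundary of the half-space.
        return u_rl - ((a @ u_rl + b) / (a @ a)) * a

    u_safe = hocbf_project(u_rl=np.array([1.0, 0.0]),
                           p=np.array([0.0, 0.0]), v=np.array([1.0, 0.0]),
                           p_obs=np.array([3.0, 0.0]), r=1.0)
    print(u_safe)

With a single linear constraint in the input, the projection has a closed form; with several obstacles or input bounds it becomes a small quadratic program, which is the usual formulation of such safety layers.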

27 pages, 3180 KiB  
Article
A Robust Hybrid Iterative Learning Formation Strategy for Multi-Unmanned Aerial Vehicle Systems with Multi-Operating Modes
by Song Yang, Wenshuai Yu, Zhou Liu and Fei Ma
Drones 2024, 8(8), 406; https://doi.org/10.3390/drones8080406 - 19 Aug 2024
Cited by 1 | Viewed by 689
Abstract
This paper investigates the formation control problem of multi-unmanned aerial vehicle (UAV) systems with multi-operating modes. While mode switching enhances the flexibility of multi-UAV systems, it also introduces dynamic model switching behaviors in UAVs. Moreover, obtaining an accurate dynamic model for a multi-UAV system is challenging in practice. In addition, communication link failures and time-varying unknown disturbances are inevitable in multi-UAV systems. Hence, to overcome the adverse effects of the above challenges, a hybrid iterative learning formation control strategy is proposed in this paper. The proposed controller does not rely on precise modeling and exhibits its learning ability by utilizing historical input–output data to update the current control input. Furthermore, two convergence theorems are proven to guarantee the convergence of state, disturbance estimation, and formation tracking errors. Finally, three simulation examples are conducted for a multi-UAV system consisting of four quadrotor UAVs under multi-operating modes, switching topologies, and external disturbances. The results of the simulations show the strategy’s effectiveness and superiority in achieving the desired formation control objectives. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
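
The learning mechanism described above, updating the current control input from the input-output data of previous trials, can be illustrated with a textbook P-type iterative learning control law on a scalar plant. The plant coefficients, learning gain, and reference trajectory below are assumptions for illustration; the paper's hybrid robust scheme is considerably more elaborate.

    import numpy as np

    # Scalar plant x(t+1) = 0.9*x(t) + 0.5*u(t), repeated over a finite horizon.
    A, B, T, N_ITER = 0.9, 0.5, 50, 20
    ref = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # desired trajectory (assumed)
    L_GAIN = 1.2                                     # learning gain (assumed)

    u = np.zeros(T)                                  # control input of iteration k
    for k in range(N_ITER):
        x = np.zeros(T + 1)
        for t in range(T):                           # run one trial (iteration)
            x[t + 1] = A * x[t] + B * u[t]
        e = ref - x                                  # tracking error of this trial
        # P-type ILC update: learn from the error one step ahead.
        u = u + L_GAIN * e[1:]
        print(f"iteration {k:2d}, max |error| = {np.abs(e).max():.4f}")

For this P-type update, the tracking error contracts from trial to trial whenever |1 - L_GAIN*B| < 1, which the assumed gain satisfies.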

25 pages, 11275 KiB  
Article
Multiple Unmanned Aerial Vehicle (multi-UAV) Reconnaissance and Search with Limited Communication Range Using Semantic Episodic Memory in Reinforcement Learning
by Boquan Zhang, Tao Wang, Mingxuan Li, Yanru Cui, Xiang Lin and Zhi Zhu
Drones 2024, 8(8), 393; https://doi.org/10.3390/drones8080393 - 14 Aug 2024
Cited by 1 | Viewed by 986
Abstract
Unmanned Aerial Vehicles (UAVs) have garnered widespread attention in reconnaissance and search operations due to their low cost and high flexibility. However, when multiple UAVs (multi-UAV) collaborate on these tasks, a limited communication range can restrict their efficiency. This paper investigates the problem of multi-UAV collaborative reconnaissance and search for static targets with a limited communication range (MCRS-LCR). To address communication limitations, we designed a communication and information fusion model based on belief maps and modeled MCRS-LCR as a multi-objective optimization problem. We further reformulated this problem as a decentralized partially observable Markov decision process (Dec-POMDP). We introduced episodic memory into the reinforcement learning framework, proposing the CNN-Semantic Episodic Memory Utilization (CNN-SEMU) algorithm. Specifically, CNN-SEMU uses an encoder–decoder structure with a CNN to learn state embedding patterns influenced by the highest returns. It extracts semantic features from the high-dimensional map state space to construct a smoother memory embedding space, ultimately enhancing reinforcement learning performance by recalling the highest returns of historical states. Extensive simulation experiments demonstrate that in reconnaissance and search tasks of various scales, CNN-SEMU surpasses state-of-the-art multi-agent reinforcement learning methods in episodic rewards, search efficiency, and collision frequency. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)

18 pages, 4720 KiB  
Article
Multi-Unmanned Aerial Vehicle Confrontation in Intelligent Air Combat: A Multi-Agent Deep Reinforcement Learning Approach
by Jianfeng Yang, Xinwei Yang and Tianqi Yu
Drones 2024, 8(8), 382; https://doi.org/10.3390/drones8080382 - 7 Aug 2024
Cited by 1 | Viewed by 1075
Abstract
Multiple unmanned aerial vehicle (multi-UAV) confrontation is becoming an increasingly important combat mode in intelligent air combat. The confrontation highly relies on the intelligent collaboration and real-time decision-making of the UAVs. Thus, a decomposed and prioritized experience replay (PER)-based multi-agent deep deterministic policy gradient (DP-MADDPG) algorithm has been proposed in this paper for the moving and attacking decisions of UAVs. Specifically, the confrontation is formulated as a partially observable Markov game. To solve the problem, the DP-MADDPG algorithm is proposed by integrating the decomposed and PER mechanisms into the traditional MADDPG. To overcome the technical challenges of the convergence to a local optimum and a single dominant policy, the decomposed mechanism is applied to modify the MADDPG framework with local and global dual critic networks. Furthermore, to improve the convergence rate of the MADDPG training process, the PER mechanism is utilized to optimize the sampling efficiency from the experience replay buffer. Simulations have been conducted based on the Multi-agent Combat Arena (MaCA) platform, wherein the traditional MADDPG and independent learning DDPG (ILDDPG) algorithms are benchmarks. Simulation results indicate that the proposed DP-MADDPG improves the convergence rate and the convergent reward value. During confrontations against the vanilla distance-prioritized rule-empowered and intelligent ILDDPG-empowered blue parties, the DP-MADDPG-empowered red party can improve the win rate to 96% and 80.5%, respectively. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
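
The prioritized experience replay (PER) component can be illustrated with a minimal proportional-priority buffer: transitions are sampled with probability proportional to priority^alpha and reweighted by importance-sampling factors. This is a generic PER sketch, not the DP-MADDPG code; the capacity, alpha, and beta values are assumptions.

    import numpy as np

    class ProportionalReplay:
        """Minimal proportional PER buffer (generic sketch, not the paper's code)."""

        def __init__(self, capacity=10000, alpha=0.6, beta=0.4, eps=1e-6):
            self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
            self.data, self.prior = [], []

        def add(self, transition):
            # New transitions get the current maximum priority so they are replayed soon.
            p = max(self.prior, default=1.0)
            if len(self.data) >= self.capacity:
                self.data.pop(0); self.prior.pop(0)
            self.data.append(transition); self.prior.append(p)

        def sample(self, batch_size):
            p = np.asarray(self.prior) ** self.alpha
            probs = p / p.sum()
            idx = np.random.choice(len(self.data), batch_size, p=probs)
            # Importance-sampling weights correct the non-uniform sampling bias.
            w = (len(self.data) * probs[idx]) ** (-self.beta)
            return idx, [self.data[i] for i in idx], w / w.max()

        def update_priorities(self, idx, td_errors):
            for i, e in zip(idx, td_errors):
                self.prior[i] = abs(e) + self.eps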

20 pages, 1190 KiB  
Article
UAV Confrontation and Evolutionary Upgrade Based on Multi-Agent Reinforcement Learning
by Xin Deng, Zhaoqi Dong and Jishiyu Ding
Drones 2024, 8(8), 368; https://doi.org/10.3390/drones8080368 - 1 Aug 2024
Cited by 2 | Viewed by 949
Abstract
Unmanned aerial vehicle (UAV) confrontation scenarios play a crucial role in the study of agent behavior selection and decision planning. Multi-agent reinforcement learning (MARL) algorithms serve as a universally effective method guiding agents toward appropriate action strategies. They determine subsequent actions based on the state of the agents and the environmental information that the agents receive. However, traditional MARL settings often result in one party agent consistently outperforming the other party due to superior strategies, or both agents reaching a strategic stalemate with no further improvement. To solve this issue, we propose a semi-static deep deterministic policy gradient algorithm based on MARL. This algorithm employs a centralized training and decentralized execution approach, dynamically adjusting the training intensity based on the comparative strengths and weaknesses of both agents’ strategies. Experimental results show that during the training process, the strategy of the winning team drives the losing team’s strategy to upgrade continuously, and the relationship between the winning team and the losing team keeps changing, thus achieving mutual improvement of the strategies of both teams. The semi-static reinforcement learning algorithm improves the win-loss relationship conversion by 8% and reduces the training time by 40% compared with the traditional reinforcement learning algorithm. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)

24 pages, 4335 KiB  
Article
Decentralized UAV Swarm Control: A Multi-Layered Architecture for Integrated Flight Mode Management and Dynamic Target Interception
by Bingze Xia, Iraj Mantegh and Wenfang Xie
Drones 2024, 8(8), 350; https://doi.org/10.3390/drones8080350 - 29 Jul 2024
Cited by 1 | Viewed by 2472
Abstract
Uncrewed Aerial Vehicles (UAVs) are increasingly deployed across various domains due to their versatility in navigating three-dimensional spaces. The utilization of UAV swarms further enhances the efficiency of mission execution through collaborative operation and shared intelligence. This paper introduces a novel decentralized swarm control strategy for multi-UAV systems engaged in intercepting multiple dynamic targets. The proposed control framework leverages the advantages of both learning-based intelligent algorithms and rule-based control methods, facilitating complex task control in unknown environments while enabling adaptive and resilient coordination among UAV swarms. Moreover, dual flight modes are introduced to enhance mission robustness and fault tolerance, allowing UAVs to autonomously return to base in case of emergencies or upon task completion. Comprehensive simulation scenarios are designed to validate the effectiveness and scalability of the proposed control system under various conditions. Additionally, a feasibility analysis is conducted to guarantee real-world UAV implementation. The results demonstrate significant improvements in tracking performance, scheduling efficiency, and overall success rates compared to traditional methods. This research contributes to the advancement of autonomous UAV swarm coordination and specific applications in complex environments. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)

28 pages, 878 KiB  
Article
Fault-Tolerant Tracking Control of Hypersonic Vehicle Based on a Universal Prescribe Time Architecture
by Fangyue Guo, Wenqian Zhang, Maolong Lv and Ruiqi Zhang
Drones 2024, 8(7), 295; https://doi.org/10.3390/drones8070295 - 2 Jul 2024
Viewed by 1107
Abstract
An adaptive tracking control strategy with prescribed tracking error and convergence time is proposed for hypersonic vehicles with state constraints and actuator failures. Its key feature is a new time-scale coordinate translation mapping, which maps the prescribed time on a finite interval to a time variable on an infinite interval, so that the prescribed-time convergence problem is transformed into a conventional asymptotic convergence problem. An improved Lyapunov function, an improved tuning function, and an adaptive fault-tolerant mechanism are then constructed. Combined with a neural network, prescribed-time tracking control of the velocity subsystem and the altitude subsystem is realized. Using Barbalat's lemma and Lyapunov stability theory, the boundedness of the closed-loop system is proved. The simulation results show that, compared with other control strategies, the proposed method ensures that the tracking error converges to the prescribed interval within the prescribed time while satisfying the full-state constraints of the system. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
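
The core coordinate-translation idea, mapping the prescribed finite time horizon onto an infinite one so that standard asymptotic tools apply, can be made concrete with one commonly used time-scale mapping. The particular function below is a standard choice from the prescribed-time control literature and is shown for illustration only, not as the paper's exact construction. With prescribed time T_p > 0,

    \[
    \tau(t) = \frac{t}{T_p - t}, \qquad t \in [0, T_p), \qquad \tau(0) = 0, \qquad \tau \to \infty \ \text{as} \ t \to T_p,
    \]

so driving the transformed error to zero as tau tends to infinity (a conventional asymptotic convergence problem) corresponds to driving the original tracking error to zero within the prescribed time T_p. A related scaling function, mu(t) = T_p / (T_p - t), is often used to weight the Lyapunov function on [0, T_p) for the same purpose.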

17 pages, 4092 KiB  
Article
Multi-UAV Formation Path Planning Based on Compensation Look-Ahead Algorithm
by Tianye Sun, Wei Sun, Changhao Sun and Ruofei He
Drones 2024, 8(6), 251; https://doi.org/10.3390/drones8060251 - 7 Jun 2024
Viewed by 1038
Abstract
This study addresses the shortest-path planning problem for unmanned aerial vehicle (UAV) formations under uncertain target sequences. To enhance the efficiency of collaborative search in drone clusters, a compensation look-ahead algorithm based on optimizing four-point heading angles is proposed. Building upon the receding-horizon algorithm, the method introduces the heading angles of adjacent points to approximately compensate for and decouple the trigonometric equations of the optimal trajectory, and a general formula for computing the heading angles is derived. The simulation data indicate that the model using the compensation look-ahead algorithm achieves a maximum improvement of 12.9% over other algorithms. Furthermore, to address the computational complexity and sample size requirements for optimal solutions of the Dubins multiple traveling salesman model, a path-planning model for multiple UAV formations based on Euclidean traveling salesman problem (ETSP) pre-allocation is introduced. By pre-allocating sub-goals, the model reduces the computational scale of individual samples while keeping the sample size constant. The simulation results show improvements of 8.4% and 17.5% in sparse regions for the proposed Euclidean Dubins traveling salesman problem (EDTSP) model when taking off from different points. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)

31 pages, 4285 KiB  
Article
A Path-Planning Method for UAV Swarm under Multiple Environmental Threats
by Xiangyu Fan, Hao Li, You Chen and Danna Dong
Drones 2024, 8(5), 171; https://doi.org/10.3390/drones8050171 - 26 Apr 2024
Cited by 1 | Viewed by 1904
Abstract
To weaken or avoid the impact of dynamic threats such as wind and extreme weather on the real-time path of a UAV swarm, a path-planning method based on improved long short-term memory (LSTM) network prediction parameters was constructed. First, models were constructed for wind, static threats, and dynamic threats during the flight of the drone. Then, it was found that atmospheric parameters are typical time series data with spatial correlation. The LSTM network was optimized and used to process time series parameters to construct a network for predicting atmospheric parameters. The state of the drone was adjusted in real time based on the prediction results to mitigate the impact of wind or avoid the threat of extreme weather. Finally, a path optimization method based on an improved LSTM network was constructed. Through simulation, it can be seen that compared to the path that does not consider atmospheric effects, the optimized path has significantly improved flightability and safety. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)

18 pages, 1851 KiB  
Article
Collaborative Task Allocation and Optimization Solution for Unmanned Aerial Vehicles in Search and Rescue
by Dan Han, Hao Jiang, Lifang Wang, Xinyu Zhu, Yaqing Chen and Qizhou Yu
Drones 2024, 8(4), 138; https://doi.org/10.3390/drones8040138 - 3 Apr 2024
Cited by 8 | Viewed by 1810
Abstract
Earthquakes pose significant risks to national stability, endangering lives and causing substantial economic damage. This study tackles the urgent need for efficient post-earthquake relief in search and rescue (SAR) scenarios by proposing a multi-UAV cooperative rescue task allocation model. Considering the unique requirements of post-earthquake rescue missions, the model aims to minimize the number of UAVs deployed, reduce rescue costs, and shorten the duration of rescue operations. We propose an innovative hybrid algorithm combining particle swarm optimization (PSO) and the grey wolf optimizer (GWO), called the PSOGWO algorithm, to achieve the objectives of the model. This algorithm is enhanced by various strategies, including interval transformation, a nonlinear convergence factor, an individual update strategy, and dynamic weighting rules. A practical case study illustrates the use of the model and algorithm and validates their effectiveness through comparison with PSO and GWO. Moreover, a sensitivity analysis on UAV capacity highlights its impact on the overall rescue time and cost. The research results contribute to the advancement of vehicle-routing problem (VRP) models and algorithms for post-earthquake relief in SAR and provide optimized relief distribution strategies for rescue decision-makers, thereby improving the efficiency and effectiveness of SAR operations. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
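
A generic way to hybridize the two optimizers named in the abstract is to keep the grey wolf leader-guided position estimate and blend it with a PSO-style inertial velocity. The sketch below shows one such combination with a linearly decreasing convergence factor; it is not the paper's PSOGWO, which additionally uses interval transformation, an individual update strategy, and dynamic weighting rules, and all parameter values here are assumptions.

    import numpy as np

    def pso_gwo_step(X, V, fitness, lb, ub, a, w=0.6, c=0.5, rng=np.random):
        """One hybrid PSO-GWO update (generic sketch, not the paper's PSOGWO).

        X: (n, d) positions, V: (n, d) velocities, fitness: callable (minimized),
        a: GWO convergence factor decreasing toward 0 over the run (assumed linear).
        """
        scores = np.apply_along_axis(fitness, 1, X)
        leaders = X[np.argsort(scores)[:3]]          # alpha, beta, delta wolves
        guided = np.zeros_like(X)
        for leader in leaders:                       # average of the three GWO pulls
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            guided += (leader - A * np.abs(C * leader - X)) / 3.0
        # PSO-style blending: keep an inertial velocity toward the GWO estimate.
        V = w * V + c * rng.random(X.shape) * (guided - X)
        X = np.clip(X + V, lb, ub)
        return X, V

    # Example: minimize the sphere function in 2-D (illustrative setup).
    rng = np.random.default_rng(1)
    X = rng.uniform(-5, 5, (20, 2)); V = np.zeros_like(X)
    sphere = lambda x: float(np.sum(x**2))
    for it in range(100):
        a = 2.0 * (1 - it / 100)                     # linearly decreasing factor
        X, V = pso_gwo_step(X, V, sphere, -5, 5, a, rng=rng)
    print(X[np.argmin([sphere(x) for x in X])])      # best position found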

16 pages, 3864 KiB  
Article
Reinforcement Learning-Based Formation Pinning and Shape Transformation for Swarms
by Zhaoqi Dong, Qizhen Wu and Lei Chen
Drones 2023, 7(11), 673; https://doi.org/10.3390/drones7110673 - 13 Nov 2023
Cited by 4 | Viewed by 2483
Abstract
Swarm models hold significant importance because they describe the collective behavior of self-organized systems. The Boids model is a fundamental framework for studying emergent behavior in swarm systems. It simulates the emergent behavior of autonomous agents through rules such as alignment, cohesion, and repulsion to imitate natural flocking movements. However, traditional Boids models often lack pinning and the ability to adapt quickly to dynamic environments. To address this limitation, we introduce reinforcement learning into the Boids framework to overcome disorder and the lack of pinning. The aim of this approach is to enable drone swarms to adapt quickly and effectively to dynamic external environments. We propose a Q-learning-based method to tune the cohesion and repulsion parameters of the Boids model, achieving continuous obstacle avoidance and maximizing spatial coverage in the simulation scenario. Additionally, we introduce a virtual leader to provide pinning and coordination stability, reflecting the leadership and coordination seen in drone swarms. To validate the effectiveness of this method, we demonstrate the model's capabilities through empirical experiments with drone swarms and show the practicality of the RL-Boids framework. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
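
The RL-Boids idea, letting a learning agent retune the Boids cohesion and repulsion weights online, can be sketched with a Boids update whose weights are selected by a small stateless Q-table (standing in here for the paper's Q-learning network). The flocking rules, candidate weight pairs, and spacing-based reward below are illustrative assumptions.

    import numpy as np

    def boids_step(P, V, w_coh, w_sep, w_ali=0.5, r_sep=1.0, dt=0.1):
        """One Boids update with tunable cohesion/repulsion weights (sketch)."""
        n = len(P)
        acc = w_coh * (P.mean(axis=0) - P)               # cohesion toward the centroid
        acc += w_ali * (V.mean(axis=0) - V)              # alignment with the mean velocity
        for i in range(n):                               # short-range repulsion
            d = P[i] - P
            dist = np.linalg.norm(d, axis=1)
            close = (dist < r_sep) & (dist > 0)
            if close.any():
                acc[i] += w_sep * (d[close] / dist[close, None]**2).sum(axis=0)
        V = V + dt * acc
        return P + dt * V, V

    # Tiny learning loop over discrete (cohesion, repulsion) weight pairs.
    actions = [(0.2, 1.0), (0.5, 1.0), (0.5, 2.0), (1.0, 2.0)]   # assumed candidates
    Q = np.zeros(len(actions))
    rng = np.random.default_rng(0)
    P = rng.uniform(0, 5, (10, 2)); V = rng.normal(0, 0.1, (10, 2))
    for episode in range(200):
        a = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmax(Q))
        P, V = boids_step(P, V, *actions[a])
        spread = np.linalg.norm(P - P.mean(axis=0), axis=1)
        reward = -abs(spread.mean() - 1.5)               # keep a target spacing (assumed)
        Q[a] += 0.1 * (reward - Q[a])                    # stateless Q-value update
    print("preferred (cohesion, repulsion) weights:", actions[int(np.argmax(Q))])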