Enhancing Quadcopter Autonomy: Implementing Advanced Control Strategies and Intelligent Trajectory Planning
Abstract
1. Introduction
2. Quadcopter State Space Model
- m_t is the total mass of the quadcopter;
- the aerodynamic rotation coefficients matrix;
- the rotor's angular velocity about the axis of rotation (the z-axis).
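The symbols for the model did not survive extraction; for orientation, the translational part of a standard quadcopter model is conventionally written as below (a generic textbook form with m_t the total mass, u_1 the total thrust, and k_x, k_y, k_z illustrative aerodynamic drag coefficients; the paper's exact matrices may differ):

```latex
\begin{aligned}
m_t \ddot{x} &= (\cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi)\, u_1 - k_x \dot{x} \\
m_t \ddot{y} &= (\cos\phi \sin\theta \sin\psi - \sin\phi \cos\psi)\, u_1 - k_y \dot{y} \\
m_t \ddot{z} &= (\cos\phi \cos\theta)\, u_1 - m_t g - k_z \dot{z}
\end{aligned}
```

Here φ, θ, ψ are the roll, pitch, and yaw angles, and the drag terms correspond to the aerodynamic coefficients matrix mentioned above.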
3. Quadcopter Control Methods
3.1. PID Controller
3.2. Fractional-Order Controller
3.3. Sliding Mode Controller
3.4. Results
3.4.1. PID Controller Results
- Linear model control
- Nonlinear model control
3.4.2. Fractional-Order PID Controller Results
3.4.3. Sliding Mode Controller Results
3.5. Discussion
3.6. Comparison
4. Enhancing Quadcopter Trajectory Tracking through Dyna-Q Learning
4.1. Reinforcement Learning Approaches
- The deterministic policy specifies a single action for each state: for every state s there is a unique action choice, π: S → A, which the agent follows.
- The stochastic policy assigns a probability distribution over actions to each state, π: S → P(A): the agent samples its action from this distribution in each state s, so different actions can be chosen in the same state, each with its own probability of being selected.
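The two policy types can be sketched in a few lines (a minimal illustration; the state and action names are made up for the example):

```python
import random

# Deterministic policy: one fixed action per state (pi: S -> A).
deterministic_pi = {"s0": "up", "s1": "right"}

def act_deterministic(state):
    return deterministic_pi[state]

# Stochastic policy: a probability distribution over actions per state
# (pi: S -> P(A)); the agent samples an action on every call.
stochastic_pi = {"s0": {"up": 0.8, "right": 0.2}}

def act_stochastic(state):
    actions, probs = zip(*stochastic_pi[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

print(act_deterministic("s0"))   # always "up"
print(act_stochastic("s0"))      # "up" about 80% of the time, else "right"
```

The deterministic policy always returns the same action for a state, while repeated calls to the stochastic policy vary according to the stored probabilities.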
4.2. Q-Learning Algorithms
- High Learning Rate (α near 1): The agent will be highly responsive to the most recent experiences.
- Low Learning Rate (α near 0): The agent will be less responsive to new experiences and will rely more on existing knowledge.
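The effect of α is visible directly in the tabular Q-learning update, Q(s,a) ← Q(s,a) + α(r + γ·maxₐ′ Q(s′,a′) − Q(s,a)). A minimal sketch (state/action names are illustrative):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.8):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

Q = defaultdict(lambda: defaultdict(float))
# alpha near 1: the estimate jumps almost all the way to the new target.
print(q_update(Q, "s0", "up", 10.0, "s1", alpha=0.9))  # 0 + 0.9*(10-0) = 9.0
# alpha near 0: the stored value barely moves toward the same target.
print(q_update(Q, "s0", "up", 10.0, "s1", alpha=0.1))  # 9.0 + 0.1*(10-9.0) = 9.1
```

With α = 0.9 one experience dominates the estimate; with α = 0.1 the agent leans on its existing knowledge and adjusts only slightly.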
4.3. Implementation of Dyna-Q Learning for Trajectory Planning
- A lower learning rate is better for obstacle avoidance in uncertain environments. It allows the agent to be cautious in updating its Q-values based on new experiences.
- A higher discount factor is better for obstacle avoidance. It encourages the agent to consider long-term consequences and plan for the future, which is important when navigating around obstacles and finding safe paths.
- A lower exploration rate ε is better for obstacle avoidance during the initial stages of learning.
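The Dyna-Q loop described here combines direct updates from real experience with planning steps replayed from a learned model. The sketch below runs it on a toy 1-D corridor (the environment, reward values, and function names are illustrative, loosely mirroring the goal/step reward scheme used in the paper):

```python
import random
from collections import defaultdict

def dyna_q(step_fn, start, actions, episodes=100, planning_steps=10,
           alpha=0.1, gamma=0.8, epsilon=0.2, horizon=200):
    """Tabular Dyna-Q: each real transition updates Q and a learned
    model; the model then replays simulated transitions (planning)."""
    Q = defaultdict(float)   # (state, action) -> value
    model = {}               # (state, action) -> (reward, next_state, done)
    for _ in range(episodes):
        s = start
        for _ in range(horizon):
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            r, s2, done = step_fn(s, a)
            # direct RL update from real experience
            best = 0.0 if done else max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            model[(s, a)] = (r, s2, done)
            # planning: replay random remembered transitions
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                pbest = 0.0 if pdone else max(Q[(ps2, b)] for b in actions)
                Q[(ps, pa)] += alpha * (pr + gamma * pbest - Q[(ps, pa)])
            if done:
                break
            s = s2
    return Q

# Toy corridor: states 0..4, goal at 4, +100 at the goal, -2 per step.
def corridor(s, a):
    s2 = min(4, s + 1) if a == "R" else max(0, s - 1)
    return (100 if s2 == 4 else -2), s2, s2 == 4

random.seed(0)
Q = dyna_q(corridor, start=0, actions=["L", "R"])
print(max(["L", "R"], key=lambda b: Q[(0, b)]))  # "R" (moves toward the goal)
```

The planning loop is what distinguishes Dyna-Q from plain Q-learning: each real step is amplified by `planning_steps` simulated updates, which is why the method learns usable trajectories from comparatively little real experience.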
4.4. Results
4.4.1. Deterministic and Stochastic Environments Results
4.4.2. Dyna-Q Learning with Sliding Mode Controller
4.5. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Type of Controller | λ | μ |
|---|---|---|
|  | 1 | 0 |
|  | 1 | 1 |
|  | 0 | 1 |
|  | 1 | 1 |
|  | 1 | 1 |
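For context, the λ and μ exponents in the table parameterize the fractional-order PI^λD^μ family, whose transfer function is conventionally written as:

```latex
C(s) = K_P + \frac{K_I}{s^{\lambda}} + K_D\, s^{\mu}
```

Setting λ = 1, μ = 0 recovers a classical PI controller; λ = 0, μ = 1 a PD; λ = μ = 1 the integer-order PID; non-integer exponents give the fractional-order variants.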
Controller | P | I | D | Settling Time | Overshoot |
---|---|---|---|---|---|
x | 0.18858 | 0.0025421 | 3.1082 | 1.28 s | 0.737% |
y | 0.38107 | 0.0053418 | 3.2245 | 4.95 s | 4.42% |
z | 11 | 0.034082 | 15 | 0.863 s | 2.61% |
Roll | 0.2564 | 0.025926 | 0.5634 | 5.15 s | 5.58% |
Pitch | 0.9788 | 0.19295 | 1.1033 | 0.001 s | 4.25% |
Yaw | 3.2682 | 5.5053 | 0.2235 | 0.909 s | 8.07%
PID | P | I | D |
---|---|---|---|
x | 0.25 | 0.003 | 3.5 |
y | 0.92 | 0.01 | 2 |
z | 150 | 50 | 30 |
Roll | 9 | 0.05 | 1.2 |
Pitch | 7 | 0.2 | 1 |
Yaw | 3.26 | 5.5 | 0.22 |
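Gains such as those tabulated above plug into a discrete PID loop of the standard form u = Kp·e + Ki·∫e dt + Kd·de/dt. A minimal sketch (the sample time and the error value are illustrative, not from the paper):

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, err):
        self.integral += err * self.dt
        # No derivative kick on the very first sample.
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. an altitude loop using the z-axis gains from the table above
alt = PID(kp=150, ki=50, kd=30, dt=0.01)
u = alt.update(err=1.0)
print(u)  # 150*1 + 50*0.01 + 0 = 150.5
```

In a simulation, `update` would be called once per control period with the current tracking error, and its output fed to the quadcopter's thrust allocation.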
FOPID | KP | KI | KD | λ | μ |
---|---|---|---|---|---|
x | 1 | 0.01 | 15 | 0.8 | 0.785 |
y | 1 | 0.01 | 4 | 0.8 | 0.64 |
z | 140 | 50 | 30 | 1 | 1 |
Roll | 1 | 0.03 | 1.9 | 0.6 | 0.8 |
Pitch | 1 | 0.2 | 1 | 0.5 | 0.8 |
Yaw | 3.26 | 5.5 | 1 | 0.8 | 0.9 |
Controller | λ | K1 | K2 |
---|---|---|---|
x | 10 | 0.5 | 0.9 |
y | 14.5 | 0.1 | 1.5 |
z | 5 | 0.1 | 10 |
Roll | 10.2 | 0.1 | 7.5
Pitch | 5 | 0.01 | 110
Yaw | 30 | 0.1 | 30
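The λ, K1, and K2 parameters tabulated above are consistent with a conventional first-order sliding mode law (a generic form given here for orientation; the paper's exact surfaces and control law may differ):

```latex
s = \dot{e} + \lambda e, \qquad
u = u_{\mathrm{eq}} - K_1\, s - K_2\, \operatorname{sign}(s)
```

where e is the tracking error, λ shapes the sliding surface, and the switching gain K2 enforces convergence to s = 0; in practice sign(s) is often replaced by a saturation function to reduce chattering.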
Number of Episodes | Learning Rate | Discount Factor | Exploration Rate | Horizon | Obstacle Hit Reward | Reach Goal Reward | Regular Step Reward |
---|---|---|---|---|---|---|---|
3000 | 0.1 | 0.8 | 0.2 | 200 | −100 | 100 | −2 |
Controller | λ | K1 | K2 |
---|---|---|---|
x | 1.5 | 0.08 | 0.4 |
y | 1.6 | 0.07 | 0.6 |
z | 0.5 | 0.08 | 60 |
Roll | 40 | 0.5 | 14
Pitch | 49 | 0.3 | 11
Yaw | 0.5 | 1 | 4
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hadid, S.; Boushaki, R.; Boumchedda, F.; Merad, S. Enhancing Quadcopter Autonomy: Implementing Advanced Control Strategies and Intelligent Trajectory Planning. Automation 2024, 5, 151-175. https://doi.org/10.3390/automation5020010