Article

DRL-RNP: Deep Reinforcement Learning-Based Optimized RNP Flight Procedure Execution

1 School of Aeronautics and Astronautics, Sichuan University, Chengdu 610065, China
2 College of Computer Science, Sichuan University, Chengdu 610065, China
3 National Subsea Centre, Robert Gordon University, Aberdeen AB21 0BH, UK
* Author to whom correspondence should be addressed.
Academic Editor: Kyandoghere Kyamakya
Sensors 2022, 22(17), 6475; https://doi.org/10.3390/s22176475
Received: 4 July 2022 / Revised: 19 August 2022 / Accepted: 26 August 2022 / Published: 28 August 2022
(This article belongs to the Section Intelligent Sensors)
The required navigation performance (RNP) procedure is one of the two basic navigation specifications of the performance-based navigation (PBN) concept proposed by the International Civil Aviation Organization (ICAO). By integrating global navigation infrastructures, it improves the utilization efficiency of airspace and reduces flight delays as well as the dependence on ground navigation facilities. The approach stage is one of the most important and difficult stages of the whole flight. In this study, we propose DRL-RNP, a deep reinforcement learning (DRL)-based method for optimized RNP procedure execution. For an RNP approach procedure, a DRL algorithm was implemented with a fixed-wing aircraft to explore a minimum-fuel-consumption path under windy conditions, with a reward designed to enforce compliance with the RNP safety specifications. The experimental results demonstrate that the six-degrees-of-freedom aircraft controlled by the DRL algorithm can successfully complete the RNP procedure while meeting the safety specifications for protection areas and obstruction clearance altitude throughout the procedure. In addition, a potential path with minimum fuel consumption can be explored effectively. Hence, the DRL method can be used not only to execute the RNP procedure with a simulated aircraft but also to help verify and evaluate RNP procedures.
Keywords: deep reinforcement learning (DRL); required navigation performance (RNP) procedure; performance-based navigation (PBN) procedure; flight control; path planning
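The abstract describes a reward that trades off fuel consumption against compliance with RNP safety specifications (containment within the protection area and respect of the obstruction clearance altitude). The paper's exact reward is not given on this page; the sketch below is an illustrative assumption of how such a shaped reward could look, with all names, weights, and thresholds chosen purely for demonstration.

```python
# Hypothetical reward shaping for DRL-based RNP approach execution.
# All function names, weights, and limits are illustrative assumptions,
# not the reward actually used in the DRL-RNP paper.

def rnp_step_reward(fuel_burn, cross_track_error_nm, altitude_ft, oca_ft,
                    rnp_limit_nm=1.0, w_fuel=1.0, w_track=10.0,
                    safety_penalty=100.0):
    """Per-step reward: penalize fuel burn, and add heavy penalties
    when the aircraft leaves the RNP containment corridor or descends
    below the obstruction clearance altitude (OCA)."""
    reward = -w_fuel * fuel_burn  # minimize fuel consumption

    # RNP containment: |cross-track error| must stay within the RNP
    # limit (e.g. 1.0 NM for an RNP 1 segment).
    excess = abs(cross_track_error_nm) - rnp_limit_nm
    if excess > 0.0:
        reward -= w_track * excess

    # Obstruction clearance: never descend below the OCA.
    if altitude_ft < oca_ft:
        reward -= safety_penalty

    return reward
```

In such a scheme the agent is free to shorten or reshape the path to save fuel, but any excursion outside the protection area or below the clearance altitude dominates the fuel term, steering training toward trajectories that satisfy the RNP specifications first.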
MDPI and ACS Style

Zhu, L.; Wang, J.; Wang, Y.; Ji, Y.; Ren, J. DRL-RNP: Deep Reinforcement Learning-Based Optimized RNP Flight Procedure Execution. Sensors 2022, 22, 6475. https://doi.org/10.3390/s22176475