Article

A Deep Reinforcement Learning Framework for Multi-Fleet Scheduling and Optimization of Hybrid Ground Support Equipment Vehicles in Airport Operations

1 College of Transportation, Tongji University, 4800 Cao’an Road, Shanghai 201804, China
2 Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, Singapore 138632, Singapore
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(17), 9777; https://doi.org/10.3390/app15179777
Submission received: 27 July 2025 / Revised: 20 August 2025 / Accepted: 24 August 2025 / Published: 5 September 2025
(This article belongs to the Topic AI-Enhanced Techniques for Air Traffic Management)

Abstract

The increasing electrification of Ground Support Equipment (GSE) vehicles promotes sustainable airport operations but introduces new challenges in task scheduling, energy management, and hybrid fleet coordination. To address these issues, we develop an end-to-end Deep Reinforcement Learning (DRL) framework and evaluate it under three representative deployment scenarios with 30%, 50%, and 80% electric fleet proportions through case studies at Singapore’s Changi Airport. Experimental results show that the proposed approach outperforms baseline models, achieves more balanced state-of-charge (SoC) distributions, reduces overall carbon emissions, and improves real-time responsiveness under operational constraints. Beyond these results, this work contributes a unified DRL-based scheduling paradigm that integrates electric and fuel-powered vehicles, adapts Proximal Policy Optimization (PPO) to heterogeneous fleet compositions, and provides interpretable insights through Gantt chart visualizations. These findings demonstrate the potential of DRL as a scalable and robust solution for smart airport logistics.
Keywords: deep reinforcement learning; ground support equipment vehicles; hybrid fleet scheduling; carbon emission reduction; airport operations management
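The full PPO-based formulation is not reproduced on this page. Purely as an illustration of the hybrid-fleet scheduling problem the abstract describes (assigning ground-handling tasks across electric and fuel-powered GSE vehicles while tracking SoC and emission cost), a minimal toy environment with a greedy baseline policy might look as follows. All class and function names here (`Vehicle`, `GseSchedulingEnv`, `greedy_policy`) and the reward constants are hypothetical choices for this sketch, not the authors' implementation.

```python
import random
from dataclasses import dataclass


@dataclass
class Vehicle:
    vid: int
    electric: bool
    soc: float = 1.0  # normalized state of charge; only meaningful for electric vehicles


class GseSchedulingEnv:
    """Toy hybrid-fleet environment: assign ground-handling tasks to GSE vehicles.

    Rewards are illustrative: electric service earns full reward (zero emissions),
    fuel service is penalized in proportion to task energy (emission proxy).
    """

    def __init__(self, n_electric=3, n_fuel=2, seed=0):
        self.rng = random.Random(seed)
        self.fleet = ([Vehicle(i, True) for i in range(n_electric)] +
                      [Vehicle(n_electric + j, False) for j in range(n_fuel)])

    def step(self, task_energy, vid):
        """Serve one task with the chosen vehicle; return the scalar reward."""
        v = self.fleet[vid]
        if v.electric:
            if v.soc < task_energy:
                return -1.0  # infeasible assignment: battery too low
            v.soc -= task_energy
            return 1.0       # zero-emission service
        return 1.0 - 0.5 * task_energy  # fuel vehicle: emission penalty


def greedy_policy(env, task_energy):
    """Prefer the feasible electric vehicle with the highest SoC; else fall back to fuel."""
    feasible = [v for v in env.fleet if v.electric and v.soc >= task_energy]
    if feasible:
        return max(feasible, key=lambda v: v.soc).vid
    return next(v.vid for v in env.fleet if not v.electric)
```

A learned policy (e.g., PPO, as the abstract indicates) would replace `greedy_policy` with a neural network mapping fleet state, including per-vehicle SoC, to task assignments, which is what allows the framework to balance SoC distributions across fleet-mix scenarios rather than myopically draining the fullest battery.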

Share and Cite

MDPI and ACS Style

Wang, F.; Zhou, M.; Xing, Y.; Wang, H.-W.; Peng, Y.; Chen, Z. A Deep Reinforcement Learning Framework for Multi-Fleet Scheduling and Optimization of Hybrid Ground Support Equipment Vehicles in Airport Operations. Appl. Sci. 2025, 15, 9777. https://doi.org/10.3390/app15179777


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
