Article

Discrete Dynamic Berth Allocation Optimization in Container Terminal Based on Deep Q-Network

School of Transportation and Logistics Engineering, Wuhan University of Technology, Wuhan 430063, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(23), 3742; https://doi.org/10.3390/math12233742
Submission received: 23 October 2024 / Revised: 25 November 2024 / Accepted: 25 November 2024 / Published: 28 November 2024

Abstract

Effective berth allocation in container terminals is crucial for optimizing port operations, given limited quay space and the growing volume of container traffic. This study addresses the discrete dynamic berth allocation problem (DDBAP) under uncertain ship arrival times and varying load capacities. A novel deep Q-network (DQN)-based model is proposed, leveraging a custom state space, rule-based actions, and an optimized reward function to dynamically allocate berths and schedule vessel arrivals. Comparative experiments were conducted against traditional algorithms, including ant colony optimization (ACO), parallel ant colony optimization (PACO), and ant colony optimization combined with a genetic algorithm (ACOGA). The results show that DQN significantly outperforms these methods in both efficiency and solution quality, particularly under high variability in ship arrivals and load conditions. Specifically, the DQN model achieved a total vessel waiting time of 109.45 h, a reduction of 58.3% relative to ACO (262.85 h), 57.9% relative to PACO (259.5 h), and 57.4% relative to ACOGA (257.4 h). Despite this performance, DQN requires substantial computational power during training and is sensitive to data quality. These findings underscore the potential of reinforcement learning for optimizing berth allocation under dynamic conditions. Future work will explore multi-agent reinforcement learning (MARL) and real-time adaptive mechanisms to further enhance the robustness and scalability of the model.
Keywords: berth allocation; parallel ant colony algorithm; DQN; container terminal; dynamic scheduling
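The DQN loop the abstract summarizes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state encoding (the next vessel's arrival time and load), the linear stand-in for the Q-network, the epsilon-greedy rule, and all parameter values are assumptions for demonstration only.

```python
import random
from collections import deque

NUM_BERTHS = 3  # hypothetical: one discrete action per berth


class ReplayBuffer:
    """Fixed-capacity experience replay, a core ingredient of DQN."""

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))


class LinearQ:
    """Linear stand-in for the Q-network: one weight row per action."""

    def __init__(self, state_dim, num_actions, lr=0.01):
        self.w = [[0.0] * state_dim for _ in range(num_actions)]
        self.lr = lr
        self.num_actions = num_actions

    def q_values(self, state):
        return [sum(wi * si for wi, si in zip(row, state)) for row in self.w]

    def update(self, state, action, target):
        # One gradient step on the squared TD error for this (state, action).
        error = target - self.q_values(state)[action]
        for i, si in enumerate(state):
            self.w[action][i] += self.lr * error * si


def select_action(qnet, state, epsilon):
    """Epsilon-greedy: explore with probability epsilon, else pick argmax Q."""
    if random.random() < epsilon:
        return random.randrange(qnet.num_actions)
    qs = qnet.q_values(state)
    return qs.index(max(qs))


def train_step(qnet, buffer, gamma=0.95, batch_size=8):
    """Update Q toward the one-step bootstrapped target r + gamma * max Q(s')."""
    for s, a, r, s2, done in buffer.sample(batch_size):
        target = r if done else r + gamma * max(qnet.q_values(s2))
        qnet.update(s, a, target)
```

In a full implementation the linear approximator would be replaced by a neural network (with a separate target network), and the reward would be shaped around the paper's objective of minimizing total vessel waiting time, e.g. the negative waiting time incurred by each berth assignment.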

Share and Cite

MDPI and ACS Style

Wang, P.; Li, J.; Cao, X. Discrete Dynamic Berth Allocation Optimization in Container Terminal Based on Deep Q-Network. Mathematics 2024, 12, 3742. https://doi.org/10.3390/math12233742


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
