Article

Research on Energy-Saving and Efficiency-Improving Optimization of a Four-Way Shuttle-Based Dense Three-Dimensional Warehouse System Based on Two-Stage Deep Reinforcement Learning

School of Logistics Engineering, Shanghai Maritime University, Shanghai 201306, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(21), 11367; https://doi.org/10.3390/app152111367
Submission received: 21 August 2025 / Revised: 4 October 2025 / Accepted: 10 October 2025 / Published: 23 October 2025

Abstract

Against the backdrop of rapid growth in the logistics sector and broad advocacy for sustainable development, this paper proposes enhancements to the task scheduling and path planning components of four-way shuttle systems. The focus lies on refining and innovating modeling approaches and algorithms to address issues that arise in complex environments, such as uneven task distribution, poor adaptability to dynamic conditions, and high rates of idle vehicle operation. These improvements aim to enhance system performance, reduce energy consumption, and promote sustainable development. To this end, this paper presents an energy-saving and efficiency-improving optimization study of a four-way shuttle-based high-density automated warehouse system using deep reinforcement learning. For task scheduling, a collaborative scheduling algorithm based on an Improved Genetic Algorithm (IGA) and Multi-Agent Deep Deterministic Policy Gradient (MADDPG) is designed. For path planning, the A*-DQN method, which integrates the A* algorithm with Deep Q-Networks (DQN), is proposed. Simulation experiments combining multiple layout scenarios and varying parameters verified that the system error remains within 5%. Compared with existing methods, the total task duration, path planning length, and energy consumption per order decreased by approximately 12.84%, 9.05%, and 16.68%, respectively, and the four-way shuttle vehicles complete order tasks with virtually no conflicts.
Keywords: four-way shuttle-based warehouse system; path planning; A*-DQN algorithm; task scheduling; IGA-MADDPG algorithm
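To illustrate the general idea behind an A*/DQN hybrid of the kind named in the abstract, the following minimal Python sketch (not the authors' implementation; the grid layout, the reward-shaping scheme, and names such as astar, shaped_reward, and QNet are illustrative assumptions) shows how an A*-generated reference path could shape the reward signal for a DQN agent navigating a warehouse grid.

# Hypothetical sketch: A* reference path used to shape rewards for a DQN agent on a grid.
# Assumptions: 2D occupancy grid (1 = shelf/blocked), 4 movement actions, PyTorch available.
import heapq
import torch
import torch.nn as nn

def astar(grid, start, goal):
    """A* shortest path on a 0/1 occupancy grid; returns a list of cells from start to goal."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and g + 1 < g_best.get(nxt, float("inf"))):
                g_best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return []

def shaped_reward(pos, goal, ref_path, step_cost=-0.05, dev_penalty=-0.2, goal_bonus=1.0):
    """Reward = per-step cost + penalty for leaving the A* reference path + bonus at the goal."""
    r = step_cost
    if pos not in ref_path:
        r += dev_penalty
    if pos == goal:
        r += goal_bonus
    return r

class QNet(nn.Module):
    """Small MLP mapping (x, y, goal_x, goal_y) state features to Q-values for the 4 moves."""
    def __init__(self, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, s):
        return self.net(s)

if __name__ == "__main__":
    grid = [[0] * 6 for _ in range(6)]
    grid[2][1] = grid[2][2] = grid[2][3] = 1              # a shelf row blocking part of an aisle
    ref = astar(grid, (0, 0), (5, 5))
    print("A* reference path:", ref)
    q = QNet()
    state = torch.tensor([[0.0, 0.0, 5.0, 5.0]])
    print("Q-values:", q(state).tolist()[0])
    print("shaped reward off-path:", shaped_reward((0, 1), (5, 5), set(ref)))

In a hybrid of this general kind, A* typically supplies a low-cost baseline route, while the learned Q-network handles deviations caused by dynamic obstacles or other shuttles; the paper's actual A*-DQN formulation may differ.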

