Article

Research on Energy-Saving and Efficiency-Improving Optimization of a Four-Way Shuttle-Based Dense Three-Dimensional Warehouse System Based on Two-Stage Deep Reinforcement Learning

School of Logistics Engineering, Shanghai Maritime University, Shanghai 201306, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(21), 11367; https://doi.org/10.3390/app152111367
Submission received: 21 August 2025 / Revised: 4 October 2025 / Accepted: 10 October 2025 / Published: 23 October 2025

Abstract

In the context of rapid development within the logistics sector and widespread advocacy for sustainable development, this paper proposes enhancements to the task scheduling and path planning components of four-way shuttle systems. The focus lies on refining and innovating modeling approaches and algorithms to address issues in complex environments such as uneven task distribution, poor adaptability to dynamic conditions, and high rates of idle vehicle operation. These improvements aim to enhance system performance, reduce energy consumption, and support sustainable development. To this end, this paper presents an energy-saving and efficiency-enhancing optimization study for a four-way shuttle-based high-density automated warehouse system, utilizing deep reinforcement learning. For task scheduling, a collaborative scheduling algorithm based on an Improved Genetic Algorithm (IGA) and Multi-Agent Deep Deterministic Policy Gradient (MADDPG) is designed. For path planning, an A*-DQN method is proposed that integrates the A* algorithm with Deep Q-Networks (DQN). Simulation experiments combining multiple layout scenarios and varying parameters verified that the model error is within 5%. Compared to existing methods, the total task duration, path planning length, and energy consumption per order decreased by approximately 12.84%, 9.05%, and 16.68%, respectively, and the four-way shuttles completed order tasks with virtually no conflicts.

1. Introduction

In recent years, with the rapid development of the e-commerce industry, the warehousing and logistics industry has also ushered in a period of rapid growth. The four-way shuttle warehouse system stands out for its efficient and flexible operation characteristics, becoming the mainstream direction of intelligent warehousing technology. However, as the system scale expands, path planning complexity increases, and energy consumption issues have gradually attracted attention [1]. Existing research often ignores factors such as shuttle loading status, path structure restrictions, and energy consumption changes under multi-shuttle coordination, leading to energy waste and uneven equipment loads in practical applications [2]. Therefore, conducting research on four-way shuttle path planning and scheduling for energy consumption optimization is of great theoretical value and engineering significance for achieving energy conservation and emission reduction in warehousing systems and intelligent operation. It also aligns with the current trend towards greening and intelligentization of intelligent warehousing systems. The four-way shuttle storage system significantly enhances automation levels while reducing reliance on manual labor. Moreover, its flexible warehouse layout and high spatial utilization effectively reduce land and operational costs, providing a viable pathway for the construction of modern warehousing systems [3].
In a four-way shuttle-based warehouse system, faced with increasingly complex task scenarios, the efficiency and accuracy of scheduling and path planning are crucial to improving overall operation efficiency [3]. Traditional warehouse systems have many shortcomings in path planning and task scheduling, such as failing to fully consider shuttle loading status, path structure restrictions, and energy consumption changes during multi-shuttle coordination, resulting in uneven resource allocation and energy waste [4].
Domestic and foreign scholars have conducted extensive research on path planning and scheduling optimization for four-way shuttle-based warehouse systems. In terms of path planning, researchers have proposed various improved algorithms to address challenges such as the complexity of multi-objective optimization in intricate environments and limited adaptability to dynamic conditions: the improved branch-and-cut and branch-and-price algorithm of Sun Zhuo et al. [5], the algorithm of Anushrut et al. [6] for directly constructing paths on Non-Uniform Rational B-Spline (NURBS) surfaces, the visual-guidance-based path extraction and joint optimization for robotic weld grinding of Guo et al. [7], Cui’s [8] fast RRT algorithm, and Jiang Qilong’s [9] PSO-PH-RRT* algorithm. Gou Yujun et al. [10] proposed the improved Imperialist Northern Goshawk Optimization (INGO), which demonstrated stronger global optimization ability and stability in grid-map path planning experiments. In terms of multi-shuttle scheduling, research efforts have primarily centered on task allocation and collaborative optimization. Fan et al. [11] developed an integrated scheduling model. Sango et al. [12] introduced a novel approach to the design of human–machine collaborative systems tailored for Industry 5.0. Tang et al. [13] proposed a data-driven optimization framework for job scheduling. Cai et al. [14] established a collision avoidance scheduling architecture leveraging edge intelligence. Li et al. [15] achieved efficient solutions within seconds in large-scale environments. Carida et al. [16] designed a multi-attribute scheduling mechanism for Automated Guided Vehicles (AGVs) based on fuzzy systems and Petri nets. Tang et al. [17] incorporated quantum computing techniques to formulate a quadratic unconstrained binary optimization model. Liu et al. [18] improved the solution stability and quality of scheduling through enhancements to heuristic strategies and local search mechanisms. Xu Lili et al. [19] constructed a collaborative scheduling model integrating four-way shuttles and elevators. Yin Yinghao [20] proposed a comprehensive framework for multi-four-way-shuttle path planning and real-time traffic scheduling. In terms of energy consumption optimization, research has gradually shifted from single-objective optimization to multi-dimensional collaborative optimization. For example, Zhou et al. [21] proposed a robust scheduling model for AGVs with power constraints; Yang et al. [22] proposed a path planning algorithm considering shuttle rollover stability; Hu et al. [23] proposed a multi-stage hybrid model for coordinated AGV charging and scheduling based on a new supporting charging facility (AGV-Mate); Ma et al. [24] proposed a hybrid optimization method integrating staged and continuous speed control strategies in a U-shaped dock scenario; and Yue et al. [25] incorporated the configuration and scheduling of AGVs and double dual-trolley quay cranes into a unified modeling framework. Overall, research on path planning and scheduling optimization is moving towards dynamic, intelligent, and multi-objective optimization. However, further breakthroughs are still needed in areas such as dynamic environment adaptability, energy consumption optimization, and task coordination.
This study differs from recent domestic and international research in several aspects. Regarding path planning, Sun Zhuo et al.’s [5] improved branch-and-cut algorithm focuses on static multi-objective optimization, while Cui’s [8] fast RRT algorithm relies on random sampling for post-collision avoidance. Neither addresses dynamic changes in device states or precise conflict identification. This study innovatively proposes an A*-DQN dynamic fusion architecture. It dynamically switches the feasible path region in real-time via load status flags and quantifies three conflict types based on direction vector angles. This forms a “global routing guidance + local dynamic fine-tuning” dynamic-static coordination model, overcoming the bottleneck of insufficient dynamic adaptability in traditional algorithms. Regarding task scheduling, Fan et al.’s [11] integrated scheduling model only achieves basic task-device matching, while Xu Lili et al.’s [19] elevator–shuttle collaboration excludes energy consumption considerations and primarily employs centralized control. This study constructs the IGA-MADDPG hierarchical coordination framework. It employs IGA for global pre-planning and injects the MADDPG experience pool to resolve cold-start issues. Multi-agent distributed decision-making enables local dynamic adjustments, while integrating energy consumption, time, and load balancing into a coupled optimization objective. This overcomes the limitations of traditional scheduling—single-objective, centralized, and weak coordination. Regarding energy optimization, Zhou et al. [21] only considered fixed power constraints for AGVs, while Ma et al. [24] designed dedicated strategies for U-shaped docks, partially relying on hardware assistance. This study proposes a load-state-driven system-level energy modeling approach, distinguishing between shuttle and elevator empty/loaded states. 
Distinguishing the loaded and empty energy consumption of shuttles and elevators is central to constructing a unified cross-device model. This approach reduces empty-run energy consumption without requiring additional hardware. Furthermore, it decouples the model from the layout by utilizing grid maps and Manhattan distance, maintaining stable optimization across three warehouse scenarios. This overcomes the limitations of traditional energy consumption research, which is characterized by “single-device focus, strong dependency, and narrow application scenarios”.
In summary, existing studies suffer from issues such as uneven task allocation, poor adaptability to dynamic environments, and high shuttle idling rates. Therefore, this paper proposes a multi-shuttle scheduling optimization method combining an improved genetic algorithm (IGA) with a multi-agent deep deterministic policy gradient algorithm (MADDPG), as well as an A*-DQN path planning method integrating A* heuristic search and Deep Q-Networks (DQN), aiming to optimize the path performance of four-way shuttles in complex layouts and dynamic environments through a dynamic map switching mechanism, path conflict resolution strategy, and the adaptive capabilities of reinforcement learning. Specific contributions are as follows:
(1)
Through a layered architecture consisting of a global optimization layer and a local execution optimization layer, the advantages of both algorithms are fully utilized to achieve synergistic improvements in task allocation, path planning, and energy consumption optimization.
  • The global optimization layer utilizes IGA to generate high-quality initial task allocation plans, laying the foundation for the entire scheduling process. By coordinating the scheduling of four-way shuttles and elevators, unnecessary waiting and empty running between equipment are avoided, enhancing the overall consistency of the system’s scheduling. The IGA balances three key objectives, namely energy consumption, efficiency, and equipment load balancing, thereby overcoming the limitation of traditional genetic algorithms (GA), which focus solely on single-objective optimization. By incorporating ‘Sequence-Retaining Crossover’ and ‘Path Reversal Mutation’, this approach reduces the empty running rate of four-way shuttles, accelerates convergence, and enhances global optimality, thereby resolving the slow convergence and susceptibility to local optima inherent in traditional GA.
  • The local execution optimization layer uses the MADDPG algorithm to dynamically adjust task sequences and path planning based on real-time status, ensuring optimal energy consumption and efficient operation in complex dynamic environments. The MADDPG algorithm treats each four-way shuttle and elevator as an independent agent, establishing a local observation space for each and updating each agent’s local state, such as shuttle task information and the elevator’s current floor, every second. Through a critic network that accesses the global state, it dynamically adjusts task sequences in real time to prevent conflicts among four-way shuttles and elevators.
  • By integrating IGA with MADDPG, the scheduling plan generated by IGA is simulated, and the resulting data are injected into the experience replay pool, thereby enhancing the efficiency of local execution optimization and addressing the low optimization efficiency that arises from relying on a single algorithm.
(2)
By introducing shuttle loading status information to construct a path map switching mechanism, path constraints can be automatically adjusted according to the shuttle’s different operating status.
  • Using the A* algorithm, the angles between the direction vectors of vehicles travelling in four directions are calculated to quantify conflict types, which are then resolved with corresponding methods. This addresses the inability of traditional path planning to quantify conflict types, which results in high collision risks.
  • By employing the DQN algorithm and updating environmental states in real time, adaptive path adjustments are achieved within dynamic environments. This resolves issues inherent in traditional path planning algorithms, such as path conflicts arising from their inability to update generated paths.
  • By integrating the A* algorithm with the DQN algorithm, the high-quality initial path planning solutions generated by A* during the early training phase are provided as experience to the DQN algorithm, thereby accelerating DQN’s convergence speed. This enables the A*-DQN algorithm to effectively plan safe and efficient paths in complex environments compared to traditional path planning algorithms.
This study focuses on key challenges commonly encountered in practical applications of high-density four-way shuttle storage systems, including low scheduling efficiency, frequent path conflicts, and high energy consumption. Centered on the dual core objectives of minimizing system energy consumption and optimizing operational time, systematic research is conducted at two distinct levels: task scheduling and path planning. At the task scheduling level, a hierarchical scheduling framework based on the collaborative optimization of IGA and MADDPG is proposed. At the path planning level, an A*-DQN path optimization method integrating A* heuristic search with DQN is introduced. Ultimately, through simulation experiments incorporating diverse layout scenarios, these two approaches demonstrated significant advantages in task scheduling, path planning, and energy consumption optimization. This provides novel research perspectives and solutions for advancing the intelligent development of four-directional vehicle warehousing systems.

2. Materials and Methods

2.1. Dense Three-Dimensional Warehouse Modeling

2.1.1. Overview of the Four-Way Shuttle-Based Warehouse System

The research object of this paper is a four-way shuttle-based warehouse system with several elevators and four-way shuttles. The storage area has three layout forms, namely sparse rectangular layout, dense rectangular layout, and fishbone layout, as shown in Figure 1b–d. The system consists of independent storage units, aisles, conveyor belts located at the entrance, and elevators located on one side of the system that connect each layer and the conveyor belts. Through task scheduling, the pick-up or storage locations for incoming and outgoing goods are specified. Conveyor belts and elevators transport goods between floors, and AGVs follow pre-planned paths through the aisle network’s topological structure to reach designated locations for dispensing or retrieving goods. Through scheduling optimization and path planning, operation efficiency has been significantly improved [26] and energy consumption reduced.

2.1.2. Warehouse Simulation Environment Map

Considering that map environment modeling methods need to satisfy requirements such as quantifiable spatial constraints, dynamic adaptability, and algorithm compatibility, the grid map method is convenient for multi-objective modeling, responds quickly to environmental changes, and satisfies the dynamic obstacle avoidance requirements of multi-shuttle coordination. Therefore, this study chooses the grid map method to construct a reasonable model [27].
As shown in Figure 2, taking a four-way shuttle-based warehousing system with a dense rectangular layout as an example, the warehouse plane is discretised into uniform grid units. Feasible areas are identified using a binary matrix $G \in \{0, 1\}^{M \times N}$, where ‘1’ indicates an obstacle cell and ‘0’ a free cell; a three-dimensional grid stack $G_{3D} = \{G_1, G_2, \ldots, G_K\}$ then describes cross-layer connectivity. To improve computational efficiency and facilitate direct invocation by search algorithms, a passage cost $C_{ij}$ is defined for grid cell $(i, j)$:
$$C_{ij} = 1 + \alpha N_{i,j} \tag{1}$$
In the equation, $N_{i,j}$ denotes the historical path-crossing frequency of cell $(i, j)$, i.e., the number of times previously planned paths have passed through the cell; $\alpha$ is the conflict sensitivity coefficient, which adjusts the influence of the crossing frequency on the passage cost.
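As a minimal illustration of this grid model, the sketch below builds a binary obstacle matrix and the passage cost of Equation (1); the grid size, the crossing counts, and the coefficient value are assumed example values, not parameters from the paper.

```python
# Illustrative sketch of the grid-map model of Section 2.1.2: the grid
# size, crossing counts N_ij, and coefficient alpha are assumed values.
M, N = 6, 8
G = [[0] * N for _ in range(M)]        # 0 = free cell, 1 = obstacle (rack)
G[2][3] = 1                            # mark one storage cell as an obstacle

N_cross = [[0] * N for _ in range(M)]  # historical crossing counts N_ij
N_cross[1][1] = 4                      # e.g. a busy junction crossed 4 times

alpha = 0.25                           # conflict sensitivity coefficient
# Passage cost of Equation (1): C_ij = 1 + alpha * N_ij
C = [[1.0 + alpha * N_cross[i][j] for j in range(N)] for i in range(M)]

print(C[1][1])  # 2.0 for the busy cell
print(C[0][0])  # 1.0 for an untouched cell
```

Cells with a high historical crossing frequency thus become more expensive, steering subsequent path searches away from congestion-prone junctions.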

2.1.3. Distance Calculation Method

In warehouse system path planning, the accuracy of distance calculation directly affects scheduling efficiency and energy optimization. Given the four-way shuttle’s motion constraints and the Manhattan distance’s strong compatibility and good real-time performance, this study selects the Manhattan distance as the core metric for path planning and energy consumption optimization [28], as given in Equation (2):
$$d = |x_1 - x_2| + |y_1 - y_2| \tag{2}$$
In the equation, $d$ denotes the distance travelled by a four-way shuttle, and $(x_1, y_1)$ and $(x_2, y_2)$ denote the coordinates of the two points.
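For concreteness, Equation (2) amounts to the following one-line function; the coordinates are grid indices and the sample points are arbitrary:

```python
def manhattan(p1, p2):
    """Manhattan distance of Equation (2): d = |x1 - x2| + |y1 - y2|."""
    (x1, y1), (x2, y2) = p1, p2
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7: three steps in x plus four in y
```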

2.2. Energy-Saving and Efficiency-Enhancing Modeling

2.2.1. Mathematical Model for the Scheduling Optimization of Four-Way Shuttles and Elevators

In order to achieve efficient and low-energy consumption four-way shuttle collaborative operation scheduling, it is necessary to construct an energy consumption optimization model and operation constraints for the scheduling level. The model covers elevator scheduling behavior modeling, equipment energy consumption analysis, and a system of constraint functions for solving optimization problems.
(1)
Movement Time Model
For the maximum speed $v_{\max}$ and acceleration $a$ of four-way shuttles and elevators, the movement time is divided into acceleration, constant-speed, and deceleration segments for modeling [29]. When calculating movement time, the following two cases should be considered based on the distance:
(1) If the total travel distance is at least twice the distance $d_a$ required to accelerate from 0 to the maximum speed $v_{\max}$ with acceleration $a$, i.e., $d \ge 2d_a = v_{\max}^2 / a$, the four-way shuttle or elevator completes all three stages, and the total time $T$ is:
$$T = 2t_a + t_c = 2 \cdot \frac{v_{\max}}{a} + \frac{d - v_{\max}^2 / a}{v_{\max}} = \frac{v_{\max}}{a} + \frac{d}{v_{\max}} \tag{3}$$
In the equation, $t_a$ denotes the acceleration time (equal, by symmetry, to the deceleration time), $t_c$ denotes the constant-speed travel time, and $d_a = v_{\max}^2 / (2a)$ denotes the distance travelled while accelerating from 0 to $v_{\max}$ with uniform acceleration $a$.
(2) If the total travel distance is below this threshold, i.e., $d < v_{\max}^2 / a$, the shuttle cannot reach maximum speed, and the total time $T$ is:
$$T = 2\sqrt{\frac{d}{a}} \tag{4}$$
In the equation, d indicates the distance travelled by a four-way shuttle. a indicates the uniform acceleration of a four-way shuttle.
In summary, the movement time $T$ can be calculated using Equation (5):
$$T = \begin{cases} \dfrac{v_{\max}}{a} + \dfrac{d}{v_{\max}}, & \text{if } d \ge \dfrac{v_{\max}^2}{a} \\[6pt] 2\sqrt{\dfrac{d}{a}}, & \text{if } d < \dfrac{v_{\max}^2}{a} \end{cases} \tag{5}$$
In the equation, $d$ denotes the distance travelled by a four-way shuttle, $a$ its uniform acceleration, and $v_{\max}$ its maximum speed.
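The piecewise model of Equation (5) can be sketched as a single function for a trapezoidal speed profile; the speed and acceleration values below are assumed for illustration:

```python
import math

def travel_time(d, v_max, a):
    """Movement time of Equation (5).

    If d >= v_max**2 / a the vehicle accelerates to v_max, cruises,
    and decelerates; otherwise it accelerates to mid-distance and
    immediately brakes, never reaching v_max.
    """
    if d >= v_max ** 2 / a:
        return v_max / a + d / v_max
    return 2.0 * math.sqrt(d / a)

# At the boundary d = v_max**2 / a the two branches agree:
v_max, a = 2.0, 1.0
d_boundary = v_max ** 2 / a                # 4.0 m
print(travel_time(d_boundary, v_max, a))   # 4.0 s from either branch
print(travel_time(16.0, v_max, a))         # long run: 2 + 8 = 10.0 s
print(travel_time(1.0, v_max, a))          # short run: 2*sqrt(1) = 2.0 s
```

The boundary check is a quick sanity test that the piecewise definition is continuous, which matters when the optimizer differentiates between near-threshold moves.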
(2)
Energy Consumption Mathematical Model
Using a general energy consumption calculation equation, as shown in Equation (6), the unit energy consumption parameters are selected automatically according to the equipment status. Combining the equipment operating trajectory and task status, the energy consumption of the four-way shuttles $E_F$ and the energy consumption of the elevators $E_L$ are calculated separately and then summed to obtain the total system energy consumption $E_{Total}$ [30].
(1) For any handling device (denoted as device $k$), the energy consumption of a single run $E_k$ can be uniformly expressed as:
$$E_k = \left[ e_k^{load} \cdot \rho_k + e_k^{empty} \cdot (1 - \rho_k) \right] \cdot d_k \tag{6}$$
In the equation, $d_k$ denotes the path distance of the run; $e_k^{load}$ and $e_k^{empty}$ denote the unit loaded and unit empty energy consumption of the equipment, respectively, taken from the equipment manuals; and $\rho_k$ is the status indicator variable, equal to 1 when device $k$ is loaded and 0 otherwise.
(2) Based on Equation (6), the four-way shuttle energy consumption $E_F$, the elevator energy consumption $E_L$, and the total system energy consumption $E_{Total}$ are calculated via Equation (7):
$$E_F = \sum_{i=1}^{N_a} E_i^F = \sum_{i=1}^{N_a} \left[ e_F^{load} \cdot \rho_i + e_F^{empty} \cdot (1 - \rho_i) \right] \cdot d_i$$
$$E_L = \sum_{j=1}^{N_b} E_j^L = \sum_{j=1}^{N_b} \left[ e_L^{load} \cdot \rho_j + e_L^{empty} \cdot (1 - \rho_j) \right] \cdot d_j$$
$$E_{Total} = E_F + E_L \tag{7}$$
In the equation, $i$ and $j$ index four-way shuttle $i$ and elevator $j$, respectively; $N_a$ and $N_b$ denote the number of four-way shuttles and elevators; $e_F^{load}$, $e_F^{empty}$ denote the unit loaded and unit empty energy consumption of a four-way shuttle; $e_L^{load}$, $e_L^{empty}$ denote those of an elevator; and $\rho_i$, $\rho_j$ are the status indicator variables of shuttle $i$ and elevator $j$, equal to 1 when loaded and 0 when empty.
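A minimal sketch of Equations (6) and (7) follows; the unit-energy parameters are hypothetical (a real deployment would take them from the equipment manuals, as the text notes):

```python
def run_energy(d, rho, e_load, e_empty):
    """Equation (6): E_k = [e_load * rho + e_empty * (1 - rho)] * d."""
    return (e_load * rho + e_empty * (1 - rho)) * d

# Hypothetical unit-energy parameters (kWh per metre, assumed values).
e_F_load, e_F_empty = 0.5, 0.25    # four-way shuttle
e_L_load, e_L_empty = 0.75, 0.5    # elevator

# Each tuple is (distance d, load flag rho) for one run of one device.
shuttle_runs  = [(10.0, 1), (6.0, 0)]   # one loaded run, one empty run
elevator_runs = [(3.0, 1)]              # one loaded lift

# Equation (7): sum per device class, then total.
E_F = sum(run_energy(d, r, e_F_load, e_F_empty) for d, r in shuttle_runs)
E_L = sum(run_energy(d, r, e_L_load, e_L_empty) for d, r in elevator_runs)
E_total = E_F + E_L
print(E_F, E_L, E_total)  # 6.5 2.25 8.75
```

Because the model is linear in distance and switches parameters on the load flag, reducing empty-run distance directly reduces the `e_empty` term of the total.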
(3)
Model Constraint
To ensure the practicality and rationality of the model, this section imposes constraints in terms of speed limits, task flow, and resource allocation.
(1) Speed and Acceleration Constraints
To prevent risks caused by excessive acceleration or speed of four-way shuttles and elevators, it is necessary to constrain the safety range of both. The mathematical expression is as follows:
$$v_i^F(t) \le v_{\max}^F, \quad a_i^F(t) \le a_{\max}^F, \quad \forall i, t \tag{8}$$
$$v_j^L(t) \le v_{\max}^L, \quad a_j^L(t) \le a_{\max}^L, \quad \forall j, t \tag{9}$$
In the equations, $v_i^F(t)$ and $a_i^F(t)$ denote the speed and acceleration of the $i$-th four-way shuttle at time $t$; $v_{\max}^F$ and $a_{\max}^F$ denote the maximum speed and acceleration allowed for four-way shuttles; $v_j^L(t)$ and $a_j^L(t)$ denote the speed and acceleration of the $j$-th elevator at time $t$; and $v_{\max}^L$, $a_{\max}^L$ denote the maximum speed and acceleration allowed for elevators.
(2) Task Sequence Constraints
To prevent task logic confusion or long waiting times that could reduce job efficiency, the task sequence for the same four-way shuttle must follow a first-in, first-out logic:
$$t_i^n \le t_i^m \le t_i^{n+1}, \quad \forall i \tag{10}$$
where $t_i^n$ denotes the start time of the $n$-th inbound (storage) task, $t_i^m$ denotes the start time of the $m$-th outbound (retrieval) task, and $t_i^{n+1}$ represents the start time of the $(n+1)$-th inbound task.
The waiting time range for cross-level tasks is given by:
$$t_i^{wait} \le T_{wait}^{\max}, \quad \forall i \tag{11}$$
In the equation, $t_i^{wait}$ denotes the waiting time for any task of the $i$-th four-way shuttle, and $T_{wait}^{\max}$ denotes the maximum waiting time allowed for a four-way shuttle to cross layers.
(3) Path Feasibility Constraint
The empty four-way shuttle can move in the lane and longitudinally under the pallet. When loaded, it can only move in the lane [31]. The mathematical expression is as follows:
$$P_k(t) = \begin{cases} G_{k,t}, & \delta_{load,k}(t) = 1 \\ G_{k,t} \cup L_{k,t}, & \delta_{load,k}(t) = 0 \end{cases} \tag{12}$$
In the equation, $G_{k,t}$ denotes the path function of shuttle $k$ in the lane network, $L_{k,t}$ denotes its path function in the longitudinal tracks of the storage area, and $\delta_{load,k}(t) \in \{0, 1\}$ is the load status indicator of shuttle $k$ at time $t$, where ‘1’ indicates loaded and ‘0’ indicates empty.
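The map switching implied by this constraint reduces to selecting the feasible cell set from the load flag; the cell sets below are assumed toy data, not an actual warehouse layout:

```python
def feasible_region(lane_cells, under_rack_cells, loaded):
    """Sketch of Equation (12): a loaded shuttle is restricted to the
    lane network; an empty shuttle may additionally travel
    longitudinally under the racks."""
    if loaded:
        return set(lane_cells)
    return set(lane_cells) | set(under_rack_cells)

lanes = {(0, 0), (0, 1), (0, 2)}   # lane-network cells (toy data)
under = {(1, 1)}                   # under-rack longitudinal cells

print(feasible_region(lanes, under, loaded=True))            # lanes only
print((1, 1) in feasible_region(lanes, under, loaded=False)) # True
```

In a planner, this set would be recomputed whenever the load flag flips, which is exactly the "dynamic map switching mechanism" the scheduling layer relies on.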
(4) Cooperative Scheduling Constraint
At any given time, only one elevator serves one four-way shuttle, and only one four-way shuttle is allowed to retrieve or store the same goods at the same time:
$$\sum_{k=1}^{N_a} u_{i,k}(t) \le 1, \quad \forall i, t; \qquad \sum_{k=1}^{N_b} o_{j,k}(t) \le 1, \quad \forall j, t \tag{13}$$
In the equation, $u_{i,k}(t)$ and $o_{j,k}(t)$ are binary indicator variables denoting, respectively, whether an elevator–shuttle service pairing and a goods storage/retrieval assignment are active at time $t$.
(5) Weight Restrictions for Goods
The shuttle load must not exceed the threshold value:
$$|N_k - \bar{N}| \le \Delta N, \quad \forall k \tag{14}$$
In the equation, $N_k$ denotes the load of four-way shuttle $k$, $\bar{N}$ denotes the rated load capacity of a four-way shuttle, and $\Delta N$ denotes the maximum permissible deviation of the load from the rated value.

2.2.2. Mathematical Model for Four-Way Shuttle Path Planning

(1)
Three-Dimensional Coordinate System and Order Task Set
(1) Three-Dimensional Coordinate System
To enable path computation and energy consumption modeling for four-way shuttles in dense warehouse systems, a unified three-dimensional coordinate system is constructed, as illustrated in Figure 1, to represent both shuttle positions and storage locations. The coordinate of task $O_j$ in the racking system is denoted as $C_{O_j} = (x_i, y_i, z_i)$, where $x$, $y$, $z$ are defined as shown in Figure 1.
(2) Order Task Set
Let the task set be $R = \{R_1, \ldots, R_n\}$, where $R_i > 0$ indicates that the $i$-th task is an inbound operation, and $R_i < 0$ indicates that the $i$-th task is an outbound operation.
(2)
Distance Calculation Model
The distance for inbound and outbound operations is calculated from the coordinates, integrating both same-layer and cross-layer operation models for the four-way shuttle, as shown in Equations (15) and (16). The distance from the shuttle’s current position $(x_i, y_i, z_i)$ to the target position $(x_j, y_j, z_j)$ is given by Equation (17).
For same-layer operations, i.e., $z_i = z_j$, the path distance for the four-way shuttle from the inbound location to the outbound location is expressed as:
$$d = D_{z_i} = D_{z_j} = |x_i - x_j| \times X_L + |y_i - y_j| \times Y_L \tag{15}$$
where $D_{z_i}$ and $D_{z_j}$ denote the travel distance of the four-way shuttle from the inbound location to the outbound location, $(x_i, y_i)$ and $(x_j, y_j)$ represent the current and target coordinates of the shuttle, respectively, and $X_L$, $Y_L$ are the length and width of a storage unit cell.
For cross-layer operations of the four-way shuttle, the total path consists of the following three segments: the distance $D_{z_i}$ from the current position to the elevator on the current layer, the transfer distance $D_{z_i, z_j}$ via the elevator to the target layer, and the distance $D_{z_j}$ from the elevator position on the target layer to the final target position. The transfer segment is expressed in Equation (16):
$$D_{z_i, z_j} = |z_i - z_j| \times Z_L \tag{16}$$
where $z_i$ and $z_j$ denote the initial and target layers of the elevator transfer, and $Z_L$ represents the height of a single layer in the dense warehouse.
The distance calculation model $d$ is as follows:
$$d = D_{z_i} + \Delta z \cdot D_{z_i, z_j} + D_{z_j} \tag{17}$$
where $d$ represents the travel distance of the four-way shuttle and is identical to that in Equation (2). When the shuttle operates on the same layer, $\Delta z = 0$; when it operates across different layers, $\Delta z = 1$.
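Equations (15)–(17) combine into one distance routine. The sketch below routes cross-layer moves via an elevator column; the elevator position and cell dimensions are assumed example values:

```python
def task_distance(p_cur, p_tgt, p_lift, XL, YL, ZL):
    """Equation (17): d = D_zi + Delta_z * D_{zi,zj} + D_zj.

    Same-layer moves use the Manhattan distance of Equation (15);
    cross-layer moves go shuttle -> elevator (D_zi), transfer
    |zi - zj| layers (Equation (16)), then elevator -> target (D_zj).
    """
    (xi, yi, zi), (xj, yj, zj) = p_cur, p_tgt
    xe, ye = p_lift                         # elevator column position
    if zi == zj:                            # Equation (15): same layer
        return abs(xi - xj) * XL + abs(yi - yj) * YL
    D_zi = abs(xi - xe) * XL + abs(yi - ye) * YL
    D_zj = abs(xe - xj) * XL + abs(ye - yj) * YL
    D_transfer = abs(zi - zj) * ZL          # Equation (16)
    return D_zi + D_transfer + D_zj

# Cell length/width 1 m, layer height 2 m, elevator at column (0, 0)
# (all assumed values for illustration).
print(task_distance((0, 0, 1), (3, 2, 1), (0, 0), 1.0, 1.0, 2.0))  # 5.0
print(task_distance((1, 0, 1), (3, 2, 3), (0, 0), 1.0, 1.0, 2.0))  # 10.0
```

The second call decomposes as $D_{z_i} = 1$, transfer $= 2 \times 2 = 4$, and $D_{z_j} = 5$, matching Equation (17).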

2.3. Energy Consumption Reduction and Efficiency Enhancement Scheduling Algorithm

2.3.1. Design of Improved Genetic Algorithm

To optimize the task allocation and scheduling routes for four-way shuttles and elevators during collaborative operations, this study designs an improved genetic algorithm based on path reversal mutation and sequence retention crossover, taking into account the operational characteristics of the warehousing system. The algorithm adopts a hierarchical chromosome encoding structure and introduces a weighting method to balance three objectives: energy consumption, time, and load balancing, thereby enhancing the practicality and global optimality of the scheduling strategy.
(1)
Initialization of Population and Encoding
Cells containing existing goods are marked as 1, while vacant positions are marked as 0, forming an executable task region map. Based on order information, the system classifies all tasks into two categories:
(1) Inbound tasks: Goods need to be transported from the bottom entrance to designated storage locations, and are assigned positive integer identifiers;
(2) Outbound tasks: Goods need to be transported from designated storage locations to the exit and are assigned negative integer identifiers.
According to the collaborative operation logic of four-way shuttles and elevators, this study designs a chromosome structure with $V+1$ layers, where $V$ denotes the number of four-way shuttles. Layers 1 to $V$ represent the order sequence of tasks assigned to each shuttle, while the $(V+1)$-th layer represents the elevator’s operation sequence, which is determined based on the current position of the elevator and the operational relationships of layers 1 to $V$. For example, if the scheduling task includes four inbound and four outbound tasks, to be executed collaboratively by two four-way shuttles and one elevator, the chromosome structure is as shown in Figure 3. In this structure, the first and second layers correspond to the tasks allocated to the two shuttles, and the third layer is the elevator operation sequence generated by the system according to cross-layer transfer requirements.
(2)
Fitness Calculation
To minimize energy consumption, total operation time, and load balancing index, this study establishes a fitness model F min .
$$F_{\min} = \alpha \cdot E + \beta \cdot T + \gamma \cdot LBD \tag{18}$$
In this equation, $\alpha$, $\beta$, $\gamma$ are the weighting coefficients for the three objectives, subject to the constraint $\alpha + \beta + \gamma = 1$; $E$ and $T$ denote the total energy consumption and total operation time, and $LBD$ denotes the load balancing index.
To accommodate different operational modes, this study adopts two types of weighting strategies:
(1) Energy-Saving Priority Mode:
In scenarios with moderate task density and a focus on cost control, the weighting coefficients $\alpha$ (for energy consumption) and $\gamma$ (for load balancing) are appropriately increased, while the weight on the time objective is reduced;
(2) Efficiency Priority Mode:
In peak order scheduling or emergency task handling scenarios, the weighting coefficient β for minimizing total operation time is increased accordingly.
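The two weighting modes differ only in the coefficients passed to Equation (18). A sketch with hypothetical normalized objective values (the coefficient settings are illustrative, not the paper's tuned values):

```python
def fitness(E, T, LBD, alpha, beta, gamma):
    """Equation (18): F_min = alpha*E + beta*T + gamma*LBD,
    with the constraint alpha + beta + gamma = 1."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * E + beta * T + gamma * LBD

# Hypothetical normalized objective values for one chromosome.
E, T, LBD = 0.6, 0.8, 0.4

# Energy-saving priority: raise alpha and gamma, lower beta.
energy_saving = fitness(E, T, LBD, alpha=0.5, beta=0.2, gamma=0.3)  # ~0.58
# Efficiency priority: raise beta for time-critical peaks.
efficiency = fitness(E, T, LBD, alpha=0.2, beta=0.6, gamma=0.2)     # ~0.68
print(energy_saving, efficiency)
```

Because $F_{\min}$ is minimized, the mode that weights time more heavily will prefer schedules whose $T$ term is small even at some energy cost, and vice versa.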
(3)
Design of Genetic Operators
(1) Sequence-Retaining Crossover
To reduce the travel distance and energy consumption of the four-way shuttles, the system operates based on a combination of multiple outbound tasks to a single inbound task. In this design, sequence-retaining crossover is adopted, with chromosome crossover operations conducted in the order of inbound tasks followed by outbound tasks. The specific steps are as follows:
I. For a chromosome of length $L$, two random numbers $c_1$ and $c_2$ ($1 \le c_1 < c_2 \le L$) are generated to determine the segment of the chromosome participating in crossover; the crossover operation is performed on the genes between positions $c_1$ and $c_2$;
II. A random number $c_3 \in \{-1, 1\}$ is generated to determine the type of task involved in the crossover. If $c_3 = 1$, the crossover operation is applied to inbound tasks; if $c_3 = -1$, it is applied to outbound tasks.
This method effectively enhances the feasibility and rationality of offspring solutions, avoiding decreased fitness due to operation logic errors. Taking the outbound tasks of two four-way shuttles as an example, a schematic diagram of the sequence-retaining crossover operation is shown in Figure 4.
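Steps I and II might be sketched as follows, assuming genes are `(task_id, type)` tuples and both parents share the same gene set (a standard requirement for order-preserving crossover; the gene representation itself is not specified in the paper):

```python
def sequence_retaining_crossover(p1, p2, task_type, c1, c2):
    """Illustrative sketch: within positions c1..c2, genes of the chosen
    task type ('in' or 'out') are reordered to follow the other parent's
    relative order; all other genes keep their positions."""
    def reorder(child, other):
        idx = [i for i in range(c1, c2 + 1) if child[i][1] == task_type]
        genes = [child[i] for i in idx]
        # Rank the selected genes by their order of appearance in the other parent
        ranked = sorted(genes, key=other.index)
        out = list(child)
        for i, g in zip(idx, ranked):
            out[i] = g
        return out
    return reorder(p1, p2), reorder(p2, p1)
```

Because only genes of one task type are touched and each child keeps its original gene multiset, offspring remain feasible schedules, which is the property the text highlights.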
(2) Path Reversal Mutation
The mutation operation is designed to introduce new gene combinations, thereby enhancing chromosome diversity and preventing the algorithm from becoming trapped in local optima. In this study, a local path reversal mutation is employed, in which the task sequence of the four-way shuttle is reversed to simulate the impact of different task orders on scheduling. When there are V four-way shuttles in operation, the specific procedure is as follows:
I. Generate a random integer $r_1 \in \{1, \dots, V\}$, i.e., randomly select one of the $V$ four-way shuttles to perform the mutation operation.
II. Generate a random integer $r_2 \in \{1, -1\}$, where 1 and $-1$ indicate a mutation operation on inbound or outbound tasks, respectively.
Through the mutation operation, both the diversity of the search and the overall convergence performance of the algorithm can be improved. An example of this operation on the task sequences of two four-way shuttles is illustrated in Figure 5.
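Under the same `(task_id, type)` gene assumption as before, the mutation might look like the following sketch (the in-place reversal of only one task type is an interpretation of the text):

```python
def path_reversal_mutation(chromosome, rng):
    """Sketch of the local path-reversal mutation: pick one shuttle layer
    at random, then reverse its inbound (r2 = 1) or outbound (r2 = -1)
    task subsequence in place; gene layout is illustrative."""
    v = rng.randrange(len(chromosome) - 1)       # exclude the elevator layer
    r2 = rng.choice([1, -1])
    task_type = "in" if r2 == 1 else "out"
    layer = chromosome[v]
    idx = [i for i, gene in enumerate(layer) if gene[1] == task_type]
    # Swap selected positions pairwise from both ends to reverse the subsequence
    for i, j in zip(idx, reversed(idx)):
        if i >= j:
            break
        layer[i], layer[j] = layer[j], layer[i]
    return chromosome
```

Passing in the random source (`rng`) keeps the operator deterministic under test while remaining random in production use.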
(4)
Flowchart of the Improved Genetic Algorithm
To achieve efficient collaborative scheduling of four-way shuttles and elevators in a multi-task environment, this study develops an improved genetic algorithm that integrates a hierarchical encoding structure, order-preserving crossover, and path reversal mutation strategies. The optimization process, as shown in Figure 6, consists of six core steps: population initialization, fitness evaluation, selection, crossover, mutation, and updating. The optimal scheduling scheme is obtained through iterative optimization [32].

2.3.2. MADDPG Scheduling Algorithm

(1)
Principle of MADDPG
In a multi-agent system based on the MADDPG algorithm, each agent continuously interacts with the environment to construct and optimize a joint action policy. Specifically, each agent selects an action based on its own local observations, resulting in a joint action set [33]. The environment then provides each agent with an immediate reward based on these joint actions and updates to a new joint state. During this process, agents collect reward information from the environment, calculate cumulative returns, and use these as the basis for adjusting their policies. Through this repeated action–state–reward interactive learning process, agents gradually learn the optimal strategy that maximizes their cumulative return.
This algorithm trains multiple agents using actor and critic networks. In the actor network, the optimal action decisions are derived by integrating the state–action value function with policy gradients and optimizing the parameters θ . In the critic network, the actions from the actor network are evaluated based on temporal-difference (TD) error. Both the actor and critic utilize an evaluation network and a target network. The evaluation network is responsible for estimating the state–action value function, and its parameters are continuously updated during training. The target network retains a copy of the evaluation network’s parameters from an earlier time and is not involved in training. By providing relatively stable target values, the target network enables the calculation of the TD error. The TD error is computed based on the outputs of the target and evaluation networks. By minimizing this error, the parameters of the critic network are optimized, allowing the evaluation network to better estimate the state–action value function. The neural network architecture of the MADDPG algorithm is illustrated in Figure 7.
(2)
Design
(1) State Space Design:
For a warehouse system with $a$ four-way shuttles and $b$ elevators, the system comprises $N = a + b$ agents in total. Each agent constructs its local observation space according to its scheduling objectives, while the critic network uniformly utilizes the global state.
In the state space of the four-way shuttle, the local state vector $S_i^F$ for each shuttle is defined as:
$$S_i^F = \left( x_i, y_i, z_i, T_i, Q_i, l_i \right)$$
In the equation, $(x_i, y_i, z_i)$ are the discretized position coordinates; $T_i$ is the task queue currently assigned to the $i$-th shuttle; $Q_i$ is the target-layer information for each task in the queue; $l_i$ is the number of remaining tasks in the $i$-th shuttle's current mission.
In the elevator state space, the local observation $S_j^L$ for each elevator includes:
$$S_j^L = \left( z_j, LS_j, WQ_j \right)$$
In the equation, $z_j$ is the current floor of the elevator; $LS_j$ is the elevator's current operational status; $WQ_j$ is the waiting queue, i.e., the list of four-way shuttles requesting service.
During the centralized training phase, the global state used by the critic network is:
$$s = \left( s_1^F, \dots, s_a^F, s_1^L, \dots, s_b^L, TP, EM \right)$$
In the equation, $s_1^F, \dots, s_a^F$ are the states of all four-way shuttles; $s_1^L, \dots, s_b^L$ are the states of all elevators; $TP$ is the global task pool, containing the task ID, type, and status of every order; $EM$ holds the load and energy consumption statistics.
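The local observations and global critic input can be sketched as plain data containers (the field names and dictionary layout are illustrative assumptions, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class ShuttleState:          # S_i^F = (x_i, y_i, z_i, T_i, Q_i, l_i)
    x: int
    y: int
    z: int
    task_queue: list         # T_i: tasks currently assigned
    target_layers: list      # Q_i: target layer per task
    remaining: int           # l_i: remaining tasks in the mission

@dataclass
class ElevatorState:         # S_j^L = (z_j, LS_j, WQ_j)
    floor: int
    status: int              # LS_j: e.g. 0 = idle, 1 = busy
    wait_queue: list         # WQ_j: shuttles requesting service

def global_state(shuttles, elevators, task_pool, energy_stats):
    """Critic input s = (s_1^F..s_a^F, s_1^L..s_b^L, TP, EM); the dict
    layout is an illustrative stand-in for a flattened state vector."""
    return {"shuttles": shuttles, "elevators": elevators,
            "task_pool": task_pool, "energy": energy_stats}
```

In a real MADDPG implementation these structures would be flattened into numeric vectors before being fed to the actor and critic networks.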
(2) Action Space Design
To accommodate the hybrid optimization objective of scheduling and path adjustment, the action space is defined by equipment type as follows:
$$a_i^F = \left( a_i^{task}, a_i^{v}, a_i^{L} \right)$$
$$a_j^L = \left( a_j^{task}, a_j^{z} \right)$$
In these equations, $a_i^F$ is the action space of the four-way shuttle: $a_i^{task}$ selects the current task (i.e., whether to switch tasks), $a_i^{v}$ sets the velocity, and $a_i^{L}$ indicates whether an elevator request is issued. $a_j^L$ is the action space of the elevator: $a_j^{task}$ selects the current service target from the request queue, and $a_j^{z}$ is the layer-switching action, i.e., whether to move proactively to the target floor to reduce waiting time.
(3) Reward Function Design
The reward function $R_i^F$ for four-way shuttle $i$ accounts for the energy consumption penalty, no-load energy consumption penalty, task timeliness reward, load balancing reward, and conflict penalty:
$$R_i^F = -\lambda_1 E_i^L - \lambda_2 E_i^K + \lambda_3 R_i^T + \lambda_4 LED_i - \lambda_5 P_i$$
In the equation, $E_i^L$ is the energy consumption penalty; $E_i^K$ is the no-load energy consumption penalty; $R_i^T$ is the task timeliness reward; $LED_i$ is the load balancing reward of the four-way shuttles; $P_i$ is the conflict penalty; $\lambda_1, \dots, \lambda_5$ are the weights of these five metrics.
The reward function $R_j^L$ for elevator $j$ includes the following components:
$$R_j^L = -\lambda_6 E_j^L - \lambda_7 T_j^w + \lambda_8 M_j$$
where $E_j^L$ is the penalty for vertical transport energy consumption; $T_j^w$ is the waiting-time penalty; $M_j$ is the full-load task reward: $M_j = 1$ indicates that the elevator is fully loaded, and $M_j = 0$ indicates that it is unloaded.
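Both reward functions are plain weighted sums in which penalty terms enter with negative sign; a minimal sketch with placeholder $\lambda$ weights (the paper does not list concrete values) is:

```python
def shuttle_reward(E_load, E_empty, R_time, LED, P_conflict,
                   lam=(0.3, 0.2, 0.3, 0.1, 0.1)):
    """R_i^F = -l1*E^L - l2*E^K + l3*R^T + l4*LED - l5*P.
    Weight values are illustrative placeholders."""
    l1, l2, l3, l4, l5 = lam
    return -l1 * E_load - l2 * E_empty + l3 * R_time + l4 * LED - l5 * P_conflict

def elevator_reward(E_vert, T_wait, M_full, lam=(0.4, 0.3, 0.3)):
    """R_j^L = -l6*E^L - l7*T^w + l8*M, with M = 1 (full load) or 0 (empty)."""
    l6, l7, l8 = lam
    return -l6 * E_vert - l7 * T_wait + l8 * M_full
```

Keeping the weights as parameters lets the same reward shaping be retuned per deployment without touching the learning code.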
(3)
Optimization Strategy Combining Genetic Algorithm and MADDPG
(1) Collaborative Design
Since a single algorithm often struggles to balance global search capability and local responsiveness, this study develops a hierarchical optimization algorithm that integrates the Genetic Algorithm (GA) with Multi-Agent Deep Deterministic Policy Gradient (MADDPG), fully leveraging the complementary strengths of both approaches. The hierarchical optimization framework consists of a global optimization layer and a local execution optimization layer, detailed as follows:
Global Optimization Layer: The genetic algorithm is employed to generate the initial task allocation and operation sequence based on task information, the initial status of equipment, and the warehouse topology network.
Local Execution Optimization Layer: MADDPG is used to train multi-agent policies, enabling real-time adjustments to task sequences, speeds, and scheduling decisions during execution to enhance robustness and energy performance.
Unlike traditional optimization methods, this work does not merely utilize the genetic algorithm for static optimization; instead, the scheduling schemes produced by the genetic algorithm are further transformed and injected into the experience replay buffer of MADDPG, providing heuristic prior knowledge for reinforcement learning. This experience injection approach improves the training efficiency of MADDPG, facilitates the rapid acquisition of collaborative strategies, and reduces the risk of convergence to local optima.
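The experience-injection step amounts to pre-filling the replay buffer with transitions obtained by replaying the GA schedule in the simulator before MADDPG's own exploration begins. A minimal sketch (the buffer API and transition layout are assumptions, not the paper's implementation) is:

```python
import collections
import random

class ReplayBuffer:
    """Minimal FIFO experience replay buffer."""
    def __init__(self, capacity=10000):
        self.buf = collections.deque(maxlen=capacity)
    def push(self, transition):
        self.buf.append(transition)
    def sample(self, k):
        return random.sample(list(self.buf), k)
    def __len__(self):
        return len(self.buf)

def inject_ga_schedule(buffer, ga_transitions):
    """Seed the buffer with (state, joint_action, reward, next_state)
    tuples replayed from the IGA schedule, giving MADDPG heuristic
    prior knowledge before random exploration starts."""
    for tr in ga_transitions:
        buffer.push(tr)
```

Early mini-batches then mix GA-derived and self-collected transitions, which is the mechanism the text credits for faster convergence.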
(2) System Scheduling Process Design
The system scheduling process is shown in Table 1.
In the scheduling optimization framework proposed in this section, the Improved Genetic Algorithm (IGA) is responsible for global task allocation in the global optimization layer. By employing strategies such as hierarchical chromosome encoding, sequence retention crossover, and path-reversal mutation, the IGA generates high-quality initial task allocation schemes. This foundational scheme ensures the rationality and global optimality of task allocation throughout the entire scheduling process. The MADDPG algorithm operates at the local execution optimization layer, dynamically adjusting task sequences, speeds, and scheduling decisions based on real-time states. By injecting the scheduling schemes generated by IGA into the MADDPG experience pool, reinforcement learning can rapidly acquire collaborative strategies and further optimize local scheduling behaviors. This hierarchical optimization framework integrates global optimization with dynamic responsiveness, fully harnessing the advantages of both IGA and MADDPG algorithms. It overcomes the limitations of single algorithms in four-way shuttle scheduling scenarios and achieves multi-objective optimization for energy consumption, time, and load balancing. As a result, it effectively addresses challenges such as uneven task allocation, poor adaptability to dynamic environments, high idle rates of equipment, and difficulties in energy optimization in multi-shuttle cooperative operations.
By synergistically optimizing the Improved Genetic Algorithm (IGA) and the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm, in combination with an A*-guided Deep Q-Network (A*-DQN) path planning method, the proposed scheduling algorithm can significantly reduce the energy consumption of four-way shuttle warehouse systems, lower equipment idle rates, and enhance operational efficiency. Furthermore, the algorithm achieves balanced task allocation, improves the degree of load balancing, and strengthens the system’s adaptability and stability in dynamic environments. This integrated optimization not only enhances the overall system performance but also provides robust support for energy conservation and intelligent operation in warehouse systems.

2.3.3. Path Planning Algorithm Based on A*-Guided DQN

(1)
Path Planning Design Based on A* Algorithm
(1) Path Conflicts and Resolution
When multiple four-way shuttles operate on the same level, three types of conflicts may occur, as illustrated in Figure 8: Head-on conflict, as shown in Figure 8a, occurs when the angle between the directions of two shuttles is 180 degrees. Node conflict, as shown in Figure 8b, arises when the angle between the directions of two shuttles is 90 degrees. Path overlap, as shown in Figure 8c, occurs when two shuttles are moving in the same direction. If the leading shuttle makes a sudden stop or turns, a collision may occur with the following shuttle; in this case, the angle between the two shuttles is 0 degrees.
When two four-way shuttles $i$ and $j$ operate simultaneously with direction vectors $\mathbf{v}_i$ and $\mathbf{v}_j$, respectively, the angle between the vectors is calculated as shown in Equation (26). By converting the value to degrees and referring to the definitions above, the conflict type can be determined according to the criteria in Equation (27).
$$\theta = \arccos \frac{\mathbf{v}_i \cdot \mathbf{v}_j}{\|\mathbf{v}_i\| \, \|\mathbf{v}_j\|}$$
In the equation, $\theta$ is the angle between the operational direction vectors of the two four-way shuttles, with the following specific values:
$$\theta = \begin{cases} 180^\circ, & \text{head-on conflict} \\ 90^\circ, & \text{node conflict} \\ 0^\circ, & \text{path overlap} \end{cases}$$
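Equations (26) and (27) translate directly into a small classifier over 2-D grid headings (the tolerance parameter is an implementation detail added here to absorb floating-point error):

```python
import math

def conflict_type(v_i, v_j, tol=1e-6):
    """Classify the conflict from the angle between direction vectors:
    180 deg -> head-on, 90 deg -> node conflict, 0 deg -> path overlap."""
    dot = v_i[0] * v_j[0] + v_i[1] * v_j[1]
    norm = math.hypot(*v_i) * math.hypot(*v_j)
    # Clamp to [-1, 1] so rounding error cannot break acos
    theta = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if abs(theta - 180) < tol:
        return "head-on"
    if abs(theta - 90) < tol:
        return "node"
    if abs(theta) < tol:
        return "overlap"
    return "none"
```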
To address head-on conflicts encountered during shuttle operations, the system assigns a decision-making priority to each shuttle based on the priority of its current task. When a conflict arises, the shuttle with lower priority must proactively yield according to predefined rules. During the yielding process, if it is feasible for the shuttle to reverse or change lanes, it performs the corresponding maneuver. If yielding in place is not feasible, the system dynamically generates an alternative detour path—subject to grid validity constraints—to enable the shuttle to circumvent the conflict.
When a node conflict occurs between two four-way shuttles $i$ and $j$, their arrival times $t_i$ and $t_j$ at the node are calculated according to Equation (28), and the system checks whether the time-window offset threshold is satisfied. If it is not, the moving speed of the lower-priority shuttle is adjusted.
$$t_i = \frac{d_i}{v_i}, \quad t_j = \frac{d_j}{v_j}, \quad |t_i - t_j| \ge \Delta t_{\min}; \quad \text{otherwise } v_i' = v_i - 0.5\ \mathrm{m/s}$$
In the equation, $\Delta t_{\min}$ is the minimum time-window offset threshold; $d_i$ and $d_j$ are the distances from the starting points of four-way shuttles $i$ and $j$ to the node, respectively.
When the paths of the leading shuttle $i$ and the following shuttle $j$ overlap, the minimum safe distance between the two shuttles is calculated according to Equation (29). If $d_{i,j} < d_{\min}$, the minimum safe distance is not met and the following shuttle must decelerate.
$$d_{\min} = \max\left( 1.5\ \mathrm{m},\ \eta \cdot v_j \cdot t_r \right), \quad v_j' = \min\left( v_j,\ 0.8 \cdot v_i \right)$$
where d min is the minimum safe distance, η is the safety factor used to eliminate system errors, and t r is the system response time.
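The two numeric checks in Equations (28) and (29) can be sketched as follows (the default values of $\eta$ and $t_r$ here are illustrative, not the paper's calibrated settings):

```python
def node_conflict_ok(d_i, v_i, d_j, v_j, dt_min):
    """Eq. (28): the arrival-time offset at the shared node must meet
    the minimum time-window threshold dt_min (seconds)."""
    return abs(d_i / v_i - d_j / v_j) >= dt_min

def safe_follow_speed(v_i, v_j, d_ij, eta=1.2, t_r=0.5):
    """Eq. (29): if the gap d_ij falls below d_min = max(1.5 m, eta*v_j*t_r),
    the follower decelerates to min(v_j, 0.8*v_i); otherwise keeps v_j."""
    d_min = max(1.5, eta * v_j * t_r)
    if d_ij < d_min:
        return min(v_j, 0.8 * v_i)
    return v_j
```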
(2)
A* Path Execution Modes
In this study, the A* path execution process is divided into two modes:
(1) Route-Guided Mode
In this approach, a complete path is generated directly during the path planning stage, and the four-way shuttle executes its tasks strictly according to the pre-defined path. This mode offers high computational efficiency and fast execution speed, making it suitable for scenarios with predetermined tasks and static environments [34].
(2) Step-by-Step Guidance
In this mode, only the currently feasible segment of the path is planned. During execution, subsequent path segments are updated in real time according to environmental feedback until the task is completed. This mode demonstrates strong adaptability to environmental changes, enables dynamic obstacle avoidance, and is suitable for highly dynamic and frequently interactive dense warehousing systems.
In consideration of the practical operational characteristics of four-way shuttle-based warehouse systems and to enhance both path safety and system responsiveness, this study adopts the step-by-step guidance strategy as the path execution mode. The specific steps of the A* algorithm employed in this study are illustrated in Figure 9.
Deep Q-Networks (DQN) typically rely on random exploration in the initial stages, which, in high-dimensional spaces, can lead to inefficient and goal-deviating path behaviors. To improve decision-making efficiency, this study incorporates data from the A* algorithm to guide the decision process. At the early stage of training, the A*-guided DQN decision-making algorithm provides the agent with initial path planning recommendations based on the A* algorithm (see Figure 10).
The design of the A*-guided DQN algorithm is as follows:
(1) Action Selection
By dynamically selecting actions planned by the A* algorithm according to the current state in real time, the agent’s exploration efficiency at the early stage can be significantly improved.
(2) Experience Replay
The paths generated by the A* algorithm are used as experience data for the DQN, enabling the agent to learn effective strategies more rapidly.
(3) Dynamic Adjustment
During the training process, increasing reliance is placed on the strategies learned by the DQN itself.
To enable the transition to neural-network-based decision-making, this study adjusts the frequency of neural-network decisions through an exploration rate ε, where 1 − ε is the probability of invoking the A* algorithm. The training is designed so that the neural network converges to accurate decisions: in the initial stage the A* algorithm is prioritized, and as training progresses the proportion of neural-network decisions is gradually increased to ensure eventual convergence. During the first one-third of the training period, the exploration rate is fixed at 0.8, after which it is gradually increased to 1.
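One way to realize this schedule is a piecewise function of the episode index: ε (the probability of using the DQN policy) is held at 0.8 for the first third of training and then ramped linearly to 1. The linear ramp is an assumption, since the paper only states "gradually increased":

```python
import random

def use_network(episode, total_episodes, rng=random):
    """Return (use_dqn, eps): eps is the probability of acting with the
    DQN policy; with probability 1 - eps the A* suggestion is used.
    Fixed at 0.8 for the first third, then ramped linearly to 1."""
    third = total_episodes / 3
    if episode < third:
        eps = 0.8
    else:
        eps = min(1.0, 0.8 + 0.2 * (episode - third) / (total_episodes - third))
    return rng.random() < eps, eps
```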
Through A* planning guidance, decision fusion, and dynamic adjustment of policy weights, the path learning ability of the four-way shuttle in complex environments is effectively enhanced. The structure of the A*-guided DQN algorithm (A*-DQN) is shown in Figure 11.
The current value network $Q^{\pi}(s_t, a_t)$ is updated according to the temporal-difference rule:
$$Q^{\pi}(s_t, a_t) \leftarrow Q^{\pi}(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a} Q^{\pi}(s_{t+1}, a) - Q^{\pi}(s_t, a_t) \right]$$
In the equation, $\alpha$ is the learning rate, $r_t$ is the current reward, $\gamma$ is the discount factor, $s_{t+1}$ is the next state after executing action $a_t$, and $\max_{a} Q^{\pi}(s_{t+1}, a)$ is the $Q$-value of the optimal action selected at $s_{t+1}$.
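The same update can be exercised on a tiny Q-table; a DQN replaces the table with a network and minimizes the squared TD error over sampled mini-batches, but the target term is identical:

```python
def td_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular TD update mirroring the equation above.
    q is a dict keyed by (state, action); missing entries default to 0."""
    best_next = max(q.get((s_next, a2), 0.0) for a2 in actions)
    td_error = r + gamma * best_next - q.get((s, a), 0.0)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * td_error
    return q[(s, a)]
```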
(3)
Training Process for Four-Way Shuttle
This study designs a training process tailored for the path planning task of the four-way shuttle, as illustrated in Figure 12.
The procedure of the A*-DQN algorithm is as follows: the agent acquires the current state from the environment and inputs it into the DQN network, which outputs the optimal action. The legality of the action is then checked; if the action is illegal, the episode terminates; otherwise, the action is executed. The system subsequently returns a reward and the next state, forming an experience tuple that is stored in the experience replay buffer. Once the buffer reaches its capacity, mini-batches are sampled to compute the loss function and update the Q-network parameters. At fixed intervals, the parameters of the current Q-network are synchronized to the target network to enhance training stability, until the termination condition is met. This algorithm utilizes the path generated by the A* algorithm as the initial guiding policy for the DQN, thereby leveraging the efficient planning capability of A* to improve exploration efficiency. As training progresses, the proportion of the DQN's own policy usage is gradually increased, ultimately enabling convergence of the path planning policy and improving both the efficiency and accuracy of path planning.
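The loop just described can be condensed into a skeleton; `env` and `policy` are assumed interfaces with `reset/legal/step` and `act/learn/sync_target` methods (all names illustrative, not from the paper):

```python
import random

def train_episode(env, policy, buffer, batch_size=32, sync_every=100):
    """Skeleton of one A*-DQN training episode as described in the text."""
    s = env.reset()
    done, step = False, 0
    while not done:
        a = policy.act(s)
        if not env.legal(s, a):            # illegal action terminates the episode
            break
        s_next, r, done = env.step(a)
        buffer.append((s, a, r, s_next, done))
        if len(buffer) >= batch_size:      # sample a mini-batch, update Q-network
            policy.learn(random.sample(list(buffer), batch_size))
        step += 1
        if step % sync_every == 0:         # periodic target-network sync
            policy.sync_target()
        s = s_next
```

In the full algorithm, `policy.act` would blend A* suggestions with DQN outputs according to the ε schedule from the previous subsection.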
This section focuses on the path planning problem of four-way shuttle in dense warehouse environments and proposes an A*-DQN path optimization algorithm that integrates A* heuristic search with deep reinforcement learning. By using the A* path as the initial policy guiding signal, the algorithm enhances exploration efficiency, constructs a four-dimensional state space and a staged reward mechanism, thereby significantly improving the learning efficiency of the model and the quality of the planned paths. At the same time, by incorporating information on the shuttle’s loading status, the model dynamically adjusts path constraints to ensure the feasibility and safety of path planning. Moreover, based on the grid map state space and the designed staged reward mechanism, the agent is further guided to learn the optimal path planning policy, effectively transforming the path planning problem into a reinforcement learning problem and achieving deep integration of the model and algorithm.
In this section, the scheduling algorithm and path planning are tightly coupled and mutually reinforcing. In terms of path planning, the proposed A*-DQN algorithm introduces a path map switching mechanism by integrating the shuttle’s loading status information, thereby enabling adaptive adjustment of path constraints under different operating conditions. The A* algorithm provides initial path planning suggestions for DQN, accelerating the convergence process of DQN. This integrated approach not only improves the efficiency and accuracy of path planning but also significantly enhances the four-way shuttle’s path learning and obstacle avoidance capabilities in complex layouts and dynamic environments. Within the scheduling model, the results of task allocation provide the A*-DQN algorithm with the task sequence and target location information, enabling A*-DQN to dynamically adjust path planning accordingly. Through rational task allocation, the system reduces ineffective movements and waiting times of the four-way shuttles, while efficient path planning further minimizes path conflicts and detours. The synergistic effect of these two aspects achieves reduced energy consumption and improved operational efficiency. This integrated approach not only enhances the overall system performance but also improves the system’s adaptability and stability in dynamic environments, providing strong support for the intelligent upgrading of warehouse systems.

3. Experiments and Results

3.1. Four-Way Shuttle Scheduling Optimization Experiments

3.1.1. Experimental Parameter Design

In this study, the experiments are based on an electrical appliance warehouse system, primarily focusing on multi-shuttle scheduling, energy consumption optimization, and operational efficiency. The performance is analyzed in comparison with various benchmark algorithms. Table 2 presents the overall environmental configuration of the system, while Table A1 lists the kinematic performance and energy consumption parameters of the main equipment.

3.1.2. Simulation Platform Layout

To validate the overall scheduling capability of the integrated algorithm in multi-level warehouses, this study constructed a dense storage system comprising four levels, each containing 800 storage units. Multiple sets of inbound and outbound orders were configured for the experiments. The system generates paths and coordinates the operation of lifts based on the positional information of the tasks. The simulation platform software configuration is as shown in Table A2.

3.1.3. System Function Verification and Testing

As illustrated in Figure 13, experiments with different parameter adjustments were conducted to verify the effectiveness and accuracy of the system’s dynamic layout adjustment functionality.
As shown in Table 3, to verify the accuracy of the four-way shuttle speed parameter settings, two fixed task coordinates, (2,2) and (2,12), were designated. Neglecting acceleration, the theoretical completion time was compared with the actual completion time, and the error consistently remained within approximately 5%, indicating minimal deviation.

3.1.4. Experimental Results and Analysis of the Improved Genetic Algorithm

To verify the optimization performance of the proposed Improved Genetic Algorithm (IGA) in the coordinated scheduling of four-way shuttles and lifts, this study experimentally analyzes the algorithm’s fitness convergence process and key energy consumption evaluation metrics under standard task scenarios.
(1)
Fitness Results and Analysis
A randomly generated fixed order set was used as the test input in this experiment. Both the Improved Genetic Algorithm (IGA) and the standard Genetic Algorithm (GA) were applied to the task scheduling problem. As shown in Figure 14, IGA achieved convergence at approximately the 20th generation, with a final fitness function value of 200. In contrast, the standard GA converged after about 60 generations, with a final fitness value of 291.61. Comparatively, IGA not only significantly accelerated the convergence rate but also improved the convergence accuracy by 31.46%, demonstrating superior search capability and solution quality.
As shown in the scheduling results of Table 4, the IGA outperforms the standard GA in terms of task structure arrangement, load balancing, and overall scheduling effectiveness. The IGA enables alternating execution of outbound and inbound tasks, effectively preventing the occurrence of empty trips caused by consecutive tasks of the same type, while also achieving more balanced task allocation. In terms of energy consumption and operation time, IGA reduces these metrics by approximately 16.23% and 10.86%, respectively, compared to GA, fully demonstrating its advantages in scheduling efficiency and energy optimization. This verifies its engineering applicability and optimization potential in multi-shuttle, multi-task scheduling scenarios.
(2)
Comparison and Analysis under Different Order Sizes
(1) Comparison and Analysis of Fitness Values
In this study, the total order quantities were set to 20, 40, 60, 80, and 100. Both the IGA and the standard GA were used to solve the scheduling problem, and their optimization performance under different scales was compared. The results are shown in Figure 15.
As the number of orders increases, the overall value of the fitness function rises; however, the IGA consistently outperforms the standard GA across all order scales, demonstrating superior optimization capability. When the order volume is relatively small (20–40 orders), the optimization rate of IGA is approximately 25%. As the number of orders exceeds 60, the optimization rate further increases to around 30%. Experimental results indicate that IGA exhibits a more pronounced optimization advantage in large-scale order scheduling tasks.
(2) Comparison and Analysis of Energy Efficiency Optimization Metrics
According to the experimental results presented in Table 5, the IGA demonstrates significant advantages at all order scales. In terms of average energy consumption per order (EPO), IGA reduces the energy consumption of each order task by approximately 10%, and this advantage gradually stabilizes as the order scale increases. Regarding the shuttle idle energy rate (IER), i.e., the proportion of four-way shuttles in an unloaded state relative to the total number of four-way shuttles, IGA significantly reduces ineffective energy consumption, with the reduction rate increasing as the order scale grows. In terms of the load balancing degree (LBD) indicator, the IGA demonstrates a significant advantage in task allocation: task distribution becomes increasingly balanced as the order scale grows, an effect that is especially pronounced for large-scale orders (80 and 100 orders), where allocation is much more uniform.

3.1.5. Experimental Analysis of the Integration of Genetic Algorithm and MADDPG Algorithm

The hyperparameters for MADDPG are shown in Table A3.
(1)
Training Results and Analysis
Figure 16 presents a comparison of the cumulative reward curves during training between the integrated algorithm (IGA-MADDPG) and the standard MADDPG algorithm.
As the number of training episodes increases, the reward value of the integrated algorithm continues to rise, eventually stabilizing at around 130. In contrast, the standard MADDPG shows a marked decline in learning efficiency after approximately 175 episodes. This indicates that the initial scheduling solution provided by IGA effectively guides policy learning, resulting in better convergence and higher solution quality.
After 175 episodes, the standard MADDPG remains at a relatively low reward value for an extended period, struggling to escape local optima. The integrated algorithm, however, demonstrates a much stronger capability to achieve global optimality and overall policy optimization.
(2)
Performance Comparison and Analysis of the Algorithms
In the simulated order scheduling tasks, both the integrated algorithm and the conventional IGA were used to optimize randomly generated orders. Their performance was evaluated based on two key indicators: energy consumption and operation time. The comparative results are presented in Table 6 and Figure 17.
As shown in Table 6, the integrated algorithm achieves an average reduction of 16.68% in energy consumption per order and an average decrease of 12.84% in total operation time. These results fully validate the comprehensive advantages of the integrated algorithm in both energy consumption control and task efficiency.

3.2. Experimental Study and Analysis of the A*-Guided DQN Algorithm

This section focuses on the path planning problem in multi-layer dense warehousing systems; simulation experiments use a single-layer planar map layout to abstract away cross-layer handling operations. In the actual system, the path planning module is invoked once per layer to generate that layer's paths and to support multi-layer collaborative operation, which ensures algorithm controllability and ease of testing.
The experimental design adheres to the following principles: both the start and end points are either randomly generated or specified; multiple groups of tasks are scheduled for simultaneous path planning; and the number of tasks is controlled.
The architectures of the DQN network are shown in Table A5.

3.2.1. Map Transformation Experiments and Analysis

In this experiment, the four-way shuttle is simulated to perform outbound and inbound tasks at two different positions, (1,1) and (2,8), respectively, to evaluate the system’s map adaptation and path planning performance under various conditions. Depending on whether the shuttle is loaded or unloaded, the experimental results are presented in Figure 18, which illustrates the differences in paths between two map models given the same start and end points. The numbers in the blue and red boxes represent the order of grid cells traversed by the four-way shuttle’s path planned by the A* algorithm, arranged from smallest to largest.
The results indicate that the A* algorithm can dynamically adjust feasible regions based on the current state of the shuttle.
The average data for the two path planning approaches under different warehouse scales are summarized in Table 7. The results demonstrate that the dynamic map switching mechanism can effectively improve path flexibility. Specifically, paths planned with consideration of shuttle state are, on average, shortened by approximately 9.05% compared to those planned without considering shuttle state, confirming the algorithm’s advantages in terms of both energy saving and operational efficiency.

3.2.2. Experimental Results and Analysis of the A*-Guided DQN Algorithm

The hyperparameters for DQN are shown in Table A4.
To evaluate the adaptability and training effectiveness of the A*-guided DQN (A*-DQN) algorithm under various warehouse layouts, three representative warehouse layout models were constructed, as illustrated in Figure 19. The number of storage units and the exploration rate settings for each layout are summarized in Table 8.

3.2.3. Analysis of Training Results

For each completed task, good real-time performance receives 3 points. Accordingly, the maximum rewards for the three layouts are 108, 300, and 258, respectively.
Taking Layout 2 as an example, as shown in Figure 20a, when training with the A*-DQN algorithm, the reward reaches 300 points after approximately 250 training episodes. In contrast, as shown in Figure 20b, when training with the standard DQN algorithm, the cumulative reward reaches only 225 points after 300 episodes. These results demonstrate that the A*-DQN algorithm, by leveraging the A* path guidance mechanism, can significantly accelerate policy convergence and improve task completion rates. This approach is particularly well-suited for path planning and scheduling tasks in warehouse systems with complex layout structures.
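The A* guidance mechanism can be illustrated as pre-filling the DQN replay pool with transitions read off an A*-planned path, so that early mini-batches already contain near-optimal behavior. The action encoding, reward values, and demo path below are illustrative assumptions; only the pool capacity (50,000) and batch size (32) follow Table A4:

```python
import random
from collections import deque

ACTIONS = {(0, 1): 0, (0, -1): 1, (1, 0): 2, (-1, 0): 3}  # right/left/down/up

def path_to_transitions(path, goal_reward=3.0, step_cost=-0.1):
    """Convert an A* path [(r, c), ...] into (s, a, r, s', done) tuples."""
    transitions = []
    for s, s_next in zip(path, path[1:]):
        move = (s_next[0] - s[0], s_next[1] - s[1])
        done = s_next == path[-1]
        reward = goal_reward if done else step_cost
        transitions.append((s, ACTIONS[move], reward, s_next, done))
    return transitions

replay_pool = deque(maxlen=50_000)            # capacity as in Table A4
demo_path = [(0, 0), (0, 1), (0, 2), (1, 2)]  # assumed A* output
replay_pool.extend(path_to_transitions(demo_path))  # expert experience injection

# Later training steps sample mini-batches that mix A* demonstrations with
# the agent's own exploration experience.
batch = random.sample(replay_pool, min(32, len(replay_pool)))
print(len(replay_pool), len(batch))  # 3 injected transitions, batch of 3
```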

3.2.4. Comparison Between A* and A*-DQN Algorithms

(1)
Comparative Analysis of Assigned Task Paths
Under the three different layout models, an inbound and an outbound task are randomly assigned: the four-way shuttle travels between the entrance/exit and the storage units to retrieve or deposit cargo, then returns to a storage unit. The comparative results in Figure 21 indicate that the DQN has effectively learned the path decision logic of A*, demonstrating strong path optimization capability and interpretability.
(2)
Performance Comparison of Path Planning in Multi-Task Scenarios
To verify the algorithms’ collision avoidance capabilities under multi-task conditions, the data in Table 9 show that A* incurs an average of 13.4 collisions, while A*-DQN completes all tasks without any collisions. This demonstrates that A*-DQN possesses superior path avoidance capability and system stability in complex task environments.
To further validate the computational efficiency of the algorithms, the total computation time required by both algorithms to complete all tasks was recorded. Each experiment was repeated three times, and the average was taken to calculate the average decision time per task. As shown in Table 10, the efficiency advantage of A*-DQN is even more pronounced, demonstrating excellent scalability and practical engineering value.
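The timing protocol above (three repetitions, averaged, then divided by the task count) can be sketched with a small harness; the two planner stubs below are stand-ins for illustration, not the actual A* or A*-DQN implementations:

```python
import time

def time_planner(plan_fn, tasks, repeats=3):
    """Run the planner over all tasks `repeats` times; return avg time per task."""
    totals = []
    for _ in range(repeats):
        start = time.perf_counter()
        for task in tasks:
            plan_fn(task)
        totals.append(time.perf_counter() - start)
    return sum(totals) / repeats / len(tasks)

# Stand-in planners (assumptions): a slow full-path replan vs. a fast
# learned single-step policy, modeled here as differing amounts of work.
tasks = list(range(20))
per_task_astar = time_planner(lambda t: sum(i * i for i in range(5_000)), tasks)
per_task_dqn = time_planner(lambda t: sum(i * i for i in range(500)), tasks)
reduction = 1 - per_task_dqn / per_task_astar
print(f"reduction: {reduction:.0%}")
```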
The A*-DQN algorithm proposed in this study has achieved remarkable results in path planning. Through simulation experiments conducted under various warehouse layouts, the effectiveness and superiority of the algorithm have been validated. The experiments demonstrate that the introduction of a path map switching mechanism based on shuttle loading status significantly enhances the flexibility of path planning and improves energy efficiency. During the training process, the A*-DQN algorithm exhibits faster convergence and stronger obstacle avoidance capabilities, enabling it to efficiently generate safe and optimal paths in complex environments. Moreover, the algorithm demonstrates excellent adaptability and stability in multi-task scenarios, substantially improving the operational efficiency of four-way shuttles and the accuracy of path planning.

4. Conclusions

This paper conducts an in-depth investigation into the operation processes of four-way shuttles and elevators in shuttle-based warehouse systems. Focusing on path planning and task scheduling challenges, a series of optimization strategies are proposed, and a multi-shuttle cooperative operation mechanism oriented toward energy efficiency optimization is successfully established.
In terms of task scheduling, to address issues such as uneven task allocation, poor adaptability to dynamic environments, high idle rates, and the difficulty of energy optimization in multi-shuttle cooperative operations, this paper proposes a collaborative scheduling algorithm based on an improved genetic algorithm (IGA) and the multi-agent deep deterministic policy gradient (MADDPG) algorithm. By introducing order-preserving crossover and path-reversal mutation, the algorithm’s search capability and solution quality are significantly enhanced. Unlike the traditional genetic algorithm (GA), the IGA integrates shuttles and elevators into a unified task framework, eliminating unnecessary waiting and empty runs between equipment; compared to GA, it reduced energy consumption and operational time by approximately 16.23% and 10.86%, respectively. Furthermore, the hierarchical collaborative optimization approach integrating IGA and MADDPG effectively overcomes the limitations of single algorithms in four-way shuttle scheduling scenarios. The preliminary scheduling plan generated by the IGA is used as prior knowledge and fed into the MADDPG algorithm, accelerating the scheduling optimization process and driving it toward global optimality. Experimental results from scheduling models in various warehouse environments indicate that, compared to single algorithms, the hybrid algorithm achieves an average reduction of 16.68% in energy consumption per order and a 12.84% decrease in total processing time. The proposed method excels in the multi-objective optimization of energy consumption, time, and load balancing, fully demonstrating its adaptability and stability in multi-shuttle, multi-task cooperative execution.
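The two IGA operators named above can be sketched on a permutation-encoded task sequence (the paper's signed inbound/outbound task IDs from Table 4 are simplified to a plain permutation here for illustration):

```python
def order_preserving_crossover(p1, p2, lo, hi):
    """Copy p1[lo:hi] into the child, then fill the remaining positions with
    the leftover genes in the order they appear in p2 (classic OX crossover)."""
    child = [None] * len(p1)
    child[lo:hi] = p1[lo:hi]
    fill = [g for g in p2 if g not in p1[lo:hi]]
    for i in list(range(0, lo)) + list(range(hi, len(p1))):
        child[i] = fill.pop(0)
    return child

def path_reversal_mutation(chrom, lo, hi):
    """Reverse the gene segment [lo:hi] (inversion mutation)."""
    return chrom[:lo] + chrom[lo:hi][::-1] + chrom[hi:]

p1 = [7, 9, 4, 10, 2, 5, 8, 1, 3, 6]
p2 = [10, 1, 3, 5, 6, 8, 2, 4, 9, 7]
child = order_preserving_crossover(p1, p2, 3, 6)
print(child)  # keeps p1's middle segment [10, 2, 5]; still a valid permutation
print(path_reversal_mutation(child, 2, 7))
```

Both operators preserve the permutation property, so every offspring remains a valid task sequence without repeated or missing tasks.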
For path planning, the innovative incorporation of shuttle loading status information enabled the construction of a path map switching mechanism, allowing path constraints to adapt to the shuttles’ different operating states. In addition, this paper proposes the A*-guided DQN (A*-DQN) learning method. Preliminary path planning solutions generated by the A* algorithm are injected into the DQN experience replay pool during the initial training phase, enhancing the algorithm’s convergence speed and optimization rate under complex layouts and dynamic environments. Unlike the traditional A* algorithm, which plans the entire path in advance, the A*-DQN algorithm executes in a guided, step-by-step mode, dynamically updating the path in real time based on traffic conditions. Compared to traditional methods, the A*-guided DQN approach reduces average total task time by 12.84% and average path planning length by approximately 9.05%. The four-way shuttle can now complete order tasks with near-zero conflicts. This method provides strong support for the efficient operation of four-way shuttles in complex warehouse environments.
In summary, the joint optimization algorithm proposed in this paper—integrating heuristic search, evolutionary algorithms, and reinforcement learning—not only effectively solves the path planning and task scheduling problems in four-way shuttle warehouse systems but also achieves remarkable results in energy saving, consumption reduction, and intelligent operations. This provides a solid theoretical foundation and engineering application value for the transformation and upgrading of warehouse systems, demonstrating broad application prospects and promotion value.

5. Practical Application in Industry

The practical industrial applications of the IGA-MADDPG algorithm and the A*-DQN algorithm are as follows:
The IGA-MADDPG algorithm, featuring a hierarchical structure (global IGA pre-planning + local MADDPG adjustment), has been validated for industrial application through international studies. In terms of real-time scalability, it was applied to AGV scheduling in container yards, enabling real-time decision-making and improving operational efficiency and energy utilization across different yard scales [35]. Its experience injection mechanism boosts training efficiency by 60%, addressing the convergence bottleneck of multi-agent systems under industrial-scale concurrency. For multi-objective optimization, a warehouse study showed that it reduces task completion time and energy consumption by coupling target assignment and path planning [36]. In terms of interpretability, a hierarchical reinforcement learning strategy (analogous to IGA-MADDPG’s structured design) enhanced decision traceability, with maintenance teams reporting a 65% improvement in understanding scheduling logic, which helps meet industrial safety standards [37].
The A*-DQN algorithm (integrating A* heuristic search and Deep Q-Networks) exhibits strong adaptability in industrial dynamic path planning. In dynamic environment adaptation, it was used for maritime autonomous ship navigation: A* generated static optimal paths, while DQN made real-time adjustments for obstacles, achieving collision-free navigation with a path deviation rate below 5%. In e-commerce warehouses, its load-aware map switching mechanism cut shuttle no-load rates from 42% to 6.8%. For energy optimization, a mining robot path planning study showed that DQN-derived frameworks (similar to A*-DQN) reduced path following time by 17% and corresponding energy consumption. Its lightweight design (≤480 KB parameters) enables operation on embedded devices with single-step inference time under 50 ms, fitting resource-constrained industrial scenarios like smart factory AGV navigation.
In summary, the design proposed in this paper demonstrates good practical applicability in industrial settings.

6. Limitations and Extensions

Although this study has achieved good results in path planning and scheduling optimization, limitations still exist. For instance, in large-scale order environments, IGA-MADDPG exhibits high computational complexity, necessitating further optimization of model structure and computational efficiency. In hardware deployment, the A*-DQN model is constrained by sensor accuracy and communication latency and requires deeper integration with actual devices.
Furthermore, the selection of weighting parameters in multi-objective optimization relies on manual experience and lacks adaptive adjustment mechanisms. Future research will proceed along three directions:
  • Explore lightweight path planning models, such as incorporating graph neural networks (GNNs) and incremental search algorithms, to enhance path computation efficiency and strategy generalization capabilities.
  • Advance integration with physical hardware systems by establishing a closed-loop algorithm control system through interfacing the second-generation Robot Operating System (ROS2) with simulation platforms.
  • Develop adaptive multi-objective optimization strategies, incorporate meta-learning mechanisms for dynamic weight adjustment, and explore green warehouse scheduling models. Introduce synergistic constraints on carbon emissions and energy consumption to advance intelligent warehouse systems toward low-carbon and high-efficiency evolution.

Author Contributions

Completing the initial draft, X.J., Y.X., K.L. and Q.Z.; Reviewing and editing, X.J., Y.X., K.L. and Q.Z.; Collecting samples, X.J. and Q.Z.; Software and data processing, X.J., Y.X., K.L. and Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IGA: Improved Genetic Algorithm
GA: Genetic Algorithm
MADDPG: Multi-Agent Deep Deterministic Policy Gradient
DQN: Deep Q-Network

Appendix A

Appendix A.1

Table A1. Kinematic Performance and Energy Consumption Parameters of Equipment.
Parameter Item | Name | Data | Unit
Storage Unit | Length | 1.9 | m
Storage Unit | Width | 1.6 | m
Storage Unit | Height | 2 | m
Aisle | Width | 1.7 | m
Four-way Shuttle | Maximum speed | 2 | m/s
Four-way Shuttle | Acceleration | 0.5 | m/s²
Four-way Shuttle | Unloaded energy | 500 | W/h
Four-way Shuttle | Loaded energy | 550 | W/h
Elevator | Maximum speed | 1 | m/s
Elevator | Acceleration | 0.3 | m/s²
Elevator | Unloaded lift energy | 1000 | W/h
Elevator | Unloaded descent energy | 100 | W/h
Elevator | Loaded lift energy | 1100 | W/h
Elevator | Loaded descent energy | 110 | W/h

Appendix A.2

Table A2. Simulation Platform Software and Hardware Configuration.
Category | Tool/Library | Version | Function
Programming Language | Python | 3.9 | Core algorithm development and system integration
Numerical Computing Library | NumPy | 1.23.5 | Matrix operations and task data preprocessing
Visualization Engine | Pygame | 2.1.3 | Dynamic rendering of warehouse environments and interactive interface development
Machine Learning Framework | PyTorch | 1.13.1 | Construction and training of the deep reinforcement learning networks
Graphical Rendering | Matplotlib | 3.6.2 | Energy consumption curve and task sequence diagram plotting
Hardware | Lenovo notebook | V14 | Program operation and thesis writing

Appendix A.3

Table A3. Hyperparameters for MADDPG.
Name | Value
Number of Agents | 7 (5 shuttles and 2 elevators)
Observation Dimension of Each Agent | 4
Action Dimension of Each Agent | 1
Actor Network Learning Rate | 0.001
Critic Network Learning Rate | 0.001
Discount Factor | 0.95
Target Network Soft Update Parameter | 0.01
Experience Replay Pool Capacity | 50,000
Batch Size | 32
Number of Training Rounds | 200
Maximum Steps per Episode | 30
Noise Level during Action Selection | 0.1

Appendix A.4

Table A4. Hyperparameters for DQN.
Parameter Name | Meaning | Value
memory_size | Maximum capacity of the experience replay pool, storing interaction experiences to ensure sample diversity. | 50,000
batch_size | Number of samples drawn from the replay pool per training step, balancing training efficiency and stability. | 32
GAMMA | Discount factor, weighing the importance of future rewards relative to immediate rewards. | 0.95
TARGET_REPLACE_ITER | Update frequency of the target network (policy network parameters are copied to the target network every N steps), stabilizing training. | 100
epsilon_start | Initial exploration rate of the epsilon-greedy strategy, controlling exploration intensity early in training. | 0.8
epsilon_end | Final exploration rate of the epsilon-greedy strategy, transitioning to exploitation late in training. | 1
learning_rate | Learning rate of the optimizer, controlling the step size of network parameter updates. | 0.01
start_training_info_number | Minimum number of stored experiences required before training starts, ensuring sufficient samples for effective learning. | 100
a (in Memory class) | Priority weight in prioritized experience replay, controlling how experience priority affects sampling. | 0.6
loss_function | Loss function computing the error between predicted and target Q-values. | SmoothL1Loss

Appendix A.5

Table A5. Network Structure Overview.
Network Module | Core Function | Key Parameters
Input Layer | Receives AGV state/map features | Number of channels = 1, size = map dimensions
Convolutional Feature Extraction Layers | Extract spatial map features (obstacles, paths) | Output channels 64 → 128 → 256, pooling kernel = 2 × 2
Fully Connected Fusion Layer | Fuses spatial features into an abstract vector | Input = 2304 → output = 256
Output Layer | Outputs Q-values for the 5 actions (action decision) | Input = 256 → output = 5
Dual Networks | Stabilize training and avoid Q-value overestimation | Target network updated every 100 steps
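The 2304-dimensional input to the fully connected layer is consistent with a 24 × 24 input map passing through three 2 × 2 pooling stages; the 24 × 24 input size is an inference from that figure, not stated explicitly in the table. The shape walk-through:

```python
def shape_flow(side, channels=(64, 128, 256), pool=2):
    """Trace the spatial size through three conv+pool stages and return the
    flattened feature-vector length fed to the fully connected layer."""
    for _ in channels:
        side //= pool          # each 2 x 2 pooling halves the spatial size
    return channels[-1] * side * side

print(shape_flow(24))  # 24 -> 12 -> 6 -> 3, then 256 * 3 * 3 = 2304
```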

References

  1. Hu, L.; Zhao, X.; Liu, W.; Cai, W.; Xu, K.; Zhang, Z. Energy benchmark for energy-efficient path planning of the automated guided vehicle. Sci. Total Environ. 2022, 857 Pt 3, 159613. [Google Scholar] [CrossRef]
  2. Zhang, M.; Xiang, Q.; Lv, Z.; Yin, Y.; Yu, Z. Optimization of dense storage location allocation for low energy consumption. J. Donghua Univ. Nat. Sci. Ed. 2023, 49, 88–96+135. [Google Scholar] [CrossRef]
  3. Yuan, M.; Lu, S.; Zheng, L.; Yu, Q.; Pei, F.; Gu, W. Distributed heterogeneous flexible job-shop scheduling problem considering automated guided vehicle transportation via improved deep Q network. Swarm Evol. Comput. 2025, 94, 101902. [Google Scholar] [CrossRef]
  4. Song, J. Research on the Optimization of the Operational Process of a Four-Way Shuttle-Type High-Density Warehouse System. Master’s Thesis, Beijing University of Posts and Telecommunications, Beijing, China, 2021. [Google Scholar] [CrossRef]
  5. Sun, Z.; Qi, Z. Automated warehouse AGV collision avoidance path planning considering vehicle and task matching relevance. Comput. Appl. Res. 2025, 42, 1409–1417. [Google Scholar] [CrossRef]
  6. Jignasu, A.; Rurup, D.J.; Secor, B.E.; Krishnamurthy, A. NURBS-based path planning for aerosol jet printing of conformal electronics. J. Manuf. Process. 2024, 118, 187–194. [Google Scholar] [CrossRef]
  7. Guo, W.; Huang, X.; Qi, B.; Ren, X.; Chen, H.; Chen, X. Vision-guided path planning and joint configuration optimization for robot grinding of spatial surface weld beads via point cloud. Adv. Eng. Inform. 2024, 61, 102465. [Google Scholar] [CrossRef]
  8. Cui, X.; Wang, C.; Xiong, Y.; Mei, L.; Wu, S. More Quickly-RRT*: Improved Quick Rapidly exploring Random Tree Star algorithm based on optimized sampling point with better initial solution and convergence rate. Eng. Appl. Artif. Intell. 2024, 133 Pt C, 108246. [Google Scholar] [CrossRef]
  9. Jiang, Q.; Xu, J. Improving the PSO-PH-RRT* algorithm for intelligent vehicle path planning. J. Northeast. Univ. Nat. Sci. Ed. 2025, 46, 12–19. [Google Scholar] [CrossRef]
  10. Gou, Y.; Ma, K.; Chen, J.; Liu, Z. Improvement of the Northern Goshawk Algorithm and Its Application in Intelligent Vehicle Path Planning. Control Eng. 2025, 1–8. Available online: https://link.oversea.cnki.net/doi/10.14107/j.cnki.kzgc.20240878 (accessed on 9 October 2025).
  11. Fan, X.; Sang, H.; Tian, M.; Yu, Y.; Chen, S. Integrated scheduling problem of multi-load AGVs and parallel machines considering the recovery process. Swarm Evol. Comput. 2025, 94, 101861. [Google Scholar] [CrossRef]
  12. Sanogo, K.; Benhafssa, M.A.; Sahnoun, M. A multi-agent system simulation of job shop scheduling with human consideration: A comparative analysis of AGVs and AIVs. Simul. Model. Pract. Theory 2025, 139, 103060. [Google Scholar] [CrossRef]
  13. Tang, Q.; Wang, H. Data-driven automated job shop scheduling optimization considering AGV obstacle avoidance. Sci. Rep. 2025, 15, 5. [Google Scholar] [CrossRef]
  14. Cai, Z.; Du, J.; Huang, T.; Lu, Z.; Liu, Z.; Gong, G. Energy-Efficient Collision-Free Machine/AGV Scheduling Using Vehicle Edge Intelligence. Sensors 2024, 24, 8044. [Google Scholar] [CrossRef] [PubMed]
  15. Li, J.; Zou, M.; Lv, Y.; Sun, D. AGV Scheduling for Optimizing Irregular Air Cargo Containers Handling at Airport Transshipment Centers. Mathematics 2024, 12, 3045. [Google Scholar] [CrossRef]
  16. Caridá, F.V.; dos Reis, W.P.N.; Morandin, O., Jr. Multi-attribute and predictive cascaded fuzzy system for the AGV dispatching in a flexible manufacturing system. Int. J. Adv. Manuf. Technol. 2024, 134, 3181–3199. [Google Scholar] [CrossRef]
  17. Liang, T.; Chao, Y.; Kai, W.; Wu, W.; Guo, Y. Quantum computing for several AGV scheduling models. Sci. Rep. 2024, 14, 12205. [Google Scholar] [CrossRef]
  18. Liu, J.Z.; Sang, Y.H.; Zheng, Z.C.; Chi, H.; Gao, K.-Z.; Han, Y.-Y. An effective multi-restart iterated greedy algorithm for multi-AGVs dispatching problem in the matrix manufacturing workshop. Expert Syst. Appl. 2024, 252 Pt B, 124223. [Google Scholar] [CrossRef]
  19. Xu, L.; Zhan, Y.; Lu, J.; Lang, Y. Four-way Shuttle Warehouse System Composite Operation Scheduling Optimization. J. Zhejiang Univ. Eng. Sci. 2023, 57, 2188–2199. [Google Scholar] [CrossRef]
  20. Yin, Y. Research on Multi-Directional Vehicle Path Planning and Real-Time Traffic Dispatch for Dense Depots. Master’s Thesis, Donghua University, Shanghai, China, 2024. [Google Scholar] [CrossRef]
  21. Zhou, S.; Liao, Q.; Xiong, C.; Chen, J.; Li, S. A novel metaheuristic approach for AGVs resilient scheduling problem with battery constraints in automated container terminal. J. Sea Res. 2024, 202, 102536. [Google Scholar] [CrossRef]
  22. Jianxiu, Y.; Xingrui, G.; Bigang, J.; Zhang, S. Adaptive path planning for driverless vehicles considering vehicle rollover stability. Comput. Eng. Appl. 2025, 1–12. Available online: https://link.oversea.cnki.net/urlid/11.2127.TP.20250403.1345.010 (accessed on 9 October 2025).
  23. Yang, X.; Hu, H.; Cheng, C. Collaborative scheduling of handling equipment in automated container terminals with limited AGV-mates considering energy consumption. Adv. Eng. Inform. 2025, 65 Pt A, 103133. [Google Scholar] [CrossRef]
  24. Ma, M.; Yu, F.; Xie, T.; Yang, Y. A hybrid speed optimization strategy based coordinated scheduling between AGVs and yard cranes in U-shaped container terminal. Comput. Ind. Eng. 2024, 198, 110712. [Google Scholar] [CrossRef]
  25. Yue, L.; Fan, H.; Ma, M. Optimizing configuration and scheduling of double 40 ft dual-trolley quay cranes and AGVs for improving container terminal services. J. Clean. Prod. 2021, 292, 126019. [Google Scholar] [CrossRef]
  26. Li, X. Modular Design and Research of Intelligent Four-Way Shuttle Vehicles. Master’s Thesis, Nanjing Agricultural University, Nanjing, China, 2021. [Google Scholar]
  27. Luo, X.; Gao, J.; Long, Z.; Shu, H.; Lu, Z.; Shen, Y.; Peng, G. An intelligent path planning method for patrol robots. J. Hunan Univ. Sci. Technol. Nat. Sci. Ed. 2018, 33, 75–82. [Google Scholar] [CrossRef]
  28. Wang, X.; Yue, Y.; Zhang, F.; Wang, Y.; Zhang, Z. Active Detection of Interphase Faults in Distribution Networks Based on Energy Relative Entropy and Manhattan Distance. Electr. Power Syst. Res. 2025, 241, 111397. [Google Scholar] [CrossRef]
  29. Savvidis, G.; Ramasamy, S.; Bengtsson, K.; Zhang, X. A Smart Tool for Optimal Energy use of AGVs in the Manufacturing Industry. In Proceedings of the 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation, Padova, Italy, 10–13 September 2024. [Google Scholar] [CrossRef]
  30. Liu, S.; Li, X.; Xiang, S.; Wu, M. Mobile Robot Routing with Energy Consumption Optimization. In Proceedings of the 5th International Conference on Robotics and Artificial Intelligence, Singapore, 22–24 November 2019; pp. 30–35. [Google Scholar] [CrossRef]
  31. Yu, J.; Bai, H. Path Planning for Four-Directional Shuttle Vehicles Based on an Improved A* Algorithm. Mech. Electron. 2020, 40, 54–60. [Google Scholar] [CrossRef]
  32. Chen, X.; Qian, Z.; Xu, S.; Chen, R.; Pan, K. Research on a Cable Intelligent Dispatching System Based on Genetic Algorithms and Priority Queues. Comput. Knowl. Technol. 2025, 21, 113–115. [Google Scholar] [CrossRef]
  33. Heik, D.; Bahrpeyma, F.; Reichelt, D. Study on the application of single-agent and multi-agent reinforcement learning to dynamic scheduling in manufacturing environments with growing complexity: Case study on the synthesis of an industrial IoT Test Bed. J. Manuf. Syst. 2024, 77, 525–557. [Google Scholar] [CrossRef]
  34. Luo, M.; Gao, C.; Wang, Z. Optimization method for vehicle path planning algorithm based on constrained spectrum clustering. Comput. Appl. 2025, 45, 1387–1394. [Google Scholar] [CrossRef]
  35. Gong, L.; Huang, Z.; Xiang, X.; Liu, X. Real-time AGV scheduling optimization method with deep reinforcement learning for energy-efficiency in the container terminal yard. Int. J. Prod. Res. 2024, 62, 7722–7742. [Google Scholar] [CrossRef]
  36. Yang, Z.; Wen, P. Data-driven Reinforcement Learning-based Optimization of Shared Warehouse Storage Locations. Comput. Ind. Eng. 2025, 206, 111195. [Google Scholar] [CrossRef]
  37. Liu, Q.; Gao, J.; Zhu, D.; Pang, X.; Chen, P.; Guo, J.; Li, Y. Multi-Agent Target Assignment and Path Finding for Intelligent Warehouse: A Cooperative Multi-Agent Deep Reinforcement Learning Perspective. arXiv 2024, arXiv:2408.13750v1. Available online: https://arxiv.org/html/2408.13750v1 (accessed on 9 October 2025).
Figure 1. Schematic Diagram of a Four-Way Shuttle Warehouse System.
Figure 2. Grid Map Method Diagram.
Figure 3. Chromosome Example.
Figure 4. Sequence-Retaining Crossover Illustration.
Figure 5. Schematic diagram of path reversal mutation.
Figure 6. Flowchart of the IGA.
Figure 7. Schematic Diagram of MADDPG Principle.
Figure 8. Schematic Diagram of Different Path Conflicts.
Figure 9. Flowchart of the A* Algorithm.
Figure 10. Schematic Diagram of Path Selection under Different Tasks.
Figure 11. Structure of the A*-DQN Algorithm.
Figure 12. Training Process of the A*-DQN Algorithm.
Figure 13. Test Results of System Layout Adjustment Function.
Figure 14. Comparison of IGA and GA.
Figure 15. Comparison of Fitness Values under Different Order Sizes.
Figure 16. Comparison of Reward Values between IGA-MADDPG and MADDPG.
Figure 17. Comparison of Optimization Rates for Energy Consumption and Operation Time.
Figure 18. Results of Map Transformation Experiments.
Figure 19. Three Representative Warehouse Layout Models.
Figure 20. Comparison of Rewards between A*-DQN and Standard DQN.
Figure 21. Path Comparison under Different Layouts.
Table 1. System Scheduling Process.
Step | Objective | Description
1 | Task Reception and Preprocessing | The system receives a new batch of orders, extracts their location, type, and floor information, and generates a task pool.
2 | Global Task Allocation | The genetic algorithm generates the initial scheduling scheme based on the initial state and task characteristics.
3 | Expert Experience Injection | The scheduling scheme is executed in a simulation environment, and the resulting data is collected and injected into the MADDPG experience pool.
4 | Local Policy Optimization | MADDPG trains the policy network based on the injected experience and real-time state, optimizing local scheduling behaviors.
5 | Task Execution and Feedback | All equipment executes the scheduling strategies as instructed, and the system collects information on energy consumption, efficiency, and conflicts.
6 | Policy Iteration and Termination | The final scheduling strategy is output based on the number of training iterations or task completion status.
Table 2. Overall Environmental Configuration of the System.
Parameter | Value and Description
Warehouse Scale | 1–4 floors, each floor containing 10–80 storage areas
Storage Unit | Each storage area contains 5 rows and 2 columns, totaling 10 units
Number of Shuttles | 1–3 units
Number of Elevators | 1–2 units
Number of Orders | 10–100 orders
Initial Position | Located at the elevator position on the first floor
Map Structure | Gridded aisle layout, including aisles, storage areas, and elevators
Table 3. Theoretical vs. Simulated Completion Time.
Speed (m/s) | Theoretical Completion Time (s) | Simulated Completion Time (s) | Error (%)
1 | 140.00 | 146.00 | 4.29
2 | 70.00 | 73.00 | 4.28
3 | 46.70 | 49.00 | 4.93
Table 4. Comparison of Optimization Results between IGA and GA.
Algorithm | Shuttle No. | Task Sequence | Energy Consumption (Wh) | Time (s)
IGA | 1 | 7, −9, 4, −10, 9, −4, 2, −7, 5, −8 | 177.81 | 382.87
IGA | 2 | 10, −1, 3, −5, 1, −6, 6, −3, 8, −2 | |
GA | 1 | 10, −8, −9, −10, −2, −3, 6, 7, −6, −7 | 212.34 | 429.55
GA | 2 | −4, −5, 8, 9, 1, 2, 3, −1 | |
Table 5. Experimental Results of Energy Efficiency Optimization Metrics under Different Scales.
Order Scale | Algorithm | EPO (Wh) | IER | LBD
20 | GA | 8.54 | 42.90% | 0.19
20 | IGA | 7.47 | 36.50% | 0.07
40 | GA | 8.62 | 47.50% | 0.22
40 | IGA | 7.73 | 33.10% | 0.04
60 | GA | 8.71 | 49.70% | 0.25
60 | IGA | 7.84 | 34.00% | 0.04
80 | GA | 8.82 | 53.30% | 0.23
80 | IGA | 7.85 | 34.70% | 0.02
100 | GA | 8.90 | 57.30% | 0.22
100 | IGA | 7.97 | 34.80% | 0.01
Table 6. Performance Comparison of Algorithms.
Experiment | Metric | IGA-MADDPG | IGA | Optimization Rate
Experiment 1 | Energy (Wh) | 53.73 | 65.05 | 17.40%
Experiment 1 | Time (s) | 167.00 | 202.00 | 17.33%
Experiment 2 | Energy (Wh) | 41.79 | 47.21 | 11.48%
Experiment 2 | Time (s) | 118.00 | 120.00 | 1.67%
Experiment 3 | Energy (Wh) | 145.39 | 177.25 | 17.97%
Experiment 3 | Time (s) | 325.00 | 406.00 | 19.95%
Experiment 4 | Energy (Wh) | 169.97 | 252.13 | 32.59%
Experiment 4 | Time (s) | 281.00 | 319.00 | 11.91%
Experiment 5 | Energy (Wh) | 479.25 | 499.14 | 3.98%
Experiment 5 | Time (s) | 571.00 | 659.00 | 13.35%
Table 7. Path Data for Two Approaches under Different Warehouse Scales.
Warehouse Scale | Without Considering Shuttle State | Considering Shuttle State | Path Reduction Rate
10 × 10 | 32.40 | 30.30 | 6.48%
20 × 20 | 57.90 | 53.30 | 7.94%
30 × 30 | 91.20 | 84.50 | 7.35%
40 × 40 | 121.60 | 109.90 | 9.62%
50 × 50 | 154.70 | 138.50 | 10.47%
75 × 75 | 238.50 | 210.70 | 11.66%
100 × 100 | 310.10 | 275.40 | 11.18%
Table 8. Basic Parameters of Each Layout.
Layout Type | Number of Storage Units | Layout Category | Exploration Rate
Sparse Rectangular | 36 | Rectangular | 0.80
Dense Rectangular | 100 | Rectangular | 0.80
Fishbone | 86 | Fishbone | 0.80
Table 9. Collision Test Results for A* and A*-DQN.
Experiment | A* | A*-DQN
1 | 12 | 0
2 | 9 | 0
3 | 17 | 0
4 | 16 | 0
5 | 13 | 0
Table 10. Efficiency Comparison in Multi-Task Scenarios.
Layout Type | Algorithm | Time of Experiment 1 (s) | Time of Experiment 2 (s) | Time of Experiment 3 (s) | Average Time (s) | Time Reduction Rate
Sparse Rectangular | A*-DQN | 1.86 | 2.01 | 2.04 | 1.97 | 43.87%
Sparse Rectangular | A* | 3.52 | 3.47 | 3.53 | 3.51 |
Dense Rectangular | A*-DQN | 4.02 | 4.13 | 4.17 | 4.11 | 63.21%
Dense Rectangular | A* | 10.98 | 11.08 | 11.45 | 11.17 |
Fishbone | A*-DQN | 5.25 | 5.36 | 5.84 | 5.48 | 66.48%
Fishbone | A* | 16.83 | 15.98 | 16.24 | 16.35 |

Share and Cite

MDPI and ACS Style

Xiang, Y.; Jin, X.; Lei, K.; Zhang, Q. Research on Energy-Saving and Efficiency-Improving Optimization of a Four-Way Shuttle-Based Dense Three-Dimensional Warehouse System Based on Two-Stage Deep Reinforcement Learning. Appl. Sci. 2025, 15, 11367. https://doi.org/10.3390/app152111367
