Article

Research on Distribution Optimization Strategy of Front Warehouse Model Based on Deep Reinforcement Learning

1 School of Economics and Finance, Xi’an Jiaotong University, Xi’an 710061, China
2 School of Internet Economics and Business, Fujian University of Technology, Fuzhou 350118, China
* Author to whom correspondence should be addressed.
Systems 2026, 14(3), 261; https://doi.org/10.3390/systems14030261
Submission received: 31 December 2025 / Revised: 14 February 2026 / Accepted: 26 February 2026 / Published: 28 February 2026
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

Abstract

The multi-depot vehicle routing problem with soft time windows (MDVRPSTW) has long been a focus in both academic and industrial circles. This paper proposes a deep reinforcement learning framework designed to enhance the efficiency and quality of MDVRPSTW solutions, addressing the limitations of traditional heuristic algorithms in large-scale complex scenarios. The framework first transforms the mathematical model into a sequential decision-making problem through a Markov decision process, then extracts path selection strategies using an encoder–decoder architecture based on attention mechanisms and graph neural networks, and employs unsupervised reinforcement learning for model training. Test results on the Solomon benchmark dataset demonstrate that for small-scale problems (N = 20), our method reduces solving time by over 96% compared to comparative algorithms, with the objective value difference from the generalized variable neighborhood search (GVNS) being less than 9%. For medium-to-large scale problems (N = 50/100), our method achieves a 27.7–96.3% improvement over GVNS, maintaining stable solution times within 3–10 s. Compared to exact algorithms and meta-heuristic methods, our approach reduces computational costs by 2–3 orders of magnitude while demonstrating strong adaptability to variations in the number of depots and vehicles. In summary, this method significantly outperforms baseline models in both solution quality and computational efficiency, providing an efficient end-to-end solution for MDVRPSTW in complex scenarios.

1. Introduction

Fresh food e-commerce is both a key component of modern retail logistics and a vital element of sustainable development. As high-frequency essential consumer goods, fresh products are characterized by short shelf life, susceptibility to spoilage, and heavy reliance on cold chains, which impose stringent logistics requirements. Xing Liu (2024) [1] proposed a multi-objective last-mile vehicle routing model for fresh food e-commerce that treats three sustainability dimensions as independent objectives, optimizing last-mile routes simultaneously across these dimensions. Yan Huang (2025) [2] developed a hybrid forecasting model integrating Grey Relational Analysis (GRA), the Wild Horse Optimization algorithm (WHO), and a Time Series Convolutional Network (TCN). This model addresses challenges in cold chain logistics demand forecasting during digital transformation, such as insufficient feature extraction and highly nonlinear data, thereby enhancing prediction accuracy.
The just-in-time delivery model based on front-end warehouse operations is gradually becoming the mainstream approach in fresh food e-commerce logistics and distribution. Scholars and industry professionals are increasingly focusing on optimizing existing logistics distribution models. Research on VRP (Vehicle Routing Problem) directly impacts the optimization results of front-end warehouse distribution patterns. In recent years, significant progress has been made in path planning studies across various fields. For instance, in the drone field, Xiaoduo Li proposed a hybrid heuristic algorithm (ALSA-RFC) with a robust feasibility test. This algorithm combines the advantages of adaptive large-scale neighborhood search and simulated annealing, enabling efficient handling of large-scale path planning problems [3]. Eryang Guo developed a two-stage evolutionary algorithm (TSC-PSODE) based on hybrid penalty strategies. This method not only provides an effective constraint processing mechanism but also achieves a good balance between exploration and exploitation by maintaining population diversity and accelerating convergence, effectively solving optimization problems with complex and dynamic constraints [4]. Phan Duc Hung proposed an adaptive ant colony optimization algorithm for unmanned vehicle path planning with time windows. This method is able to generate high-quality solutions in complex environments with random requests and tight time constraints [5].
Previous studies have achieved remarkable outcomes in combinatorial optimization domains, including vehicle routing and logistics scheduling. However, academic exploration of the MDVRPSTW (Multi-depot Vehicle Routing Problem with Soft Time Windows) remains limited. For instance, most existing research on MDVRPSTW employs heuristic algorithms [6,7,8,9], while studies that use deep reinforcement learning to solve the VRP often exclude time window constraints [10,11,12]. This paper proposes an innovative reinforcement learning approach to address these challenges in MDVRPSTW. First, we establish a learning decision mechanism through a Markov Decision Process (MDP) and detail decision-making elements such as vehicle path selection within the MDVRPSTW framework using a state–action–reward–policy architecture. The research team designed a multi-vehicle time-action mechanism based on an attention mechanism and a graph neural network encoder–decoder deep learning architecture, achieving network optimization through unsupervised reinforcement learning algorithms. Compared with traditional algorithms, this end-to-end solution framework can rapidly obtain high-quality solutions at various scales and effectively address the MDVRPSTW problem. Experimental results demonstrate that as problem complexity increases, the model exhibits enhanced performance in both solution quality and computational efficiency.

2. Literature Review

2.1. VRP Exact Algorithm and Heuristic Algorithm

Previous studies on the VRP have primarily employed three approaches: exact algorithms, approximate algorithms, and heuristic algorithms. Exact algorithms are designed to find optimal solutions; they perform well on small-scale VRP instances but struggle with large-scale ones. For instance, Najib Errami’s VRPSolverEasy can solve multiple VRP variants optimally and obtain suboptimal solutions within time constraints, though its computational time becomes prohibitively long beyond 100 customer nodes [13]. Meta-heuristic algorithms have gained widespread adoption due to their superior performance. Malek Masmoudi’s Generalized Variable Neighborhood Search (GVNS) algorithm, specifically designed for VRP problems with time windows, features dynamic penalty mechanisms, multi-neighborhood structures, and systematic parameter tuning, and demonstrates high efficiency, robustness, and advantages in both solution quality and computational time [14]. Brenner Humberto Ojeda Rios developed three methods for the stochastic capacitated multi-depot VRP with pickup and delivery (SCMVRPPD): a tabu search (TS) heuristic, generalized variable neighborhood search (GVNS), and iterated local search (ILS-VND). Their research revealed that ILS-VND consistently achieved the best solution quality [15]. Shifeng Chen proposed a hybrid approach for the VRP with dynamic requests: the method first constructs an initial solution using GRASP, then explores and refines it with VND, and demonstrates its competitive advantage on two benchmark cases with dynamic pickup and dynamic delivery [16].

2.2. VRP Deep Reinforcement Learning Algorithm

Researchers have employed deep reinforcement learning in conjunction with neural networks to address the vehicle routing problem, taking advantage of these techniques’ strong capacity to represent and learn from data. In the VRP, selecting decision variables and making sequential decisions in discrete decision spaces are key steps, and RL is well suited to the latter due to its inherent advantages. The mechanics of DRL have natural similarities with this sequential decision-making process; the “offline training” and “online decision-making” features of DRL enable real-time online solving of the VRP. Qingshu Guan proposed a dynamic embedding-based DRL (DE-DRL) for heterogeneous capacitated vehicle routing (HCVRP), which utilizes an innovative encoder–updater–decoder (EUD) framework. Empirical results demonstrate that DE-DRL consistently outperforms heuristic methods and other DRL approaches [17]. The VRP-STC, a complex extension of the classical VRP paradigm, incorporates stochastic elements into travel cost calculations: vehicles face not only capacity constraints but also varying travel costs between each node pair, characterized by random variables. Hao Cai developed a Graph Attention Network (GAT)-AM model combining GAT and attention-model (AM) mechanisms. The GAT-AM model adopts an encoder–decoder architecture and employs deep reinforcement learning to solve the VRP-STC. Empirical findings indicate that the model achieves better solution quality as problem complexity increases [18]. Yujun Wang investigated the heterogeneous vehicle routing problem with service time constraints (HVRP-STC), modeled it as a Markov decision process with service time constraints, and proposed a novel deep reinforcement learning-based model called TDRL (weighted deep reinforcement learning). Empirical results demonstrated that TDRL consistently outperformed the most advanced DRL methods at the time [19].
The types of VRP variants studied in the aforementioned literature are shown in Table 1.

3. Problem Definition

Building on prior research, this study focuses on the core of the front-warehouse distribution optimization problem: the multi-depot vehicle routing problem with soft time windows. In this section, we propose a specific mathematical model of MDVRPSTW and formulate it as a Markov decision process (MDP).

3.1. Problem Description

The optimization problem based on MDVRPSTW can be formulated as follows. We model the road network as a connected graph G = (V, E), where V represents the set of all nodes and E represents the set of all edges. Specifically, the distribution area contains M distribution centers, each equipped with Km delivery vehicles with a rated cargo capacity of Q units. The area is served by N customers, each with demand quantity qi and a known time window. Vehicles arriving before Ei or after Li fail to meet customer demand within the designated time window and thus incur penalty costs for premature or delayed arrival. Vehicles arriving within the time window [Ei, Li] of customer i incur no penalty costs. All penalty costs are uniformly converted into unit time costs and incorporated into the edge weights of E.
Given the location xi of each customer node, the time windows [Ei, Li], the demand quantities qi, the locations of the depots, the rated capacity Q of each vehicle, and the total number of vehicles K, optimal routes should be designed to minimize the total delivery time cost.
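The soft-time-window penalty structure described above can be sketched as a small Python function (an illustrative sketch, not the paper's code; the function name and argument order are our own):

```python
def time_window_penalty(arrival, e_i, l_i, alpha, beta):
    """Penalty cost for arriving outside the soft time window [e_i, l_i].

    alpha is the early-arrival penalty factor, beta the late-arrival factor;
    arrivals inside the window incur no penalty.
    """
    if arrival < e_i:
        return alpha * (e_i - arrival)   # premature arrival
    if arrival > l_i:
        return beta * (arrival - l_i)    # delayed arrival
    return 0.0
```

In the model, these penalties are converted into unit time costs and folded into the edge weights of E.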
The definition of mathematical symbols is shown in Table 2.

3.2. Constraints

Based on the operational rules of front-end warehouse distribution in fresh food e-commerce, we established 11 rigorous mathematical constraints, with their expressions and physical meanings described and shown in Table 3.

3.3. Overview of MDP

By modeling the route solution of the MDVRPSTW as an MDP, this paper transforms the problem into a sequential decision-making problem. The specific process is shown in Figure 1. The tuple M = {S, A, τ, r} serves as the primary definition of the MDP. This section outlines the state space, action space, state transition, and reward function, respectively.
The state space consists of two components, the global static state Sg and the vehicle dynamic states Sd, and these two components together form the complete decision state.
Global Static State Sg: The inherent invariant parameters of the problem, including geographic coordinates (xi, yi) of all nodes, the demand qi of each customer node, customer service time windows [Ei, Li], vehicle rated load Q, number of depots M, vehicle allocation per depot Km, total vehicle count K, the travel cost Dij between each pair of nodes, early arrival penalty factor α, and late arrival penalty factor β.
Dynamic State Sd: The parameters updated in real time during the decision-making process, which represent the decision states of individual vehicles. Each vehicle’s dynamic state is denoted as Sdk = {(x0, d0), (x1, d1), …, (xt, dt)}, where xt denotes the customer node or depot node that vehicle k is ready to visit at time t, and dt represents the remaining load after vehicle k visits xt. The dynamic state must also include a global node mask at time t. Note that if all customer nodes in the global node mask are prohibited and the current action ends, the system will first force the next action to return all vehicles to the depots before terminating the decision-making process.
The overall state space S = Sg ∪ {Sd1, Sd2, …, SdK} represents the set of global static parameters and all vehicle dynamic states, where every state is valid.
Action Space: The core decision-making objective of the action space is to assign the next node to visit for each vehicle currently in a valid decision state, enabling synchronized path decisions among multiple vehicles. The action form of a single decision is defined as at = {xt1, xt2, …, xtK}, where K denotes the total number of vehicles, and xtk represents the next node to visit assigned to the k-th vehicle at step t (including customer nodes and depot nodes; if no movement occurs, the vehicle remains at the node from time t − 1). To ensure the legitimacy of action space A, a vehicle capacity mask is computed based on dt before node selection (nodes from time t − 1 are always available). The global mask is then replicated, AND-ed with the vehicle capacity mask, and the current node is set as accessible before calculating action probabilities. Once the i-th vehicle selects its node, the global node mask is immediately updated.
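The mask composition described above (the replicated global node mask AND-ed with the per-vehicle capacity mask, with the current node always kept available) can be sketched in NumPy; the function and variable names are our own, illustrative choices:

```python
import numpy as np

def feasible_action_mask(global_mask, demands, remaining_load, current_node):
    """Combine the shared global node mask with a per-vehicle capacity mask.

    global_mask[i] is True while node i is still unassigned; demands[i] is the
    demand of node i (0 for depot nodes).
    """
    capacity_mask = demands <= remaining_load  # nodes this vehicle can still serve
    mask = global_mask & capacity_mask         # AND with the replicated global mask
    mask[current_node] = True                  # the node from time t-1 stays available
    return mask

demands = np.array([0, 3, 8, 2])                   # node 0 is a depot
global_mask = np.array([True, True, True, False])  # node 3 already assigned
mask = feasible_action_mask(global_mask, demands, remaining_load=5, current_node=3)
```

Here node 2 is masked out because its demand (8) exceeds the remaining load (5), while the vehicle's current node remains selectable.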
State transition: The state transition function τ: S × A → S is defined as the process whereby, in the legal state st at time t, executing the synchronous decision action at updates the decision state from st to the legal state st+1 at time t + 1. The core logic of state transition involves independent updates of multiple vehicle states, unified global constraint verification, real-time mask synchronization, and precise sequential progression. All update operations strictly adhere to the constraints of MDVRPSTW (vehicle load capacity, unique node access, number of vehicles departing from the depot, etc.). The specific update rules are as follows: 1. The global static state Sg remains unchanged; it contains inherent parameters such as node coordinates, demand quantities, time windows, and penalty factors, which remain constant throughout the decision-making process and serve solely as constraints on state transitions. 2. Vehicle dynamic state Sd: For each vehicle k (k = 1, 2, …, K), updated synchronously yet independently, its dynamic state Sdk is updated based on the next node xtk assigned in action at (where t is the t-th step assigned to the k-th vehicle). The core update rule is as follows: when updating the access-ready node, append the node xt+1 to be visited at time t + 1, i.e., extend Sdk from {(x0, d0), …, (xt, dt)} to {(x0, d0), …, (xt, dt), (xt+1, dt+1)}. The remaining load is updated as dt+1 = dt − qt+1 (where qt+1 is the customer demand at xt+1); if the vehicle does not move at step t + 1, dt+1 = dt; if xt+1 is a depot node, dt+1 = Q.
Global node mask updates are as follows: Strictly follow the rule “update immediately after the i-th vehicle selects its node”—during action execution, mark xtk (customer node) as “allocated” immediately after the completion and validation of the assignment of xtk to each vehicle, ensuring subsequent vehicle node selection uses the updated mask to prevent redundant allocation. Time updates are as follows: The global decision-making time uniformly progresses from t to t + 1, which guarantees complete synchronization of all vehicles’ decision timing to avoid constraint conflicts caused by temporal asynchrony.
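The remaining-load update rule above (dt+1 = dt − qt+1 for a customer, dt+1 = Q when returning to a depot) can be expressed directly; a minimal sketch with assumed names:

```python
def update_remaining_load(d_t, next_node, demands, Q, depot_nodes):
    """State-transition sketch for one vehicle's remaining load d_{t+1}."""
    if next_node in depot_nodes:
        return Q                        # visiting a depot restores full capacity
    return d_t - demands[next_node]     # serving a customer consumes its demand
```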
Reward function: The goal of the MDVRPSTW is to minimize the total time needed to complete the delivery task. This study defines the reward function as the negative value of the objective function:

$R = -\sum_{i=1}^{m} \sum_{t=1}^{T} r_t^i = -\sum_{i=1}^{m} \sum_{t=1}^{T} \left( d_t^i + \varepsilon_t^i \right)$

where $d_t^i$ is the driving cost incurred by vehicle i at step t and $\varepsilon_t^i$ is the corresponding penalty time. The objective is to minimize the sum of vehicle driving distance and penalty time.

4. Theory

This section presents a deep reinforcement learning model based on an attention mechanism and a Graph Neural Network (GNN) for the Markov Decision Process (MDP) model of MDVRPSTW defined in Section 3. The model adopts an end-to-end encoder–decoder architecture, employing unsupervised reinforcement learning to achieve a policy representation that precisely meets path-sequence decision-making requirements under multi-depot, multi-vehicle, and soft-time-window constraints. We supplement the interpretability analysis along three dimensions: the physical significance of feature selection, the logical correlation of feature fusion, and the generation mechanism of path decisions. Figure 2 and Figure 3 respectively demonstrate the decision execution process and the network architecture of the model for the MDVRPSTW problem. All stages of the model’s encoding, decoding, and training are centered around the core objective of minimizing the total delivery time cost in MDVRPSTW. Key constraints such as global node masks, vehicle capacity masks, and multi-vehicle dynamic state updates are integrated to ensure the legitimacy and interpretability of the output decisions.

4.1. Model Overview

The framework of this model follows a “feature selection–feature fusion–constraint filtering–probability decision–state update” sequence. Feature selection: The encoder selectively retains features strongly relevant to the MDVRPSTW optimization objective (geographical coordinates, demand volume, time windows, driving costs, vehicle load capacity) while eliminating irrelevant redundant features to ensure rational feature selection. Feature fusion: Through multi-head attention mechanisms, the system captures feature correlations between customer nodes, while Graph Neural Networks (GNNs) identify topological relationships between vehicle depots and customers. The integrated features directly reflect “delivery priority, route adjacency, and resource compatibility”. Constraint filtering: The decoder pre-filters invalid decisions using capacity masks and global node masks, enforcing practical delivery rules such as “prohibiting vehicle overloading and avoiding duplicate customer service”. Probability decision: The decoder outputs node selection probability distributions, where weights directly correspond to the model’s assigned delivery priority for each node; higher probabilities indicate higher service priority under current conditions. State update: Decision-based state transitions strictly adhere to Markov Decision Process (MDP) rules, implementing real-time updates of the remaining vehicle load capacity, served nodes, and delivery time after the vehicle completes service at each node.
The model defines the random selection strategy P as the probability distribution output by the policy network πθ over the action space A. Specifically, for the current state st, the policy network generates probability distributions for multi-vehicle selection at each node. The random strategy P samples valid actions from these distributions, ensuring the exploration of a more optimal path decision space during training. The generation of probability distributions represents the core manifestation of the model’s interpretability. The random selection strategy P is defined as follows:
$p(s_\tau \mid s_0) = \prod_{t=0}^{\tau-1} \pi_\theta(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)$

4.2. Encoder Architecture

This subsection provides a comprehensive introduction to the distinct structures of the node and graph encoders. The node encoder is employed to acquire the attributes of each node, while the graph encoder is used to capture the potential connections among nodes. To begin with, the feature vector Xi = [xi, di, ei, li] of every customer node is placed in the plane coordinate system, where xi is the location of the customer node or distribution center, di is the delivery demand of the customer node, and ei and li are the time window bounds. We first compute the customer node embeddings using a learnable linear projection with parameters W0 and b0:

$h_i^{(0)} = W_0\, \mathrm{Concat}(x_i, d_i, e_i, l_i) + b_0$
The initial node embeddings hi(0) are then fed into the node encoder and graph encoder.
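The initial linear projection can be illustrated in a few lines of NumPy (random weights stand in for the learned W0 and b0; the feature layout, splitting the location into x and y coordinates, is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 128                                 # hidden dimension used in the experiments
feat = np.array([0.4, 0.7, 3.0, 10.0, 60.0])  # [x, y, demand d_i, e_i, l_i]
W0 = rng.normal(scale=0.1, size=(d_model, feat.size))  # learnable projection (random here)
b0 = np.zeros(d_model)
h0 = W0 @ feat + b0                           # initial node embedding h_i^(0)
```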

4.2.1. Node Encoder

In layer l, the node encoder is composed of multi-head attention, with dk denoting the dimension of each query/key, dv representing the dimension of each value, and h indicating the number of attention heads. In particular, the MHA of the l-th layer encoder first computes attention for every head and subsequently merges these heads. The steps are as follows:
$Q_i^{c,l} = W_i^{Q,c,l} h_i^l, \quad K_i^{c,l} = W_i^{K,c,l} h_i^l, \quad V_i^{c,l} = W_i^{V,c,l} h_i^l$

$\mathrm{head}_i^{c,l} = \mathrm{softmax}\!\left( \frac{Q_i^{c,l} \left( K_i^{c,l} \right)^{\top}}{\sqrt{d_k}} \right) V_i^{c,l}$

$\mathrm{MHA}(h_i^l) = \mathrm{Concat}\!\left( \mathrm{head}_i^{1,l}, \mathrm{head}_i^{2,l}, \ldots, \mathrm{head}_i^{C,l} \right) W_i^{O,l}$
Here $W_i^{Q,c,l} \in \mathbb{R}^{h \times d \times d_k}$, $W_i^{K,c,l} \in \mathbb{R}^{h \times d \times d_k}$, $W_i^{V,c,l} \in \mathbb{R}^{h \times d \times d_v}$, and $W_i^{O,l} \in \mathbb{R}^{d \times d}$ are learnable parameters of the MHA layer. Each attention head computes the correlations between nodes, and the attention features obtained by all heads are finally concatenated to obtain a richer feature representation. After that, a feedforward neural network, residual connections, and batch normalization are used to process the multi-head attention output of the l-th layer with the following formulas:
$m_i^l = \mathrm{BN}\!\left( h_i^l + \mathrm{MHA}^l(h_i^l) \right)$

$h_i^{l+1} = \mathrm{BN}\!\left( m_i^l + \mathrm{FF}(m_i^l) \right)$
Ultimately, starting from the initial node embedding $h_i^{(0)}$, the encoder yields the node encoding of the problem instance.
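The encoder layer above (multi-head attention, residual connections, batch normalization, and a feed-forward sublayer) can be sketched in NumPy. This is a simplified illustration with random weights and inference-style normalization, not the trained model:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mha(H, Wq, Wk, Wv, Wo):
    """Multi-head self-attention over node embeddings H of shape (n, d)."""
    outs = []
    for c in range(len(Wq)):
        Q, K, V = H @ Wq[c], H @ Wk[c], H @ Wv[c]   # per-head projections, (n, d_k)
        A = softmax(Q @ K.T / np.sqrt(Q.shape[1]))  # scaled dot-product attention
        outs.append(A @ V)
    return np.concatenate(outs, axis=1) @ Wo        # concatenate heads, project

def encoder_layer(H, Wq, Wk, Wv, Wo, W1, W2):
    """m^l = BN(h^l + MHA(h^l)); h^{l+1} = BN(m^l + FF(m^l))."""
    def bn(X):                                      # inference-style batch-norm sketch
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-6)
    M = bn(H + mha(H, Wq, Wk, Wv, Wo))
    return bn(M + np.maximum(M @ W1, 0.0) @ W2)     # ReLU feed-forward sublayer

rng = np.random.default_rng(0)
n, d, heads = 6, 16, 4
H = rng.normal(size=(n, d))
Wq = [rng.normal(scale=0.1, size=(d, d // heads)) for _ in range(heads)]
Wk = [rng.normal(scale=0.1, size=(d, d // heads)) for _ in range(heads)]
Wv = [rng.normal(scale=0.1, size=(d, d // heads)) for _ in range(heads)]
Wo = rng.normal(scale=0.1, size=(d, d))
W1 = rng.normal(scale=0.1, size=(d, 2 * d))
W2 = rng.normal(scale=0.1, size=(2 * d, d))
H_next = encoder_layer(H, Wq, Wk, Wv, Wo, W1, W2)
```

Stacking several such layers (three in the experiments) produces the final node encodings.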

4.2.2. Graph Encoder

A graph neural network (GNN) is employed to capture the potential relationships between nodes in a convenient and dynamic manner, thereby representing the graph structure of the problem. The following are the definitions for each layer:
$\mathrm{GNN}^l(x_i^{l-1}) = \lambda\, x_i^{l-1} \Theta + (1 - \lambda)\, \Phi_\theta\!\left( \frac{1}{|N(i)|} \sum_{j \in N(i)} x_j^{l-1} \right)$

$H_t^c = \mathrm{GNN}^l(h_t^c)$
The graph’s edge weights can be adjusted through the trainable parameter λ. Θ is a trainable parameter matrix, N(i) is the set of neighboring nodes of node i, and Φθ is a function that aggregates contextual information. Finally, the graph embedding $H_t^c$ is obtained through the graph encoder.
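A minimal NumPy sketch of one such graph-encoder layer, assuming mean aggregation over neighbors for Φθ (our reading of the aggregation function) and random weights for illustration:

```python
import numpy as np

def gnn_layer(X, adj, Theta, Phi, lam):
    """One graph-encoder layer (sketch): a trainable mix of each node's own
    projected features and its mean-aggregated neighbour context."""
    deg = adj.sum(axis=1, keepdims=True)           # |N(i)| for each node
    neigh_mean = (adj @ X) / np.maximum(deg, 1.0)  # mean over neighbours N(i)
    return lam * (X @ Theta) + (1.0 - lam) * (neigh_mean @ Phi)

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.5).astype(float)     # illustrative random adjacency
np.fill_diagonal(adj, 0.0)
out = gnn_layer(X, adj, rng.normal(size=(d, d)), rng.normal(size=(d, d)), lam=0.5)
```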

4.3. Decoder Architecture

The decoding process begins with the input of the encoded depot and node features and the graph embeddings, followed by the generation of a probability distribution over all nodes and depots through the attention mechanism. Masking is employed to conceal depots and nodes that fail to satisfy the relevant constraints. Ultimately, node selection is accomplished through various search techniques, for example, a greedy search that selects the node with the highest probability or a sampling search based on the probability distribution. At each time step, the decoder uses the encoder’s embeddings, the output of the previous time step, the current time, and the vehicle’s remaining load capacity to select the next node to be served. The decoding process continues until all customers are served. The ultimate goal of decoding is to generate an optimal routing solution by learning effective path-selection strategies. Initially, the contextual information htc is constructed, comprising the graph embedding Htc, the embedding of the preceding node visited by the chosen vehicle k, and the vehicle’s residual capacity Dm,t.
$h_t^c = \begin{cases} [H_t^c,\ h_{\pi_{t-1}},\ D_{m,t}], & t > 1 \\ [H_t^c,\ h_{\pi_0},\ D_{m,t}], & t = 1 \end{cases}$
Then, we generate a probability distribution based on the relationship between the contextual information $h_t^c$ and the node embeddings $h_N$, producing node selection probabilities through a single-head attention layer:
$u_t = C \cdot \tanh\!\left( \frac{\left( W^Q h_t^c \right)^{\top} \left( W^K h_N \right)}{\sqrt{d_K}} \right)$
The decoder clips $u_t$ to $[-C, C]$ with C = 10 and executes the mask operation on infeasible customer nodes, where $W^Q$ and $W^K$ are trainable parameters. Softmax normalization is then used to derive the selection probability $p_{i,t}$ for every node:
$p_t = \mathrm{softmax}(u_t) = \frac{e^{u_t}}{\sum_j e^{u_j}}$
At the training stage, we adopt the sampling decoding method based on the output probability pi,t of the decoder. In the test phase, the model adopts the greedy decoding method that selects the node with the maximum probability value pi,t.
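One decoding step, combining the clipped compatibility scores, the mask operation, and softmax normalization, can be sketched as follows (illustrative names and random weights; greedy decoding then simply takes the argmax):

```python
import numpy as np

def node_probabilities(context, node_emb, Wq, Wk, mask, C=10.0):
    """One decoder step (sketch): clipped compatibility scores between the
    context vector h_t^c and node embeddings, then a masked softmax."""
    q = context @ Wq                                      # query from context
    keys = node_emb @ Wk                                  # keys from node embeddings
    u = C * np.tanh((keys @ q) / np.sqrt(keys.shape[1]))  # scores clipped to [-C, C]
    u = np.where(mask, u, -np.inf)                        # hide infeasible nodes
    e = np.exp(u - np.max(u[mask]))                       # stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
d, n = 8, 5
context = rng.normal(size=d)
node_emb = rng.normal(size=(n, d))
mask = np.array([True, True, False, True, True])          # node 2 is infeasible
p = node_probabilities(context, node_emb,
                       rng.normal(size=(d, d)), rng.normal(size=(d, d)), mask)
greedy_node = int(np.argmax(p))                           # greedy decoding at test time
```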

4.4. Training Strategy

Policy gradient is used here to train the model. The target L(θ|s) is the expected reward, which will be evaluated based on parameter θ:
$\nabla_\theta L(\theta \mid s) \approx \mathbb{E}_{p_\theta(a \mid s)}\!\left[ \left( R(a \mid s) - R^{BL}(s) \right) \nabla_\theta \log p_\theta(a \mid s) \right]$
In this training process, two networks are employed: (1) the policy network, whose sampled solutions determine the overall cost R(a|s) via the probability distribution $p_{i,t}$; and (2) the baseline network RBL(s), which stabilizes training by measuring each sample’s positive or negative deviation from the baseline RBL(s), thereby reducing gradient variance. The average gradient $\nabla_\theta L(\theta \mid s)$ for each training cycle is obtained through the sampling strategy, and the parameter θ is updated using the Adam optimizer.
The algorithm flow is shown in Algorithm 1.
Algorithm 1: Deep Reinforcement Learning Algorithm
Input: Batch size B; data size D; number of epochs N; maximum training steps Γ; initial parameters θ for the policy network πθ; initial parameters ϕ for the baseline network vϕ
Output: The optimal parameters θ, ϕ
1  T ← D / B
2  for epoch = 1, 2, …, N do
3      for t = 1, 2, …, T do
4          Retrieve batch b = Bt
5          for i = 0, 1, …, Γ do
6              Select an action a_{t,b} ∼ πθ(a_{t,b} | s_{t,b})
7              Calculate the reward r_{t,b} and update the state s_{t+1,b}
8          end
9          R_b ← max(Σ_{t=0}^{Γ} r_{t,b})
10         Perform a greedy rollout with baseline vϕ and compute R_b^{BL}
11         dθ ← (1/B) Σ_{j=1}^{B} [(R_b(a|s) − R_b^{BL}(s)) ∇θ log pθ(a|s)]
12         θ ← Adam(θ, dθ)
13         if PairedTTest(πθ, vϕ) < 5% then ϕ ← θ
14     end
15 end
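The core update of Algorithm 1, REINFORCE with a baseline, can be demonstrated on a toy problem: a categorical policy over three candidate "tours" learns to prefer the cheapest one. This is a didactic sketch that uses an expected-reward baseline in place of the paper's greedy-rollout baseline:

```python
import numpy as np

rng = np.random.default_rng(0)
costs = np.array([5.0, 2.0, 8.0])   # toy "tour costs" for three candidate actions
theta = np.zeros(3)                  # logits of a categorical policy

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(2000):
    p = softmax(theta)
    a = rng.choice(3, p=p)           # sampling decoding during training
    reward = -costs[a]               # reward = negative cost, as in the paper
    baseline = -(costs @ p)          # expected reward as a simple baseline R_BL
    advantage = reward - baseline    # (R - R_BL)
    grad_logp = -p.copy()
    grad_logp[a] += 1.0              # gradient of log softmax(theta)[a]
    theta += 0.1 * advantage * grad_logp  # gradient ascent on expected reward
```

In Algorithm 1 the baseline is instead a greedy rollout of a frozen copy of the policy, refreshed only when a paired t-test shows significant improvement.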

5. Experiment

This paper generates the training data through random distribution, compares the proposed model with the baseline model using Solomon’s (1987) [20] dataset, and validates the proposed approach by evaluating the computational efficiency and model transferability of both approaches. Finally, parameter sensitivity experiments are conducted to analyze the impact of various parameters on learning performance.

5.1. Experimental Setup

Within this section, we elucidate the experimental configuration and the methodology employed to generate the data. Depot and customer locations are uniformly distributed, and customer demands are likewise drawn uniformly. Time windows are uniformly distributed for instances with 20, 50, and 100 customer nodes. The vehicle load capacity is 60, 150, and 300 units for instances with 20, 50, and 100 nodes, respectively. Table 4 provides a comprehensive description of the fundamental configurations for generating data. The early arrival penalty factor α and late arrival penalty factor β of the time window are randomly generated from [0.2, 0.4] and [0.6, 0.8], respectively. We consider settings with two and three depots, each accommodating a maximum of two and three vehicles, respectively.
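Instance generation under this setup can be sketched as follows. The coordinate, demand, and time-window ranges here are illustrative assumptions; only the penalty-factor ranges [0.2, 0.4] and [0.6, 0.8] are taken from the text:

```python
import numpy as np

def generate_instance(n_customers, n_depots, capacity, seed=0):
    """Random MDVRPSTW instance (sketch): uniform locations and demands,
    uniform time windows, and uniformly drawn penalty factors."""
    rng = np.random.default_rng(seed)
    coords = rng.uniform(0.0, 1.0, size=(n_depots + n_customers, 2))
    demands = rng.integers(1, 10, size=n_customers)      # demand range is assumed
    e = rng.uniform(0.0, 0.5, size=n_customers)          # window start (assumed range)
    width = rng.uniform(0.2, 0.5, size=n_customers)      # window length (assumed range)
    alpha = rng.uniform(0.2, 0.4)                        # early-arrival penalty factor
    beta = rng.uniform(0.6, 0.8)                         # late-arrival penalty factor
    return dict(coords=coords, demands=demands,
                windows=np.stack([e, e + width], axis=1),
                capacity=capacity, alpha=alpha, beta=beta)

instance = generate_instance(n_customers=20, n_depots=2, capacity=60)
```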
The training instances are generated at random according to these settings, with an iteration size of 1,280,000 and a batch size of 1024, for a total of 100 epochs at each problem size. The Adam optimizer is employed to train the policy parameters, with the learning rate fixed at 1 × 10−4. In general, longer training yields better performance; however, if the improvement becomes negligible after many iterations, training can be stopped before full convergence, which greatly saves training time while keeping the learned policy competitive. Early stopping is therefore applied with a patience of 20 epochs. All experiments were conducted on an NVIDIA GeForce RTX 3090 (24 GB memory) and a Xeon(R) Platinum 8255C, using PyTorch 2.7.1. The hidden layer dimension is set to 128, with 8 attention heads and 3 layers.

5.2. Baseline

This study conducts a comparative analysis against two representative baseline algorithms. The selection of these algorithms adheres to the principle of “covering different solution paradigms while aligning with the core characteristics of the problem”: one is an exact algorithm for VRP-type problems, used to verify the optimal boundary of solutions in small-scale scenarios; the other is a classical meta-heuristic algorithm designed for time-windowed VRP (VRPTW), used to validate the model’s generalization and superiority in complex scenarios. All comparative experiments are based on the classic VRPTW dataset by Solomon (1987) [20] and undergo standardized preprocessing tailored to the MDVRPSTW problem characteristics, ensuring fairness and validity of the comparison.

5.2.1. Exact Algorithm: VRPSolverEasy [13]

Core algorithm features: The algorithm operates without heuristic rules and employs an enumeration-and-pruning approach to traverse the solution space. Given sufficient computational time, it can rigorously compute the global optimal solution. Even under time constraints, the adaptive pruning strategy efficiently filters out high-quality suboptimal solutions, balancing computational accuracy with time efficiency.
Adaptation to the research problem: VRPSolverEasy is natively designed for single-depot VRP problems. To address the multi-depot (MDVRPSTW) characteristics of this study, a “subproblem decomposition” strategy is adopted. The multi-depot problem is decomposed into multiple independent single-depot VRPTW subproblems, each corresponding to a depot’s customer set while maintaining the original constraints (e.g., time windows, demand, and load capacity) for customer nodes. The synchronization delivery constraint is implemented by adding a “cross-depot delivery rhythm penalty term” to the objective function, ensuring the algorithm’s solution logic aligns with the study’s problem definition.
Evaluation parameter settings: To match the experimental environment of this study, the maximum solution time threshold for VRPSolverEasy was set as follows: 300 s for small-scale problems (N = 20), 1800 s for medium-scale problems (N = 50), and 3600 s for large-scale problems (N = 100). Other parameters were maintained at the algorithm’s default configuration to maximize its solution performance.
Core objectives: The study evaluates the DRL model’s solution quality and deviation from global optimal solutions in small-scale scenarios using VRPSolverEasy’s results, while quantifying the time cost of the exact algorithm in medium-to-large-scale scenarios to demonstrate the DRL model’s efficiency advantage.

5.2.2. Meta-Heuristic Algorithm: Generalized Variable Neighborhood Search (GVNS) [14]

Core algorithm features: This algorithm is specifically designed for the VRPTW with synchronization constraints, and it demonstrates strong local optimization capability and convergence efficiency. Its core neighborhood structure incorporates four operations including intra-path node exchange and inter-path node redistribution, effectively avoiding local optima. In bike parking VRPTW scenarios, it outperforms traditional meta-heuristic algorithms such as Simulated Annealing (SA) and Genetic Algorithm (GA) in both solution quality and computational efficiency.
Adaptation to the research problem: Given that GVNS is originally designed for single-vehicle scenarios, we implement a two-phase adaptation strategy (allocate first, optimize later) for the multi-vehicle scenario (MDVRPSTW). Phase 1: Pre-allocation. Using K-means clustering, all customer nodes are assigned to designated vehicle depots (with cluster centers corresponding to depot locations), ensuring the total customer demand of each depot satisfies the vehicle capacity constraints. Phase 2: Optimization. For each depot’s customer subset, the GVNS algorithm is independently executed to compute optimal routes. The synchronization constraint is achieved by adjusting the time window relaxation coefficient, ensuring synchronized delivery schedules across different depots.
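Phase 1 can be sketched as a capacity-aware nearest-depot assignment. This is a minimal stand-in for the K-means step (with cluster centers fixed at the depot locations); the function name, data layout, and the largest-demand-first repair rule are illustrative assumptions, not the authors' code:

```python
import math

def preallocate(depots, customers, capacity):
    """Phase-1 sketch of 'allocate first, optimize later': assign each
    customer to the nearest depot whose remaining capacity can absorb
    its demand. Larger demands are placed first as a simple repair
    heuristic for when the nearest depot is already full."""
    load = [0.0] * len(depots)
    groups = [[] for _ in depots]
    for c in sorted(customers, key=lambda c: -c["demand"]):
        order = sorted(range(len(depots)),
                       key=lambda i: math.dist(depots[i], c["xy"]))
        for i in order:
            if load[i] + c["demand"] <= capacity[i]:
                load[i] += c["demand"]
                groups[i].append(c["id"])
                break
    return groups

depots = [(0.0, 0.0), (1.0, 0.0)]
customers = [{"id": 0, "xy": (0.1, 0.0), "demand": 4},
             {"id": 1, "xy": (0.9, 0.0), "demand": 4},
             {"id": 2, "xy": (0.2, 0.0), "demand": 4}]
# depot 0 can hold only 6 units, so customer 2 spills over to depot 1
print(preallocate(depots, customers, capacity=[6, 8]))  # [[0], [1, 2]]
```

Each returned group then becomes the customer subset on which GVNS runs independently in Phase 2.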
Evaluation parameter settings: Based on Malek Masmoudi’s original study [14], the GVNS algorithm was configured with 1,280,000 iterations (consistent with the DRL model training iterations in this study). The neighborhood structure incorporated four operations: node swapping (N1), node insertion (N2), path splitting (N3), and path merging (N4). The perturbation intensity was dynamically adjusted during iterations, with strong perturbations in early stages to ensure global exploration and weak perturbations in later stages to enhance local optimization. The maximum iteration stagnation threshold was set at 20,000 steps to trigger the early stopping mechanism.
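The iteration scheme described above can be condensed into the following skeleton. This is a generic VNS-style sketch with toy parameters, not the configuration used in the experiments (which ran 1,280,000 iterations over the four neighborhoods N1-N4); all names and the decay schedule are illustrative assumptions:

```python
import random

def gvns(start, cost, neighborhoods, max_iters=500, stall_limit=200, seed=0):
    """Skeleton of a generalized VNS loop: shake the incumbent with
    moves from neighborhood k, keep improvements (restarting at the
    first neighborhood), and decay the shaking strength from strong
    early perturbation to weak late perturbation."""
    rng = random.Random(seed)
    best, best_cost, stall = start, cost(start), 0
    for it in range(max_iters):
        strength = max(1, int(3 * (1 - it / max_iters)))  # strong -> weak
        k = 0
        while k < len(neighborhoods):
            cand = best
            for _ in range(strength):            # shaking: compound random moves
                cand = neighborhoods[k](cand, rng)
            if cost(cand) < best_cost:           # improvement: restart at N1
                best, best_cost, stall, k = cand, cost(cand), 0, 0
            else:
                k += 1                           # try the next neighborhood
        stall += 1
        if stall > stall_limit:                  # early stopping on stagnation
            break
    return best, best_cost

# toy demo: sort a permutation by minimizing total displacement
cost = lambda s: sum(abs(v - i) for i, v in enumerate(s))
def swap(s, rng):
    s = list(s)
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s
best, c = gvns(list(range(9, -1, -1)), cost, [swap])
```

The strength schedule mirrors the dynamic perturbation adjustment in the text: early iterations compound several random moves for global exploration, later iterations apply a single move for local refinement.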
Core comparison: GVNS serves as the solution-quality benchmark for VRPTW; comparing against it in multi-vehicle scenarios validates the DRL model's cross-vehicle generalization and its adaptation to complex constraints. The comparison of solution time and quality across problem scales further highlights DRL's efficiency and stability in medium-to-large-scale scenarios, particularly its time advantage as the solution space expands exponentially.

5.2.3. Solomon (1987) [20] Dataset Preprocessing

To ensure consistency in comparative experiments, the Solomon dataset underwent the following preprocessing steps:
1. Sample screening: the R1 and C1 series (covering scenarios with varying customer distribution densities) were selected, excluding samples whose demand exceeds the rated vehicle load capacity used in this study.
2. Coordinate normalization: node coordinates were normalized to the 2D square [0, 1] × [0, 1], consistent with the input features of the DRL model.
3. Multi-depot expansion: 2 or 3 depot locations were randomly generated within the normalized space, ensuring uniform distribution.
4. Constraint adaptation: the departure time windows of vehicles at each depot were unified into a single time range to match the synchronous delivery constraint, with early/late arrival penalty factors consistent with the DRL model settings (α ∈ [0.2, 0.4], β ∈ [0.6, 0.8]).
The preprocessed data were categorized into three scales: small (N = 20), medium (N = 50), and large (N = 100), matching the experimental scales of this study.
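The coordinate normalization and multi-depot expansion steps can be sketched as follows. This is a minimal illustration with assumed function and variable names, not the authors' pipeline; only the α/β ranges are taken from the text:

```python
import random

def preprocess(nodes, n_depots=2, seed=42):
    """Normalize Solomon node coordinates into [0, 1] x [0, 1] and add
    randomly placed depots. The penalty factors are drawn from the
    ranges stated in the text (alpha in [0.2, 0.4], beta in [0.6, 0.8])."""
    rng = random.Random(seed)
    xs = [x for x, _ in nodes]
    ys = [y for _, y in nodes]
    x0, y0 = min(xs), min(ys)
    sx = (max(xs) - x0) or 1.0           # guard against degenerate extent
    sy = (max(ys) - y0) or 1.0
    norm = [((x - x0) / sx, (y - y0) / sy) for x, y in nodes]
    depots = [(rng.random(), rng.random()) for _ in range(n_depots)]
    alpha = rng.uniform(0.2, 0.4)        # early-arrival penalty factor
    beta = rng.uniform(0.6, 0.8)         # late-arrival penalty factor
    return norm, depots, alpha, beta

print(preprocess([(10, 10), (20, 30), (30, 10)])[0])
# [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
```

Min-max scaling per axis keeps all customer coordinates inside the unit square expected by the DRL model's input layer.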

6. Results

This section compares the deep reinforcement learning model proposed in this paper with the baseline models on the MDVRPSTW problem in terms of solution quality and computational efficiency. Figure 4 presents the model's solution for a scenario with 2 depots, 2 vehicles per depot, and 20 customers. Table 5 presents the test results across three problem scales with varying numbers of depots and vehicles per depot. Based on these findings, the following conclusions can be drawn.
Objective metrics: DRL outperforms VRPSolverEasy by 32.9% at N = 20, and its objective value is 27.7% lower than GVNS at N = 50 and 96.3% lower at N = 100, exhibiting a "scale-dependent advantage" pattern. Time metrics: DRL's average execution time across all scenarios (4.63 s) is merely 0.19% of GVNS's (2468.06 s) and 0.67% of VRPSolverEasy's (696.03 s, excluding timeouts), with exceptional time stability (coefficient of variation: 18.7%). Configuration adaptability: DRL's objective response to configuration changes (−20.15%) is 4.77 times stronger than VRPSolverEasy's (−4.22%), making it the most adaptable to multi-depot, multi-vehicle configurations.

6.1. Generalization Experiments

In practice, the number of customers and the vehicle load are not fixed in advance. For instance, in a 100-customer instance, some customers may not require same-day delivery, or a vehicle may be replaced by one with a different capacity. The trained model handles such variation directly: without retraining, it can quickly solve instances whose customer counts differ from those seen during training, and any customer node can still be served. The models trained at the 20-, 50-, and 100-customer scales were evaluated on instances of varying size, with the results shown in Table 6, Table 7 and Table 8.
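A toy illustration of why one trained policy can serve any instance size: decoding operates over a mask of unvisited customers, so the same loop works unchanged for any N. The nearest-neighbor scoring below is a deliberate simplification standing in for the model's learned attention scores:

```python
import math

def greedy_route(depot, customers):
    """Toy stand-in for the trained policy's online decision step: at
    each step, pick the nearest not-yet-visited customer. The real
    model scores candidates with attention rather than distance, but
    the masking of visited nodes that makes the network size-agnostic
    is the same mechanism."""
    pos, route = depot, []
    left = list(range(len(customers)))       # mask of unvisited customers
    while left:
        nxt = min(left, key=lambda i: math.dist(pos, customers[i]))
        route.append(nxt)
        left.remove(nxt)
        pos = customers[nxt]
    return route

# the same function handles 3 customers or 300 without any retraining
print(greedy_route((0.0, 0.0), [(0.9, 0.0), (0.1, 0.0), (0.5, 0.0)]))  # [1, 2, 0]
```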

6.2. Parameter Sensitivity Analysis

In this section, we examine the model's parameter sensitivity, investigating how the number of graph neural network (GNN) layers, attention heads, and encoder layers affects solution quality. Figure 5 depicts the training loss under the different settings. We retrained the model with 2, 4, and 6 GNN layers, respectively. As Figure 5 shows, increasing the layer count produces no substantial change in convergence, indicating that additional GNN layers have no significant positive effect on the model; sensitivity to this parameter is negligible. We then set the number of attention heads to 2, 4, and 6 and trained in turn. Convergence deteriorated slightly when the head count first increased, and further increases produced no discernible difference, while each additional head raises the parameter count and computational cost. Adjusting the encoder depth to 1, 3, and 5 layers revealed a significant impact: with a single encoder layer, the shallow network hinders training and degrades the final solution quality; with 3 layers, training converges faster and solution quality improves; increasing the depth further to 5 layers yields only a marginal gain. The choice of network depth therefore requires a judicious trade-off between training performance and computational complexity.

7. Discussion

The DRL model proposed in this study exhibits a “scale-dependent performance inversion” characteristic, strongly supporting the research hypothesis that “DRL can balance solution quality and efficiency in MDVRPSTW.” In small-scale problems, although its objective value is slightly higher than the other two algorithms, the gap with GVNS (averaging 8.9%) remains small, while the solution time is reduced by 96.8%, maintaining practical application competitiveness. In medium and large-scale problems, the DRL model achieves comprehensive superiority: at N = 50, the objective value is 27.7% lower than GVNS and 47.8% lower than VRPSolverEasy; at N = 100, the objective value is 96.3% lower than GVNS and 75.4% lower than VRPSolverEasy (excluding timeout cases). This advantage stems from the model’s encoder–decoder architecture—graph neural networks capture topological associations between depots and customers, while multi-head attention fuses spatiotemporal features [18], enabling simultaneous optimization of multiple vehicle paths and avoiding the constraints of traditional “divide-and-conquer” strategies. Additionally, the DRL model maintains stable solution times across scales (N = 20: average 3.51 s; N = 50: average 6.09 s; N = 100: average 4.29 s), thanks to the “offline training–online decision-making” paradigm of DRL [17]—strategy networks trained with large-scale random data can directly output routing paths for new problem cases without re-exploring the solution space, perfectly matching the real-time demands of fresh e-commerce forward warehouse distribution.
These findings transcend mere algorithm performance comparisons. Academically, this study fills the research gap in DRL-based solutions for the MDVRPSTW problem, demonstrating that a Markov decision process (MDP) model integrating global static states with vehicle dynamics effectively handles complex constraints such as multiple depots, multiple vehicles, and soft time windows. Industrially, the stable and efficient performance of the DRL model provides a viable technical solution for optimizing forward warehouse distribution in fresh food e-commerce enterprises. Compared to traditional algorithms, it reduces total distribution costs by 27.7% to 96.3% and shortens response times from hours or minutes to seconds, addressing the pain points of high cold chain costs and stringent delivery timeliness constraints for fresh products.

8. Conclusions

This section summarizes the key findings of the experimental results. The deep reinforcement learning model proposed in this paper demonstrates outstanding performance in solving the MDVRPSTW problem under the front warehouse model. For small-scale problems (N = 20), the model remains strongly competitive: its solution time is reduced by 96.8% and 96.2% compared with GVNS and VRPSolverEasy, respectively, with only an 8.9% difference from the GVNS objective value.
In medium-scale and large-scale problems (N = 50/100), the model demonstrates comprehensive superiority in both solution quality and efficiency, achieving 27.7% to 96.3% lower objective values compared with the baseline algorithms, with solution times consistently maintained between 3 and 10 s, effectively avoiding the exponential growth characteristic of traditional algorithms.
The model demonstrates exceptional adaptability to complex configurations: as the number of depots and vehicles increases, the objective value decreases continuously (with a maximum reduction of 28.99%), while the solution time shows no exponential growth, fully meeting the diverse configuration requirements of forward warehouse distribution.
In conclusion, the DRL model effectively balances solution quality and efficiency for the MDVRPSTW problem, providing a novel technical approach for optimizing distribution in fresh e-commerce forward warehouses. It fills the gap in applying deep reinforcement learning to MDVRPSTW and lays the groundwork for future research on more complex vehicle routing problems.
This study simplifies vehicle travel-time variation, which introduces discrepancies from real-world scenarios and leaves the current model unsuitable for highly time-sensitive applications. Future research could focus on three key areas: (1) incorporating dynamic demand constraints (e.g., real-time order fluctuations) to enhance model adaptability to actual delivery scenarios; (2) optimizing the distance-time conversion ratio between nodes toward 1:1; and (3) developing lightweight model architectures to reduce training resource consumption and facilitate deployment on edge devices such as logistics delivery vehicles.

Author Contributions

M.J.: Conceptualization, methods, writing. G.C.: content. J.C.: validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Additional data are available in Table 4.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Liu, X.; Gou, X.; Xu, Z. Multi-Objective Last-Mile Vehicle Routing Problem for Fresh Food E-Commerce: A Sustainable Perspective. Int. J. Inf. Technol. Decis. Mak. 2024, 23, 2335–2363.
2. Huang, Y. Evolutionary game analysis of collaborative strategies in fresh e-commerce and cold chain logistics: The role of incentive mechanisms and supervision policies. Int. Rev. Econ. Financ. 2025, 104, 104773.
3. Li, X.; Luo, H.; Wang, G.; Song, Z.; Gou, Q.; Meng, F. Optimizing multi-drone patrol path planning under uncertain flight duration: A robust model and adaptive large neighborhood search with simulated annealing. Appl. Soft Comput. 2025, 176, 113107.
4. Guo, E.; Gao, Y.; Hu, C. A two-stage evolutionary algorithm based on hybrid penalty strategy and its application to multi-UAV path planning. Expert Syst. Appl. 2025, 298, 129698.
5. Hung, P.D.; Tam, N.T.; Binh, H.T.T. Adaptive ant colony optimization for solving dynamic vehicle and drone routing with time window constraints. Evol. Intell. 2025, 18, 110.
6. Wang, S.; Sun, W.; Huang, M. An adaptive large neighborhood search for the multi-depot dynamic vehicle routing problem with time windows. Comput. Ind. Eng. 2024, 191, 110122.
7. Wang, Y.; Chen, P. An adaptive large neighbourhood search for multi-depot electric vehicle routing problem with time windows. Eur. J. Ind. Eng. 2024, 18, 606–636.
8. Bezerra, S.N.; Souza, M.J.F.; de Souza, S.R. A variable neighborhood search-based algorithm with adaptive local search for the Vehicle Routing Problem with Time Windows and multi-depots aiming for vehicle fleet reduction. Comput. Oper. Res. 2022, 149, 106016.
9. Rabbani, M.; Pourreza, P.; Farrokhi-Asl, H.; Nouri, N. A hybrid genetic algorithm for multi-depot vehicle routing problem with considering time window repair and pick-up. J. Model. Manag. 2018, 13, 698–717.
10. Anuar, W.K.; Lee, L.S.; Seow, H.-V.; Pickl, S. A Multi-Depot Dynamic Vehicle Routing Problem with Stochastic Road Capacity: An MDP Model and Dynamic Policy for Post-Decision State Rollout Algorithm in Reinforcement Learning. Mathematics 2022, 10, 2699.
11. Arishi, A.; Krishnan, K. A multi-agent deep reinforcement learning approach for solving the multi-depot vehicle routing problem. J. Manag. Anal. 2023, 10, 493–515.
12. Zhang, K.; Lin, X.; Li, M. Graph attention reinforcement learning with flexible matching policies for multi-depot vehicle routing problems. Phys. A Stat. Mech. Appl. 2023, 611, 128451.
13. Errami, N.; Queiroga, E.; Sadykov, R.; Uchoa, E. VRPSolverEasy: A Python Library for the Exact Solution of a Rich Vehicle Routing Problem. INFORMS J. Comput. 2024, 36, 956.
14. Masmoudi, M.; Borchani, R.; Jarboui, B. Generalized variable neighborhood search algorithm for vehicle routing problem with time windows and synchronization. Comput. Oper. Res. 2025, 183, 107193.
15. Rios, B.H.O.; Xavier, E.C. Metaheuristic approaches for the stochastic capacitated multi-depot vehicle routing problem with pickup and delivery. Expert Syst. Appl. 2025, 290, 128258.
16. Chen, S.; Yin, Y.; Sang, H.; Deng, W. A hybrid GRASP and VND heuristic for vehicle routing problem with dynamic requests. Egypt. Inform. J. 2025, 29, 100638.
17. Guan, Q.; Xue, S.; Tan, J.; Jia, L.; Cao, H.; Chen, B. Dynamic embedding-based deep reinforcement learning for heterogeneous capacitated VRPs with unloading time constraints. Expert Syst. Appl. 2025, 293, 128660.
18. Cai, H.; Xu, P.; Tang, X.; Lin, G. Solving the Vehicle Routing Problem with Stochastic Travel Cost Using Deep Reinforcement Learning. Electronics 2024, 13, 3242.
19. Wang, Y.; Hong, X.; Wang, Y.; Zhao, J.; Sun, G.; Qin, B. Token-based deep reinforcement learning for Heterogeneous VRP with Service Time Constraints. Knowl.-Based Syst. 2024, 300, 112173.
20. Solomon, M.M. Algorithms for the vehicle routing and scheduling problems with time window constraints. Oper. Res. 1987, 35, 254–265.
Figure 1. Detailed process of Markov decision process. The blue dashed box represents the Deep Learning System, the core module of the framework. The orange dashed box inside the blue box represents the Encoder, which is responsible for sequence learning. The green oval represents the State of the environment. The arrow from the Encoder to the Decoder represents the flow of encoded latent representations. The arrow labeled Action represents the decision output from the Decoder to the environment. The arrow labeled Reward represents the feedback signal from the environment to the Decoder for policy optimization. The arrow from the State back to the Encoder represents the closed-loop input of the new environment state for the next iteration.
Figure 2. Process of solving MDVRPSTW. Arrows of different colors represent the action sequences of different vehicles.
Figure 3. Specific architecture of neural network. The blue and black spheres represent the nodes in the distribution network, where black spheres denote depots and blue spheres denote customer nodes. The outermost dashed box labeled Graph Encoder represents the overall graph encoding module. The inner dashed box labeled Node Encoder represents the sub-module responsible for encoding node features. The dashed box labeled Decoder represents the decision-making module that generates solutions. Black arrows represent the flow of data and feature information through the network layers. Red arrows represent the flow of key intermediate representations (node embeddings) between the encoder and decoder, as well as the final output of the solution.
Figure 4. Example of results for 2 depots, 2 vehicles per depot, and 20 customer nodes. Red lines represent the route of the 1st vehicle from depot D1. Blue lines represent the route of the 1st vehicle from depot D2. Green lines represent the route of the 2nd vehicle from depot D2. Black squares (D1, D2) represent the depots (distribution centers). Red dots represent the customer nodes visited by the 1st vehicle from D1. Blue dots represent the customer nodes visited by the 1st vehicle from D2. Green dots represent the customer nodes visited by the 2nd vehicle from D2. The table below shows the actions selected by the four vehicles at each time step.
Figure 5. Training Loss Curves for Sensitivity Analysis of GNN Layers, Attention Heads, and Encoder Layers.
Table 1. Literature comparison.
Document | Multi-Depot | Capacity Requirements | Window Requirement | Dynamic Request
[13]√ *
[14]
[15]
[16]
[17]
[18]
[19]
This article
* The √ symbol indicates that the research question in the corresponding literature involves a specific variant of VRP.
Table 2. Variable Definitions.
Symbol | Definition | Remarks
$Z$ | total distribution cost | $\min Z = \sum_{k=1}^{K}\sum_{i=0}^{N}\sum_{j=0}^{N} d_{ij}x_{ijk} + \alpha\sum_{i=1}^{N}\max(E_i - t_{ik}, 0) + \beta\sum_{i=1}^{N}\max(t_{ik} - L_i, 0)$
$d_{ij}$ | travel cost from node $i$ to node $j$ | -
$M$ | number of depots (front warehouses) | 2 or 3
$K$ | total number of vehicles | $M \times K_m$
$K_m$ | number of vehicles configured at the $m$-th depot | 2 or 3
$N$ | number of customer nodes | 20, 50, or 100
$t_{ik}$ | time at which the $k$-th vehicle arrives at the $i$-th customer node | timing starts from the depot
$x_{ijk}$ | whether the $k$-th vehicle travels from $i$ to $j$ | 0 = false, 1 = true
$(x_i, y_i)$ | geographical coordinates of the $i$-th customer node | coordinates normalized
$q_i$ | demand of the $i$-th customer node | randomly generated per experiment
$Q$ | rated vehicle load capacity | adjusted according to the number of customer nodes
$E_i$ | service start time of the $i$-th customer node | random range determined by the number of customer nodes
$L_i$ | service end time of the $i$-th customer node | random range determined by the number of customer nodes
$\alpha$ | early-arrival penalty factor | [0.2, 0.4]
$\beta$ | late-arrival penalty factor | [0.6, 0.8]
Table 3. Mathematical Constraints and Their Physical Meanings.
Constraint | Formula | Description
Vehicle load | $\sum_{i=1}^{N} q_i x_{ijk} \le Q, \ \forall k \in K$ | The total demand of all customer nodes served by a single vehicle must not exceed its rated load capacity.
Unique customer node access | $\sum_{k=1}^{K}\sum_{j=0}^{N} x_{ijk} = 1, \ \forall i \in N$ | Each customer node is visited by exactly one vehicle, ensuring service uniqueness.
Vehicle flow conservation | $\sum_{i=0}^{N} x_{ijk} = \sum_{j=0}^{N} x_{jik}, \ \forall k \in K, \ i \in N$ | For any vehicle, the number of times it leaves a node equals the number of times it arrives at that node, avoiding path interruption.
Vehicle deployment frequency | $\sum_{i=0}^{N} x_{0ik} \le 1, \ \forall k \in K$ | Each vehicle is dispatched from the depot at most once, with no repeated empty trips.
Number of vehicles departing from the depot | $\sum_{k=1}^{K_m} x_{0ik} \le K_m, \ \forall m \in M$ | The number of vehicles dispatched from each depot does not exceed the total number allocated to it.
Vehicle path closure | $\sum_{i=1}^{N} x_{i0k} = \sum_{i=0}^{N} x_{0ik}, \ \forall k \in K$ | Each vehicle must return to its original depot after completing delivery, forming a closed path.
Vehicle arrival time update | $t_{jk} = t_{ik} + x_{ijk}, \ \forall i, j \in N, \ \forall k \in K$ | The arrival time at the next customer node equals the current node's arrival time plus the travel time between the two nodes, with travel time assumed to be 1 unit.
Non-negative decision variable | $x_{ijk} \ge 0, \ \forall i, j \in N, \ \forall k \in K$ |
Binary decision variable | $x_{ijk} \in \{0, 1\}, \ \forall i, j \in N, \ \forall k \in K$ | $x_{ijk} = 1$ indicates that the $k$-th vehicle travels from node $i$ to node $j$; $x_{ijk} = 0$ denotes the opposite.
Non-negative arrival time | $t_{ik} \ge 0, \ \forall i \in N, \ \forall k \in K$ | The time at which a vehicle arrives at a customer node is non-negative.
Depot numbering | $0 \in M, \ i, j \in N \cup M$ | The depot is numbered 0 and customer nodes are numbered 1 to N, placing both in a unified path-planning system.
Table 4. Basic settings for data generation.
Problem Size | Number of Customer Nodes | Node Coordinates | Customer Demand | Maximum Vehicle Load Capacity | Time Window Distribution
MDVRPSTW20 | 20 | $x_i \in [0, 1]^2$ | $d_i \in [1, 9]$ | Q = 60 | [0, 20]
MDVRPSTW50 | 50 | $x_i \in [0, 1]^2$ | $d_i \in [1, 9]$ | Q = 150 | [0, 50]
MDVRPSTW100 | 100 | $x_i \in [0, 1]^2$ | $d_i \in [1, 9]$ | Q = 300 | [0, 100]
Table 5. Comparison of the effects of solving MDVRPSTW calculation examples.
Depot | Vehicle | Method | N = 20 Obj | N = 20 Time/s | N = 50 Obj | N = 50 Time/s | N = 100 Obj | N = 100 Time/s
2 | 2 | VRPSolverEasy | 60.12 | 89.56 | 289.45 | 1789.23 | 876.54 | Timeout (3600 s)
2 | 2 | GVNS | 74.25 | 98.76 | 214.89 | 456.89 | 488.76 | 5689.34
2 | 2 | DRL | 88.98 | 3.45 | 175.67 | 3.12 | 261.89 | 4.58
2 | 3 | VRPSolverEasy | 57.89 | 95.34 | 278.67 | 1890.56 | 865.43 | Timeout (3600 s)
2 | 3 | GVNS | 69.87 | 105.43 | 207.56 | 525.78 | 477.89 | 6123.45
2 | 3 | DRL | 74.89 | 4.23 | 142.56 | 3.58 | 185.78 | 4.27
3 | 2 | VRPSolverEasy | 58.98 | 92.67 | 290.78 | 1901.34 | 888.67 | Timeout (3600 s)
3 | 2 | GVNS | 71.98 | 110.89 | 204.67 | 610.98 | 468.90 | 7201.56
3 | 2 | DRL | 76.89 | 3.12 | 135.89 | 7.89 | 210.67 | 4.12
3 | 3 | VRPSolverEasy | 56.78 | 98.90 | 277.89 | 2010.67 | 864.32 | Timeout (3600 s)
3 | 3 | GVNS | 68.76 | 118.56 | 194.56 | 688.76 | 457.89 | 7890.12
3 | 3 | DRL | 69.87 | 3.25 | 139.89 | 9.78 | 228.90 | 4.20
Table 6. Generalization experiment at 20-node scale.
Depot | Vehicle | N = 16 | N = 17 | N = 18 | N = 19 | N = 20
2 | 2 | 77.477 | 72.625 | 79.333 | 89.006 | 89.594
2 | 3 | 71.322 | 73.865 | 74.987 | 73.354 | 75.488
3 | 2 | 80.704 | 79.307 | 75.809 | 76.798 | 77.331
3 | 3 | 77.732 | 78.781 | 78.781 | 74.213 | 70.43
Table 7. Generalization experiment at 50-node scale.
Depot | Vehicle | N = 46 | N = 47 | N = 48 | N = 49 | N = 50
2 | 2 | 173.552 | 173.291 | 158.226 | 154.212 | 176.51
2 | 3 | 143.702 | 143.335 | 130.600 | 149.620 | 149.067
3 | 2 | 132.082 | 119.198 | 124.564 | 119.198 | 143.143
3 | 3 | 134.751 | 125.073 | 140.793 | 167.620 | 140.642
Table 8. Generalization experiment at 100-node scale.
Depot | Vehicle | N = 96 | N = 97 | N = 98 | N = 99 | N = 100
2 | 2 | 217.180 | 227.685 | 238.859 | 259.606 | 262.250
2 | 3 | 188.324 | 194.221 | 195.137 | 172.594 | 186.378
3 | 2 | 206.452 | 184.475 | 220.807 | 214.949 | 211.114
3 | 3 | 212.549 | 205.075 | 210.255 | 200.292 | 222.580

Chen, J.; Jiang, M.; Chen, G. Research on Distribution Optimization Strategy of Front Warehouse Model Based on Deep Reinforcement Learning. Systems 2026, 14, 261. https://doi.org/10.3390/systems14030261
