1. Introduction
Against the backdrop of rapid global urbanization and concurrent climate change, the sustainable transformation of urban logistics systems constitutes a critical link in achieving the United Nations Sustainable Development Goals. Serving as the lifeline for ensuring food safety and pharmaceutical efficacy, cold chain logistics underpins modern living and health security, yet its high energy consumption has made it a focal point of environmental and operational cost pressures [1,2,3,4]. Currently, the annual carbon dioxide emissions from China's cold chain logistics have reached approximately 402 million tons, with the transportation sector contributing over 60% of this total, a figure projected to continue rising under the existing operational paradigm [5,6]. Simultaneously, consumer expectations regarding delivery timeliness and reliability are continually escalating. An industry survey indicates that over 35% of cold chain orders incur customer complaints due to delivery delays or temperature control deviations, directly leading to an average increase of 15% in cargo damage costs [7]. These severe emission figures and significant service cost pressures together form a quantitative challenge that cold chain logistics operations must urgently confront.
The green transition of the industry offers a new pathway to address this dilemma. Driven by both policy and market forces, the penetration rate of new energy (electric) refrigerated trucks is projected to reach 42.4% by 2025, signifying a fundamental transformation in fleet composition [8,9]. However, this shift gives rise to a complex "mixed fleet paradox" at the operational level: while electric refrigerated trucks can achieve zero tailpipe emissions during operation and offer more precise temperature control, they are constrained by higher purchase costs, limited driving range (averaging 200–300 km), and inadequate charging infrastructure coverage (particularly below 40% in suburban areas). Conversely, traditional fuel-powered refrigerated trucks hold advantages in purchase cost and mileage flexibility, but their carbon emissions per unit distance are typically 2–3 times higher than those of electric vehicles [10]. In practical scheduling, enterprises must perform quantitative trade-offs on this paradox while simultaneously meeting stringent requirements for timeliness and temperature-controlled service delivery [11,12,13]. Consequently, developing a decision-making model capable of concurrently optimizing total operational cost, full-journey carbon emissions, and customer satisfaction (quantified by time window fulfillment rate) is no longer merely a theoretical frontier issue but an urgent, practical necessity for the industry to tackle the aforementioned quantitative challenges. Existing research often focuses on single objectives or fails to adequately account for the multi-objective trade-off mechanisms under mixed fleet and charging constraints, resulting in limited decision-support capability. This paper aims to bridge this gap by proposing an innovative multi-strategy optimization algorithm to provide a systematic, quantitative solution for resolving the "cost-emission-service" trilemma of mixed fleets.
Amid intensifying market competition and increasingly dynamic operational environments, research on cold chain logistics distribution scheduling has primarily evolved along two technical pathways: multi-objective programming models based on exact solution methods and multi-objective optimization algorithms based on meta-heuristics. The former continues to deepen in terms of model complexity and realism. For instance, Deng et al. [14] constructed a distribution cost model comprehensively considering temperature, carbon emissions, customer satisfaction, and traffic conditions, while Hong et al. [15] planned cold chain logistics routes considering congestion avoidance with the objectives of minimizing carbon emissions and total costs. Notably, recent research has achieved new progress in green logistics and refined modeling. For example, Tavana et al. [16] systematically analyzed the link between food miles and carbon footprint, revealing the environmental impact of logistics activities from the source of the supply chain, thus providing a macro-level carbon emission consideration background for this study. Wu et al. [17] innovatively introduced trapezoidal fuzzy numbers to handle interval fuzzy demand in green multimodal transport path optimization, and considered mixed time windows and carbon trading policies, offering direct reference for addressing demand uncertainty and environmental policy synergy in logistics. Sun et al. [18] explicitly focused on modeling the green multimodal transport path problem with soft time windows considering interval fuzzy demand; the fuzzy multi-objective optimization framework they constructed provides important methodological inspiration for this study in handling uncertain customer requirements and time windows. Given the NP-hard nature of the problem and the requirements for real-time scheduling, multi-objective optimization algorithms based on meta-heuristics have become the mainstream of current research. Zhang et al. [19] developed a multi-objective hybrid genetic algorithm combined with large neighborhood search to enhance efficiency; Liu et al. [20] integrated multiple strategies to develop a hybrid ant colony optimization algorithm; Leng et al. [21] introduced adaptive crossover and mutation strategies into traditional genetic algorithms. Recent algorithm research places greater emphasis on system integration and multi-dimensional performance evaluation. For instance, Yi et al. [22] constructed a multi-objective model simultaneously considering heterogeneous fleets, soft time windows, and path flexibility.
However, existing research still exhibits notable limitations. Firstly, the majority of studies focus on traditional objectives such as distribution cost and timeliness, generally overlooking the core reality of mixed fleets composed of electric and fuel-powered refrigerated vehicles. They fail to systematically integrate critical factors such as electric vehicle range and charging constraints, as well as refrigeration energy consumption characteristics, into the optimization models. Consequently, these models struggle to accurately reflect the complex decision-making environment of cold-chain logistics under the triple bottom line of economic cost, service quality, and environmental impact. Secondly, at the algorithmic level, to address the shortcomings of traditional algorithms like NSGA-II in balancing multi-objective relationships and maintaining solution diversity, NSGA-III has been increasingly adopted due to its reference point mechanism. Scholars have improved it in aspects such as adaptive reference point setting [23,24], integration of local search strategies [25,26,27], and design of problem-adapted operators [28,29,30]. Nonetheless, these generic improvements do not adequately account for the unique solution space structure induced by mixed fleets in cold-chain distribution—such as the coupling of charging/refueling decisions with temperature control energy consumption—and the complex constraint systems. This results in insufficient specificity when applied to the problem at hand. Even with the construction of more comprehensive models, the standard NSGA-III algorithm still faces three major bottlenecks when addressing such high-dimensional, multi-constrained, and strongly coupled cold-chain routing optimization problems: (1) the initial population lacks quality and guidance within the hybrid solution space; (2) there is insufficient capability for fine-grained local search on continuous variables (e.g., speed, temperature); (3) the fixed operator mechanism struggles to adapt to problem-specific characteristics, leading to an imbalance between exploration and exploitation, which ultimately compromises convergence efficiency and the distribution quality of the Pareto front.
To address the aforementioned challenges, this paper proposes a systematically improved NSGA-III algorithm, with its core innovations comprising: the design of a hybrid heuristic population initialization strategy to enhance the quality and diversity of initial solutions; the introduction of an adaptive simulated binary crossover operation to strengthen the search capability in continuous spaces; and the development of a dynamic polynomial mutation mechanism to achieve intelligent balancing between exploration and exploitation. This study aims to provide a quantitative decision-support tool for this complex sustainable logistics problem through advanced computational intelligence methods, thereby promoting the synergistic optimization of cold chain logistics distribution systems across the three dimensions of cost, environment, and service.
3. Algorithm Design
3.1. Principles and Limitations of the Classical NSGA-III Algorithm
The key step in the NSGA-II algorithm is to determine which individuals proceed to the next generation by calculating their crowding distance. However, this algorithm exhibits weak convergence. NSGA-III shares a similar framework with NSGA-II but addresses the issue of poor population convergence more effectively by introducing reference points and a niche-preservation strategy based on those points. Nevertheless, guaranteeing a uniform distribution of the reference points remains difficult [30,39,40,41,42].
Key Steps of NSGA-III: Assume the population size at generation $t$ is $N$. The algorithm begins by performing a series of genetic operations on the parent population $P_t$ to generate an offspring population $Q_t$. Subsequently, $P_t$ and $Q_t$ are combined to form a temporary combined population $R_t = P_t \cup Q_t$, where $|R_t| = 2N$. The objective is to select $N$ individuals from $R_t$ to form the next generation's parent population $P_{t+1}$. To this end, $R_t$ is partitioned into non-dominated fronts $F_1, F_2, \ldots$, and selection is made from these fronts based on both Pareto rank and diversity: the lower the front index, the greater the probability that an individual enters $P_{t+1}$. Fronts are sequentially incorporated into a set $S_t$ until $|S_t| \geq N$. The front $F_l$ at which this condition is first met is referred to as the critical front. All individuals from fronts $F_1$ to $F_{l-1}$ are incorporated into $P_{t+1}$. If $|S_t| = N$, then $P_{t+1} = S_t$ and the selection is complete. If $|S_t| > N$, meaning the inclusion of the critical front causes the size of $S_t$ to exceed $N$, only $K = N - \sum_{i=1}^{l-1} |F_i|$ individuals from the critical front $F_l$ are selected for inclusion in $P_{t+1}$. This selection is accomplished using a reference point-based niching strategy, which proceeds as follows. First, the individuals in $S_t$ are normalized, with the ideal point after normalization serving as the origin. A set of well-distributed reference points is then generated on the normalized hyperplane; rays emanating from the origin and passing through these reference points define reference lines. For each individual in $S_t$, the perpendicular distance to each reference line is calculated, and the individual is associated with the reference point whose reference line yields the smallest perpendicular distance. Let $\rho_j$ denote the number of already-selected individuals associated with reference point $j$, defined as its niche count. Next, the reference point $j^*$ with the smallest niche count among those still having candidate members in the critical front $F_l$ is selected, and a decision is made based on the value of $\rho_{j^*}$: when $\rho_{j^*} = 0$, the algorithm selects the individual from $F_l$ with the smallest perpendicular distance to the corresponding reference line, inserts it into the new population $P_{t+1}$, and increments $\rho_{j^*}$ by 1; if $\rho_{j^*} \geq 1$, an individual associated with $j^*$ is randomly chosen from $F_l$ and added to $P_{t+1}$, and $\rho_{j^*}$ is similarly incremented. This process is repeated iteratively until the number of individuals in $P_{t+1}$ equals $N$.
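To make the reference point-based niching step concrete, the following Python sketch (an illustrative re-implementation, not the original code) associates normalized objective vectors with reference lines by perpendicular distance and then fills the remaining population slots from the critical front; the function name and array layout are assumptions.

```python
import numpy as np

def associate_and_niche(S_objs, crit_idx, ref_points, n_remaining):
    """Associate normalized objectives with reference points and select
    n_remaining individuals from the critical front F_l.

    S_objs      : (|S_t|, M) normalized objectives of S_t = F_1 ∪ ... ∪ F_l
    crit_idx    : indices (into S_objs) of the critical front F_l
    ref_points  : (H, M) reference points on the normalized hyperplane
    n_remaining : number of slots still to fill, K = N - sum(|F_1..F_{l-1}|)
    """
    # Perpendicular distance of each point to each reference line through the origin
    w = ref_points / np.linalg.norm(ref_points, axis=1, keepdims=True)  # unit directions
    proj = S_objs @ w.T                                                  # scalar projections
    d_perp = np.linalg.norm(S_objs[:, None, :] - proj[:, :, None] * w[None, :, :], axis=2)
    assoc = d_perp.argmin(axis=1)          # nearest reference line for each individual
    dist = d_perp.min(axis=1)

    crit = set(crit_idx)
    kept = [i for i in range(len(S_objs)) if i not in crit]
    niche = np.bincount(assoc[kept], minlength=len(ref_points))          # niche counts rho_j

    selected, candidates = [], set(crit_idx)
    while len(selected) < n_remaining and candidates:
        # reference point with the smallest niche count that still has candidates in F_l
        avail = [j for j in range(len(ref_points))
                 if any(assoc[i] == j for i in candidates)]
        j_min = min(avail, key=lambda j: niche[j])
        members = [i for i in candidates if assoc[i] == j_min]
        if niche[j_min] == 0:              # empty niche: take the closest individual
            pick = min(members, key=lambda i: dist[i])
        else:                              # otherwise pick a random associated member
            pick = members[np.random.randint(len(members))]
        selected.append(pick)
        candidates.remove(pick)
        niche[j_min] += 1
    return selected
```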
The traditional NSGA-III algorithm exhibits limitations when applied to the cold chain logistics vehicle routing problem, including slow convergence due to reliance on randomly generated initial populations, insufficient optimization capability for continuous variables through its crossover operation, and a mutation mechanism that struggles to balance global exploration and local exploitation. To address these issues, Particle Swarm Optimization (PSO) is introduced to optimize the initial population, enhancing the quality and diversity of initial solutions; Simulated Binary Crossover (SBX) is employed to strengthen the local search capability for continuous variables; and a polynomial mutation mechanism is incorporated to maintain population diversity. These enhancements effectively overcome the original algorithm's shortcomings in convergence speed, optimization precision, and solution distribution uniformity, thereby significantly improving the traditional NSGA-III algorithm's solving performance and robustness for the cold chain logistics vehicle routing problem.
3.2. Initial Population Generation Strategy Based on Particle Swarm Optimization
In the NSGA-III algorithm, the quality of the initial population directly influences both the subsequent convergence speed and the quality of the final solution set. The traditional NSGA-III algorithm employs a random method to generate the initial population. While this approach is straightforward, it often results in insufficient population diversity, slow convergence, and a heightened risk of becoming trapped in local optima.
To address the aforementioned issues, this study leverages the powerful global guided search capability of Particle Swarm Optimization (PSO) to design a hybrid initialization strategy aimed at generating a high-quality and highly diverse initial population. The core of this strategy involves executing a short-cycle PSO search, where the encoding and fitness function are specifically designed to adapt to the path optimization problem. The detailed procedure is as follows:
Step 1: Encoding. The particle position vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ represents the visitation sequence of customer points. Simultaneously, real-number encoding is employed to denote continuous parameters (such as vehicle speed and temperature settings) in a second position vector, which requires a decoding process to transform these parameters into actual routes. During initialization, $N_p$ particles are randomly generated, with each particle's position and velocity initialized as follows:
$$x_{ij}^{0} = U(x_{\min}, x_{\max}), \qquad v_{ij}^{0} = U(v_{\min}, v_{\max}),$$
where $U(\cdot,\cdot)$ denotes the uniform distribution, $[x_{\min}, x_{\max}]$ represents the position boundaries, and $[v_{\min}, v_{\max}]$ defines the velocity boundaries.
Step 2: Fitness Function Design. The fitness function is used to evaluate the merit of a particle and is typically related to the model's objective functions. For the cold chain logistics distribution routing problem, the fitness function can be defined as a weighted aggregation of the objectives:
$$F(X_i) = \omega_1 Z_1(X_i) - \omega_2 Z_2(X_i) + \omega_3 Z_3(X_i),$$
where $Z_1$ represents the total distribution cost, $Z_2$ represents the customer satisfaction (a maximization objective, hence entering with a negative sign), $Z_3$ represents the total carbon emissions, and $\omega_1, \omega_2, \omega_3$ are weight coefficients used to balance the multiple objectives.
Step 3: Particle Velocity and Position Update. During each iteration, particles update their velocity and position based on their personal best position $P_i$ and the global best position $G$ according to the following equations:
$$v_{ij}^{t+1} = \omega v_{ij}^{t} + c_1 r_1 \left(p_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2 \left(g_{j}^{t} - x_{ij}^{t}\right), \qquad x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1},$$
where $\omega$ is the inertia weight, controlling the momentum of the particle's velocity; $c_1$ and $c_2$ are acceleration coefficients, governing the influence of the particle's individual experience and social experience, respectively; and $r_1$ and $r_2$ are random numbers uniformly distributed in the interval [0, 1], introducing stochasticity to the search process.
Step 4: Individual and Global Best Update. After each iteration, the personal best position of each particle and the global best position are updated as follows:
$$P_i^{t+1} = \begin{cases} X_i^{t+1}, & F\!\left(X_i^{t+1}\right) < F\!\left(P_i^{t}\right) \\ P_i^{t}, & \text{otherwise} \end{cases}, \qquad G^{t+1} = \arg\min_{P_i^{t+1}} F\!\left(P_i^{t+1}\right).$$
Step 5: Termination Condition and Output. The PSO process terminates when either the maximum number of iterations is reached or the fitness values converge. Upon termination, the optimized particle swarm, represented by the final personal best positions, is output as the high-quality initial population for the subsequent NSGA-III algorithm.
Based on the above design, the pseudocode for the hybrid initialization strategy is shown in Algorithm 1:
| Algorithm 1: PSO-based Hybrid Population Initialization. |
| Line Number | Pseudocode |
| 1 | Input: pop_size |
| 2 | Output: P |
| 3 | P_elite ← calpso(cs, pop_size) //Obtain a set of elite solutions via Particle Swarm Optimization |
| 4 | P_unique ← remove_duplicates(P_elite) //Remove duplicate solutions |
| 5 | num_elite ← size(P_unique, 1) |
| 6 | P_random ← generate_random_individuals(pop_size-num_elite) |
| 7 | P ← [P_unique; P_random] //Combine the elite solutions with the random solutions |
| 8 | return P |
This strategy leverages the guided search of Particle Swarm Optimization to rapidly direct the population toward promising regions of the solution space, thereby ensuring the quality of the initial solutions. Concurrently, it enhances population diversity by hybridizing the PSO-optimized solutions with completely randomly generated solutions, establishing a robust foundation for the subsequent evolutionary process of NSGA-III.
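For illustration, the following Python sketch outlines the hybrid initialization described above. It is a minimal re-implementation rather than the authors' MATLAB code; the function name hybrid_initialize, the placeholder fitness function, and the PSO parameter values (w, c1, c2, number of PSO iterations) are assumptions for demonstration.

```python
import numpy as np

def hybrid_initialize(pop_size, dim, bounds, fitness, pso_iters=30,
                      w=0.7, c1=1.5, c2=1.5):
    """Run a short PSO and mix its deduplicated personal-best solutions
    with randomly generated individuals to form the initial population."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (pop_size, dim))            # particle positions
    v = np.random.uniform(-(hi - lo), hi - lo, (pop_size, dim))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(pso_iters):                                # short guided search
        r1, r2 = np.random.rand(pop_size, dim), np.random.rand(pop_size, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()

    elite = np.unique(np.round(pbest, 6), axis=0)             # remove duplicate elites
    n_rand = max(pop_size - len(elite), 0)
    random_part = np.random.uniform(lo, hi, (n_rand, dim))
    return np.vstack([elite, random_part])[:pop_size]

# toy usage: a 5-dimensional surrogate fitness standing in for the weighted objective
init_pop = hybrid_initialize(20, 5, (0.0, 1.0), lambda p: float(np.sum(p ** 2)))
```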
3.3. Local Search Enhancement Mechanism Based on Simulated Binary Crossover
In the NSGA-III algorithm, the Simulated Binary Crossover (SBX) operator is adopted, primarily due to its mathematical properties and its adaptability to the characteristics of the cold chain logistics distribution routing model. The specific reasons are as follows: ① Mathematical Principle and Characteristics of SBX. SBX is a real-valued crossover operator whose core idea is to simulate single-point crossover behavior in binary-coded genetic algorithms, but implemented in real-valued space. It generates offspring around the parents according to a distribution index, allowing controlled exploration near the current solutions. ② Adaptation to the Continuous Variable Nature of the Model. The cold chain logistics distribution routing problem typically involves continuous variables, such as delivery time, vehicle load, and customer satisfaction. Traditional binary crossover is ill-suited for directly optimizing such variables. SBX, through its real-valued crossover operation, can effectively handle continuous variables while preserving solution feasibility and diversity. ③ Enhancement of Local Search Capability. The distribution index $\eta_c$ of SBX allows offspring to be generated in the vicinity of the parent individuals, thereby strengthening local search capability. For the cold chain logistics routing problem, improved local search helps quickly locate high-quality solutions within the complex solution space, avoiding the degradation in solution quality that can result from purely random crossover. ④ Preservation of Population Diversity. By randomly generating the value of the spread factor $\beta$, SBX ensures that offspring are generated in different regions near the parents, thereby maintaining population diversity. This is particularly crucial for multi-objective optimization problems, as it helps prevent the algorithm from converging to local optima and enhances global search capability.
However, the distribution index $\eta_c$ of the standard Simulated Binary Crossover (SBX) operator is typically a fixed parameter, which constrains its capacity to dynamically balance exploration and exploitation throughout the search process. To address this limitation, this paper introduces an adaptive SBX scheme whose core innovation is enabling $\eta_c$ to self-adjust based on the population's evolutionary state. The detailed procedure is outlined as follows:
Step 1: Parent Selection. Two parent individuals, denoted as $p_1$ and $p_2$, are selected from the current population using a tournament selection method.
Step 2: Random Number Generation. For each decision variable, generate a uniformly distributed random number $u$ within the interval [0, 1].
Step 3: Calculation of the Spread Factor $\beta$.
$$\beta = \begin{cases} (2u)^{\frac{1}{\eta_c + 1}}, & u \leq 0.5 \\[4pt] \left(\dfrac{1}{2(1-u)}\right)^{\frac{1}{\eta_c + 1}}, & u > 0.5 \end{cases}$$
where $u$ is a uniformly distributed random number in the interval [0, 1] and $\eta_c$ is the distribution index, controlling the proximity of offspring to their parents. A larger $\eta_c$ value results in offspring being generated closer to the parents, thereby enhancing local search capability; conversely, a smaller $\eta_c$ value promotes a wider spread of offspring, thus strengthening global search capability.
Step 4: Offspring Generation. The two offspring individuals $c_1$ and $c_2$ are generated for each decision variable from the parents $p_1$ and $p_2$ using the following formulas:
$$c_1 = 0.5\left[(1+\beta)\,p_1 + (1-\beta)\,p_2\right], \qquad c_2 = 0.5\left[(1-\beta)\,p_1 + (1+\beta)\,p_2\right].$$
Step 5: Constraint Handling and Feasibility Check. Verify whether the generated offspring individuals satisfy all constraints specified in the model. If constraints are violated, apply repair mechanisms or regenerate the offspring to ensure feasibility.
Step 6: Population Update. Incorporate the feasible offspring into the new population, replacing the parent individuals according to the selection and replacement strategy of the NSGA-III algorithm.
Based on the aforementioned design, the pseudocode for the adaptive simulated binary crossover is shown in Algorithm 2:
| Algorithm 2: Adaptive Simulated Binary Crossover. |
| Line Number | Pseudocode |
| 1 | Input: P1, P2 |
| 2 | Output: C1, C2 |
| 3 | η_c ← compute_adaptive_distribution_index() //Based on population diversity |
| 4 | For each decision variable i do |
| 5 | β ← compute_distribution_factor(η_c, u) //u ~ U(0,1) |
| 6 | C1[i], C2[i] ← generate_offspring(P1[i], P2[i], β) |
| 7 | end for |
| 8 | return C1, C2 |
This adaptive strategy dynamically adjusts the distribution index of the crossover operator, enabling the algorithm to autonomously balance global exploration and local exploitation during the evolutionary process, thereby significantly enhancing search efficiency and convergence precision in complex solution spaces.
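The following Python sketch illustrates one plausible realization of the adaptive SBX operator described above; the linear mapping from a normalized diversity measure to $\eta_c$ and the bounds eta_min/eta_max are assumptions, not the paper's exact calibration rule.

```python
import numpy as np

def adaptive_sbx(p1, p2, lo, hi, diversity, eta_min=2.0, eta_max=20.0):
    """Adaptive SBX: high population diversity maps to a large eta_c
    (offspring stay close to the parents, local exploitation), low
    diversity maps to a small eta_c (wider spread, global exploration).
    `diversity` is assumed to be normalized to [0, 1]."""
    eta_c = eta_min + diversity * (eta_max - eta_min)         # adaptive distribution index
    u = np.random.rand(*p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return np.clip(c1, lo, hi), np.clip(c2, lo, hi)           # keep offspring within bounds

# toy usage on two (speed, temperature) parameter vectors
c1, c2 = adaptive_sbx(np.array([60.0, -18.0]), np.array([70.0, -20.0]),
                      lo=np.array([40.0, -25.0]), hi=np.array([90.0, -15.0]),
                      diversity=0.8)
```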
3.4. Global Search Optimization Method Based on Polynomial Mutation
In the vehicle routing problem for cold chain logistics distribution, the polynomial mutation operator is employed to fully leverage its mathematical properties for real-valued encoding, addressing the difficulties traditional mutation methods face in handling continuous variables and complex constraints. By enhancing global search capability, maintaining population diversity, and adapting to intricate constraints, polynomial mutation can significantly improve the algorithm’s optimization performance and robustness, providing high-quality solutions for multi-objective routing problems.
The specific procedure of polynomial mutation in this context is as follows:
Step 1: Selection of Individuals for Mutation. Individuals are selected from the parent population to undergo mutation based on a predetermined mutation probability $p_m$.
Step 2: Generation of the Random Perturbation Factor. For each decision variable $x_i$ selected for mutation, generate a random number $u$ uniformly distributed in [0, 1] and calculate the perturbation factor $\delta$ using the polynomial distribution formula:
$$\delta = \begin{cases} (2u)^{\frac{1}{\eta_m + 1}} - 1, & u < 0.5 \\[4pt] 1 - \left[2(1-u)\right]^{\frac{1}{\eta_m + 1}}, & u \geq 0.5 \end{cases}$$
where $\eta_m$ is the mutation distribution index, which controls the strength and scope of the mutation.
Step 3: Offspring Generation. The mutated value is obtained by perturbing the original variable within its bounds:
$$x_i' = x_i + \delta\left(x_i^{\max} - x_i^{\min}\right).$$
Step 4: Boundary Check and Correction. Verify whether each variable of the offspring individual exceeds its defined boundaries. If a variable violates these constraints, correct it by setting the value to the nearest boundary value.
Step 5: Fitness Evaluation. Calculate the fitness value of the offspring individual by evaluating its performance against the multi-objective optimization criteria.
Step 6: Population Update. Incorporate the feasible offspring individual into the population, either directly replacing the parent individual or participating in a subsequent environmental selection process to maintain population size and quality.
Based on the aforementioned design, the pseudocode for the dynamic polynomial mutation is shown in Algorithm 3:
| Algorithm 3: Dynamic Polynomial Mutation. |
| Line Number | Pseudocode |
| 1 | Input: C, gen, MaxGen, p_m_max, p_m_min, k |
| 2 | Output: C′ |
| 3 | p_m ← p_m_max - (p_m_max - p_m_min) × (gen/MaxGen)^k // Dynamic mutation probability |
| 4 | for each decision variable i do |
| 5 | if rand() ≤ p_m then |
| 6 | C′[i] ← perturb(C[i]) //Generate new value by perturbation |
| 7 | end if |
| 8 | end for |
| 9 | return C′ |
This dynamic mechanism ensures a nonlinear decrease in the mutation probability throughout the evolutionary process, guaranteeing sufficient exploration of the solution space during the early iterations while stabilizing the search in promising regions for intensive exploitation during later stages. This approach effectively reconciles the conflict between discovering new areas and converging to precise solutions.
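A minimal Python sketch of the dynamic polynomial mutation is given below; the decay exponent k and the probability bounds pm_max/pm_min are illustrative values consistent with Algorithm 3 rather than the paper's tuned settings.

```python
import numpy as np

def dynamic_polynomial_mutation(x, lo, hi, gen, max_gen,
                                pm_max=0.2, pm_min=0.02, k=2.0, eta_m=20.0):
    """Polynomial mutation whose probability decays nonlinearly from
    pm_max to pm_min as the generation counter approaches max_gen."""
    pm = pm_max - (pm_max - pm_min) * (gen / max_gen) ** k    # dynamic mutation probability
    y = x.copy()
    for i in range(len(x)):
        if np.random.rand() <= pm:
            u = np.random.rand()
            if u < 0.5:                                       # perturbation factor delta
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            y[i] = np.clip(x[i] + delta * (hi[i] - lo[i]), lo[i], hi[i])
    return y

# toy usage: mutate a (speed, temperature) vector at generation 50 of 200
child = dynamic_polynomial_mutation(np.array([65.0, -19.0]),
                                    lo=np.array([40.0, -25.0]),
                                    hi=np.array([90.0, -15.0]),
                                    gen=50, max_gen=200)
```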
3.5. Overall Flow Design of the Improved NSGA-III Algorithm
The core design philosophy of the INSGA-III algorithm lies in constructing a phased, collaborative optimization framework with distinct responsibilities for each component, systematically addressing the bottlenecks faced by traditional algorithms in solving the hybrid fleet routing problem for cold chain logistics. Its innovation does not lie in inventing entirely new operators but rather in a deeply customized integration tailored to the problem characteristics: first, leveraging the guided search capability of PSO to provide a high-quality, high-diversity starting point for evolution, overcoming the blindness of random initialization at its source; second, endowing genetic operators with adaptability, enabling them to intelligently switch between exploration and exploitation based on the search state, thereby achieving fine-grained exploration of the complex solution space; finally, seamlessly embedding the above mechanisms into the NSGA-III framework of reference point-based elite selection, ensuring the search process consistently progresses toward a converged and well-distributed Pareto front. This systematic collaboration of “high-quality initialization, intelligent adaptive search, and stable elite guidance” is the key to achieving a breakthrough in overall performance.
The main procedure of INSGA-III operates through iterative evolution, with its core steps and module invocations as detailed in Algorithm 4. The specific implementation details of all critical operations will be elaborated in
Section 3.6.
| Algorithm 4: Overall Procedure of the Improved Hybrid NSGA-III Algorithm. |
| Line Number | Pseudocode | Improvement |
| 1 | Input: gen_max, pop_size, fname, V, M | |
| 2 | Output: Pareto, | |
| 3 | | |
| 4 | //Phase 1: Hybrid Initialization | Improvement 1 |
| 5 | P_elite ← calpso(cs, pop_size) //Generate elite solutions via PSO | |
| 6 | P_unique ← remove_duplicates(P_elite) //Remove duplicate solutions | |
| 7 | P_random←*generate_random_individuals(pop_size-size(P_unique, 1))* | |
| 8 | population ← [P_unique; P_random] //Combine elite and random solutions | |
| 9 | population ← evaluate(population, fname) //Evaluate initial population | |
| 10 | [population, front] ← non_dominated_sort(population) //Non-dominated sorting | |
| 11 | | |
| 12 | for gen = 1 to gen_max do //Main loop begins | |
| 13 | //Phase 2: Reproduction and Evolution | |
| 14 | parent_selected ← tournament_selection(population) //Tournament selection | |
| 15 |
child_offspring ← genetic_operator(parent_selected) //Generate offspring
| Improvements 2 & 3 |
| 16 | child_offspring[:,IntVar] ← round(child_offspring[:,IntVar]) //Integer repair | |
| 17 | child_offspring ← evaluate(child_offspring, fname) //Evaluate offspring
| |
| 18 | | |
| 19 | //Phase 3: Environmental Selection | |
| 20 | population_inter ← [population; child_offspring] //Merge parent and offspring
| |
| 21 | [population_inter_sorted, front] ←non_dominated_sort(population_inter) | |
| 22 | new_population ← replacement(population_inter_sorted, front) //Generate new population | |
| 23 | population ← new_population | |
| 24 | end for | |
| 25 | | |
| 26 | //Phase 4: Result Output | |
| 27 | Pareto ← population(population[:,V*+*M*+2) == == 1, :) //Extract Rank-1 solutions | |
| 28 | return Pareto | |
3.6. Core Implementation Mechanisms
To ensure the robustness and reproducibility of the INSGA-III algorithm, this section elucidates two core mechanisms: the Infeasible Individual Repair Strategy and the Adaptive Parameter Calibration Process.
Infeasible Individual Repair Strategy. During the evolutionary process, newly generated individuals may simultaneously violate multiple hard constraints of the routing problem. This algorithm employs a hierarchical, sequential repair heuristic that prioritizes restoring basic feasibility constraints before optimizing higher-level objectives. The specific steps are as follows:
Step 1: Load Capacity Constraint Repair. Check: Traverse each route and calculate its total demand; if it exceeds the vehicle's maximum load capacity, the route is marked as infeasible. Repair: Apply the "farthest customer removal–nearest feasible insertion" rule. Specifically, the customer farthest from the depot within the infeasible route is temporarily removed and then inserted into the feasible position with the minimum insertion cost among the other routes of the current solution. This process is repeated until the load capacity constraints of all vehicles are satisfied. This step ensures the fundamental physical feasibility of the solution.
Step 2: Energy Constraint Repair. Check: For electric vehicle routes, simulate energy consumption according to Equation (18). If the battery level falls below zero before reaching any node, the route is marked as infeasible. Repair: Before the segment where the energy first becomes negative, insert the charging station or depot node closest to the vehicle's current position. After insertion, reset the vehicle's charging decision variable and charging amount, and recalculate the subsequent energy levels. This step ensures the continuous operational capability of electric vehicles.
Step 3: Time Window Constraint Softening and Local Optimization. After undergoing the repairs in the previous two steps, an individual is structurally feasible in terms of route sequence, but may still incur substantial time window penalty costs. Optimization: For such individuals, no further structural repair is applied. Instead, a fast local search is conducted to directly minimize the time window penalty cost, thereby fine-tuning the solution. This approach ensures that the repair process contributes not only to feasibility but also to the optimization of the objective function.
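The load-capacity repair of Step 1 can be sketched as follows (illustrative Python, assuming the depot is node 0, a precomputed distance matrix, and that every single customer demand fits within the vehicle capacity; the energy and time window repairs of Steps 2 and 3 are omitted).

```python
def repair_load(routes, demand, dist, capacity):
    """Step-1 load repair: remove the customer farthest from the depot (node 0)
    from every overloaded route and reinsert it at the cheapest feasible
    position of another route. `routes` is a list of customer-ID lists."""
    def load(r):
        return sum(demand[c] for c in r)

    for r in routes:
        while load(r) > capacity and r:
            c = max(r, key=lambda j: dist[0][j])              # farthest customer from depot
            r.remove(c)
            best = None                                       # (extra distance, route, position)
            for r2 in routes:
                if r2 is r or load(r2) + demand[c] > capacity:
                    continue
                path = [0] + r2 + [0]
                for pos in range(1, len(path)):
                    a, b = path[pos - 1], path[pos]
                    extra = dist[a][c] + dist[c][b] - dist[a][b]
                    if best is None or extra < best[0]:
                        best = (extra, r2, pos - 1)
            if best is None:
                routes.append([c])                            # open a new route as a fallback
            else:
                best[1].insert(best[2], c)
    return routes

# toy usage: depot = node 0, symmetric distance matrix, 4 t capacity
dist = [[0, 2, 3, 4], [2, 0, 2, 3], [3, 2, 0, 2], [4, 3, 2, 0]]
demand = {1: 2, 2: 3, 3: 2}
routes = repair_load([[1, 2, 3]], demand, dist, capacity=4.0)
```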
Adaptive Parameter Calibration Process. In INSGA-III, the distribution index $\eta_c$ for SBX and the distribution index $\eta_m$ for polynomial mutation are not randomly assigned or kept constant. Instead, they are adaptively calibrated through a feedback loop based on historical successful experiences. Archive of Successful Experiences: At the end of each generation $t$, the algorithm archives all offspring individuals that successfully advance to the next generation, along with their corresponding crossover and mutation parameter values $(\eta_c, \eta_m)$, into a temporary repository. Learning Parameter Distributions: From this repository, the algorithm computes the empirical distribution of the successful parameter values for the current generation. Parameter Generation for the Next Generation: When generating the crossover and mutation parameters for generation $t+1$, the algorithm samples from a normal distribution centered around the mean $\mu_t$ of the historical successful parameters, with a standard deviation $\sigma_t$ reflecting their dispersion. Specifically:
$$\eta^{t+1} \sim N\!\left(\mu_t, \sigma_t^{2}\right).$$
This mechanism enables the algorithm to autonomously learn and progressively converge toward parameter settings that exhibit superior performance within the contemporary search environment. During the initial stages of the search, successful parameters may be widely dispersed, allowing the algorithm to explore a broad solution space. As evolution converges, the distribution of successful parameters becomes increasingly concentrated and stable, guiding the algorithm into a phase of localized, fine-tuned exploitation. This provides a clear and mathematically traceable interpretation of the concept of “adaptiveness” within the algorithm’s operational framework.
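A compact Python sketch of this calibration rule is given below, assuming the sampled distribution indices are clipped to a plausible range and that a default value is used when too few successful offspring are available; these safeguards are assumptions rather than details stated in the text.

```python
import numpy as np

def calibrate_parameter(successful_values, lo=1.0, hi=30.0, default=15.0):
    """Sample a next-generation distribution index (eta_c or eta_m) from a
    normal distribution fitted to the parameter values of offspring that
    survived environmental selection in the previous generation."""
    vals = np.asarray(successful_values, dtype=float)
    if vals.size < 2:                      # too little evidence: fall back to a default
        return default
    mu, sigma = vals.mean(), max(vals.std(), 1e-3)
    return float(np.clip(np.random.normal(mu, sigma), lo, hi))

# toy usage: eta_c values attached to offspring that entered the next generation
next_eta_c = calibrate_parameter([12.4, 14.8, 13.1, 15.6])
```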
3.7. Analysis of Computational Complexity and Scalability
- (1)
Computational Complexity Analysis
Let the population size be $N$, the number of objective functions be $M$, the maximum number of evolutionary generations be $G$, the number of PSO initialization iterations be $T_{\mathrm{PSO}}$, the number of customer points be $n$, and the complexity of a single individual evaluation be $O(f)$. PSO Initialization Phase: the complexity is $O(T_{\mathrm{PSO}} \cdot N \cdot f)$, representing a one-time, fixed upfront overhead. NSGA-III Main Loop (per generation): this primarily includes offspring evaluation $O(N \cdot f)$, non-dominated sorting and reference point association $O(M N^{2})$, and genetic operations and repair $O(N \cdot n)$. The per-generation complexity is dominated by $O(M N^{2} + N \cdot f)$. Overall Evolutionary Process: the overall complexity of the evolutionary process is $O\!\left(T_{\mathrm{PSO}} \cdot N \cdot f + G \cdot (M N^{2} + N \cdot f)\right)$.
- (2)
Scalability Experiments and Computational Efficiency Analysis.
To assess the algorithm’s potential for handling large-scale problems, we generated larger test sets with customer point scales of 100, 200, and 500 based on benchmark instances such as C101, using replication and perturbation methods.
- (3)
Discussion on Scalability Strategies for Industrial Application
Cost–Benefit Trade-off: The overhead of PSO initialization, $O(T_{\mathrm{PSO}} \cdot N \cdot f)$, provides a high-quality initial population which can significantly reduce the total number of convergence generations $G$. This yields a net benefit in terms of either overall runtime or solution quality.
Performance and Efficiency Summary: As shown in
Table 2, when the problem scale expands to 500 customer points, the HV value of INSGA-III decreases by only 8.1%, demonstrating good scale robustness. The runtime growth trend lies between linear and quadratic. Processing a 500-point problem requires approximately 20 min, establishing feasibility for offline or daily planning.
Path to Industrial-Grade Application: When the number of customer points $n$ becomes extremely large, the evaluation cost $O(N \cdot f)$ becomes the primary bottleneck. The algorithmic framework proposed in this paper can be further enhanced for scalability through the following strategies:
- (1)
Parallelization: Population individual evaluation is a highly parallelizable task. Implementation on multi-core CPUs or GPUs can theoretically reduce evaluation time by nearly a factor equal to the number of parallel processing units (a minimal parallel-evaluation sketch follows this list).
- (2)
Approximate Evaluation: Employ surrogate models for rapid fitness estimation in the early evolutionary stages, reserving precise evaluation only for later stages or critical individuals.
- (3)
Hierarchical Optimization: First, cluster customer points into partitions, then perform detailed path optimization within each partition.
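As referenced in strategy (1), a minimal parallel-evaluation sketch using Python's multiprocessing module is shown below; the evaluate function is a placeholder objective, not the paper's cost/emission/satisfaction model.

```python
from multiprocessing import Pool

def evaluate(individual):
    """Placeholder objective evaluation returning (cost, emission, 1 - satisfaction)."""
    cost = sum(x * x for x in individual)
    emission = sum(abs(x) for x in individual)
    return cost, emission, 1.0 - 1.0 / (1.0 + cost)

if __name__ == "__main__":
    population = [[0.1 * i, 0.2 * i, 0.3 * i] for i in range(1, 101)]
    with Pool() as pool:                      # evaluate all individuals in parallel
        objectives = pool.map(evaluate, population)
    print(objectives[:3])
```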
4. Computational Results and Analysis
4.1. Experimental Setup and Performance Metrics
To ensure a comprehensive and impartial evaluation of the performance of the improved NSGA-III algorithm, this section elaborates on the computational environment, benchmark test problems, parameter configurations, and quantitative performance metrics employed in the experiments.
4.1.1. Computational Environment, Test Instances, and Parameter Adaptation
All experiments were conducted on a unified platform (Windows 11, 2.4 GHz CPU, 16 GB RAM) and implemented using MATLAB R2019b.
- (1)
Benchmark Instance Selection and Adaptation Method
To ensure a fair evaluation and align with the cold chain distribution scenario, this study selects the C1-class instances (C101–C105) from the Solomon benchmark dataset, characterized by clustered customer point distributions, as the source of geographic coordinates and basic demand data. To adapt these instances for the green vehicle routing problem with a mixed fleet in cold chain logistics, we enhance the original instances following a set of transparent rules based on industry data and engineering practice: Temperature Control and Cargo Damage Parameters: Each customer point is randomly assigned a product temperature tier (frozen, refrigerated, or ambient), based on which the target temperature, thermal sensitivity coefficient, and cargo damage model are set. Mixed Fleet Parameters: An initial fleet electrification ratio (30–50%) is defined. For electric vehicles, battery capacity (100 kWh), unit energy consumption (0.25 kWh/km), and charging rate are configured; for fuel-powered vehicles, fuel consumption per 100 km (14 L) and refrigeration energy consumption coefficients are set. The load capacity for both vehicle types is 4 tons. Energy Refueling Network: Charging/fueling station nodes are randomly generated outside the distribution center in proportion to the problem scale, forming an energy refueling network. Dynamic Cost and Time Calculation: Based on the generated parameters, vehicle load, travel distance, and traffic coefficients, the energy consumption, time, and various costs for each route segment are dynamically calculated.
This method ensures that all compared algorithms are fairly tested on the same set of enhanced benchmark instances that closely reflect real-world conditions.
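The adaptation rules can be mimicked with a short script of the following form (illustrative Python; the depot index, tier target temperatures, fleet sizing rule, and random seed are assumptions, while the battery capacity, unit energy consumption, fuel consumption, and load capacity follow the values stated above).

```python
import numpy as np

rng = np.random.default_rng(42)

def adapt_instance(coords, demands, ev_ratio=0.4, n_stations_per_10=1):
    """Augment a Solomon-style instance (depot at index 0) with cold-chain
    and mixed-fleet attributes: temperature tiers, an EV/ICE fleet, and
    randomly scattered charging/fueling stations."""
    n = len(coords)
    tiers = rng.choice(["frozen", "refrigerated", "ambient"], size=n - 1)
    target_temp = {"frozen": -18.0, "refrigerated": 4.0, "ambient": 20.0}
    customers = [{"coord": coords[i + 1], "demand": demands[i + 1],
                  "tier": t, "target_temp": target_temp[t]}
                 for i, t in enumerate(tiers)]

    n_vehicles = max(2, (n - 1) // 10)                        # assumed sizing rule
    n_ev = round(ev_ratio * n_vehicles)
    fleet = ([{"type": "EV", "battery_kwh": 100.0, "kwh_per_km": 0.25,
               "capacity_t": 4.0}] * n_ev +
             [{"type": "ICE", "l_per_100km": 14.0, "capacity_t": 4.0}]
             * (n_vehicles - n_ev))

    # charging/fueling stations scattered across the service area, scaled to problem size
    n_st = max(1, (n - 1) // 10 * n_stations_per_10)
    lo, hi = np.min(coords, axis=0), np.max(coords, axis=0)
    stations = rng.uniform(lo, hi, (n_st, 2))
    return customers, fleet, stations

# toy usage with 11 fabricated points (index 0 = depot)
coords = rng.uniform(0, 100, (11, 2))
demands = rng.integers(1, 4, 11)
customers, fleet, stations = adapt_instance(coords, demands)
```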
- (2)
Scenario and Operational Parameter Settings
Customer Point Scale: Three levels are set: 20, 50, and 100. Time Windows: Two types are defined: strict (45 min) and relaxed (90 min). Cost Parameters: Parameters are integrated as follows: diesel price (7.5 CNY/L), industrial electricity price (0.8 CNY/kWh), vehicle fixed cost (300–350 CNY/day), driver labor cost (2 CNY/minute), and late delivery penalty.
4.1.2. Algorithm Parameters, Comparative Settings, and Fairness Assurance
- (1)
Algorithm Parameters
INSGA-III parameters: Population size 100, maximum iterations 200, crossover probability 0.8. An adaptive mutation probability related to the problem scale and a reference-point division scheme are adopted.
Comparative algorithms: Standard NSGA-III, NSGA-II, MOEA/D, and the single-objective Genetic Algorithm (GA) are selected for comprehensive comparison.
- (2)
Experimental Fairness Assurance
To eliminate performance differences arising from auxiliary mechanisms, the following principles are strictly enforced:
Unified Constraint Handling: The hierarchical repair strategy proposed in this paper is implemented as an independent, shared module. All comparative algorithms are mandatorily required to invoke this module for feasibility repair after generating new solutions. Unified Operating Environment: All algorithms are executed under identical hardware, software, test instances, and maximum number of function evaluations (termination criterion). Coordinated Parameter Baseline: The baseline distribution indices for common genetic operators (e.g., SBX, polynomial mutation) are set identically across all multi-objective algorithms. The “adaptive” nature of INSGA-III is reflected in its dynamic adjustments around this common baseline.
4.1.3. Performance Evaluation Metrics
To quantitatively evaluate the comprehensive performance of the algorithms, this study employs two widely adopted performance metrics in the field of multi-objective optimization: Hypervolume (HV): This metric measures the volume in the objective space enclosed by the solution set and a predefined reference point. It simultaneously reflects both the convergence and diversity of the solution set. A larger HV value indicates better overall performance. Inverted Generational Distance (IGD): This metric calculates the average distance from reference points on the true Pareto front to the solution set obtained by the algorithm. It effectively assesses the convergence accuracy of the solution set. A smaller IGD value indicates better convergence performance. Furthermore, all experiments were independently executed 30 times to mitigate the effects of randomness. The Wilcoxon rank-sum test was applied at a significance level of 0.05 to conduct statistical significance analysis on the results, thereby scientifically validating whether performance differences are statistically significant.
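For reproducibility, the IGD metric and the Wilcoxon rank-sum test can be computed as in the following sketch (HV is omitted here because it typically relies on a dedicated multi-objective library); the reference front and the two sets of 30 runs are fabricated for demonstration only.

```python
import numpy as np
from scipy.stats import ranksums

def igd(reference_front, obtained_set):
    """Inverted Generational Distance: mean distance from each point of the
    (approximated) true Pareto front to its nearest obtained solution."""
    ref = np.asarray(reference_front, float)
    obt = np.asarray(obtained_set, float)
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return d.min(axis=1).mean()

# toy usage: 30 independent runs of two algorithms compared by the Wilcoxon rank-sum test
ref_front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
rng = np.random.default_rng(0)
igd_a = [igd(ref_front, rng.random((50, 2)) * 0.6) for _ in range(30)]
igd_b = [igd(ref_front, rng.random((50, 2)) * 1.0) for _ in range(30)]
stat, p_value = ranksums(igd_a, igd_b)
significant = p_value < 0.05                # significance level used in this study
```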
4.2. Ablation Study and Analysis of Algorithm Components
To precisely evaluate the independent contributions of the three core components proposed in this paper—PSO-based initialization (Improvement 1), adaptive SBX (Improvement 2), and dynamic polynomial mutation (Improvement 3)—this section designs a systematic ablation experiment. We constructed four comparative algorithm variants: Variant-A (Baseline): Standard NSGA-III, employing random initialization, fixed-parameter SBX, and polynomial mutation; Variant-B (Improvement 1 only): Based on Variant-A, introducing the PSO-based hybrid initialization strategy; Variant-C (Improvements 1 + 2): Based on Variant-B, introducing the adaptive SBX operator; Variant-D (the complete algorithm of this paper, INSGA-III): Based on Variant-C, introducing the dynamic polynomial mutation operator.
All variants are built upon the same foundational framework and employ identical parameter tuning rules and constraint-handling mechanisms to ensure the fairness of the comparison. We executed each algorithm on multiple standard instances, including C101, and recorded key performance metrics after convergence: Hypervolume (HV), Inverted Generational Distance (IGD), and the average number of generations to convergence.
- (1)
Quantitative Decomposition of Component Contributions
Based on
Table 3, the marginal contribution of each newly added component can be clearly quantified:
Contribution of PSO Hybrid Initialization (Improvement 1): It provides the most significant initial performance leap. Compared to the baseline, HV improves by 16.8%, and the number of convergence generations decreases by 20.4%. This indicates that a high-quality initial population directly determines the “starting height” of the search and is the most critical factor in accelerating overall convergence.
Contribution of Adaptive SBX (Improvement 2): Building upon the high-quality initial population, it further enhances the convergence and distribution of the solution set (HV increases by an additional 9.9%) and significantly accelerates mid-term convergence speed (the number of convergence generations further decreases by 19.0%). This demonstrates the effectiveness of its mechanism for dynamically adjusting search behavior based on population status.
Contribution of Dynamic Polynomial Mutation (Improvement 3): Based on the preceding components, it primarily optimizes the distribution uniformity and extensibility of the Pareto front (IGD improves substantially by 17.5%) and plays a key role in fine-grained exploitation during later stages (enabling the algorithm to stabilize at the optimal front more rapidly).
- (2)
Analysis of Synergistic Effects Among Components
The experimental results reveal significant “1 + 1 > 2” synergistic effects among the components:
The adaptive crossover operator heavily relies on the high-quality and diverse initial population provided by PSO initialization to effectively exert its “local exploitation” capability.
The dynamic mutation operator assumes a complementary role of “exploring unknown regions” and “fine-tuning continuous variables” after the crossover operator completes the primary structural search.
Together, they form a complete and synergistic search chain: “High-Quality Starting Point → Intelligent Structural Search → Fine-Tuned Exploration.” Each component is indispensable. The performance achieved by enhancing any single component falls short of that of the complete INSGA-III.
4.3. Comparative Analysis of Algorithm Performance
To comprehensively evaluate the integrated performance of the improved NSGA-III proposed in this paper, this section systematically compares it with NSGA-III, NSGA-II, MOEA/D, and GA across four dimensions: quantitative metrics, statistical significance, visual comparison of solution set distributions, and convergence speed. Due to space limitations, this paper selects the most representative C101 instance to present the detailed analysis process. All comparative algorithms exhibit consistent trends across other C1-type instances, which fully demonstrates the broad effectiveness of the proposed improvement strategy. Therefore, the algorithmic performance comparison is primarily based on the C101 test case.
4.3.1. Quantitative Comparison of Overall Performance
To quantitatively evaluate the comprehensive performance of each algorithm, this study employs Hypervolume (HV) and Inverted Generational Distance (IGD) as core metrics, with statistical analysis conducted on the C101 instance.
Table 4 presents the mean performance comparison of each algorithm on the representative C101 instance, covering five key objectives: distribution cost, customer satisfaction, carbon emissions, route length, and average runtime.
Based on the quantitative results, the following conclusions can be drawn:
- (1)
The algorithm demonstrates outstanding capability in collaborative optimization across all objectives: The proposed INSGA-III algorithm exhibits comprehensive and consistent superiority across the five key objectives—distribution cost, customer satisfaction, carbon emissions, route length, and average runtime. Specifically, compared to the GA, which performed the weakest, INSGA-III reduces distribution costs by approximately 12.46%, cuts carbon emissions by about 49.3%, improves customer satisfaction by around 23.19%, and optimizes route length by roughly 76.7%, while simultaneously shortening the average runtime by 35.4%, significantly enhancing solution efficiency. Even when compared to the relatively well-performing NSGA-III, INSGA-III still achieves further improvements in customer satisfaction (a 0.34% increase), carbon emissions (a 3.27% reduction), and route length (a 3.27% reduction), alongside an 11.5% reduction in average runtime, highlighting its computational efficiency advantage. These results validate that the algorithm effectively guides the search process toward discovering more balanced and comprehensive Pareto-optimal solutions, avoiding entrapment in local optima or imbalanced trade-offs among objectives.
- (2)
Comprehensive Superiority in Multi-objective Evaluation Metrics: In terms of the two standard multi-objective evaluation metrics, HV and IGD, INSGA-III achieves the highest HV value of 0.724 and the lowest IGD value of 0.052, significantly outperforming all benchmark algorithms. This not only theoretically validates that the obtained solution set exhibits superior convergence—closer proximity to the true Pareto front—but also demonstrates more desirable distribution diversity, indicating a more uniform and extensive coverage of the solution set across the objective space.
- (3)
Algorithm Performance Gradient Aligns with Expected Evolutionary Trajectory: A clear performance gradient is observable from the distribution of HV and IGD values: INSGA-III > NSGA-III > NSGA-II > MOEA/D > GA, indicating a progressive improvement in algorithmic performance aligned with the evolution of algorithmic paradigms. Compared to early methods such as the conventional GA, INSGA-III achieves an approximately 153% improvement in the HV metric and an 84.74% reduction in the IGD metric. Even relative to the state-of-the-art NSGA-III, INSGA-III demonstrates a further 13.46% increase in HV and a 41.57% reduction in IGD. This underscores that the proposed improvements effectively unlock additional performance potential beyond existing advanced frameworks, enabling more refined search and convergence control.
4.3.2. Statistical Significance Test Analysis
To scientifically validate the statistical significance of the performance advantages of the improved NSGA-III algorithm, this study conducted 30 independent repeated experiments across multiple problem instances and analyzed the results using the Wilcoxon rank-sum test (significance level = 0.05). This test is employed to determine whether the performance differences between two algorithms are statistically significant and not attributable to random factors.
- (1)
Statistical Significance of Performance Superiority: As shown in
Table 5, the
p-values for INSGA-III compared to all benchmark algorithms (NSGA-III, NSGA-II, MOEA/D, and GA) are less than 0.05. This provides strong statistical evidence that the performance improvement of INSGA-III in terms of the HV metric is statistically significant, representing a deterministic advantage attributable to its enhancement mechanisms.
- (2)
Generality and Robustness of the Advantage: INSGA-III demonstrates statistically significant superiority over all benchmark algorithms. The advantage is most pronounced compared to GA (p < 1 × 10⁻¹⁰), while the difference, though still significant, is relatively smaller compared to MOEA/D (p = 0.038). This indicates that the performance advantage of INSGA-III is generalizable and robust, consistently outperforming various types of benchmark algorithms.
- (3)
Mutual Corroboration with Quantitative Results: The outcomes of these significance tests are fully consistent with the mean HV and IGD values and the multi-objective optimization results presented in
Section 4.3.1. The statistical tests not only confirm the reality of the performance differences but also elevate the apparent numerical advantages to a rigorous statistical level, thereby substantially strengthening the credibility and robustness of the study's conclusions.
- (4)
Validation of Algorithmic Enhancement Effectiveness: The statistically significant superiority of INSGA-III over the original NSGA-III validates the effectiveness of the adopted improvement strategies. While maintaining its multi-objective optimization capability, the algorithm significantly enhances the quality of the obtained solution sets, thereby providing a more effective solution for complex optimization problems.
4.3.3. Visual Comparison of Pareto Solution Set Distributions
To visually assess and compare the convergence and distribution of solution sets obtained by different algorithms, this section begins with a visual analysis of the Pareto front for a representative instance (C101).
Figure 2 and
Figure 3 respectively illustrate the final solution set distributions of the INSGA-III, NSGA-III, and MOEA/D algorithms in the two-dimensional “cost-satisfaction” plane and the three-dimensional “cost-satisfaction-carbon emission” space.
- (1)
Qualitative Visual Analysis of Solution Set Distribution
Through a direct comparison of
Figure 2 and Figure 3, the following qualitative conclusions can be drawn:
Convergence Advantage: As shown in
Figure 2, the “front” formed by the solution set obtained by INSGA-III is closest to the lower-left corner of the coordinate plot (i.e., the ideal region of lower cost and higher satisfaction), indicating that it possesses the strongest convergence capability and can discover higher-quality solutions closer to the true Pareto front.
Diversity Advantage: While achieving excellent convergence, as illustrated in
Figure 3, the solution points of INSGA-III are distributed more widely and uniformly in the three-dimensional objective space, forming a complete and smooth trade-off surface. In contrast, the solution sets of the comparative algorithms (particularly MOEA/D) exhibit clustered aggregation, indicating insufficient diversity.
Corroboration of Comprehensive Performance: This distribution pattern of being “closer, broader, and more uniform” is a direct manifestation of the algorithm achieving a better balance between exploration and exploitation, and is fully consistent with the quantitative result in
Section 4.3.1, where INSGA-III achieved the highest HV value.
- (2)
Quantitative Analysis for Management Decision-Making
To move beyond mere graphical representation and provide decision-makers with quantifiable bases for trade-offs directly applicable to practice, this study further extracts key decision insights from the high-quality Pareto solution set generated by INSGA-III. We define and analyze the following two core marginal trade-off rate indicators:
Marginal Cost of Emission Reduction (MCCR): Measures the additional cost required to achieve a unit reduction in carbon emissions. By analyzing the Pareto solution set, we find that when a company is willing to accept an increase in total distribution cost of approximately 5%, it can achieve a carbon emission reduction of about 15–18% compared to the lowest-cost solution. However, if pursuing a deep reduction exceeding 25%, the marginal cost rises sharply, potentially increasing costs by over 15%.
Marginal Cost of Satisfaction Improvement and Its Emission Effect (MCCS): Measures the cost required to improve per-unit customer satisfaction and the associated change in emissions. The analysis indicates that within the satisfaction range from 84% to 87%, each percentage point increase in satisfaction requires an average cost increase of about 2%, while carbon emissions can concurrently decrease slightly (due to more efficient scheduling). However, when the satisfaction requirement exceeds 88%, entering the “premium service” range, the marginal increases in both cost and carbon emissions rise substantially.
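The marginal trade-off rates can be extracted from a Pareto set as in the sketch below; the three-column (cost, satisfaction, emission) layout and the toy numbers are illustrative, not the study's actual solution data.

```python
import numpy as np

def marginal_cost_of_reduction(pareto, cost_col=0, emis_col=2):
    """Estimate the extra cost (%) required per percentage point of emission
    reduction along a Pareto set, relative to the lowest-cost solution."""
    p = np.asarray(pareto, float)
    base = p[p[:, cost_col].argmin()]                          # lowest-cost solution
    d_cost = (p[:, cost_col] - base[cost_col]) / base[cost_col] * 100.0
    d_emis = (base[emis_col] - p[:, emis_col]) / base[emis_col] * 100.0
    mask = d_emis > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        mccr = np.where(mask, d_cost / d_emis, np.nan)         # cost % per emission %
    return d_cost, d_emis, mccr

# toy usage with a fabricated (cost, satisfaction, emission) Pareto set
pareto = np.array([[1000, 0.84, 200], [1050, 0.86, 170], [1150, 0.88, 150]])
d_cost, d_emis, mccr = marginal_cost_of_reduction(pareto)
```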
4.3.4. Convergence Speed Comparison
To dynamically evaluate the search efficiency of each algorithm, a comparative analysis of convergence speed was conducted by plotting the curves of the core performance metric (Hypervolume, HV) against the number of evolutionary generations.
Figure 4 illustrates the trend of the average HV value with respect to the iteration count for each algorithm on the representative C101 instance.
Analysis of the convergence curves yields the following conclusions:
- (1)
Advantage in Initial Population Quality: From the very beginning of the iterations, it can be observed that the HV value of the proposed INSGA-III algorithm is significantly higher than those of NSGA-III and MOEA/D. This directly validates the effectiveness of Improvement 1 (the hybrid PSO initialization strategy), demonstrating that this strategy successfully provides the evolutionary algorithm with a high-quality and highly diverse initial population, thereby laying a solid foundation for rapid convergence.
- (2)
Rapid Convergence Capability: In the early stages of evolution, the HV curve of INSGA-III exhibits the steepest slope and the most rapid ascent, indicating its strongest capability for fast convergence. This is attributed to Improvement 2 (adaptive SBX), which enhances the algorithm’s local search ability during the initial evolutionary phase, enabling it to efficiently leverage the high-quality initial population and quickly approach the Pareto front.
- (3)
Stable and Refined Search Capability: During the middle and late stages of evolution, the curve of INSGA-III stabilizes and converges to a higher HV plateau, whereas other algorithms tend to stagnate prematurely or converge at lower levels. This demonstrates the overall robustness and sustained optimization capability of the proposed algorithm. Specifically, Improvement 3 (dynamic polynomial mutation) effectively balances exploration and exploitation, mitigating premature convergence and enabling more refined search in later stages, thereby yielding a final solution set with superior comprehensive quality.
4.3.5. Results Discussion and Attribution Analysis
Integrating the aforementioned quantitative comparisons, statistical tests, and visualization analyses, it can be confirmed that the improved NSGA-III algorithm proposed in this paper significantly outperforms mainstream benchmark algorithms in terms of comprehensive performance, convergence speed, and solution set quality for solving the cold chain logistics distribution routing problem. This section aims to delve into the underlying mechanisms responsible for this systematic advantage, linking the preceding experimental results with the algorithmic innovations presented herein to complete the logical progression from “what” to “why.”
The performance improvement can be primarily attributed to the synergistic effects of the following three mechanisms:
- (1)
High-Starting-Point Search: Hybrid Initialization Strategy Establishes Global Advantage.
Experimental data indicate that INSGA-III begins the evolutionary process with a significantly higher Hypervolume (HV) value (see
Figure 4). The average HV value of its initial population is approximately 15–20% higher than that of benchmark algorithms, with statistically significant differences (
p < 1 × 10⁻¹⁰). This advantage directly originates from Improvement 1 (the PSO-based hybrid initialization). The effectiveness of this strategy is rooted in the characteristics of the cold chain routing problem: route selection involves discrete decisions, while vehicle speed and temperature control involve continuous decisions, forming a complex hybrid search space. Traditional random initialization distributes solutions sparsely and with low quality in this space. The guided search of PSO leverages historical optimal information to rapidly generate a set of “elite solutions” that simultaneously satisfy route feasibility and temperature control/energy consumption constraints as the initial population. This not only avoids ineffective searches from the outset but also positions the algorithm’s starting point directly on a “high ground” close to the true Pareto front, significantly shortening the convergence path (as demonstrated by the rapid ascent in the first 50 generations in
Figure 4).
- (2)
Intelligent Search: Adaptive Genetic Operators Enable Fine-Grained Exploration.
Improvement 2 (adaptive SBX) and Improvement 3 (dynamic polynomial mutation) together constitute the intelligent search engine of the algorithm. The core of the adaptive crossover operator lies in its ability to perceive the population state. When population diversity is high, the operator adopts a larger distribution index, emphasizing local exploitation to search for improved route solutions near existing high-quality solutions; when diversity decreases, it reduces the distribution index to promote global exploration, helping the population escape local optima. This dynamic adjustment is the key to the algorithm’s precise balance between “exploration” and “exploitation,” enabling efficient discovery of high-quality solutions even under complex constraints. The dynamic mutation operator, in turn, plays a dual role as both an “explorer” and a “fine-tuner”: in the early stages, a high mutation rate ensures thorough exploration of the solution space, while in later stages a low mutation rate allows fine adjustments to continuous variables such as speed and departure time on the established high-quality route framework, thereby optimizing cost and timeliness. This flexible search strategy is the direct reason why the algorithm can quickly and accurately identify low-cost, low-emission route solutions (as evidenced by the significant reductions in cost and carbon emissions shown in Table 2). Moreover, its dynamic adjustment strategy aligns with the trend observed in the convergence curve: rapid ascent in the early stages, stabilizing at a high level in the later stages. Figure 5 adds boxplots of the end-of-generation HV distributions for each algorithm, showing that INSGA-III not only achieves a higher mean HV value but also exhibits a more concentrated distribution and superior stability. This alignment effectively balances the inherent trade-off between exploration and exploitation.
- (3)
Framework Synergy: The Elite Preservation and Guidance Mechanism of NSGA-III Ensures Final Solution Quality.
The aforementioned improvement modules are embedded within the reference point-based environmental selection framework of NSGA-III. This selection mechanism is particularly crucial for multi-objective cold chain logistics problems, which require simultaneous optimization of cost, carbon emissions, and satisfaction, objectives that often conflict. Reference point-based selection not only ensures the retention of non-dominated solutions (elite preservation) but, more importantly, maintains a broad distribution of the population in the objective space. This guarantees that the final solution set encompasses a variety of trade-off strategies, from “cost-first” to “green-first” approaches (as illustrated in Figure 2 and Figure 3), providing decision-makers with comprehensive options and explaining why the solution set achieves both high quality and high diversity. Through non-dominated sorting and reference point-based selection, the framework preserves elite solutions in each generation and guides the population toward a uniformly distributed Pareto front, which constitutes the fundamental institutional guarantee for INSGA-III’s ability to maintain rapid convergence while still achieving excellent diversity in its solution set (as demonstrated by the wide and uniform distribution shown in Figure 2 and Figure 3).
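To make the reference-point mechanism concrete, the sketch below generates the structured Das-Dennis reference points used by NSGA-III-style selection and associates each normalized objective vector with its nearest reference direction. It is a simplified illustration of the association step only; ideal/nadir normalization, niching, and the full environmental selection are omitted, and the random objective values are placeholders.

```python
import numpy as np
from itertools import combinations

def das_dennis_points(n_obj, divisions):
    """Structured reference points on the unit simplex (Das-Dennis)."""
    points = []
    for bars in combinations(range(divisions + n_obj - 1), n_obj - 1):
        prev, coords = -1, []
        for b in bars:
            coords.append(b - prev - 1)
            prev = b
        coords.append(divisions + n_obj - 2 - prev)
        points.append(np.array(coords) / divisions)
    return np.array(points)

def associate(normalized_objs, ref_points):
    """Assign each (minimized, normalized) objective vector to the reference
    direction with the smallest perpendicular distance."""
    dirs = ref_points / np.linalg.norm(ref_points, axis=1, keepdims=True)
    proj = normalized_objs @ dirs.T                              # scalar projections
    dists = np.linalg.norm(normalized_objs[:, None, :]
                           - proj[:, :, None] * dirs[None, :, :], axis=2)
    return dists.argmin(axis=1)

refs = das_dennis_points(n_obj=3, divisions=12)                  # 91 points for 3 objectives
objs = np.random.default_rng(2).random((50, 3))                  # stand-in normalized objectives
niche_count = np.bincount(associate(objs, refs), minlength=len(refs))
```

The niche counts computed this way are what allow the selection step to favor solutions attached to under-represented reference directions, which is the source of the even spread across the Pareto front discussed above.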
In summary, the INSGA-III algorithm proposed in this paper addresses the complex optimization challenge of high-dimensional, multi-constrained, multi-objective hybrid fleet routing in cold chain logistics through a systematic and synergistic design. Hybrid initialization provides a high-quality and diverse starting point tailored to the problem characteristics; adaptive operators enable intelligent and fine-grained exploration of the hybrid solution space; and the elite selection framework ensures the search direction consistently advances toward a genuine and diverse Pareto front. These three levels of improvement are interlinked and indispensable, collectively forming the solid foundation for the algorithm’s exceptional performance.
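As a concrete illustration of the hybrid initialization summarized above (see point (1) earlier in this subsection), the sketch below runs a short PSO pass on a continuous random-key encoding and mixes the refined particles with purely random individuals to form the initial population. The random-key decoder, the scalarized fitness, and all parameter values are illustrative assumptions, not the exact operators used in INSGA-III.

```python
import numpy as np

def pso_seed_population(pop_size, dim, fitness, elite_ratio=0.3,
                        pso_iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Hybrid initialization sketch: refine a subset of random-key vectors in
    [0, 1]^dim with a short PSO run, then mix them with purely random
    individuals.  `fitness` maps a key vector to a scalar to be minimized
    (standing in for "decode keys to routes, then evaluate cost/energy")."""
    rng = np.random.default_rng(seed)
    n_elite = int(pop_size * elite_ratio)
    x = rng.random((n_elite, dim))                 # particle positions (random keys)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(pso_iters):
        r1, r2 = rng.random((2, n_elite, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    random_part = rng.random((pop_size - n_elite, dim))
    return np.vstack([pbest, random_part])         # elite seeds + random diversity

# Toy fitness; in practice this would decode the keys into routes and score them.
init_pop = pso_seed_population(100, 20, fitness=lambda k: float(np.sum((k - 0.3) ** 2)))
```

Mixing PSO-refined seeds with random individuals is what preserves diversity while still starting the evolution from the “high ground” described above.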
4.4. Sensitivity Analysis of Key Parameters
To evaluate the robustness of the proposed algorithm and derive managerial insights, this section conducts sensitivity analyses along two dimensions: internal algorithmic parameters and external operational environment parameters.
4.4.1. Sensitivity Analysis of Algorithmic Parameters
To validate the robustness of the proposed adaptive mechanisms (particularly Improvements 2 and 3), this subsection analyzes the impact of key internal algorithmic parameters on performance. We selected the crossover distribution index (NSGAparam.CrossIndex) and the mutation distribution index (NSGAparam.DistIndex). Experiments were conducted on the C101 instance, varying the target parameter while keeping all other conditions constant, and using Hypervolume (HV) as the core evaluation metric.
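The sensitivity protocol can be organized as a simple one-factor-at-a-time sweep. The sketch below shows the loop structure only; the solver call is a dummy placeholder standing in for our INSGA-III implementation, and the parameter names `cross_index` and `dist_index` are used purely for illustration.

```python
import numpy as np

def solve_insga3(instance, cross_index, dist_index, seed):
    """Placeholder standing in for one full INSGA-III run on `instance`,
    returning the final HV value; here it returns a dummy number so the
    sweep loop runs end to end.  Replace with the real solver."""
    rng = np.random.default_rng(seed)
    return 0.80 + 0.01 * rng.standard_normal()     # dummy HV, not real results

def sweep_parameter(instance, values, fixed, target, runs=10):
    """Vary one distribution index over `values` with the other held fixed,
    and report the mean and standard deviation of HV over repeated runs."""
    results = {}
    for v in values:
        params = dict(fixed, **{target: v})
        hv = [solve_insga3(instance, params["cross_index"], params["dist_index"], seed=r)
              for r in range(runs)]
        results[v] = (float(np.mean(hv)), float(np.std(hv)))
    return results

# Protocol mirrored from the text: sweep the crossover index on C101,
# holding the mutation index at its default.
print(sweep_parameter("C101", values=[10, 20, 30, 40, 50],
                      fixed={"cross_index": 20, "dist_index": 20}, target="cross_index"))
```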
Analysis of the experimental results leads to the following conclusions:
- (1)
The algorithm exhibits low sensitivity to the crossover distribution index, demonstrating strong robustness: As shown in Figure 6, when the crossover distribution index varies within [10, 30], the HV value remains at a high level, with fluctuations of less than 3%. This indicates that the parameter controlling the search scope of the SBX operator has a broad and stable high-performance interval; as long as it falls within this range, the improved SBX strategy effectively balances global exploration and local exploitation and keeps algorithm performance stable. This confirms the design of Improvement 2 (adaptive SBX), whose performance does not rely on precise parameter fine-tuning.
- (2)
The algorithm exhibits low sensitivity to the mutation distribution index, with a broad optimal window: As illustrated in Figure 6, the HV value remains optimal and stable when the mutation distribution index falls within [10, 30]. This indicates that the parameter governing the perturbation intensity of polynomial mutation is straightforward to configure: performance is largely insensitive to its value and remains high across a wide spectrum of settings. This outcome further corroborates the effectiveness and practical utility of Improvement 3 (dynamic polynomial mutation).
- (3)
Overall Robustness Validation: Collectively, the proposed algorithm shows no excessive sensitivity to the core crossover and mutation distribution indices. Across a broad value range of [10, 50] for both indices, the algorithm consistently delivers performance close to its peak level. This significantly enhances the algorithm’s usability and reliability in practical applications, as users can achieve stable and excellent optimization results without tedious parameter fine-tuning. In this study, both distribution indices are ultimately set to 20, a value situated at the center of the high-performance interval, ensuring robust operation across various scenarios. (A short sketch of how these indices enter the crossover and mutation operators follows this list.)
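For readers unfamiliar with what the two distribution indices control, the sketch below shows the standard real-coded SBX crossover and polynomial mutation in which they appear: larger index values concentrate offspring near the parents, smaller values spread them out. Improvements 2 and 3 adjust these values during the run; the diversity measure, the decay schedule, and all constants shown here are illustrative assumptions rather than the exact settings of INSGA-III.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_eta(pop, eta_min=5.0, eta_max=30.0):
    """Map population diversity (mean per-gene std) to a distribution index:
    a diverse population gets a large eta (offspring close to parents),
    a converged one gets a small eta (offspring spread more widely)."""
    diversity = float(pop.std(axis=0).mean())          # assumed diversity measure
    return eta_min + (eta_max - eta_min) * min(diversity / 0.3, 1.0)

def sbx(p1, p2, eta_c):
    """Simulated binary crossover for real-coded parents in [0, 1]."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta_c + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta_c + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return np.clip(c1, 0, 1), np.clip(c2, 0, 1)

def dynamic_poly_mutation(x, gen, max_gen, eta_m=20.0, pm_start=0.20, pm_end=0.02):
    """Polynomial mutation whose per-gene rate decays linearly with the
    generation counter: broad perturbation early, fine-tuning late."""
    pm = pm_start + (pm_end - pm_start) * gen / max_gen
    mask = rng.random(x.shape) < pm
    u = rng.random(x.shape)
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta_m + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta_m + 1)))
    return np.clip(np.where(mask, x + delta, x), 0, 1)

# Example: cross two parents with a diversity-dependent index, then mutate.
pop = rng.random((50, 8))
child1, child2 = sbx(pop[0], pop[1], adaptive_eta(pop))
child1 = dynamic_poly_mutation(child1, gen=10, max_gen=200)
```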
4.4.2. In-Depth Analysis of Mixed Fleet Configuration: How Vehicle Type Selection Drives Multi-Objective Trade-Offs
To elucidate the central role of the mixed fleet configuration in this study, this subsection conducts an in-depth quantitative analysis of the fleet composition as a key decision variable. The aim is to reveal how vehicle type selection specifically influences the trade-offs among cost, carbon emissions, and satisfaction, thereby validating its necessity as a core modeling element.
Fleet composition serves as a pivotal lever in shaping Pareto trade-offs. The trade-off impact analysis of fleet composition on total cost and carbon emissions is shown in Figure 7. We systematically adjusted the proportion of electric vehicles in the fleet (from 0% to 100%) and employed the INSGA-III algorithm to solve for the Pareto-optimal front at each proportion; the results (see Figure 8) clearly demonstrate that fleet composition is a fundamental lever for reshaping the multi-objective trade-off space.
Deterministic impact on carbon emission objectives: As shown on the right axis of Figure 8, carbon emissions decrease monotonically as the proportion of electric vehicles increases, which intuitively demonstrates that introducing electric vehicles is both a necessary condition and an effective pathway for achieving deep emission reductions. Non-monotonic, complex impact on cost objectives: As illustrated on the left axis of Figure 8, the total distribution cost follows a pronounced “first decrease, then increase” U-shaped curve as the proportion of electric vehicles changes. This phenomenon reveals the internal economic logic of mixed fleets: a moderate proportion of electric vehicles can synergistically optimize routes and displace part of the fuel consumption, thereby offsetting their higher purchase and charging costs; once the proportion exceeds the optimal range, however, charging infrastructure constraints and higher unit energy costs begin to dominate, driving total costs back up. The comparison of key performance indicators under different fleet compositions is shown in Table 6.
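The composition experiment itself follows a simple outer loop: fix the electric-vehicle share, solve for the Pareto front, and record the best attainable cost and emissions at that share. The sketch below shows this loop with a dummy placeholder solver; the share grid and the way the bottom of the cost “U” is located are illustrative assumptions, not our experimental code.

```python
import numpy as np

def solve_with_ev_share(instance, ev_share, seed=0):
    """Placeholder for an INSGA-III run with the electric-vehicle share fixed
    at `ev_share`; returns a dummy (cost, emissions, satisfaction) front so
    the loop below runs end to end.  Replace with the real solver."""
    rng = np.random.default_rng(seed)
    return rng.random((20, 3))                     # dummy Pareto front

def fleet_composition_sweep(instance, shares=np.linspace(0.0, 1.0, 11)):
    """For each EV share, record the best attainable total cost and carbon
    emissions on the resulting Pareto front (the two curves plotted against
    the EV proportion in the composition analysis)."""
    rows = []
    for s in shares:
        front = solve_with_ev_share(instance, s)
        rows.append((s, front[:, 0].min(), front[:, 1].min()))
    data = np.array(rows)
    knee_share = data[data[:, 1].argmin(), 0]      # share at the bottom of the cost "U"
    return data, knee_share

data, knee_share = fleet_composition_sweep("C101")
```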
Analysis of the experimental results leads to the following conclusions:
- (1)
The combination of the aforementioned “U-shaped” cost curve and “L-shaped” emission curve clearly defines a Pareto-optimal interval (electric vehicle share: 30–50%). Within this interval, decision-makers can achieve a 15–20% reduction in carbon emissions at the cost of only a 3–5% increase in expenditure. This strongly demonstrates that optimizing the composition of the mixed fleet is, in itself, a critical decision-making process for resolving the core conflict between “cost and emissions.”
- (2)
Battery capacity and charging constraints are inseparable core components of the model: across all Pareto-optimal solutions involving electric vehicles, the charging decision activation rate shows that, on average, 68.2% of electric vehicle trips triggered at least one mid-journey charging event, indicating that charging demand is a routine part of operations. In terms of battery constraint violation frequency, a striking 41.5% of initial candidate routes had to be repaired during algorithm iterations because they violated battery capacity constraints. This high frequency provides conclusive evidence that range and charging constraints are rigid, core constraints for generating feasible solutions; ignoring them would lead the model to output a large number of impractical plans. (A minimal feasibility-repair sketch appears after this list.)
- (3)
Dynamic Matching Mechanism between Vehicle Type Selection and Task Allocation: To reveal how the algorithm performs trade-offs at the micro level, we dissected the task-vehicle matching patterns within representative Pareto solutions. Long-distance, heavy-load tasks are primarily undertaken by fuel-powered vehicles, avoiding the range anxiety of electric vehicles and the time cost of long-distance charging; this is a key strategy for controlling total costs. Urban-dense, multi-stop tasks are primarily undertaken by electric vehicles: their zero-emission operation maximizes environmental benefits in densely populated areas, and their advantage in kinetic energy recovery during frequent stops and starts directly supports the carbon emission targets. Tasks extremely sensitive to time windows are prioritized for electric vehicles, driven by the satisfaction objective, owing to their more precise torque control and schedulable charging strategies; this ensures stable service quality (as shown in Table 4, satisfaction fluctuates by less than 0.2% across the various configurations).
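The battery-feasibility check behind the repair statistics in point (2) can be illustrated with a minimal sketch: walk along a candidate route, and whenever the next leg would push the state of charge below a safety reserve, schedule a recharge before that leg. The linear energy model, the full-recharge policy, and the numbers are simplifying assumptions rather than the paper’s exact repair operator.

```python
def repair_ev_route(legs_kwh, battery_kwh, reserve_kwh=2.0):
    """Minimal feasibility-repair sketch for one electric-vehicle route.
    `legs_kwh` lists the energy needed for each leg; whenever the next leg
    would push the battery below the safety reserve, a full recharge is
    scheduled before that leg and the leg index is recorded."""
    charge = battery_kwh
    charging_stops = []
    for i, need in enumerate(legs_kwh):
        if need > battery_kwh - reserve_kwh:
            raise ValueError(f"leg {i} is infeasible even on a full battery")
        if charge - need < reserve_kwh:          # battery constraint would be violated
            charging_stops.append(i)             # repair: recharge before leg i
            charge = battery_kwh
        charge -= need
    return charging_stops

# Example: a 60 kWh truck on five legs needs one mid-route charge (before leg 3).
print(repair_ev_route([18, 15, 20, 14, 10], battery_kwh=60))   # -> [3]
```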
4.4.3. Analysis of the Impact of Delivery Vehicle Load Capacity
Vehicle load capacity is a critical asset decision in cold chain logistics distribution, directly influencing delivery costs, scheduling flexibility, and customer service levels. This section aims to quantitatively analyze the impact of variations in vehicle load capacity on the overall system performance, providing direct quantitative evidence to support corporate vehicle selection and procurement decisions.
As shown in Figure 8, the horizontal axis represents vehicle load capacity, the left vertical axis represents cost and carbon emissions, and the right vertical axis represents customer satisfaction and the number of vehicles. The total cost exhibits a U-shaped curve, reaching its minimum at 8 t; customer satisfaction follows an inverted U-shaped curve, peaking at 8 t; the number of vehicles monotonically decreases as load capacity increases; and carbon emissions show a slowly rising trend. The trend of the impact of vehicle capacity on operational indicators is shown in Table 7.
Analysis of the experimental results leads to the following conclusions:
- (1)
A distinct optimal economic capacity interval exists: The total cost reaches its minimum at 8 t, forming an optimal interval (7–9 t) centered around 8 t. Within this range, cost fluctuations can be controlled within 3%. When the capacity is below 8 t, fixed costs cannot be effectively amortized due to excessively high departure frequency; when the capacity exceeds 8 t, declining vehicle utilization rates and reduced scheduling flexibility jointly drive costs upward.
- (2)
Load capacity has a nonlinear impact on service level: customer satisfaction peaks at 8 t (85.20%), exhibiting a distinct inverted U-shaped trend. A load capacity that is too small (≤6 t) leads to excessively high departure frequencies, which can easily cause time window conflicts; conversely, a load capacity that is too large (≥10 t) increases the number of customers served per vehicle, resulting in prolonged waiting times for end customers and reduced product freshness, both of which significantly diminish satisfaction.
- (3)
Asset Coordination Strategy Based on Bottleneck Analysis: To simultaneously address the economies of scale required for high-density orders and the rapid response needed for sporadic urgent orders, a hybrid configuration strategy of “primarily 8 t vehicles supplemented by a small number of 6 t vehicles” is recommended. This approach leverages 8 t vehicles to ensure overall optimality in cost and satisfaction, while deploying a limited number of 6 t vehicles specifically to resolve bottlenecks related to “excessive departure frequency and time window conflicts,” thereby enhancing the system’s overall robustness to demand fluctuations.
4.5. Management Insights Analysis and Model Value Verification
Based on the aforementioned experiments, this section aims to distill management insights that can directly guide practice. Furthermore, by comparing with simplified models, it verifies the unique value of the complex model proposed in this study in deepening the understanding of the problem.
4.5.1. From Trade-Off Quantification to Strategic Decision Matrix
The core value of multi-objective optimization lies in the precise quantification of conflicts among objectives. Through in-depth analysis of the Pareto solution set, we derive the following quantifiable relationships that can directly support decision-making:
- (1)
Cost of Carbon Emission Reduction: Under current technological and market conditions, reducing 1 ton of CO2 emissions requires, on average, approximately 93,000 CNY of additional operational cost. This internal carbon pricing benchmark is crucial for corporate carbon asset management (a worked sketch of this calculation appears at the end of this subsection).
- (2)
Premium for Service Level Enhancement: Increasing customer satisfaction by 0.5 percentage points may lead to an 8–9% rise in distribution costs. This provides a basis for formulating differentiated service pricing.
- (3)
Strategic Decision Menu: As shown in Table 6, we extracted three typical strategic orientation schemes from the Pareto solution set, offering managers a clear “decision palette.”
The strategic decision matrix based on the Pareto solution set is shown in Table 8.
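The internal carbon-price benchmark in point (1) is simply the marginal rate of substitution between cost and emissions along the Pareto front, read off neighboring solutions. The sketch below shows the calculation on two hypothetical solutions; the figures are illustrative and chosen only so the ratio reproduces the reported order of magnitude, not values taken from Table 8.

```python
# Two hypothetical neighboring Pareto solutions; illustrative figures only.
cost_a, co2_a = 1_000_000, 10.0    # lower-cost, higher-emission plan (CNY, t CO2)
cost_b, co2_b = 1_046_500, 9.5     # greener neighbor on the Pareto front

marginal_abatement_cost = (cost_b - cost_a) / (co2_a - co2_b)
print(f"{marginal_abatement_cost:,.0f} CNY per ton of CO2 avoided")   # ~93,000
```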
4.5.2. Dynamic Optimal Strategies for Mixed Fleet Configuration
Based on sensitivity analysis, this study proposes actionable mixed-fleet management strategies:
- (1)
Optimal Economic-Environmental Range: For regional distribution centers with a daily throughput of 10–50 tons, maintaining an electric vehicle proportion of 30–50% achieves the best balance (cost increases marginally by 3–5%, while carbon emissions are sharply reduced by 15–20%).
- (2)
Scale-Oriented Differentiated Configuration: In low-volume scenarios (<10 tons/day), the electric vehicle proportion can be increased (40–50%) to leverage their low marginal cost advantage. In high-volume scenarios (>50 tons/day), the configuration should be dominated by fuel vehicles (60–70%) to ensure scheduling flexibility.
- (3)
Dynamic Collaborative Optimization Mechanism: It is recommended that enterprises establish a quarterly review mechanism. Fleet composition should be dynamically adjusted by integrating business forecasts, carbon price fluctuations, and policy changes.
4.5.3. Validation of Model Complexity Value: Comparative Analysis with Simplified Models
A critical comparative analysis was designed in which the complete model from this study was compared against two simplified models. Model S1: a single-objective cost minimization model (optimizes only cost, treating carbon emissions and customer satisfaction as constraints). Model S2: a multi-objective model ignoring heterogeneity (treats electric and fuel vehicles as homogeneous, neglecting charging and range constraints). The comparison of decision insights between the complex and simplified models is shown in Table 9.
The value of the complex model constructed in this study extends far beyond merely increasing formal difficulty. As the comparison above demonstrates, simplified models, due to their overly strong assumptions (e.g., single objective, vehicle homogeneity), obscure the nonlinear, non-monotonic, and complex trade-off relationships present in reality, potentially leading to overly optimistic or one-sided decision recommendations. In contrast, the complete model presented here, by systematically integrating mixed fleets, multi-objective considerations, and cold-chain constraints, enables:
- (1)
Revealing the Nature and Intensity of Conflicts: It precisely quantifies the “inflection points” and “thresholds” of conflicts between different objectives.
- (2)
Uncovering Possibilities for Synergy: It identifies optimization opportunities where “one action achieves two goals” under specific conditions (e.g., using electric vehicles to simultaneously enhance service and environmental performance).
- (3)
Providing Contextualized Decision Support: It yields deep insights into “what configuration and scheduling strategies should be adopted under which strategic intents,” rather than providing a static “optimal solution.”
Therefore, the model’s complexity is a necessary mapping of real-world business paradoxes. It endows decision-makers with the systematic thinking capability to perform refined trade-offs and innovative problem-solving under the triple pressures of cost, service, and environment. This is precisely the key to transitioning from “traditional operations” to “sustainable smart logistics.”
5. Conclusions and Future Work
This research addresses the optimization of hybrid fleet distribution routes for cold chain logistics by constructing a multi-objective model integrating total cost, customer satisfaction, and carbon emissions, and proposes an improved hybrid adaptive NSGA-III algorithm. The following key conclusions are drawn:
Theoretical Contribution: The proposed algorithm, which embeds hybrid PSO-based initialization and adaptive crossover and mutation operators within the reference point-based NSGA-III selection framework, significantly outperforms traditional algorithms in terms of solution set convergence and distribution.
Managerial Implications: The model accurately captures the operational paradox of hybrid fleets. Through sensitivity analysis, it reveals the trade-off relationships between key parameters (such as the fleet electrification ratio and vehicle type configuration) and cost and carbon emissions, providing enterprises with differentiated decision-making strategies that balance economic, service, and environmental objectives.
Operational Benefits: The algorithmic performance advantages translate directly into tangible operational gains. Empirical analysis shows that, compared to baseline solutions, the proposed algorithm achieves an approximate 2.9% reduction in total costs and a 15% decrease in carbon emissions; for large-scale daily cold chain distribution networks, this translates into substantial annual financial savings and environmental benefits. Moreover, the algorithm provides a higher-quality and more broadly distributed Pareto-optimal solution set, enabling managers to scientifically weigh and flexibly switch among multiple operational strategies, such as “cost-first,” “service-first,” or “green-first” approaches, significantly enhancing decision-making agility and operational resilience in complex market environments.
Limitations and Future Directions: The limitations of this study offer clear pathways for future research.
Model Refinement: Future work may incorporate heterogeneous demands for multi-temperature-zone products, speed- and load-dependent emission functions, and dynamic urban traffic congestion effects. These aspects represent key limitations in modeling real-world complexity and are critical directions for advancing toward more refined and dynamic models.
Algorithmic Advancements: The INSGA-III framework proposed in this study offers a promising tool for solving such complex models, and future algorithmic research may proceed along two paths. Horizontal comparison: systematically comparing this method with emerging hybrid multi-objective routing algorithms based on PSO or other metaheuristics (e.g., the Grey Wolf Optimizer and Sparrow Search Algorithm) to precisely delineate its advantages and characteristics within a broader algorithmic spectrum. Vertical integration: exploring integration with real-time data systems such as the Internet of Things (IoT) and digital twins to develop an integrated “perception-decision-execution” dynamic intelligent scheduling platform, advancing the algorithm from offline optimization to online adaptive scheduling. Through such interdisciplinary and technological integration in modeling and algorithmic design, the theoretical and practical pathways toward sustainable cold chain logistics can be further refined.