Article

Multi-Strategy Computational Algorithm for Sustainable Logistics: Solving the Green Vehicle Routing Problem with Mixed Fleet

1 School of Management and Economics, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
2 School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
* Author to whom correspondence should be addressed.
Machines 2026, 14(2), 202; https://doi.org/10.3390/machines14020202
Submission received: 8 January 2026 / Revised: 2 February 2026 / Accepted: 6 February 2026 / Published: 9 February 2026
(This article belongs to the Section Vehicle Engineering)

Abstract

Cold chain logistics distribution, a vital activity supporting global urbanization, faces complex challenges in balancing economic, environmental, and social objectives. This study investigates the green vehicle routing problem with a mixed fleet, simultaneously optimizing total cost, carbon emissions, and customer satisfaction. To solve this NP-hard problem, a novel multi-strategy NSGA-III algorithm is proposed, which integrates an adaptive pheromone update mechanism, elite route guidance, and genetic operators to significantly enhance search efficiency and solution diversity in complex solution spaces. Computational experiments on benchmark instances and a real-world case demonstrate the algorithm’s superior performance over mainstream multi-objective optimizers such as the standard NSGA-III and NSGA-II in metrics such as hypervolume. Sensitivity analysis further elucidates the impact of key operational parameters on system performance and provides a quantitative decision-making basis for greening urban cold chain fleets. This research offers an effective computational tool for complex sustainable logistics problems, with a modeling framework extensible to other industrial systems facing similar multi-objective trade-offs.

1. Introduction

Against the backdrop of rapid global urbanization and concurrent climate change, the sustainable transformation of urban logistics systems constitutes a critical link in achieving the United Nations Sustainable Development Goals. Serving as the lifeline for ensuring food safety and pharmaceutical efficacy, cold chain logistics, while underpinning modern living and health security, has become a focal point of environmental and operational cost pressures due to its high energy consumption characteristics [1,2,3,4]. Currently, the annual carbon dioxide emissions from China’s cold chain logistics have reached approximately 402 million tons, with the transportation sector contributing over 60% of this total—a figure projected to continue rising under the existing operational paradigm [5,6]. Simultaneously, consumer expectations regarding delivery timeliness and reliability are continually escalating. An industry survey indicates that over 35% of cold chain orders incur customer complaints due to delivery delays or temperature control deviations, directly leading to an average increase of 15% in cargo damage costs [7]. These severe emission data points and significant service cost issues collectively form a quantitative challenge that cold chain logistics operations must urgently confront.
The green transition of the industry offers a new pathway to address this dilemma. Driven by both policy and market forces, the penetration rate of new energy (electric) refrigerated trucks is projected to reach 42.4% by 2025, signifying a fundamental transformation in fleet composition [8,9]. However, this shift gives rise to a complex “mixed fleet paradox” at the operational level: while electric refrigerated trucks can achieve zero tailpipe emissions during operation and offer more precise temperature control, they are constrained by higher purchase costs, limited driving range (averaging 200–300 km), and inadequate charging infrastructure coverage (particularly below 40% in suburban areas). Conversely, traditional fuel-powered refrigerated trucks hold advantages in purchase cost and mileage flexibility, but their carbon emissions per unit distance are typically 2–3 times higher than those of electric vehicles [10]. In practical scheduling, enterprises must perform quantitative trade-offs on this paradox while simultaneously meeting stringent requirements for timeliness and temperature-controlled service delivery [11,12,13]. Consequently, developing a decision-making model capable of concurrently optimizing total operational cost, full-journey carbon emissions, and customer satisfaction (quantified by time window fulfillment rate) is no longer merely a theoretical frontier issue but an urgent, practical necessity for the industry to tackle the aforementioned quantitative challenges. Existing research often focuses on single objectives or fails to adequately account for the multi-objective trade-off mechanisms under mixed fleet and charging constraints, resulting in limited decision-support capability. This paper aims to bridge this gap by proposing an innovative multi-strategy optimization algorithm to provide a systematic, quantitative solution for resolving the “cost-emission-service” trilemma of mixed fleets.
Amid intensifying market competition and increasingly dynamic operational environments, research on cold chain logistics distribution scheduling has primarily evolved along two technical pathways: multi-objective programming models based on exact solution methods and multi-objective optimization algorithms based on meta-heuristics. The former continues to deepen in terms of model complexity and realism. For instance, Deng et al. [14] constructed a distribution cost model comprehensively considering temperature, carbon emissions, customer satisfaction, and traffic conditions, while Hong et al. [15] planned cold chain logistics routes considering congestion avoidance with the objectives of minimizing carbon emissions and total costs. Notably, recent research has achieved new progress in green logistics and refined modeling. For example, Tavana et al. [16] systematically analyzed the link between food miles and carbon footprint, revealing the environmental impact of logistics activities from the source of the supply chain, thus providing a macro-level carbon emission consideration background for this study. Wu et al. [17] innovatively introduced trapezoidal fuzzy numbers to handle interval fuzzy demand in green multimodal transport path optimization, and considered mixed time windows and carbon trading policies, offering direct reference for addressing demand uncertainty and environmental policy synergy in logistics. Sun et al. [18] explicitly focused on “modeling the green multimodal transport path problem with soft time windows considering interval fuzzy demand.” The fuzzy multi-objective optimization framework they constructed provides important methodological inspiration for this study in handling uncertain customer requirements and time windows. Given the NP-hard nature of the problem and the requirements for real-time scheduling, multi-objective optimization algorithms based on meta-heuristics have become the mainstream of current research. Zhang et al. [19] developed a multi-objective hybrid genetic algorithm combined with large neighborhood search to enhance efficiency; Liu et al. [20] integrated multiple strategies to develop a hybrid ant colony optimization algorithm; Leng et al. [21] introduced adaptive crossover and mutation strategies into traditional genetic algorithms. Recent algorithm research places greater emphasis on system integration and multi-dimensional performance evaluation. For instance, Yi et al. [22] constructed a multi-objective model simultaneously considering heterogeneous fleets, soft time windows, and path flexibility.
However, existing research still exhibits notable limitations. Firstly, the majority of studies focus on traditional objectives such as distribution cost and timeliness, generally overlooking the core reality of mixed-energy fleets composed of electric and fuel-powered vehicles. They fail to systematically integrate critical factors like electric vehicle range and charging constraints, as well as refrigeration energy consumption characteristics, into the optimization models. Consequently, these models struggle to accurately reflect the complex decision-making environment of cold chain logistics under the triple bottom line of economic cost, service quality, and environmental impact. Secondly, at the algorithmic level, to address the shortcomings of traditional algorithms like NSGA-II in balancing multi-objective relationships and maintaining solution diversity, NSGA-III has been increasingly adopted due to its reference point mechanism. Scholars have improved it in aspects such as adaptive reference point setting [23,24], integration of local search strategies [25,26,27], and design of problem-adapted operators [28,29,30]. Nonetheless, these generic improvements do not adequately account for the unique solution space structure induced by mixed-energy fleets in cold chain distribution—such as the coupling of charging/refueling decisions with temperature control energy consumption—and the complex constraint systems. This results in insufficient specificity when applied to the problem at hand.
Even with the construction of more comprehensive models, the standard NSGA-III algorithm still faces three major bottlenecks when addressing such high-dimensional, multi-constrained, and strongly coupled cold-chain routing optimization problems: (1) the initial population lacks quality and guidance within the hybrid solution space; (2) there is insufficient capability for fine-grained local search on continuous variables (e.g., speed, temperature); (3) the fixed operator mechanism struggles to adapt to problem-specific characteristics, leading to an imbalance between exploration and exploitation, which ultimately compromises convergence efficiency and the distribution quality of the Pareto front.
To address the aforementioned challenges, this paper proposes a systematically improved NSGA-III algorithm, with its core innovations comprising: the design of a hybrid heuristic population initialization strategy to enhance the quality and diversity of initial solutions; the introduction of an adaptive simulated binary crossover operation to strengthen the search capability in continuous spaces; and the development of a dynamic polynomial mutation mechanism to achieve intelligent balancing between exploration and exploitation. This study aims to provide a quantitative decision-support tool for this complex sustainable logistics problem through advanced computational intelligence methods, thereby promoting the synergistic optimization of cold chain logistics distribution systems across the three dimensions of cost, environment, and service.
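The two continuous-space operators named above can be made concrete with a short sketch. The following Python code is a generic, minimal rendering of simulated binary crossover (SBX) and polynomial mutation on one bounded decision variable (e.g., a speed or departure-time gene), with the distribution index scheduled across generations so that search is broad early and fine-grained late. Function names, parameter values, and the linear scheduling rule are illustrative assumptions, not the authors' implementation.

```python
import random

def sbx(p1, p2, eta_c, lo, hi):
    """Simulated binary crossover for one continuous variable."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    clamp = lambda x: min(max(x, lo), hi)   # keep offspring within bounds
    return clamp(c1), clamp(c2)

def polynomial_mutation(x, eta_m, lo, hi):
    """Polynomial mutation for one continuous variable."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return min(max(x + delta * (hi - lo), lo), hi)

def adaptive_eta(gen, max_gen, eta_start=2.0, eta_end=20.0):
    """Grow the distribution index with generations: wide search early, narrow late."""
    return eta_start + (eta_end - eta_start) * gen / max_gen
```

A larger distribution index concentrates offspring near their parents, so growing it over generations shifts the balance from exploration toward exploitation, which is the intuition behind the adaptive operators described above.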

2. Model Formulation

2.1. Problem Description and Assumptions

This paper establishes a cold chain logistics model structured as a two-tier supply chain network comprising m distribution centers and n customer points. Each distribution center possesses a sufficient fleet of refrigerated vehicles. All vehicles commence delivery at a designated, unified start time and, after serving their assigned customers, return to the distribution center geographically closest to the final customer. The objectives are to rationally plan the number of vehicles used and their transportation routes, subject to constraints such as vehicle load capacity and customer demand, to simultaneously maximize customer satisfaction, minimize total distribution cost, and minimize total carbon emissions.
The vehicle routing problem for cold chain logistics distribution based on the improved NSGA-III algorithm is a highly complex, multi-depot, multi-objective vehicle routing problem with hard time windows and special cargo attributes [31,32,33,34]. Its complexity is manifested in several dimensions. First, the strong coupling between task allocation and route planning arises from the coordination required among multiple distribution centers, along with the dynamic uncertainty in fleet size configuration. Second, the inherent characteristics of cold chain logistics—such as time sensitivity, continuous supply requirements, and the non-returnable nature of goods—impose stringent time window and loading constraints. These constraints drastically reduce the feasible solution space, significantly increasing the difficulty of finding optimal solutions. Finally, the three inherently conflicting objectives—maximizing customer satisfaction, minimizing total distribution cost, and minimizing carbon emissions—demand that the algorithm possesses a powerful global search capability and an effective mechanism for multi-objective trade-offs, capabilities that extend far beyond the scope of traditional optimization methods. To ensure that customer demands are met while adhering to urban regulations and operational limitations, the following assumptions are made for model formulation: (1) All distribution centers have sufficient inventory to meet the demand of every customer point; stockouts do not occur. (2) The vehicle fleet consists of both fuel-powered and electric refrigerated trucks. The goods transported are a single category of temperature-sensitive products (e.g., fresh produce or pharmaceuticals) requiring uninterrupted temperature control throughout the entire journey. (3) The location, demand quantity, and strict service time window for each customer point are known in advance. 
(4) If a refrigerated truck fails to arrive and complete service within the agreed-upon time window, the logistics company must pay a penalty to the customer and bears the risk associated with product quality degradation.

2.2. Nomenclature

The parameters and variables required for establishing the cold chain logistics distribution routing planning model based on the improved NSGA-III algorithm, along with their descriptions, are presented in Table 1 below.
Decision Variables: $z_k$ is a binary variable that equals 1 if refrigerated vehicle $k$ is utilized, and 0 otherwise; $y_{jk}$ is a binary variable that equals 1 if refrigerated vehicle $k$ visits customer point $j$, and 0 otherwise; $x_{ijk}$ is a binary variable that equals 1 if refrigerated vehicle $k$ travels directly from node $i$ to node $j$, and 0 otherwise; $y_{ik}$ is a binary variable that equals 1 if the refrigerated vehicle charges at node $i$, and 0 otherwise.

2.3. Objective Functions

2.3.1. Customer Satisfaction

Incorporating customer satisfaction as a core optimization objective is theoretically grounded in the classical frameworks of multi-criteria decision analysis and service quality management. In time-sensitive cold chain logistics, on-time delivery, often explicitly stipulated in service contracts as a key dimension for measuring service reliability, serves as a critical performance indicator linking operational decisions to commercial value. Consequently, this study constructs a multi-objective optimization model aimed at simultaneously enhancing customer satisfaction, reducing distribution costs, and minimizing carbon emissions. The measurement of customer satisfaction is achieved by introducing fuzzy appointment time windows: the response time from the distribution center to a customer point is treated as a variable and mapped to a value between 0 and 1 via a specific satisfaction function (0 represents complete dissatisfaction, 1 represents complete satisfaction, as shown in Figure 1). The piecewise decay shape of this function reflects the law of diminishing marginal utility from decision theory and the loss aversion principle from prospect theory: customers are completely satisfied with service within the ideal time window, while even minor deviations lead to a significant drop in satisfaction. This functional form is widely used in the operations management literature to characterize the nonlinear relationship between service quality and customer response, fully accounting for customer point sensitivity to service time and the paramount importance of the distribution center’s rapid responsiveness [35,36,37,38].
The customer satisfaction function is denoted by $S(t_i)$ and defined as follows:
$$S(t_i)=\begin{cases}0, & t_i<SS_j\\[2pt]\left(\dfrac{t_i-SS_j}{S_j-SS_j}\right)^{\alpha}, & SS_j\le t_i<S_j\\[2pt]1, & S_j\le t_i\le L_j\\[2pt]\left(\dfrac{LL_j-t_i}{LL_j-L_j}\right)^{\beta}, & L_j<t_i\le LL_j\\[2pt]0, & t_i>LL_j\end{cases} \tag{1}$$
Formula (1) is concretely described as follows: if service is completed within the customer’s ideal time window $[S_j, L_j]$, customer satisfaction is 1; if the service time is earlier than the lower bound $SS_j$ or later than the upper bound $LL_j$ of the feasible time window, customer satisfaction is 0; otherwise, the further the service time deviates from the ideal time window, the lower the satisfaction level. $\alpha$ and $\beta$ are the time-sensitivity coefficients for early and late arrivals, respectively.
In aggregating individual satisfaction scores to form an overall objective, this study employs the proportion of each customer point’s cold chain product demand relative to the total delivery task as the weighting coefficient. This weighting scheme is selected based on the following practical considerations and principles: 1. Quantifiability and Objectivity: Demand quantity is the most readily available, objective, and unambiguous metric within operational data systems, thereby avoiding bias introduced by subjective judgment. 2. Proxy for Economic Importance: In most commercial contexts, a customer’s purchase volume is positively correlated with their revenue contribution and strategic significance to the enterprise. Thus, demand proportion serves as a reasonable and generalizable proxy for a customer’s economic weight. 3. Link to Operational Cost: Customer points with higher demand typically involve greater loading/unloading volumes and longer service times, and service failures for these points carry higher potential costs for operational rectification. While alternative factors such as strategic partnership status, contractual priority, or customer lifetime value could serve as the basis for defining weights in specific supply chain relationships, this study adopts demand-proportion weighting to establish a transparent, universal, and easily replicable baseline model. Consequently, a customer point’s importance level is characterized by the ratio of its cold chain logistics resource demand to the total demand of the delivery mission, meaning that a higher demand proportion signifies a higher degree of importance for that customer point. The overall customer satisfaction objective is presented as Equation (2).
$$S=\sum_{j=1}^{n}\frac{X_{jp}}{\sum_{j=1}^{n}X_{jp}}\,S(t_j) \tag{2}$$
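For concreteness, the piecewise satisfaction of Formula (1) and the demand-weighted aggregation of Formula (2) can be sketched in Python as follows. The argument names `ss`, `s`, `l`, `ll` stand for $SS_j$, $S_j$, $L_j$, $LL_j$, and all sample values are illustrative assumptions.

```python
def satisfaction(t, ss, s, l, ll, alpha=1.0, beta=1.0):
    """Piecewise satisfaction of Formula (1) for arrival time t."""
    if t < ss or t > ll:
        return 0.0                                   # outside the feasible window
    if s <= t <= l:
        return 1.0                                   # inside the ideal window
    if t < s:
        return ((t - ss) / (s - ss)) ** alpha        # early-arrival decay
    return ((ll - t) / (ll - l)) ** beta             # late-arrival decay

def overall_satisfaction(arrivals, windows, demands):
    """Demand-proportion weighted aggregate of Formula (2)."""
    total = sum(demands)
    return sum(q / total * satisfaction(t, *w)
               for t, w, q in zip(arrivals, windows, demands))
```

With `alpha = beta = 1` the decay is linear; larger exponents penalize small deviations more heavily, which is how the model can encode loss aversion.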
Since this research focuses on the vehicle routing problem for a mixed fleet in cold chain logistics, particular attention is paid to the impact of route selection on distribution costs. The composition of distribution costs typically includes vehicle depreciation, maintenance, tire wear, driver wages, fuel costs, charging costs, and road maintenance fees. Given the numerous and complex parameters influencing these costs, along with varied and intricate calculation methods and the lack of a unified standard, considering them individually often leads to significantly divergent results. Therefore, the total distribution cost is categorized into fixed costs, variable costs, refrigeration costs, cargo damage costs, and delivery penalty costs.

2.3.2. Fixed Cost

Fixed Distribution Cost: Costs whose total amount does not change with mileage variations within a given period. Fixed distribution costs include driver wages, vehicle depreciation, maintenance fees, and road tolls. These costs are proportional to the number of vehicles dispatched—i.e., the more vehicles deployed, the higher the fixed costs. Therefore, these fixed costs are allocated to each refrigerated vehicle involved in the dispatch process, measured in RMB Yuan per vehicle. The calculation formula for fixed distribution cost is as follows:
$$C_f=c_a\sum_{k\in A} z_k+c_b\sum_{k\in B} z_k \tag{3}$$

2.3.3. Variable Cost

Variable distribution costs are those whose total amount varies with changes in travel distance within the relevant range. Variable transportation costs encompass tire wear expenses, fuel costs, travel electricity consumption costs, and charging service costs, under the assumption that these variable costs increase linearly with transportation mileage. The calculation formula for variable costs is as follows:
$$C_v=\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k\in A}x_{ijk}\,d_{ij}\,c_{ak}+\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k\in B}x_{ijk}\,d_{ij}\,c_{bk}+c_{elec}\sum_{k\in K_e}\sum_{i\in O\cup V_s}\delta_{ik} \tag{4}$$
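A minimal sketch of the fixed and variable cost terms above: per-vehicle fixed charges for the fuel fleet and the electric fleet, plus distance-proportional running costs and a charging-energy cost. All parameter names and the example values are illustrative assumptions, not case-study data.

```python
def fixed_cost(n_fuel_used, n_ev_used, c_a, c_b):
    """Fixed cost: per-vehicle charge for each dispatched fuel / electric truck."""
    return c_a * n_fuel_used + c_b * n_ev_used

def variable_cost(fuel_km, ev_km, charged_kwh, ca_km, cb_km, c_elec):
    """Variable cost: distance-linear running costs plus charging energy cost."""
    return ca_km * fuel_km + cb_km * ev_km + c_elec * charged_kwh
```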

2.3.4. Refrigeration Cost

In cold chain logistics distribution, the calculation of refrigeration cost considers both the continuous cooling demand during transportation and the additional cooling consumption during customer service. When the vehicle doors are closed, the interior maintains a stable low-temperature environment, resulting in lower refrigerant consumption. When the doors are open, heat exchange between the inside and outside causes the cabin temperature to rise, requiring increased cooling intensity to maintain the set temperature. Therefore, the energy cost for cooling during door-open periods is higher. The total refrigeration cost can be expressed as:
$$C_r=c_1\sum_{i\in O}\sum_{j\in D}\sum_{k\in K}x_{ijk}\,H\,T_{ijk}+c_1\sum_{i\in O}\max\big(S_j-t_i,\,0\big)+c_2\sum_{i\in O}t_s \tag{5}$$
The refrigeration cost is strictly defined as the energy expenditure required to maintain the vehicle cargo compartment within the specified temperature range. It is formulated as a function of the total travel time, the internal thermal load (influenced by external environmental conditions), and the energy efficiency of the refrigeration unit. Its essence lies in the “sustained energy consumption for maintaining a predetermined thermal environment.” Its calculation is independent of whether temperature control failures or time window violations occur.
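The two-rate structure of the refrigeration cost above (a lower closed-door rate while travelling and waiting, a higher open-door rate during customer service) reduces to a simple sketch; `c1`, `c2` and the time values are illustrative assumptions.

```python
def refrigeration_cost(travel_hours, service_hours, c1, c2):
    """Closed-door cooling over travel/waiting time plus open-door cooling during service."""
    return c1 * travel_hours + c2 * service_hours
```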

2.3.5. Cargo Damage Cost

In the process of cold chain logistics distribution, the cargo damage cost $C_d$ is primarily composed of two components: the quality degradation cost $C_{d1}$ of goods during transportation and waiting periods when the vehicle doors are closed, and the additional damage cost $C_{d2}$ caused by temperature fluctuations when the doors are opened for customer service. The cargo damage cost can be expressed as:
$$C_{d1}=\sum_{j\in D}\sum_{k\in K}\sum_{p\in P}y_{jk}\,c_p\left[X_{jp}\left(1-e^{-\delta_{1p}\left(\max(t_i,\,SS_j)-DT_{ijk}\right)}\right)+d_{jp}\left(1-e^{-\delta_{1p}\,t_i}\right)\right] \tag{6}$$
$$C_{d2}=\sum_{j\in D}\sum_{k\in K}\sum_{p\in P}y_{jk}\,c_p\,Q_{ip}\left(1-e^{-\delta_{2p}\,t_s}\right) \tag{7}$$
The total cargo damage cost is expressed as:
$$C_d=C_{d1}+C_{d2} \tag{8}$$
The cargo damage cost is strictly defined as the economic value loss incurred due to product quality degradation (such as spoilage or loss of freshness) during transportation. It is formulated as a function related to the product’s “time-temperature tolerance threshold” when exposed to non-ideal temperature environments. This cost is triggered and accumulates only when the in-compartment temperature exceeds the product’s safety threshold. Its essence is the “loss of cargo value caused by temperature exceedance.”
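The cargo damage terms above both use the exponential quality-decay form $1-e^{-\delta t}$, with a larger decay coefficient while the doors are open. A hedged sketch follows; the coefficients and quantities are illustrative assumptions.

```python
import math

def damage_cost(unit_value, qty, delta1, transit_hours, delta2, door_open_hours):
    """Exponential quality decay: closed-door transit loss + open-door service loss."""
    closed = unit_value * qty * (1.0 - math.exp(-delta1 * transit_hours))
    opened = unit_value * qty * (1.0 - math.exp(-delta2 * door_open_hours))
    return closed + opened
```

Because `1 - exp(-delta * t)` saturates toward 1, damage rises quickly at first and then levels off at the full cargo value, matching the "time-temperature tolerance" interpretation.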

2.3.6. Penalty Cost

As defined by the customer satisfaction function, $[S_j, L_j]$ represents the ideal time window requested by the customer, while $[SS_j, LL_j]$ denotes the acceptable feasible time window. The penalty cost is calculated as follows:
$$C_p=\sum_{k\in K}\sum_{i\in V_c}\Big[\phi_{early1}\max\big(SS_j-t^{d}_{ik},\,0\big)+\phi_{early2}\max\big(S_j-t^{d}_{ik},\,0\big)+\phi_{late1}\max\big(t^{d}_{ik}-L_j,\,0\big)+\phi_{late2}\max\big(t^{d}_{ik}-LL_j,\,0\big)\Big] \tag{9}$$
where $\phi_{early1}$ and $\phi_{late2}$ are high penalty coefficients, corresponding to service times earlier than the feasible time window’s lower bound or later than its upper bound, respectively. Such cases typically signify service rejection or the incurrence of substantial breach penalties, thus warranting the most severe penalties. In contrast, $\phi_{early2}$ and $\phi_{late1}$ are low penalty coefficients, corresponding to deviations from the ideal time window that still remain within the feasible time window. These scenarios usually involve certain service quality costs or minor compensations, resulting in comparatively lighter penalties.
The time window penalty cost is strictly defined as the direct financial consequence incurred due to failure to deliver within the contractually agreed service time window. It comprises two components: (1) Contractual Penalty: A fixed or proportional fine paid according to the terms of the Service Level Agreement; (2) Emergency Operational Costs: Additional labor and communication expenses arising from handling the delay incident (e.g., customer complaints, urgent coordination). Its essence is the “compensation and handling fees paid for breaching the service time commitment,” independent of whether the cargo quality is compromised.
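The two-tier penalty structure above can be sketched directly: mild coefficients for deviations that stay inside the feasible window, severe ones once the feasible window itself is violated. The coefficient values are illustrative assumptions.

```python
def penalty(t, ss, s, l, ll, phi_e1=50.0, phi_e2=5.0, phi_l1=8.0, phi_l2=80.0):
    """Two-tier time-window penalty for one customer with departure/service time t."""
    return (phi_e1 * max(ss - t, 0)    # severe: earlier than feasible lower bound
            + phi_e2 * max(s - t, 0)   # mild: early but within feasible window
            + phi_l1 * max(t - l, 0)   # mild: late but within feasible window
            + phi_l2 * max(t - ll, 0)) # severe: later than feasible upper bound
```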

2.3.7. Carbon Emission Function

This study accounts for the carbon dioxide (CO2) emissions generated during the distribution process. In vehicle routing problem research, linear emission models based on travel distance and load are widely adopted due to their good computability and effective characterization of the core factor (travel distance). To strike a balance between model simplicity and the realities of cold chain operations, we introduce a crucial “refrigeration process energy consumption correction” on the foundation of the classical model. This results in a more explanatory emission accounting framework specifically optimized for refrigerated trucks. The total carbon emissions consist of the following two components:
Traction process emissions originate from the work performed by the vehicle’s powertrain to overcome driving resistance and constitute the primary portion of total emissions. A widely used linear model is adopted, where fuel consumption is related to travel distance and load.
When the unladen weight of the Refrigerated Truck is Q 0 , the cargo load is X′ (independent variable), and the fuel consumption per unit distance is Y′ (dependent variable), the following relationship can be established:
$$Y'=a\,(Q_0+X')+b \tag{10}$$
If the load capacity of a refrigerated truck is $Q_1$, the fuel consumption per unit distance when the refrigerated truck is fully loaded is $Y^{*}$, and the fuel consumption per unit distance when the refrigerated truck is empty (unloaded) is $Y_0$, the following linear relationships can be derived:
$$Y_0=a\,Q_0+b,\qquad Y^{*}=a\,(Q_0+Q_1)+b \tag{11}$$
Solving this system yields $a=\dfrac{Y^{*}-Y_0}{Q_1}$ and $b=Y_0-\dfrac{Y^{*}-Y_0}{Q_1}Q_0$. Substituting back, the expression for fuel consumption per unit distance $Y'$ is:
$$Y'=Y_0+\frac{Y^{*}-Y_0}{Q_1}\,X' \tag{12}$$
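The load-dependent fuel model above reduces to a one-line linear interpolation between the empty-load and full-load rates. The numbers in the test (0.2 L/km empty, 0.3 L/km at a 5 t capacity) are illustrative assumptions.

```python
def fuel_per_km(load, y_empty, y_full, capacity):
    """Y' = Y0 + (Y* - Y0) / Q1 * X': fuel per unit distance at a given load."""
    return y_empty + (y_full - y_empty) / capacity * load
```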
In the process of cold chain logistics distribution, when a refrigerated truck travels from distribution center $i$ to customer point $j$, the CO2 emissions $E_{ij}$ generated during this segment can be expressed as:
$$E^{traction}_{ij}=e_0\left(Y_0+\frac{Y^{*}-Y_0}{Q_1}\,Q_{ip}\right)d_{ij}\,x_{ijk} \tag{13}$$
where $e_0$ is the CO2 emission coefficient. This part of the model captures the influence of the dominant factors: travel distance and load.
Refrigeration Process Emissions: The energy consumption for refrigeration primarily arises from maintaining the low temperature inside the cargo compartment. This consumption is exacerbated by heat intrusion each time the vehicle stops and the doors are opened for unloading.
The refrigeration emissions generated while the vehicle travels on arc $(i,j)$ and during service at node $i$ can be estimated as:
$$E^{cooling}_{ij}=\big(\rho_{base}\,H\,T_{ijk}+\rho_{door}\,y_{jk}\big)\,e_0 \tag{14}$$
where $\rho_{base}$ is the base refrigeration power per unit time (related to insulation performance and the temperature difference between inside and outside), and $\rho_{door}$ is the peak additional refrigeration energy consumption caused by each door opening for unloading.
The total carbon emissions are the sum of the above two parts:
$$E = \sum_{(i,j) \in A}\left(E_{ij}^{traction} + E_{ij}^{cooling}\right)$$
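As a concrete illustration, the two components above can be evaluated for a single route segment. The sketch below is a minimal Python rendering of the traction and cooling formulas; all numerical parameter values (emission factor, empty/full fuel rates, capacity, cooling powers) are illustrative assumptions, not calibrated values from this study, and any insulation factor is folded into the base cooling power.

```python
def traction_emissions(e0, Y0, Ystar, Q1, load, dist):
    """E^traction = e0 * (Y0 + (Ystar - Y0) / Q1 * load) * dist."""
    per_km = Y0 + (Ystar - Y0) / Q1 * load   # fuel use per km at this load
    return e0 * per_km * dist

def cooling_emissions(e0, rho_base, time, rho_door, door_openings):
    """E^cooling: base refrigeration over travel/service time plus a
    fixed spike per door opening (insulation factor folded into rho_base)."""
    return e0 * (rho_base * time + rho_door * door_openings)

# Illustrative segment: a 4 t truck carrying 2 t over 10 km, one unloading
# stop, 0.25 h total travel and service time.
e_trac = traction_emissions(e0=2.63, Y0=0.12, Ystar=0.18, Q1=4.0, load=2.0, dist=10.0)
e_cool = cooling_emissions(e0=2.63, rho_base=0.05, time=0.25, rho_door=0.3, door_openings=1)
total = e_trac + e_cool   # about 4.77 kg CO2 under these assumed parameters
```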

2.4. Multi-Objective Model

Based on the comprehensive considerations above, a multi-objective optimization model is constructed to maximize customer satisfaction, minimize total distribution cost, and minimize carbon emissions.
$$\max S = \dfrac{\sum_{j=1}^{n}X_j S\left(t_j\right)}{\sum_{j=1}^{n}X_j}$$
$$\min C = C_f + C_v + C_r + C_d + C_p$$
$$\min E = \sum_{(i,j) \in A}\left(E_{ij}^{traction} + E_{ij}^{cooling}\right)$$
The constraints are formulated as follows:
$$\sum_{k \in K}y_{jk} = 1, \quad \forall j \in D$$
$$B_{jk}^{a} = B_{ik}^{d} - e\,d_{ij}, \quad \forall (i,j) \in V,\; k \in K_e$$
$$B_{ik}^{d} = B_{ik}^{a} + \delta_{ik}, \quad \forall i \in V,\; k \in K_e$$
$$0 \le B_{ik}^{a},\, B_{ik}^{d} \le B_{\max}, \quad \forall i \in V,\; k \in K_e$$
$$0 \le \delta_{ik} \le y_{ik}B_{\max}, \quad \forall i \in D \cup V_s,\; k \in K_e$$
$$\delta_{ik} = 0, \quad \forall i \in V_c,\; k \in K_e$$
$$t_{ik}^{d} = t_{ik}^{a} + t_s + \delta_{ik}/\rho, \quad \forall i \in V,\; k \in K_e$$
$$\sum_{j \in D}x_{Ojk} \le 1, \quad \forall k \in A \cup B$$
$$O_i \ge 1$$
$$\sum_{i \in O}\sum_{j \in D}x_{ijk}X_j \le Q_1, \quad \forall k \in A$$
$$\sum_{i \in O}\sum_{j \in D}x_{ijk}X_j \le Q_2, \quad \forall k \in B$$
$$SS_j \le t_j \le LL_j, \quad \forall j \in D$$
Formula (16) represents the maximization of customer satisfaction; Formula (17) the minimization of total distribution cost; Formula (18) the minimization of carbon emissions; Formula (19) ensures that each customer point is served; Formula (20) represents the battery energy consumption when traveling from node $i$ to node $j$; Formula (21) represents the increase in battery energy after recharging at node $i$; Formula (22) ensures that the battery energy level always remains within the feasible range; Formulas (23) and (24) collectively ensure that recharging occurs only at distribution centers or charging stations, and that the amount of energy recharged is constrained by battery capacity and the charging decision variables; Formula (25) represents the departure time of vehicle $k$ from node $i$; Formula (26) stipulates that each refrigerated truck departs from a distribution center; Formula (27) states that a delivery task can only be performed if a refrigerated truck is available at the distribution center; Formula (28) ensures that fuel-powered vehicles do not exceed their load capacity; Formula (29) ensures that electric vehicles do not exceed their load capacity; and Formula (30) guarantees that refrigerated trucks arrive at customer points within the specified time windows.

3. Algorithm Design

3.1. Principles and Limitations of the Classical NSGA-III Algorithm

The key step in the NSGA-II algorithm is determining which individuals proceed to the next generation by computing their crowding distance. However, this mechanism exhibits weak convergence. NSGA-III shares a similar framework with NSGA-II but addresses poor population convergence more effectively by introducing reference points and a niche-preservation strategy based on them. Nevertheless, maintaining a suitably uniform distribution of reference points remains difficult [30,39,40,41,42].
Key Steps of NSGA-III: Assume the population size at generation $t$ is $N$. The algorithm first applies genetic operations to the parent population $P_t$ to generate an offspring population $Q_t$, then combines the two into a temporary population $R_t = P_t \cup Q_t$, of size $2N$. The objective is to select $N$ individuals from $R_t$ to form the next parent population $P_{t+1}$. Non-dominated sorting partitions $R_t$ into fronts $F_1, F_2, F_3, \dots$, and selection proceeds on both Pareto rank and diversity: the lower an individual's front index, the greater its probability of entering $P_{t+1}$. Fronts $F_1, F_2, F_3, \dots$ are sequentially incorporated into a set $S_t$ until $|S_t| \ge N$; the front $F_l$ whose inclusion first satisfies this condition is called the critical front. If $|S_t| = N$, all individuals from $F_1$ through $F_l$ form the next population, i.e., $P_{t+1} = S_t$. If $|S_t| > N$, meaning the inclusion of the critical front $F_l$ pushes $S_t$ past $N$, only a portion of $F_l$ can be included in $P_{t+1}$; this portion is chosen by a reference-point-based niching strategy, described as follows. First, the individuals in $S_t$ are normalized so that the ideal point becomes the origin. A set of $H$ uniformly distributed reference points is then generated; rays emanating from the origin through these points define reference lines, and the perpendicular distance from each individual in $S_t$ to each reference line is computed.
Each individual is associated with the reference point whose reference line yields the smallest perpendicular distance. Let $\rho_r$ denote the number of already-selected individuals associated with reference point $r$, defined as its niche count. In each selection round, the reference point $r$ with the smallest niche count is chosen, and a decision is made based on the value of $\rho_r$: if $\rho_r = 0$, the individual in $F_l$ associated with $r$ that has the smallest perpendicular distance is inserted into $P_{t+1}$, and $\rho_r$ is incremented by 1; if $\rho_r \ge 1$, an individual in $F_l$ associated with $r$ is chosen at random, added to $P_{t+1}$, and $\rho_r$ is likewise incremented. This process repeats until the number of individuals in $P_{t+1}$ equals $N$.
The traditional NSGA-III algorithm exhibits limitations when applied to the cold chain logistics vehicle routing problem, including slow convergence due to reliance on randomly generated initial populations, insufficient optimization capability for continuous variables through its crossover operation, and a mutation mechanism that struggles to balance global exploration and local exploitation. To address these issues, PSO is introduced to optimize the initial population, enhancing the quality and diversity of initial solutions. SBX is employed to strengthen the local search capability for continuous variables. Additionally, a polynomial mutation mechanism is incorporated to maintain population diversity. These enhancements effectively overcome the original algorithm’s shortcomings in convergence speed, optimization precision, and solution distribution uniformity, thereby significantly improving the traditional NSGA-III algorithm’s solving performance and robustness for the cold chain logistics vehicle routing problem.

3.2. Initial Population Generation Strategy Based on Particle Swarm Optimization

In the NSGA-III algorithm, the quality of the initial population directly influences both the subsequent convergence speed and the quality of the final solution set. The traditional NSGA-III algorithm employs a random method to generate the initial population. While this approach is straightforward, it often results in insufficient population diversity, slow convergence, and a heightened risk of becoming trapped in local optima.
To address the aforementioned issues, this study leverages the powerful global guided search capability of Particle Swarm Optimization (PSO) to design a hybrid initialization strategy aimed at generating a high-quality and highly diverse initial population. The core of this strategy involves executing a short-cycle PSO search, where the encoding and fitness function are specifically designed to adapt to the path optimization problem. The detailed procedure is as follows:
Step 1: Encoding. A particle position $X_i = \left(x_1, x_2, \dots, x_n\right)$ represents the visitation sequence of customer points. Simultaneously, real-number encoding is employed to denote continuous parameters within the particle position vector $X_i$, which requires a decoding process to transform these parameters into actual routes. During initialization, $N$ particles are randomly generated, with each particle's position and velocity initialized as follows:
$$X_i^{0} \sim U\left(x_{\min}, x_{\max}\right), \qquad V_i^{0} \sim U\left(v_{\min}, v_{\max}\right)$$
where $U$ denotes the uniform distribution, $[x_{\min}, x_{\max}]$ represents the position boundaries, and $[v_{\min}, v_{\max}]$ defines the velocity boundaries.
Step 2: Fitness Function Design. The fitness function is used to evaluate the merit of a particle and is typically related to the model’s objective functions. For the cold chain logistics distribution routing problem, the fitness function can be defined as:
$$f\left(X_i\right) = w_1\,\mathrm{Cost}\left(X_i\right) - w_2\,\mathrm{Satisfaction}\left(X_i\right) + w_3\,E\left(X_i\right)$$
where $\mathrm{Cost}(X_i)$ represents the total distribution cost, $\mathrm{Satisfaction}(X_i)$ the customer satisfaction, $E(X_i)$ the total carbon emissions, and $w_1, w_2, w_3$ are weight coefficients used to balance the multiple objectives. Since the fitness is minimized, the satisfaction term enters with a negative sign.
Step 3: Particle Velocity and Position Update. During each iteration, particles update their velocity and position based on their personal best position and the global best position according to the following equations:
$$V_i^{t+1} = wV_i^{t} + c_1r_1\left(pbest_i - X_i^{t}\right) + c_2r_2\left(gbest - X_i^{t}\right)$$
$$X_i^{t+1} = X_i^{t} + V_i^{t+1}$$
where w is the inertia weight, controlling the momentum of the particle’s velocity; c 1 , c 2 are acceleration coefficients, governing the influence of the particle’s individual experience and social experience, respectively; r 1 , r 2 are random numbers uniformly distributed in the interval [0, 1], introducing stochasticity to the search process.
Step 4: Individual and Global Best Update. After each iteration, the personal best position of each particle and the global best position are updated as follows:
$$pbest_i = \begin{cases} X_i^{t+1}, & \text{if } f\left(X_i^{t+1}\right) < f\left(pbest_i\right) \\ pbest_i, & \text{otherwise} \end{cases} \qquad gbest = \arg\min_{pbest_i} f\left(pbest_i\right)$$
Step 5: Termination Condition and Output. The PSO process terminates when either the maximum number of iterations is reached or the fitness values converge. Upon termination, the optimized particle swarm, represented by the final personal best positions, is output as the high-quality initial population for the subsequent NSGA-III algorithm.
Based on the above design, the pseudocode for the hybrid initialization strategy is shown in Algorithm 1:
Algorithm 1: PSO-based Hybrid Population Initialization.
Input: pop_size
Output: P
1: P_elite ← calpso(cs, pop_size)   // Obtain a set of elite solutions via Particle Swarm Optimization
2: P_unique ← remove_duplicates(P_elite)   // Remove duplicate solutions
3: num_elite ← size(P_unique, 1)
4: P_random ← generate_random_individuals(pop_size − num_elite)
5: P ← [P_unique; P_random]   // Combine the elite solutions with the random solutions
6: return P
This strategy leverages the guided search of Particle Swarm Optimization to rapidly direct the population toward promising regions of the solution space, thereby ensuring the quality of the initial solutions. Concurrently, it enhances population diversity by hybridizing the PSO-optimized solutions with completely randomly generated solutions, establishing a robust foundation for the subsequent evolutionary process of NSGA-III.
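The update equations of Steps 3 and 4 can be sketched in a few lines. The following Python fragment is a minimal, generic PSO iteration applied to a stand-in objective; the inertia weight, acceleration coefficients, and toy sphere function are illustrative assumptions, whereas the actual implementation couples this loop to the route-decoding fitness of Step 2.

```python
import random

def pso_step(X, V, pbest, gbest, fitness, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: the velocity/position update of Step 3 followed
    by the personal/global best update of Step 4. w, c1, c2 are illustrative."""
    for i in range(len(X)):
        r1, r2 = random.random(), random.random()
        V[i] = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                for v, x, pb, gb in zip(V[i], X[i], pbest[i], gbest)]
        X[i] = [x + v for x, v in zip(X[i], V[i])]
        if fitness(X[i]) < fitness(pbest[i]):      # personal best update
            pbest[i] = list(X[i])
    gbest = min(pbest, key=fitness)                # gbest = argmin f(pbest_i)
    return X, V, pbest, gbest

# Toy usage: minimise a 2-D sphere function with 5 particles.
random.seed(0)
f = lambda p: sum(x * x for x in p)
X = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(5)]
V = [[0.0, 0.0] for _ in range(5)]
pbest = [list(p) for p in X]
gbest = min(pbest, key=f)
f0 = f(gbest)                                      # best fitness before search
for _ in range(30):
    X, V, pbest, gbest = pso_step(X, V, pbest, gbest, f)
```

The best fitness is monotonically non-increasing across iterations, which is the property the hybrid initialization relies on to deliver a better-than-random starting population.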

3.3. Local Search Enhancement Mechanism Based on Simulated Binary Crossover

In the NSGA-III algorithm, the Simulated Binary Crossover (SBX) operator is adopted, primarily due to its mathematical properties and its fit with the characteristics of the cold chain logistics distribution routing model. The specific reasons are as follows: ① Mathematical principle and characteristics of SBX. SBX is a real-valued crossover operator whose core idea is to emulate the single-point crossover behavior of binary-coded genetic algorithms, but in real-valued space. It generates offspring around the parents according to a distribution index, allowing controlled exploration near the current solutions. ② Adaptation to the continuous-variable nature of the model. The cold chain logistics distribution routing problem typically involves continuous variables, such as delivery time, vehicle load, and customer satisfaction. Traditional binary crossover is ill-suited to optimizing such variables directly, whereas SBX, through its real-valued crossover operation, can handle them effectively while preserving solution feasibility and diversity. ③ Enhancement of local search capability. The distribution index of SBX causes offspring to be generated in the vicinity of the parent individuals, strengthening local search. For the cold chain routing problem, improved local search helps quickly locate high-quality solutions within the complex solution space and avoids the quality degradation that random crossover can cause. ④ Preservation of population diversity. By randomly generating the $\beta$ value, SBX ensures that offspring appear in different regions near the parents, thereby maintaining population diversity. This is particularly crucial for multi-objective optimization, as it helps prevent premature convergence to local optima and enhances global search capability.
However, the distribution index ( η c ) of the standard Simulated Binary Crossover (SBX) operator is typically a fixed parameter, which constrains its capacity to dynamically balance exploration and exploitation throughout the search process. To address this limitation, this paper introduces an adaptive SBX scheme wherein the core innovation is enabling η c to self-adjust based on the population’s evolutionary state. The detailed procedure is outlined as follows:
Step 1: Parent Selection. Two parent individuals, denoted as P 1 and P 2 , are selected from the current population using a tournament selection method.
Step 2: Random Number Generation u . For each decision variable, generate a uniformly distributed random number u within the interval [0, 1].
Step 3: Calculation of Spread Parameter β .
$$\beta = \begin{cases} \left(2u\right)^{\frac{1}{\eta_c + 1}}, & \text{if } u \le 0.5 \\[4pt] \left(\dfrac{1}{2\left(1 - u\right)}\right)^{\frac{1}{\eta_c + 1}}, & \text{if } u > 0.5 \end{cases}$$
where u is a uniformly distributed random number in the interval [0, 1]; η c is the distribution index, controlling the proximity of offspring to their parents. A larger η c value results in offspring being generated closer to the parents, thereby enhancing local search capability; conversely, a smaller η c value promotes a wider spread of offspring, thus strengthening global search capability.
Step 4: Offspring Generation. The two offspring individuals C 1 and C 2 are generated for each decision variable using the following formulas:
$$C_1 = 0.5\left[\left(1 + \beta\right)P_1 + \left(1 - \beta\right)P_2\right], \qquad C_2 = 0.5\left[\left(1 - \beta\right)P_1 + \left(1 + \beta\right)P_2\right]$$
Step 5: Constraint Handling and Feasibility Check. Verify whether the generated offspring individuals satisfy all constraints specified in the model. If constraints are violated, apply repair mechanisms or regenerate the offspring to ensure feasibility.
Step 6: Population Update. Incorporate the feasible offspring into the new population, replacing the parent individuals according to the selection and replacement strategy of the NSGA-III algorithm.
Based on the aforementioned design, the pseudocode for the adaptive simulated binary crossover is shown in Algorithm 2:
Algorithm 2: Adaptive Simulated Binary Crossover.
Input: P1, P2
Output: C1, C2
1: η_c ← compute_adaptive_distribution_index()   // Based on population diversity
2: for each decision variable i do
3:    β ← compute_distribution_factor(η_c, u)   // u ∼ U(0, 1)
4:    C1[i], C2[i] ← generate_offspring(P1[i], P2[i], β)
5: end for
6: return C1, C2
This adaptive strategy dynamically adjusts the distribution index of the crossover operator, enabling the algorithm to autonomously balance global exploration and local exploitation during the evolutionary process, thereby significantly enhancing search efficiency and convergence precision in complex solution spaces.
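The spread-parameter and offspring formulas above can be condensed into a short sketch. The Python fragment below implements Steps 2–4 for a single decision variable, with a simple bound clamp standing in for the full constraint handling of Step 5; the value of $\eta_c$ and the variable bounds are illustrative assumptions.

```python
import random

def sbx_pair(p1, p2, eta_c=15.0, lo=0.0, hi=1.0):
    """Simulated Binary Crossover for one decision variable (Steps 2-4).
    A bound clamp stands in for the full feasibility check of Step 5."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    clamp = lambda v: min(max(v, lo), hi)
    return clamp(c1), clamp(c2)

random.seed(1)
c1, c2 = sbx_pair(0.2, 0.8)
# Before clamping, the offspring midpoint equals the parent midpoint
# (c1 + c2 == p1 + p2), so the operator explores symmetrically around the parents.
```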

3.4. Global Search Optimization Method Based on Polynomial Mutation

In the vehicle routing problem for cold chain logistics distribution, the polynomial mutation operator is employed to fully leverage its mathematical properties for real-valued encoding, addressing the difficulties traditional mutation methods face in handling continuous variables and complex constraints. By enhancing global search capability, maintaining population diversity, and adapting to intricate constraints, polynomial mutation can significantly improve the algorithm’s optimization performance and robustness, providing high-quality solutions for multi-objective routing problems.
The specific procedure of polynomial mutation in this context is as follows:
Step 1: Selection of Individuals for Mutation. Individuals are selected from the parent population to undergo mutation based on a predetermined mutation probability p m .
Step 2: Generation of Random Perturbation Factor. For each decision variable x i selected for mutation, generate a random number u uniformly distributed in [0, 1]. Calculate the perturbation factor δ using the polynomial distribution formula:
$$\delta = \begin{cases} \left(2u\right)^{\frac{1}{\eta_m + 1}} - 1, & \text{if } u < 0.5 \\[4pt] 1 - \left(2\left(1 - u\right)\right)^{\frac{1}{\eta_m + 1}}, & \text{otherwise} \end{cases}$$
where η m is the mutation distribution index, which controls the strength and scope of the mutation.
Step 3: Offspring Generation.
$$x_i' = x_i + \delta\left(x_{u,i} - x_{l,i}\right)$$
where $x_{u,i}$ and $x_{l,i}$ are the upper and lower bounds of variable $x_i$, respectively.
Step 4: Boundary Check and Correction. Verify whether each variable of the offspring individual exceeds its defined boundaries. If a variable violates these constraints, correct it by setting the value to the nearest boundary value.
Step 5: Fitness Evaluation. Calculate the fitness value of the offspring individual by evaluating its performance against the multi-objective optimization criteria.
Step 6: Population Update. Incorporate the feasible offspring individual into the population, either directly replacing the parent individual or participating in a subsequent environmental selection process to maintain population size and quality.
Based on the aforementioned design, the pseudocode for the dynamic polynomial mutation is shown in Algorithm 3:
Algorithm 3: Dynamic Polynomial Mutation.
Input: C, gen, MaxGen, p_m_max, p_m_min, k
Output: C′
1: p_m ← p_m_max − (p_m_max − p_m_min) × (gen/MaxGen)^k   // Dynamic mutation probability
2: for each decision variable i do
3:    if rand() ≤ p_m then
4:       C′[i] ← perturb(C[i])   // Generate new value by perturbation
5:    end if
6: end for
7: return C′
This dynamic mechanism ensures a nonlinear decrease in the mutation probability throughout the evolutionary process, guaranteeing sufficient exploration of the solution space during the early iterations while stabilizing the search in promising regions for intensive exploitation during later stages. This approach effectively reconciles the conflict between discovering new areas and converging to precise solutions.
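The decaying mutation probability of Algorithm 3 and the perturbation formula of Steps 2–3 can be sketched as follows; the values of p_m_max, p_m_min, k, and $\eta_m$ are illustrative assumptions.

```python
import random

def dynamic_pm(gen, max_gen, pm_max=0.2, pm_min=0.01, k=2.0):
    """Nonlinearly decaying mutation probability from Algorithm 3:
    p_m = pm_max - (pm_max - pm_min) * (gen / max_gen)^k."""
    return pm_max - (pm_max - pm_min) * (gen / max_gen) ** k

def poly_mutate(x, lo, hi, eta_m=20.0):
    """Polynomial mutation of one variable (Steps 2-3), with the
    boundary correction of Step 4 applied via clamping."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return min(max(x + delta * (hi - lo), lo), hi)

pm_early = dynamic_pm(1, 200)     # near pm_max: broad early exploration
pm_late = dynamic_pm(200, 200)    # equals pm_min: fine late exploitation
random.seed(2)
y = poly_mutate(0.5, lo=0.0, hi=1.0)
```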

3.5. Overall Flow Design of the Improved NSGA-III Algorithm

The core design philosophy of the INSGA-III algorithm lies in constructing a phased, collaborative optimization framework with distinct responsibilities for each component, systematically addressing the bottlenecks faced by traditional algorithms in solving the hybrid fleet routing problem for cold chain logistics. Its innovation does not lie in inventing entirely new operators but rather in a deeply customized integration tailored to the problem characteristics: first, leveraging the guided search capability of PSO to provide a high-quality, high-diversity starting point for evolution, overcoming the blindness of random initialization at its source; second, endowing genetic operators with adaptability, enabling them to intelligently switch between exploration and exploitation based on the search state, thereby achieving fine-grained exploration of the complex solution space; finally, seamlessly embedding the above mechanisms into the NSGA-III framework of reference point-based elite selection, ensuring the search process consistently progresses toward a converged and well-distributed Pareto front. This systematic collaboration of “high-quality initialization, intelligent adaptive search, and stable elite guidance” is the key to achieving a breakthrough in overall performance.
The main procedure of INSGA-III operates through iterative evolution, with its core steps and module invocations as detailed in Algorithm 4. The specific implementation details of all critical operations will be elaborated in Section 3.6.
Algorithm 4: Overall Procedure of the Improved Hybrid NSGA-III Algorithm.
Input: gen_max, pop_size, fname, V, M
Output: Pareto
// Phase 1: Hybrid Initialization (Improvement 1)
1: P_elite ← calpso(cs, pop_size)   // Generate elite solutions via PSO
2: P_unique ← remove_duplicates(P_elite)   // Remove duplicate solutions
3: P_random ← generate_random_individuals(pop_size − size(P_unique, 1))
4: population ← [P_unique; P_random]   // Combine elite and random solutions
5: population ← evaluate(population, fname)   // Evaluate initial population
6: [population, front] ← non_dominated_sort(population)   // Non-dominated sorting
7: for gen = 1 to gen_max do   // Main loop begins
// Phase 2: Reproduction and Evolution
8:    parent_selected ← tournament_selection(population)   // Tournament selection
9:    child_offspring ← genetic_operator(parent_selected)   // Generate offspring (Improvements 2 & 3)
10:   child_offspring[:, IntVar] ← round(child_offspring[:, IntVar])   // Integer repair
11:   child_offspring ← evaluate(child_offspring, fname)   // Evaluate offspring
// Phase 3: Environmental Selection
12:   population_inter ← [population; child_offspring]   // Merge parent and offspring
13:   [population_inter_sorted, front] ← non_dominated_sort(population_inter)
14:   new_population ← replacement(population_inter_sorted, front)   // Generate new population
15:   population ← new_population
16: end for
// Phase 4: Result Output
17: Pareto ← population(population(:, V+M+2) == 1, :)   // Extract Rank-1 solutions
18: return Pareto

3.6. Core Implementation Mechanisms

To ensure the robustness and reproducibility of the INSGA-III algorithm, this section elucidates two core mechanisms: the Infeasible Individual Repair Strategy and the Adaptive Parameter Calibration Process.
Infeasible Individual Repair Strategy. During the evolutionary process, newly generated individuals may simultaneously violate multiple hard constraints of the routing problem. This algorithm employs a hierarchical, sequential repair heuristic that prioritizes restoring basic feasibility constraints before optimizing higher-level objectives. The specific steps are as follows:
Step 1: Load Capacity Constraint Repair. Check: traverse each route and compute its total demand $\sum_j X_j^{p}$; if it exceeds the vehicle's maximum load capacity $Q$, the route is marked as infeasible. Repair: apply the "farthest customer removal–nearest feasible insertion" rule. Specifically, the customer farthest from the depot within the infeasible route is temporarily removed, and an attempt is made to insert it at the minimum-insertion-cost feasible position in another existing route of the current solution. This process is repeated until the load capacity constraints of all vehicles are satisfied, ensuring the fundamental physical feasibility of the solution.
Step 2: Energy Constraint Repair. Check: for electric vehicle routes, simulate energy consumption according to Equation (18); if the battery level $B_{jk}^{a}$ falls below zero before reaching any node, the route is marked as infeasible. Repair: before the segment $(i, j)$ where the energy first becomes negative, insert the charging station or depot node closest to node $i$. After insertion, reset the vehicle's charging decision variable $y_{ik}$ and charging amount $\delta_{ik}$, and recalculate the subsequent energy levels. This step ensures the continuous operational capability of electric vehicles.
Step 3: Time Window Constraint Softening and Local Optimization. After undergoing the repairs in the previous two steps, an individual is structurally feasible in terms of route sequence, but may still incur substantial time window penalty costs. Optimization: For such individuals, no further structural repair is applied. Instead, a fast local search is conducted to directly minimize its time window penalty cost C p , thereby fine-tuning the solution. This approach ensures that the repair process contributes not only to feasibility but also to the optimization of the objective function.
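The load capacity repair of Step 1 can be sketched as follows. For brevity, the minimum-insertion-cost rule is approximated by inserting into the least-loaded feasible route, and a new route is opened when no feasible insertion exists; the toy data and this simplification are illustrative assumptions.

```python
def repair_capacity(routes, demand, Q, depot_dist):
    """Sketch of the 'farthest customer removal - nearest feasible insertion'
    rule: while any route exceeds capacity Q, remove its customer farthest
    from the depot and reinsert it into a feasible route (least-loaded here,
    as a stand-in for the minimum-insertion-cost position)."""
    load = lambda r: sum(demand[c] for c in r)
    changed = True
    while changed:
        changed = False
        for r in routes:
            if load(r) > Q:
                far = max(r, key=lambda c: depot_dist[c])   # farthest customer
                r.remove(far)
                feas = [r2 for r2 in routes
                        if r2 is not r and load(r2) + demand[far] <= Q]
                if feas:
                    min(feas, key=load).append(far)
                else:
                    routes.append([far])                    # open a new route
                changed = True
    return routes

# Toy data: capacity 4, route [1, 2, 3] carries 2 + 2 + 1 = 5 > 4.
demand = {1: 2, 2: 2, 3: 1, 4: 1}
depot_dist = {1: 5.0, 2: 9.0, 3: 2.0, 4: 3.0}
routes = repair_capacity([[1, 2, 3], [4]], demand, Q=4, depot_dist=depot_dist)
```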
Adaptive Parameter Calibration Process. In INSGA-III, the distribution index $\eta_c$ for SBX and the distribution index $\eta_m$ for polynomial mutation are neither randomly assigned nor kept constant. Instead, they are adaptively calibrated through a feedback loop based on historical successful experiences. Archiving successful experiences: at the end of each generation $g$, the algorithm archives all offspring individuals that successfully advance to the next generation, along with their corresponding crossover and mutation parameter values $(\eta_c, \eta_m)$, in a temporary repository. Learning parameter distributions: from this repository, the algorithm computes the empirical distribution of the successful parameter values for the current generation. Parameter generation for the next generation: when generating the crossover and mutation parameters for generation $g + 1$, the algorithm samples from a normal distribution centered on the mean of the historical successful parameters $(\mu_{\eta_c}^{g}, \mu_{\eta_m}^{g})$, with standard deviations $(\sigma_{\eta_c}^{g}, \sigma_{\eta_m}^{g})$ reflecting their dispersion. Specifically:
$$\eta_c^{g+1} \sim N\left(\mu_{\eta_c}^{g}, \sigma_{\eta_c}^{g}\right), \qquad \eta_m^{g+1} \sim N\left(\mu_{\eta_m}^{g}, \sigma_{\eta_m}^{g}\right)$$
This mechanism enables the algorithm to autonomously learn and progressively converge toward parameter settings that exhibit superior performance within the contemporary search environment. During the initial stages of the search, successful parameters may be widely dispersed, allowing the algorithm to explore a broad solution space. As evolution converges, the distribution of successful parameters becomes increasingly concentrated and stable, guiding the algorithm into a phase of localized, fine-tuned exploitation. This provides a clear and mathematically traceable interpretation of the concept of “adaptiveness” within the algorithm’s operational framework.
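The sampling step above can be sketched in a few lines of Python; the fallback values used when the archive holds fewer than two successful entries, and the non-negativity clamp, are illustrative assumptions.

```python
import random
import statistics

def next_eta(successful, fallback_mu=20.0, fallback_sigma=5.0):
    """Sample the next generation's distribution index from N(mu_g, sigma_g),
    where mu_g/sigma_g are the mean/std of this generation's successful
    parameter values (the archived repository described in the text)."""
    if len(successful) < 2:
        mu, sigma = fallback_mu, fallback_sigma    # assumed fallback
    else:
        mu = statistics.mean(successful)
        sigma = statistics.stdev(successful) or 1e-6   # avoid degenerate N(mu, 0)
    return max(random.gauss(mu, sigma), 0.0)       # keep eta non-negative

random.seed(3)
eta_c_next = next_eta([12.0, 15.0, 18.0, 14.0])   # sampled near the mean 14.75
eta_fixed = next_eta([10.0, 10.0, 10.0, 10.0])    # concentrated archive: stays near 10
```

As the text notes, a dispersed archive early in the run yields broadly scattered samples (exploration), while a concentrated archive late in the run pins the index down (exploitation).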

3.7. Analysis of Computational Complexity and Scalability

(1) Computational Complexity Analysis
Let the population size be $N$, the number of objective functions be $m$, the maximum number of evolutionary generations be $G$, the number of PSO initialization iterations be $M$, the number of customer points be $n$, and the cost of a single individual evaluation be $f$. PSO initialization phase: the complexity is $O(MNf)$, a one-time, fixed upfront overhead. NSGA-III main loop (per generation): this comprises evaluation $O(Nf)$, non-dominated sorting and reference-point association $O(mN^2)$, and genetic operations and repair $O(Nn)$; the per-generation complexity is dominated by $O(mN^2)$. Overall evolutionary process: the total complexity is $O\left(MNf + G\left(Nf + mN^2\right)\right)$.
(2) Scalability Experiments and Computational Efficiency Analysis
To assess the algorithm’s potential for handling large-scale problems, we generated larger test sets with customer point scales of 100, 200, and 500 based on benchmark instances such as C101, using replication and perturbation methods.
(3) Discussion on Scalability Strategies for Industrial Application
Cost–Benefit Trade-off: The overhead of PSO initialization, O M N f , provides a high-quality initial population which can significantly reduce the total number of convergence generations G . This yields a net benefit in terms of either overall runtime or solution quality.
Performance and Efficiency Summary: As shown in Table 2, when the problem scale expands to 500 customer points, the HV value of INSGA-III decreases by only 8.1%, demonstrating good scale robustness. The runtime growth trend lies between linear and quadratic. Processing a 500-point problem requires approximately 20 min, establishing feasibility for offline or daily planning.
Path to Industrial-Grade Application: When n becomes extremely large, the evaluation cost f becomes the primary bottleneck. The algorithmic framework proposed in this paper can be further enhanced for scalability through the following strategies:
(1) Parallelization: population individual evaluation is a highly parallelizable task; implementation on multi-core CPUs or GPUs can theoretically reduce evaluation time by nearly a factor of $N$.
(2) Approximate Evaluation: employ surrogate models for rapid fitness estimation in the early evolutionary stages, reserving precise evaluation for later stages or critical individuals.
(3) Hierarchical Optimization: first cluster customer points into partitions, then perform detailed path optimization within each partition.

4. Computational Results and Analysis

4.1. Experimental Setup and Performance Metrics

To ensure a comprehensive and impartial evaluation of the performance of the improved NSGA-III algorithm, this section elaborates on the computational environment, benchmark test problems, parameter configurations, and quantitative performance metrics employed in the experiments.

4.1.1. Computational Environment, Test Instances, and Parameter Adaptation

All experiments were conducted on a unified platform (Windows 11, 2.4 GHz CPU, 16 GB RAM) and implemented using MATLAB R2019b.
(1) Benchmark Instance Selection and Adaptation Method
To ensure a fair evaluation and align with the cold chain distribution scenario, this study selects the C1-class instances (C101–C105) from the Solomon benchmark dataset, characterized by clustered customer point distributions, as the source of geographic coordinates and basic demand data. To adapt these instances to the green vehicle routing problem with a mixed fleet in cold chain logistics, we enhance the original instances following a set of transparent rules based on industry data and engineering practice:
Temperature Control and Cargo Damage Parameters: each customer point is randomly assigned a product temperature tier (frozen, refrigerated, or ambient), based on which the target temperature, thermal sensitivity coefficient, and cargo damage model are set.
Mixed Fleet Parameters: an initial fleet electrification ratio (30–50%) is defined. For electric vehicles, battery capacity (100 kWh), unit energy consumption (0.25 kWh/km), and charging rate are configured; for fuel-powered vehicles, fuel consumption per 100 km (14 L) and refrigeration energy consumption coefficients are set. The load capacity of both vehicle types is 4 tons.
Energy Refueling Network: charging/fueling station nodes are randomly generated outside the distribution center in proportion to the problem scale, forming an energy refueling network.
Dynamic Cost and Time Calculation: based on the generated parameters, vehicle load, travel distance, and traffic coefficients, the energy consumption, time, and various costs of each route segment are dynamically calculated.
This method ensures that all compared algorithms are fairly tested on the same set of enhanced benchmark instances that closely reflect real-world conditions.
(2) Scenario and Operational Parameter Settings
Customer Point Scale: three levels are set: 20, 50, and 100.
Time Windows: two types are defined: strict (45 min) and relaxed (90 min).
Cost Parameters: diesel price (7.5 CNY/L), industrial electricity price (0.8 CNY/kWh), vehicle fixed cost (300–350 CNY/day), driver labor cost (2 CNY/minute), and late delivery penalty.

4.1.2. Algorithm Parameters, Comparative Settings, and Fairness Assurance

(1)
Algorithm Parameters
INSGA-III parameters: population size 100, maximum iterations 200, crossover probability 0.8. An adaptive mutation probability P_m = 1/n (where n is the problem scale) and a reference point division layer H = 4 are adopted.
Comparative algorithms: Standard NSGA-III, NSGA-II, MOEA/D, and the single-objective Genetic Algorithm (GA) are selected for comprehensive comparison.
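For reference, the division layer H = 4 corresponds to the standard Das-Dennis construction of structured reference points on the unit simplex used by NSGA-III; for M = 3 objectives it yields C(H+M-1, M-1) = C(6, 2) = 15 reference points. A minimal stars-and-bars sketch:

```python
from itertools import combinations

def das_dennis(n_obj, h):
    """Das-Dennis structured reference points for NSGA-III: all vectors on
    the unit simplex whose components are multiples of 1/h (stars-and-bars
    enumeration: choose n_obj-1 bar positions among h + n_obj - 1 slots)."""
    points = []
    for bars in combinations(range(h + n_obj - 1), n_obj - 1):
        prev, point = -1, []
        for b in bars:
            point.append((b - prev - 1) / h)  # stars between consecutive bars
            prev = b
        point.append((h + n_obj - 2 - prev) / h)  # stars after the last bar
        points.append(point)
    return points
```

Each generated point sums to 1, and the population is selected so that its members associate evenly with these directions.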
(2)
Experimental Fairness Assurance
To eliminate performance differences arising from auxiliary mechanisms, the following principles are strictly enforced:
Unified Constraint Handling: The hierarchical repair strategy proposed in this paper is implemented as an independent, shared module. All comparative algorithms are mandatorily required to invoke this module for feasibility repair after generating new solutions.
Unified Operating Environment: All algorithms are executed under identical hardware, software, test instances, and maximum number of function evaluations (termination criterion).
Coordinated Parameter Baseline: The baseline distribution indices for common genetic operators (e.g., SBX, polynomial mutation) are set identically across all multi-objective algorithms. The “adaptive” nature of INSGA-III is reflected in its dynamic adjustments around this common baseline.
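To illustrate the idea of a shared repair module, the sketch below shows one level of such a hierarchy, capacity repair only. The interface is an assumption; the paper's hierarchical strategy also repairs time-window and battery-capacity violations.

```python
def repair_route(route, demand, capacity=4.0):
    """Greedily split a candidate route into feasible trips whenever the
    cumulative demand would exceed the vehicle load capacity (4 t here).
    Returns a list of capacity-feasible sub-routes."""
    repaired, current, load = [], [], 0.0
    for c in route:
        if load + demand[c] > capacity and current:
            repaired.append(current)   # close the trip before it overloads
            current, load = [], 0.0
        current.append(c)
        load += demand[c]
    if current:
        repaired.append(current)
    return repaired
```

Because every compared algorithm calls the same module after variation, no algorithm gains an advantage from a private constraint-handling trick.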

4.1.3. Performance Evaluation Metrics

To quantitatively evaluate the comprehensive performance of the algorithms, this study employs two widely adopted performance metrics in the field of multi-objective optimization:
Hypervolume (HV): This metric measures the volume in the objective space enclosed by the solution set and a predefined reference point. It simultaneously reflects both the convergence and diversity of the solution set. A larger HV value indicates better overall performance.
Inverted Generational Distance (IGD): This metric calculates the average distance from reference points on the true Pareto front to the solution set obtained by the algorithm. It effectively assesses the convergence accuracy of the solution set. A smaller IGD value indicates better convergence performance.
Furthermore, all experiments were independently executed 30 times to mitigate the effects of randomness, and the Wilcoxon rank-sum test was applied at a significance level of 0.05 to verify whether the observed performance differences are statistically significant.
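The two metrics can be made concrete with a minimal sketch for minimization problems; the HV routine covers only the 2-objective case (a study like this would normally rely on a vetted library implementation for higher dimensions):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective minimisation front and bounded by the
    reference point `ref`: sweep the points in ascending f1 and accumulate
    the rectangle each non-dominated point adds."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:                     # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def igd(reference_front, solution_set):
    """Inverted Generational Distance: average Euclidean distance from each
    point of the (sampled) true Pareto front to its nearest obtained solution."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(min(dist(r, s) for s in solution_set)
               for r in reference_front) / len(reference_front)
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) dominates an area of 6.0.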

4.2. Ablation Study and Analysis of Algorithm Components

To precisely evaluate the independent contributions of the three core components proposed in this paper, namely PSO-based initialization (Improvement 1), adaptive SBX (Improvement 2), and dynamic polynomial mutation (Improvement 3), this section designs a systematic ablation experiment. We constructed four comparative algorithm variants:
Variant-A (Baseline): Standard NSGA-III, employing random initialization, fixed-parameter SBX, and polynomial mutation.
Variant-B (Improvement 1 only): Variant-A with the PSO-based hybrid initialization strategy added.
Variant-C (Improvements 1 + 2): Variant-B with the adaptive SBX operator added.
Variant-D (INSGA-III, the complete algorithm of this paper): Variant-C with the dynamic polynomial mutation operator added.
All variants are built upon the same foundational framework and employ identical parameter tuning rules and constraint-handling mechanisms to ensure the fairness of the comparison. We executed each algorithm on multiple standard instances, including C101, and recorded key performance metrics after convergence: Hypervolume (HV), Inverted Generational Distance (IGD), and the average number of generations to convergence.
(1)
Quantitative Decomposition of Component Contributions
Based on Table 3, the marginal contribution of each newly added component can be clearly quantified:
Contribution of PSO Hybrid Initialization (Improvement 1): It provides the most significant initial performance leap. Compared to the baseline, HV improves by 16.8%, and the number of convergence generations decreases by 20.4%. This indicates that a high-quality initial population directly determines the “starting height” of the search and is the most critical factor in accelerating overall convergence.
Contribution of Adaptive SBX (Improvement 2): Building upon the high-quality initial population, it further enhances the convergence and distribution of the solution set (HV increases by an additional 9.9%) and significantly accelerates mid-term convergence speed (the number of convergence generations further decreases by 19.0%). This demonstrates the effectiveness of its mechanism for dynamically adjusting search behavior based on population status.
Contribution of Dynamic Polynomial Mutation (Improvement 3): Based on the preceding components, it primarily optimizes the distribution uniformity and extensibility of the Pareto front (IGD improves substantially by 17.5%) and plays a key role in fine-grained exploitation during later stages (enabling the algorithm to stabilize at the optimal front more rapidly).
(2)
Analysis of Synergistic Effects Among Components
The experimental results reveal significant “1 + 1 > 2” synergistic effects among the components:
The adaptive crossover operator heavily relies on the high-quality and diverse initial population provided by PSO initialization to effectively exert its “local exploitation” capability.
The dynamic mutation operator assumes a complementary role of “exploring unknown regions” and “fine-tuning continuous variables” after the crossover operator completes the primary structural search.
Together, they form a complete and synergistic search chain: “High-Quality Starting Point → Intelligent Structural Search → Fine-Tuned Exploration.” Each component is indispensable. The performance achieved by enhancing any single component falls short of that of the complete INSGA-III.

4.3. Comparative Analysis of Algorithm Performance

To comprehensively evaluate the integrated performance of the improved NSGA-III proposed in this paper, this section systematically compares it with NSGA-III, NSGA-II, MOEA/D, and GA across four dimensions: quantitative metrics, statistical significance, visual comparison of solution set distributions, and convergence speed. Due to space limitations, this paper selects the most representative C101 instance to present the detailed analysis process. All comparative algorithms exhibit consistent trends across other C1-type instances, which fully demonstrates the broad effectiveness of the proposed improvement strategy. Therefore, the algorithmic performance comparison is primarily based on the C101 test case.

4.3.1. Quantitative Comparison of Overall Performance

To quantitatively evaluate the comprehensive performance of each algorithm, this study employs Hypervolume (HV) and Inverted Generational Distance (IGD) as core metrics, with statistical analysis conducted on the C101 instance. Table 4 presents the mean performance comparison of each algorithm on the representative C101 instance, covering five key objectives: distribution cost, customer satisfaction, carbon emissions, route length, and average runtime.
Based on the quantitative results, the following conclusions can be drawn:
(1)
The algorithm demonstrates outstanding capability in collaborative optimization across all objectives: The proposed INSGA-III algorithm exhibits comprehensive and consistent superiority across the five key objectives—distribution cost, customer satisfaction, carbon emissions, route length, and average runtime. Specifically, compared to the GA, which performed the weakest, INSGA-III reduces distribution costs by approximately 12.46%, cuts carbon emissions by about 49.3%, improves customer satisfaction by around 23.19%, and optimizes route length by roughly 76.7%, while simultaneously shortening the average runtime by 35.4%, significantly enhancing solution efficiency. Even when compared to the relatively well-performing NSGA-III, INSGA-III still achieves further improvements in customer satisfaction (a 0.34% increase), carbon emissions (a 3.27% reduction), and route length (a 3.27% reduction), alongside an 11.5% reduction in average runtime, highlighting its computational efficiency advantage. These results validate that the algorithm effectively guides the search process toward discovering more balanced and comprehensive Pareto-optimal solutions, avoiding entrapment in local optima or imbalanced trade-offs among objectives.
(2)
Comprehensive Superiority in Multi-objective Evaluation Metrics: In terms of the two standard multi-objective evaluation metrics, HV and IGD, INSGA-III achieves the highest HV value of 0.724 and the lowest IGD value of 0.052, significantly outperforming all benchmark algorithms. This not only theoretically validates that the obtained solution set exhibits superior convergence—closer proximity to the true Pareto front—but also demonstrates more desirable distribution diversity, indicating a more uniform and extensive coverage of the solution set across the objective space.
(3)
Algorithm Performance Gradient Aligns with Expected Evolutionary Trajectory: A clear performance gradient is observable from the distribution of HV and IGD values: INSGA-III > NSGA-III > NSGA-II > MOEA/D > GA, indicating a progressive improvement in algorithmic performance aligned with the evolution of algorithmic paradigms. Compared to early methods such as the conventional GA, INSGA-III achieves an approximately 153% improvement in the HV metric and an 84.74% reduction in the IGD metric. Even relative to the state-of-the-art NSGA-III, INSGA-III demonstrates a further 13.46% increase in HV and a 41.57% reduction in IGD. This underscores that the proposed improvements effectively unlock additional performance potential beyond existing advanced frameworks, enabling more refined search and convergence control.

4.3.2. Statistical Significance Test Analysis

To scientifically validate the statistical significance of the performance advantages of the improved NSGA-III algorithm, this study conducted 30 independent repeated experiments across multiple problem instances and analyzed the results using the Wilcoxon rank-sum test (significance level α = 0.05). This test is employed to determine whether the performance differences between two algorithms are statistically significant and not attributable to random factors.
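The test can be reproduced with a stdlib-only sketch using the usual normal approximation, which is adequate for samples of size 30; no tie correction is applied, acceptable for a continuous metric such as HV. (In practice one would call a library routine such as SciPy's rank-sum test.)

```python
import math

def rank_sum_test(x, y):
    """Wilcoxon rank-sum (Mann-Whitney) test, normal approximation.
    Returns (z statistic, two-sided p-value); z > 0 means sample x ranks higher."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks, i = {}, 0
    while i < len(combined):                      # assign average ranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    w = sum(ranks[i] for i in range(n1))          # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p
```

Applied to 30 HV values per algorithm, p < 0.05 rejects the hypothesis that the two algorithms' HV distributions are exchangeable.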
(1)
Statistical Significance of Performance Superiority: As shown in Table 5, the p-values for INSGA-III compared to all benchmark algorithms (NSGA-III, NSGA-II, MOEA/D, and GA) are less than 0.05. This provides strong statistical evidence that the performance improvement of INSGA-III in terms of the HV metric is statistically significant, representing a deterministic advantage attributable to its enhancement mechanisms.
(2)
Generality and Robustness of the Advantage: INSGA-III demonstrates statistically significant superiority over all benchmark algorithms. The advantage is most pronounced compared to GA (p < 1 × 10−10), while the difference, though still significant, is relatively smaller compared to MOEA/D (p = 0.038). This indicates that the performance advantage of INSGA-III is generalizable and robust, consistently outperforming various types of benchmark algorithms.
(3)
Mutual Corroboration with Quantitative Results: The outcomes of these significance tests are fully consistent with the mean HV and IGD values and the multi-objective optimization results presented in Section 4.3.1. The statistical tests not only confirm the reality of the performance differences but also elevate the apparent numerical advantages to a rigorous statistical level, thereby substantially strengthening the credibility and robustness of the study’s conclusions.
(4)
Validation of Algorithmic Enhancement Effectiveness: The statistically significant superiority of INSGA-III over the original NSGA-III validates the effectiveness of the adopted improvement strategies. While maintaining its multi-objective optimization capability, the algorithm significantly enhances the quality of the obtained solution sets, thereby providing a more effective solution for complex optimization problems.

4.3.3. Visual Comparison of Pareto Solution Set Distributions

To visually assess and compare the convergence and distribution of solution sets obtained by different algorithms, this section begins with a visual analysis of the Pareto front for a representative instance (C101). Figure 2 and Figure 3 respectively illustrate the final solution set distributions of the INSGA-III, NSGA-III, and MOEA/D algorithms in the two-dimensional “cost-satisfaction” plane and the three-dimensional “cost-satisfaction-carbon emission” space.
(1)
Qualitative Visual Analysis of Solution Set Distribution
Through a direct comparison of Figure 2 and Figure 3, the following qualitative conclusions can be drawn:
Convergence Advantage: As shown in Figure 2, the “front” formed by the solution set obtained by INSGA-III is closest to the lower-left corner of the coordinate plot (i.e., the ideal region of lower cost and higher satisfaction), indicating that it possesses the strongest convergence capability and can discover higher-quality solutions closer to the true Pareto front.
Diversity Advantage: While achieving excellent convergence, as illustrated in Figure 3, the solution points of INSGA-III are distributed more widely and uniformly in the three-dimensional objective space, forming a complete and smooth trade-off surface. In contrast, the solution sets of the comparative algorithms (particularly MOEA/D) exhibit clustered aggregation, indicating insufficient diversity.
Corroboration of Comprehensive Performance: This distribution pattern of being “closer, broader, and more uniform” is a direct manifestation of the algorithm achieving a better balance between exploration and exploitation, and is fully consistent with the quantitative result in Section 4.2 where INSGA-III achieved the highest HV value.
(2)
Quantitative Analysis for Management Decision-Making
To move beyond mere graphical representation and provide decision-makers with quantifiable bases for trade-offs directly applicable to practice, this study further extracts key decision insights from the high-quality Pareto solution set generated by INSGA-III. We define and analyze the following two core marginal trade-off rate indicators:
Marginal Cost of Emission Reduction (MCCR): Measures the additional cost required to achieve a unit reduction in carbon emissions. By analyzing the Pareto solution set, we find that when a company is willing to accept an increase in total distribution cost of approximately 5%, it can achieve a carbon emission reduction of about 15–18% compared to the lowest-cost solution. However, if pursuing a deep reduction exceeding 25%, the marginal cost rises sharply, potentially increasing costs by over 15%.
Marginal Cost of Satisfaction Improvement and Its Emission Effect (MCCS): Measures the cost required to improve per-unit customer satisfaction and the associated change in emissions. The analysis indicates that within the satisfaction range from 84% to 87%, each percentage point increase in satisfaction requires an average cost increase of about 2%, while carbon emissions can concurrently decrease slightly (due to more efficient scheduling). However, when the satisfaction requirement exceeds 88%, entering the “premium service” range, the marginal increases in both cost and carbon emissions rise substantially.
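As a sketch of how such marginal rates could be extracted from a Pareto set, consider the MCCR case; the function name and the consecutive-solution differencing scheme are illustrative assumptions rather than the authors' exact procedure:

```python
def marginal_cost_of_reduction(pareto):
    """Post-process a Pareto set of (cost, emissions) points: the marginal
    cost (CNY per unit of CO2 abated) of moving from each solution to the
    next-cleaner, next-costlier one along the front."""
    pts = sorted(pareto)  # ascending cost; emissions descend on a true front
    rates = []
    for (c0, e0), (c1, e1) in zip(pts, pts[1:]):
        if e0 > e1:  # only count steps where emissions actually drop
            rates.append((c1 - c0) / (e0 - e1))
    return rates
```

A sharply rising sequence of rates is exactly the "marginal cost rises sharply beyond deep reductions" pattern reported above; MCCS can be computed analogously over (cost, satisfaction, emissions) triples.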

4.3.4. Convergence Speed Comparison

To dynamically evaluate the search efficiency of each algorithm, a comparative analysis of convergence speed was conducted by plotting the curves of the core performance metric (Hypervolume, HV) against the number of evolutionary generations. Figure 4 illustrates the trend of the average HV value with respect to the iteration count for each algorithm on the representative C101 instance.
Analysis of the convergence curves yields the following conclusions:
(1)
Advantage in Initial Population Quality: From the very beginning of the iterations, it can be observed that the HV value of the proposed INSGA-III algorithm is significantly higher than those of NSGA-III and MOEA/D. This directly validates the effectiveness of Improvement 1 (the hybrid PSO initialization strategy), demonstrating that this strategy successfully provides the evolutionary algorithm with a high-quality and highly diverse initial population, thereby laying a solid foundation for rapid convergence.
(2)
Rapid Convergence Capability: In the early stages of evolution, the HV curve of INSGA-III exhibits the steepest slope and the most rapid ascent, indicating its strongest capability for fast convergence. This is attributed to Improvement 2 (adaptive SBX), which enhances the algorithm’s local search ability during the initial evolutionary phase, enabling it to efficiently leverage the high-quality initial population and quickly approach the Pareto front.
(3)
Stable and Refined Search Capability: During the middle and late stages of evolution, the curve of INSGA-III stabilizes and converges to a higher HV plateau, whereas other algorithms tend to stagnate prematurely or converge at lower levels. This demonstrates the overall robustness and sustained optimization capability of the proposed algorithm. Specifically, Improvement 3 (dynamic polynomial mutation) effectively balances exploration and exploitation, mitigating premature convergence and enabling more refined search in later stages, thereby yielding a final solution set with superior comprehensive quality.

4.3.5. Results Discussion and Attribution Analysis

Integrating the aforementioned quantitative comparisons, statistical tests, and visualization analyses, it can be confirmed that the improved NSGA-III algorithm proposed in this paper significantly outperforms mainstream benchmark algorithms in terms of comprehensive performance, convergence speed, and solution set quality for solving the cold chain logistics distribution routing problem. This section aims to delve into the underlying mechanisms responsible for this systematic advantage, linking the preceding experimental results with the algorithmic innovations presented herein to complete the logical progression from “what” to “why.”
The performance improvement can be primarily attributed to the synergistic effects of the following three mechanisms:
(1)
High-Starting-Point Search: Hybrid Initialization Strategy Establishes Global Advantage.
Experimental data indicate that INSGA-III begins the evolutionary process with a significantly higher Hypervolume (HV) value (see Figure 4). The average HV value of its initial population is approximately 15–20% higher than that of benchmark algorithms, with statistically significant differences (p < 1 × 10−10). This advantage directly originates from Improvement 1 (the PSO-based hybrid initialization). The effectiveness of this strategy is rooted in the characteristics of the cold chain routing problem: route selection involves discrete decisions, while vehicle speed and temperature control involve continuous decisions, forming a complex hybrid search space. Traditional random initialization distributes solutions sparsely and with low quality in this space. The guided search of PSO leverages historical optimal information to rapidly generate a set of “elite solutions” that simultaneously satisfy route feasibility and temperature control/energy consumption constraints as the initial population. This not only avoids ineffective searches from the outset but also positions the algorithm’s starting point directly on a “high ground” close to the true Pareto front, significantly shortening the convergence path (as demonstrated by the rapid ascent in the first 50 generations in Figure 4).
(2)
Intelligent Search: Adaptive Genetic Operators Enable Fine-Grained Exploration.
Improvement 2 (adaptive SBX) and Improvement 3 (dynamic polynomial mutation) collectively constitute the intelligent search engine of the algorithm. The core of the adaptive crossover operator lies in its ability to perceive the population state. When population diversity is high, the operator adopts a larger distribution index, emphasizing strong local exploitation to find improved route solutions near existing high-quality solutions. Conversely, when diversity decreases, it reduces the distribution index to promote strong global exploration, helping the population escape local optima. This dynamic adjustment is the key to the algorithm’s precise balance between “exploration” and “exploitation,” enabling efficient discovery of high-quality solutions even under complex constraints. The dynamic mutation operator, in turn, plays a dual role as both “explorer” and “fine-tuner.” In the early stages, a high mutation rate ensures thorough exploration of the solution space; in later stages, a low mutation rate allows the algorithm to finely adjust continuous variables such as speed and departure time on the established high-quality route framework, thereby optimizing cost and timeliness. This flexible search strategy is the direct reason the algorithm can quickly and accurately identify low-cost, low-emission route solutions (as evidenced by the significant reductions in cost and carbon emissions shown in Table 4). Moreover, the dynamic adjustment strategy matches the trend observed in the convergence curve, rapid ascent in the early stages followed by stabilization at a high level, and thus effectively balances the inherent trade-off between exploration and exploitation. (Figure 5 additionally includes boxplots illustrating the distribution range of final HV values for each algorithm, showing that INSGA-III not only achieves a higher mean HV value but also exhibits a more concentrated distribution and superior stability.)
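The two operator schedules described above might be sketched as follows. The linear adaptation and decay forms, the parameter bounds, and the function names are assumptions for illustration; only the SBX recombination formula itself is the standard one:

```python
import random

def adaptive_eta_c(diversity, eta_min=10.0, eta_max=30.0):
    """Assumed form of the adaptive SBX rule: high normalised population
    diversity -> large distribution index (local exploitation near parents);
    low diversity -> small index (broader global exploration)."""
    return eta_min + (eta_max - eta_min) * diversity

def dynamic_mutation_rate(gen, max_gen, p_start=0.2, p_end=0.01):
    """Assumed linear decay of the polynomial-mutation rate: broad exploration
    early, fine-tuning of speed/departure-time variables late."""
    return p_start + (p_end - p_start) * gen / max_gen

def sbx_pair(p1, p2, eta_c, rng=random):
    """Standard simulated binary crossover for one real-valued gene."""
    u = rng.random()
    if u <= 0.5:
        beta = (2 * u) ** (1 / (eta_c + 1))
    else:
        beta = (1 / (2 * (1 - u))) ** (1 / (eta_c + 1))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2  # children are symmetric about the parents' mean
```

A larger eta_c concentrates beta near 1, producing children close to their parents, which is why a high index corresponds to local exploitation.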
(3)
Framework Synergy: The Elite Preservation and Guidance Mechanism of NSGA-III Ensures Final Solution Quality.
The aforementioned improvement modules are embedded within the NSGA-III reference point-based environmental selection framework. This framework’s selection mechanism is particularly crucial for multi-objective cold chain logistics problems, which require the simultaneous optimization of cost, carbon emissions, and satisfaction, objectives that often conflict. Through non-dominated sorting and reference point-based selection, the framework preserves elite solutions in each generation while maintaining a broad distribution of the population in the objective space, guiding the search toward a uniformly distributed Pareto front. This guarantees that the final solution set encompasses a variety of trade-off strategies, from “cost-first” to “green-first” approaches (as illustrated in Figure 2 and Figure 3), providing decision-makers with comprehensive options. This constitutes the fundamental institutional guarantee for INSGA-III’s ability to maintain rapid convergence while still achieving both high quality and high diversity in its solution set.
In summary, the INSGA-III algorithm proposed in this paper addresses the complex optimization challenge of high-dimensional, multi-constrained, multi-objective hybrid fleet routing in cold chain logistics through a systematic and synergistic design. Hybrid initialization provides a high-quality and diverse starting point tailored to the problem characteristics; adaptive operators enable intelligent and fine-grained exploration of the hybrid solution space; and the elite selection framework ensures the search direction consistently advances toward a genuine and diverse Pareto front. These three levels of improvement are interlinked and indispensable, collectively forming the solid foundation for the algorithm’s exceptional performance.

4.4. Sensitivity Analysis of Key Parameters

To evaluate the robustness of the proposed algorithm and derive managerial insights, this section conducts sensitivity analyses along two dimensions: internal algorithmic parameters and external operational environment parameters.

4.4.1. Sensitivity Analysis of Algorithmic Parameters

To validate the robustness of the proposed adaptive mechanisms (particularly Improvements 2 and 3), this subsection analyzes the impact of key internal algorithmic parameters on performance. We selected the crossover distribution index η_c (NSGAparam.CrossIndex) and the mutation distribution index η_m (NSGAparam.DistIndex). Experiments were conducted on the C101 instance, varying the target parameters while keeping all other conditions constant, with Hypervolume (HV) as the core evaluation metric.
Analysis of the experimental results leads to the following conclusions:
(1)
The algorithm exhibits low sensitivity to the crossover distribution index η_c, demonstrating strong robustness: As shown in Figure 6, when the crossover distribution index η_c varies within the range of [10, 30], the HV value remains at a high level with fluctuations of less than 3%. This indicates that the η_c parameter, which controls the search scope of the SBX operator, has a broad and stable high-performance interval. As long as it falls within this range, the improved SBX strategy can effectively balance global exploration and local exploitation, ensuring stable algorithm performance. This confirms the successful design of Improvement 2 (adaptive SBX), as its performance does not rely on precise parameter fine-tuning.
(2)
The algorithm exhibits low sensitivity to the mutation distribution index η_m, with a broad optimal window: As illustrated in Figure 6, the HV value remains optimal and stable when the mutation distribution index η_m falls within the range of [10, 30]. This finding indicates that the η_m parameter, which governs the perturbation intensity of polynomial mutation, is straightforward to configure. The algorithm’s performance demonstrates minimal sensitivity to variations in this parameter, consistently maintaining high efficacy across a wide spectrum of settings. This outcome further corroborates the effectiveness and practical utility of Improvement 3 (dynamic polynomial mutation).
(3)
Overall Robustness Validation: Collectively, the proposed algorithm demonstrates no excessive sensitivity to the core crossover and mutation distribution indices. Across a broad value range of [10, 50] for parameters η_c and η_m, the algorithm consistently delivers performance close to its peak level. This significantly enhances the algorithm’s usability and reliability in practical applications, as users can achieve stable and excellent optimization results without engaging in tedious parameter fine-tuning. In this study, η_c and η_m are ultimately set to 20, a value situated at the center of the high-performance interval, ensuring robust operation across various scenarios.

4.4.2. In-Depth Analysis of Mixed Fleet Configuration: How Vehicle Type Selection Drives Multi-Objective Trade-Offs

To elucidate the central role of the mixed fleet configuration in this study, this subsection conducts an in-depth quantitative analysis of the fleet composition as a key decision variable. The aim is to reveal how vehicle type selection specifically influences the trade-offs among cost, carbon emissions, and satisfaction, thereby validating its necessity as a core modeling element.
Fleet composition serves as a pivotal lever in shaping Pareto trade-offs. The trade-off impact analysis of fleet composition on total cost and carbon emissions is shown in Figure 7. By systematically adjusting the proportion of electric vehicles in the fleet (from 0% to 100%) and employing the INSGA-III algorithm to solve for the Pareto-optimal front at each proportion, the results clearly demonstrate (see Figure 8) that fleet composition is a fundamental lever for systematically altering the multi-objective trade-off space.
Deterministic impact on the carbon emission objective: As shown on the right axis of Figure 8, carbon emissions decrease monotonically as the proportion of electric vehicles increases. This intuitively demonstrates that introducing electric vehicles is both a necessary condition and an effective pathway for achieving deep emission reductions.
Non-monotonic, complex impact on the cost objective: As illustrated on the left axis of Figure 8, the total distribution cost follows a significant “first decrease, then increase” U-shaped curve as the proportion of electric vehicles changes. This phenomenon reveals the internal economic logic of mixed fleets: a moderate proportion of electric vehicles can synergistically optimize routes and reduce partial fuel consumption, thereby offsetting their higher purchase and charging costs. However, once the proportion exceeds the optimal range, constraints from charging infrastructure and higher unit energy costs begin to dominate, driving total costs upward.
The comparison of key performance indicators under different fleet sizes is shown in Table 6.
Analysis of the experimental results leads to the following conclusions:
(1)
The combination of the aforementioned “U-shaped” cost curve and “L-shaped” emission curve clearly defines a Pareto-optimal interval (electric vehicle share: 30–50%). Within this interval, decision-makers can achieve a 15–20% reduction in carbon emissions at the cost of only a 3–5% increase in expenditure. This strongly demonstrates that optimizing the composition of the mixed fleet is, in itself, a critical decision-making process for resolving the core conflict between “cost and emissions.”
(2)
Across all Pareto-optimal solutions involving electric vehicles, battery capacity and charging constraints prove to be inseparable core components of the model:
Charging decision activation rate: On average, 68.2% of electric vehicle trips triggered at least one mid-journey charging event, indicating that charging demand is a routine part of operations.
Battery constraint violation frequency: During the algorithm’s iterations, 41.5% of initial candidate routes had to be repaired due to violations of battery capacity constraints. This high frequency provides conclusive evidence that range and charging constraints are rigid, core constraints for generating feasible solutions; ignoring them would lead the model to output a large number of impractical plans.
(3)
Dynamic Matching Mechanism between Vehicle Type Selection and Task Allocation. To reveal how the algorithm performs trade-offs at the micro level, we dissected the task-vehicle matching patterns within representative Pareto solutions:
Long-distance, heavy-load tasks: Primarily undertaken by fuel-powered vehicles to avoid the range anxiety of electric vehicles and the time cost of long-distance charging. This is a key strategy for controlling total costs.
Urban-dense, multi-stop tasks: Primarily undertaken by electric vehicles. Their zero tailpipe emissions maximize environmental benefits in densely populated areas, and their advantage in kinetic energy recovery during frequent stops and starts directly supports the carbon emission targets.
Tasks highly sensitive to time windows: Electric vehicles are prioritized for dispatch under the satisfaction objective, owing to their more precise torque control and schedulable charging strategies. This ensures stable service quality (as shown in Table 4, satisfaction fluctuates by less than 0.2% across the various configurations).

4.4.3. Analysis of the Impact of Delivery Vehicle Load Capacity

Vehicle load capacity is a critical asset decision in cold chain logistics distribution, directly influencing delivery costs, scheduling flexibility, and customer service levels. This section aims to quantitatively analyze the impact of variations in vehicle load capacity on the overall system performance, providing direct quantitative evidence to support corporate vehicle selection and procurement decisions.
As shown in Figure 8, the horizontal axis represents vehicle load capacity, the left vertical axis represents cost and carbon emissions, and the right vertical axis represents customer satisfaction and the number of vehicles. The total cost exhibits a U-shaped curve, reaching its minimum at 8 t; customer satisfaction follows an inverted U-shaped curve, peaking at 8 t; the number of vehicles monotonically decreases as load capacity increases; and carbon emissions show a slowly rising trend. The trend of the impact of vehicle capacity on operational indicators is shown in Table 7.
Analysis of the experimental results leads to the following conclusions:
(1)
A distinct optimal economic capacity interval exists: The total cost reaches its minimum at 8 t, forming an optimal interval (7–9 t) centered around 8 t. Within this range, cost fluctuations can be controlled within 3%. When the capacity is below 8 t, fixed costs cannot be effectively amortized due to excessively high departure frequency; when the capacity exceeds 8 t, declining vehicle utilization rates and reduced scheduling flexibility jointly drive costs upward.
(2)
Load capacity has a nonlinear impact on service level: customer satisfaction peaks at 8 t (85.20%), exhibiting a distinct inverted U-shaped trend. A load capacity that is too small (≤6 t) leads to excessively high departure frequencies, which can easily cause time window conflicts; conversely, a load capacity that is too large (≥10 t) increases the number of customers served per vehicle, resulting in prolonged waiting times for end customers and reduced product freshness, both of which significantly diminish satisfaction.
(3)
Asset Coordination Strategy Based on Bottleneck Analysis: To simultaneously address the economies of scale required for high-density orders and the rapid response needed for sporadic urgent orders, a hybrid configuration strategy of “primarily 8 t vehicles supplemented by a small number of 6 t vehicles” is recommended. This approach leverages 8 t vehicles to ensure overall optimality in cost and satisfaction, while deploying a limited number of 6 t vehicles specifically to resolve bottlenecks related to “excessive departure frequency and time window conflicts,” thereby enhancing the system’s overall robustness to demand fluctuations.
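The U-shaped cost curve described above can be reproduced with a deliberately simple toy model: fixed dispatch costs fall as capacity grows (fewer trips are needed), while an assumed under-utilization/flexibility penalty grows with capacity. All parameter values below are illustrative assumptions, not fitted to the paper's data.

```python
import math

def total_cost(capacity_t, demand_t=80.0, fixed_per_trip=900.0,
               var_per_t=50.0, util_penalty=1000.0):
    """Toy cost model: trip count (and hence fixed cost) falls with capacity,
    while a per-ton-of-capacity penalty models declining utilization and
    reduced scheduling flexibility of larger vehicles."""
    trips = math.ceil(demand_t / capacity_t)   # departures needed to serve demand
    return (trips * fixed_per_trip            # amortized fixed dispatch cost
            + demand_t * var_per_t            # variable transport cost
            + util_penalty * capacity_t)      # under-utilization penalty

costs = {q: total_cost(q) for q in (6, 8, 10, 12)}
best = min(costs, key=costs.get)
print(best)  # 8
```

With these assumed parameters the curve is U-shaped with its minimum at 8 t, mirroring the qualitative mechanism in (1): below the optimum, departure frequency inflates fixed costs; above it, the utilization penalty dominates.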

4.5. Management Insights Analysis and Model Value Verification

Based on the aforementioned experiments, this section aims to distill management insights that can directly guide practice. Furthermore, by comparing with simplified models, it verifies the unique value of the complex model proposed in this study in deepening the understanding of the problem.

4.5.1. From Trade-Off Quantification to Strategic Decision Matrix

The core value of multi-objective optimization lies in the precise quantification of conflicts among objectives. Through in-depth analysis of the Pareto solution set, we derive the following quantifiable relationships that can directly support decision-making:
(1)
Cost of Carbon Emission Reduction: Under current technological and market conditions, reducing 1 ton of CO2 emissions requires, on average, an additional operational cost of approximately 93,000 CNY. This internal carbon pricing benchmark is crucial for corporate carbon asset management.
(2)
Premium for Service Level Enhancement: Increasing customer satisfaction by 0.5 percentage points may lead to an 8–9% rise in distribution costs. This provides a basis for formulating differentiated service pricing.
(3)
Strategic Decision Menu: We extracted three typical strategic orientation schemes from the Pareto solution set, offering managers a clear "decision palette."
The strategic decision matrix based on the Pareto solution set is shown in Table 8.
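The internal carbon-price benchmark quantified above can be recovered directly from two Pareto solutions by dividing the extra operational cost by the emissions avoided. The sketch below uses the balanced and green-leaning figures reported in Table 8 (treating the emission values as tons for illustration); the function name is ours.

```python
def marginal_abatement_cost(sol_a, sol_b):
    """CNY per ton of CO2 avoided when moving from sol_a to sol_b.
    Each solution is a (cost_cny, emissions_t) pair from the Pareto set."""
    d_cost = sol_b[0] - sol_a[0]   # extra cost incurred
    d_emis = sol_a[1] - sol_b[1]   # tons of CO2 avoided
    if d_emis <= 0:
        raise ValueError("sol_b must emit less than sol_a")
    return d_cost / d_emis

baseline = (436_350, 1.38)   # balanced-development solution (cost CNY, tCO2)
green = (460_540, 1.12)      # green-leading solution
print(round(marginal_abatement_cost(baseline, green)))  # 93038
```

Sweeping this ratio across adjacent Pareto solutions yields the full marginal abatement curve, from which an enterprise's internal carbon price can be read off.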

4.5.2. Dynamic Optimal Strategies for Mixed Fleet Configuration

Based on sensitivity analysis, this study proposes actionable mixed-fleet management strategies:
(1)
Optimal Economic-Environmental Range: For regional distribution centers with a daily throughput of 10–50 tons, maintaining an electric vehicle proportion of 30–50% achieves the best balance (cost increases marginally by 3–5%, while carbon emissions are sharply reduced by 15–20%).
(2)
Scale-Oriented Differentiated Configuration: In low-volume scenarios (<10 tons/day), the electric vehicle proportion can be increased (40–50%) to leverage their low marginal cost advantage. In high-volume scenarios (>50 tons/day), the configuration should be dominated by fuel vehicles (60–70%) to ensure scheduling flexibility.
(3)
Dynamic Collaborative Optimization Mechanism: It is recommended that enterprises establish a quarterly review mechanism. Fleet composition should be dynamically adjusted by integrating business forecasts, carbon price fluctuations, and policy changes.
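The three configuration rules above can be captured as a simple lookup, sketched below. The thresholds are exactly those stated in the text; the function name and return convention are hypothetical.

```python
def recommended_ev_share(daily_throughput_t):
    """Map daily throughput (tons) to the suggested electric-vehicle share band,
    returned as (low, high) fractions of the fleet."""
    if daily_throughput_t < 10:    # low volume: exploit EVs' low marginal cost
        return (0.40, 0.50)
    if daily_throughput_t <= 50:   # regional DC: best economic-environmental balance
        return (0.30, 0.50)
    return (0.30, 0.40)            # high volume: fuel vehicles dominate (60-70%)

print(recommended_ev_share(25))  # (0.3, 0.5)
```

In a quarterly review, a rule of this form would be re-evaluated against updated throughput forecasts, carbon prices, and policy parameters before adjusting the fleet mix.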

4.5.3. Validation of Model Complexity Value: Comparative Analysis with Simplified Models

A comparative analysis was designed in which the complete model from this study was benchmarked against two simplified models. Model S1: a single-objective cost-minimization model (optimizes only cost, treating carbon emissions and customer satisfaction as constraints). Model S2: a multi-objective model ignoring heterogeneity (treats electric and fuel vehicles as homogeneous, neglecting charging and range constraints). The comparison of decision insights between the complex and simplified models is shown in Table 9.
The value of the complex model constructed in this study extends far beyond merely increasing formal difficulty. As the comparison above demonstrates, simplified models, due to their overly strong assumptions (e.g., single objective, vehicle homogeneity), obscure the nonlinear, non-monotonic, and complex trade-off relationships present in reality, potentially leading to overly optimistic or one-sided decision recommendations. In contrast, the complete model presented here, by systematically integrating mixed fleets, multi-objective considerations, and cold-chain constraints, enables:
(1)
Revealing the Nature and Intensity of Conflicts: It precisely quantifies the “inflection points” and “thresholds” of conflicts between different objectives.
(2)
Uncovering Possibilities for Synergy: It identifies optimization opportunities where “one action achieves two goals” under specific conditions (e.g., using electric vehicles to simultaneously enhance service and environmental performance).
(3)
Providing Contextualized Decision Support: It yields deep insights into “what configuration and scheduling strategies should be adopted under which strategic intents,” rather than providing a static “optimal solution.”
Therefore, the model’s complexity is a necessary mapping of real-world business paradoxes. It endows decision-makers with the systematic thinking capability to perform refined trade-offs and innovative problem-solving under the triple pressures of cost, service, and environment. This is precisely the key to transitioning from “traditional operations” to “sustainable smart logistics.”

5. Conclusions and Future Work

This research addresses the optimization of hybrid fleet distribution routes for cold chain logistics by constructing a multi-objective model integrating total cost, customer satisfaction, and carbon emissions, and proposes an improved hybrid adaptive NSGA-III algorithm. The following key conclusions are drawn:
(1)
Theoretical contribution: the proposed algorithm, incorporating mechanisms such as adaptive reference points and local search, significantly outperforms traditional algorithms in terms of solution-set convergence and distribution.
(2)
Managerial implications: the model accurately captures the operational paradox of hybrid fleets. Through sensitivity analysis, it reveals the trade-off relationships between key parameters, such as the fleet electrification ratio and vehicle type configuration, and cost and carbon emissions. This provides enterprises with differentiated decision-making strategies that balance economic, service, and environmental objectives.
(3)
Operational benefits: the algorithmic performance advantages translate directly into tangible operational gains. Empirical analysis shows that, compared to baseline solutions, the proposed algorithm achieves an approximate 2.9% reduction in total costs and a 15% decrease in carbon emissions; for large-scale daily cold chain distribution networks, this translates to substantial annual financial savings and environmental benefits. Moreover, the algorithm provides a higher-quality and more broadly distributed Pareto-optimal solution set, enabling managers to scientifically weigh and flexibly switch among multiple operational strategies, such as "cost-first," "service-first," or "green-first" approaches. This significantly enhances decision-making agility and operational resilience in complex market environments.
Limitations and Future Directions: the limitations of this study offer clear pathways for future research.
(1)
Model refinement: future work may incorporate heterogeneous demands for multi-temperature-zone products, speed- and load-dependent emission functions, and dynamic urban traffic congestion effects. These aspects represent key limitations in modeling real-world complexity and are critical directions for advancing toward more refined and dynamic models.
(2)
Algorithmic advancements: the INSGA-III framework proposed in this study offers a promising tool for solving such complex models, and future algorithmic research may proceed along two paths. Horizontal comparison: systematically comparing this method with emerging hybrid multi-objective routing algorithms based on PSO or other metaheuristics (e.g., the Grey Wolf Optimizer or Sparrow Search Algorithm) to precisely delineate its advantages and characteristics within a broader algorithmic spectrum. Vertical integration: exploring integration with real-time data systems such as the Internet of Things (IoT) and digital twins to develop an integrated "perception-decision-execution" dynamic intelligent scheduling platform, advancing the algorithm from offline optimization to online adaptive scheduling. Through such interdisciplinary and technological integration in modeling and algorithmic design, the theoretical and practical pathways toward sustainable cold chain logistics can be further refined.

Author Contributions

Conceptualization, J.Y.; methodology, G.S.; software, J.S.; writing—original draft preparation, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

International Science and Technology Cooperation Project of Henan Province (Grant No. 242102520040); National Natural Science Foundation of China: Research on Enterprise Resource Location Optimization Based on the Internet of Things (Grant No. 71371172); Henan Province 2023 project "Research and Development of Key Technologies for Intelligent Distribution of Ready-Mixed Concrete Materials and Cloud Platform System" (Grant No. 232102220089).

Data Availability Statement

The data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Lu, Z.; Wu, K.; Bai, E. Optimization of Multi-Vehicle Cold Chain Logistics Distribution Paths Considering Traffic Congestion. Symmetry 2025, 17, 89.
2. Liu, Y.; Hou, J.; Cai, C. Research on the optimization of cold chain logistics distribution routes considering time-dependent networks and simultaneous pick-up and delivery from the perspective of sustainability. PLoS ONE 2025, 20, e0330535.
3. Zhang, L.; Fu, M.; Fei, T.; Lim, M.K.; Tseng, M.-L. A cold chain logistics distribution optimization model: Beijing-Tianjin-Hebei region low-carbon site selection. Ind. Manag. Data Syst. 2024, 124, 3138–3163.
4. Zheng, W.; Ji, X.; Zou, Y.; Wang, L. Optimization of Vehicle Routing in Green Cold Chain Logistics Distribution Considering Customer Satisfaction. Discret. Dyn. Nat. Soc. 2025, 2025, 2467398.
5. China Federation of Logistics & Purchasing. China Cold Chain Logistics Development Report; China Fortune Press: Beijing, China, 2023.
6. Liu, G.; Hu, J.; Yang, Y.; Xia, S.; Lim, M.K. Vehicle routing problem in cold chain logistics: A joint distribution model with carbon trading mechanisms. Resour. Conserv. Recycl. 2020, 156, 104715.
7. Wang, Y.; Li, Q.; Guan, X.; Xu, M.; Liu, Y.; Wang, H. Two-echelon collaborative multi-depot multi-period vehicle routing problem. Expert Syst. Appl. 2021, 167, 114201.
8. China Automotive Data Co., Ltd. Development Trend Report of New Energy Commercial Vehicle Market in China; Tianjin, China, 2024.
9. Zhang, A.; Zhang, Y.; Liu, Y.; Hou, J.; Hu, J. Planning Low-Carbon Cold-Chain Logistics Path with Congestion-Avoidance Strategy. Pol. J. Environ. Stud. 2023, 32, 5899–5909.
10. Zhao, A.P.; Li, S.; Li, Z.; Wang, Z.; Fei, X.; Hu, Z.; Alhazmi, M.; Yan, X.; Wu, C.; Lu, S.; et al. Electric vehicle charging planning: A complex systems perspective. IEEE Trans. Smart Grid 2024, 16, 754–772.
11. Ye, C.; Liu, F.; Ou, Y.; Xu, Z. Optimization of Vehicle Paths considering Carbon Emissions in a Time-Varying Road Network. J. Adv. Transp. 2022, 2022, 9656262.
12. Zhang, H.; Yan, J.; Wang, L. Hybrid Tabu-Grey wolf optimizer algorithm for enhancing fresh cold-chain logistics distribution. PLoS ONE 2024, 19, e0306166.
13. Sabet, S.; Farooq, B. Green vehicle routing problem: State of the art and future directions. IEEE Access 2022, 10, 101622–101642.
14. Deng, H.; Wang, M.; Hu, Y.; Ouyang, J.; Li, B. An improved distribution cost model considering various temperatures and random demands: A case study of Harbin cold-chain logistics. IEEE Access 2021, 9, 105521–105531.
15. Bei, H.; Lin, H.; Yang, F.; Li, X.; Murcio, R.; Yang, T. Optimization on Multimodal Network Considering Time Window Under Uncertain Demand. IEEE Trans. Intell. Transp. Syst. 2025, 11294–11312.
16. Nasr, A.K.; Tavana, M.; Alavi, B.; Mina, H. A novel fuzzy multi-objective circular supplier selection and order allocation model for sustainable closed-loop supply chains. J. Clean. Prod. 2021, 287, 124994.
17. Wu, D.; Cui, J.; Li, D. A New Route Optimization Approach of Fresh Agricultural Logistics Distribution. Intell. Autom. Soft Comput. 2022, 34, 1553–1569.
18. Sun, J.; Li, X.; Chen, Z. An investigation into multi-objective decision-making in fresh cold chain supply chain networks within a dual distribution framework. Complex Intell. Syst. 2025, 11, 393.
19. Zhang, A.; Zhang, Y.; Liu, Y. Low-carbon cold-chain logistics path optimization problem considering the influence of road impedance. IEEE Access 2023, 11, 124055–124067.
20. Liu, L.; Su, B.; Liu, Y. Distribution route optimization model based on multi-objective for food cold chain logistics from a low-carbon perspective. Fresenius Environ. Bull. 2021, 30, 1538–1549.
21. Leng, L.; Wang, Z.; Zhao, Y.; Zuo, Q. Formulation and heuristic method for urban cold-chain logistics systems with path flexibility: The case of China. Expert Syst. Appl. 2024, 244, 122926.
22. Yi, J.H.; Deb, S.; Dong, J.; Alavi, A.H.; Wang, G.G. An improved NSGA-III algorithm with adaptive mutation operator for Big Data optimization problems. Future Gener. Comput. Syst. 2018, 88, 571–585.
23. Wang, Z.; Lin, K.; Li, G.; Gao, W. Multi-objective optimization problem with hardly dominated boundaries: Benchmark, analysis, and indicator-based algorithm. IEEE Trans. Evol. Comput. 2024, 29, 1070–1084.
24. Zhang, H.; Wang, G.-G.; Dong, J.; Gandomi, A.H. Improved NSGA-III with second-order difference random strategy for dynamic multi-objective optimization. Processes 2021, 9, 911.
25. Verma, S.; Pant, M.; Snasel, V. A comprehensive review on NSGA-II for multi-objective combinatorial optimization problems. IEEE Access 2021, 9, 57757–57791.
26. Cui, Z.; Chang, Y.; Zhang, J.; Cai, X.; Zhang, W. Improved NSGA-III with selection-and-elimination operator. Swarm Evol. Comput. 2019, 49, 23–33.
27. Wang, Y.; Sheng, Y. Two-stage optimization of instant distribution of fresh products based on improved NSGA-III algorithm. Int. J. Ind. Eng. Comput. 2025, 16, 535–556.
28. Moghdani, R.; Salimifard, K.; Demir, E.; Barak, S.; Aazami, A.; Shekarabi, S.A.H. A metaheuristic approach for the multi-objective sustainable vehicle routing problem. Ann. Oper. Res. 2025, 1–50.
29. Zhao, F.; Si, B.; Wei, Z.; Lu, T. Time-dependent vehicle routing problem of perishable product delivery considering the differences among paths on the congested road. Oper. Res. 2023, 23, 5.
30. Xiong, H. Research on cold chain logistics distribution route based on ant colony optimization algorithm. Discret. Dyn. Nat. Soc. 2021, 2021, 6623563.
31. Li, X.; Gao, C.; Wang, J.; Tang, H.; Ma, T.; Yuan, F. Research on Multi-Objective Green Vehicle Routing Problem with Time Windows Based on the Improved Non-Dominated Sorting Genetic Algorithm III. Symmetry 2025, 17, 734.
32. Ho, T.H.; Zheng, Y.S. Setting customer expectation in service delivery: An integrated marketing-operations perspective. Manag. Sci. 2004, 50, 479–488.
33. Cui, H.; Qiu, J.; Cao, J.; Guo, M.; Chen, X.; Gorbachev, S. Route optimization in township logistics distribution considering customer satisfaction based on adaptive genetic algorithm. Math. Comput. Simul. 2023, 204, 28–42.
34. Zhang, Y.; Yuan, C.; Wu, J. Vehicle routing optimization of instant distribution routing based on customer satisfaction. Information 2020, 11, 36.
35. Alinezhad, M.; Mahdavi, I.; Hematian, M.; Tirkolaee, E. A fuzzy multi-objective optimization model for sustainable closed-loop supply chain network design in food industries. Environ. Dev. Sustain. 2022, 24, 8779–8806.
36. Wang, S.; Tao, F.; Shi, Y. Optimization of Location–Routing Problem for Cold Chain Logistics Considering Carbon Footprint. Int. J. Environ. Res. Public Health 2018, 15, 86.
37. Liu, Y. Cold Chain Distribution Route Optimization Considering Customer Satisfaction in the Context of Carbon Emission Reduction. Acad. J. Comput. Inf. Sci. 2023, 6, 97–105.
38. Ma, Z.; Zhang, J.; Wang, H.; Gao, S. Optimization of sustainable bi-objective cold-chain logistics route considering carbon emissions and customers' immediate demands in China. Sustainability 2023, 15, 5946.
39. Gupta, P.; Pratihar, D.K.; Deb, K. Many-objective robust gait optimization for a 25-DOFs NAO robot using NSGA-III. Eng. Optim. 2025, 1–39.
40. Yang, Y.; Zhang, J.; Sun, W.; Pu, Y. Research on NSGA-III in Location-routing-inventory problem of pharmaceutical logistics intermodal network. J. Intell. Fuzzy Syst. 2021, 41, 699–713.
41. Zhang, L.Y.; Tseng, M.L.; Wang, C.H.; Xiao, C.; Fei, T. Low-carbon cold chain logistics using ribonucleic acid-ant colony optimization algorithm. J. Clean. Prod. 2019, 233, 169–180.
42. Bektaş, T.; Laporte, G. The pollution-routing problem. Transp. Res. Part B Methodol. 2011, 45, 1232–1250.
Figure 1. Customer Satisfaction Function.
Figure 2. Pareto Front Distributions of Different Algorithms in the “Cost–Satisfaction” Objective Space.
Figure 3. Solution Set Distributions of Different Algorithms in the Three-Dimensional “Cost–Satisfaction–Carbon Emissions” Objective Space.
Figure 4. Comparison of HV Convergence Curves Across Algorithms.
Figure 5. Convergence curves and per-generation HV value distribution boxplots for INSGA-III and comparative algorithms on the C101 test instance.
Figure 6. Sensitivity Analysis of Crossover and Mutation Distribution Indices on Algorithm Performance (HV).
Figure 7. Analysis of the Trade-off Impact of Fleet Composition on Total Cost and Carbon Emissions.
Figure 8. Trends in the Impact of Vehicle Load Capacity on Operational Metrics.
Table 1. Symbol Description.
Symbol | Meaning
O | Set of distribution centers
C_r | Total refrigeration cost
i | Index for a distribution center
c_1 | Unit-time refrigerant cost during transportation and waiting
m | Number of distribution centers
c_2 | Unit-time refrigerant cost during customer service
D | Set of customer points
t_s | Service time of the refrigerated vehicle at customer point i
j | Indices for customer points
t_i | Time required for vehicle k to travel from customer point i back to its depot
n | Number of customer points
HT_ijk | Time when the refrigerated vehicle arrives at customer point i for delivery
P | Set of product types
C_d | Total cost of cargo damage
k | Index for a refrigerated vehicle
c_p | Unit price of the product
K | Total number of refrigerated vehicles
DT_ijk | Start time of delivery for a refrigerated vehicle at a customer point
A | Subset of fuel-powered vehicles
δ_1p | Freshness decay coefficient for a product when the vehicle door is closed
B | Subset of electric vehicles
δ_2p | Incremental freshness decay coefficient for a product when the vehicle door is open
S | Customer satisfaction
C_p | Total penalty cost
E | Total carbon emissions
Q_ip | Load weight on vehicle k upon departing from customer point i
t_i | Arrival time of vehicle k at customer i
ρ | Fixed charging power
X_jp | Demand of customer i for product P
d_ij | Distance from distribution center to customer point i
d_jp | Pickup demand of customer i for product P
c_ak | Unit fuel consumption cost for fuel-powered vehicles
C_f | Fixed dispatch cost for using vehicle k
c_bk | Unit distance cost for electric vehicles
c_a | Average fixed cost for a fuel-powered vehicle
c_elec | Unit electricity price
c_b | Average fixed cost for an electric vehicle
Q_1 | Maximum load capacity of a fuel-powered vehicle
C_v | Variable transportation cost for using vehicle k
Q_2 | Maximum load capacity of an electric vehicle
δ_ik | Charging amount at node i
B_max | Maximum battery capacity
B_jk^a | Energy consumption from node i to j
t_ik^d | Departure time from node i
B_ik^d | Battery level upon departing from node i
t_ik^a | Arrival time at node i
B_ik^a | Battery level upon arriving at node i
Table 2. Algorithm Performance and Computation Time Under Different Problem Scales.
Customer Point Scale | INSGA-III Average HV Value | HV Relative Decrease | Average Runtime (Seconds) | Time Growth Trend
100 (Baseline) | 0.724 | – | 185 | –
200 | 0.698 | −3.6% | 420 | ~Linear growth
500 | 0.665 | −8.1% | 1250 | ~Less than quadratic growth
Table 3. Comparison of Ablation Experiment Results for Algorithm Components (C101 Instance).
Algorithm Variant | HV | Relative Improvement | IGD | Relative Improvement | Avg. Convergence Generations
Variant-A (Baseline) | 0.512 | – | 0.085 | – | 152
Variant-B (+Initialization) | 0.598 | +16.8% | 0.072 | +15.3% | 121
Variant-C (+Adaptive Crossover) | 0.657 | +9.9% | 0.063 | +12.5% | 98
Variant-D (Complete INSGA-III) | 0.724 | +10.2% | 0.052 | +17.5% | 85
Table 4. Performance Comparison of Different Algorithms on the C101 Instance.
Algorithm | Delivery Cost/RMB | Customer Satisfaction/% | Carbon Emissions/kgCO2 | Route Length/km | Average Runtime/s | HV | IGD
INSGA-III | 428,452.77 | 85.99 | 1.2433 | 124.3306 | 230 | 0.724 | 0.052
NSGA-III | 427,912.77 | 85.70 | 1.2853 | 128.5285 | 260 | 0.638 | 0.089
NSGA-II | 551,502.77 | 72.50 | 1.3856 | 113.5192 | 302 | 0.485 | 0.152
MOEA/D | 427,182.77 | 81.20 | 2.1290 | 212.9042 | 323 | 0.392 | 0.218
GA | 489,300.00 | 62.80 | 2.4500 | 531.8327 | 356 | 0.285 | 0.341
Table 5. Wilcoxon Rank-Sum Test Results Based on the HV Metric (INSGA-III vs. Benchmark Algorithms).
Benchmark Algorithm | p-Value | Significance (α = 0.05) | Summary of Significance
NSGA-III | 3.02 × 10⁻⁵ | + | Significantly Superior
NSGA-II | 1.15 × 10⁻⁶ | + | Significantly Superior
MOEA/D | 0.038 | + | Significantly Superior
GA | <1 × 10⁻¹⁰ | + | Significantly Superior
Note: The cell entries are p-values; p < 0.05 signifies that INSGA-III is significantly superior to the benchmark algorithm. The "+" symbol indicates a statistically significant difference.
Table 6. Comparison of Key Performance Indicators under Different Fleet Sizes.
Fleet Size | Delivery Cost/RMB | Average Customer Satisfaction/% | Carbon Emissions/kgCO2
8 | 512,350 (+17.62%) | 84.98 (−0.02%) | 1.42
10 | 435,680 (Baseline) | 85.00 | 1.38
12 | 428,990 (−1.54%) | 85.12 (+0.14%) | 1.35
14 | 426,540 (−2.10%) | 85.05 (+0.06%) | 1.33
16 | 427,110 (−1.97%) | 84.95 (−0.06%) | 1.32
Table 7. Performance Comparison under Different Vehicle Load Capacity Scenarios.
Capacity Scenario/t | Delivery Cost/RMB | Average Customer Satisfaction/% | Carbon Emissions/kgCO2 | Main Bottleneck
6 | 512,350 (+17.62%) | 84.98 | 1.45 | Excessive departure frequency, time window conflicts
8 | 435,680 (Baseline) | 85.20 | 1.38 | Well-balanced, no significant bottlenecks observed
10 | 446,920 (+2.58%) | 84.90 | 1.42 | Low load factor, insufficient scheduling flexibility
12 | 462,150 (+6.07%) | 84.75 | 1.48 | High empty-load costs, slow response times
Table 8. Strategic Decision Matrix Based on Pareto Solution Set.
Strategic Orientation | Delivery Cost/RMB | Customer Satisfaction/% | Carbon Emissions/kgCO2 | Applicable Scenario
Cost-Priority Type | 400,680 (−8.2%) | 84.50 (−0.5%) | 1.52 (+10.1%) | Cost-sensitive commercial and residential projects
Balanced Development Type | 436,350 (Baseline) | 85.00 | 1.38 | Universally applicable, balancing multiple objectives
Green-Leading Type | 460,540 (+5.5%) | 84.80 (−0.2%) | 1.12 (−18.8%) | Corporate social responsibility fulfillment and green certification acquisition
Table 9. Comparison of Decision Insights between Complex and Simplified Models.
Comparison Dimension | Typical Conclusions from Simplified Models | Deeper Insights Revealed by the Complete Model in This Paper | Difference in Management Value
Fleet Composition | Optimal electric vehicle proportion is 40% | Reveals that the optimal proportion is dynamic, dependent on the specific priorities among cost, emissions, and service (see Section 4.5.2); there is no single globally optimal solution | Shifts from "seeking a standard answer" to "providing a configuration logic based on strategic choices"
Cost vs. Emission Relationship | Emission reduction inevitably increases cost | Reveals that within a specific range (EV share 30–50%), there exists a "high cost-effectiveness" emission-reduction opportunity (significant emission cuts for a marginal cost increase); beyond this range, marginal costs rise sharply | Identifies a strategic investment interval, preventing companies from abandoning viable green transitions for fear of high costs
Service vs. Cost Relationship | Improving service inevitably leads to simultaneous increases in cost and carbon emissions | Shows that intelligent scheduling (e.g., using electric vehicles to serve time-sensitive customers) can achieve service improvement alongside a reduction in carbon emissions at only a minor cost increase (see the "Green-Leading" solution in Table 8) | Dispels the myth that "environmental protection necessarily compromises service," opening the possibility of a "green + high-quality" premium competitive strategy

Share and Cite

MDPI and ACS Style

Guan, Y.; Yang, J.; Shi, G.; Shi, J. Multi-Strategy Computational Algorithm for Sustainable Logistics: Solving the Green Vehicle Routing Problem with Mixed Fleet. Machines 2026, 14, 202. https://doi.org/10.3390/machines14020202

Chicago/Turabian Style

Guan, Yang, Jie Yang, Ge Shi, and Jinfa Shi. 2026. "Multi-Strategy Computational Algorithm for Sustainable Logistics: Solving the Green Vehicle Routing Problem with Mixed Fleet" Machines 14, no. 2: 202. https://doi.org/10.3390/machines14020202

APA Style

Guan, Y., Yang, J., Shi, G., & Shi, J. (2026). Multi-Strategy Computational Algorithm for Sustainable Logistics: Solving the Green Vehicle Routing Problem with Mixed Fleet. Machines, 14(2), 202. https://doi.org/10.3390/machines14020202

