Optimization of the Collaborative Hub Location Problem with Metaheuristics

Abstract: Although it creates new job opportunities and develops the regional economy, the transport of goods generates significant costs, environmental and sanitary nuisances, and high greenhouse gas (GHG) emissions. In this context, collaboration is a promising strategy that enables companies to overcome challenges such as globalization, economic and health crises, and issues related to sustainability. This study deals with the design of a multiperiod, multiproduct, three-echelon collaborative distribution network with a heterogeneous fleet. Mixed integer linear programming (MILP) formulations made it possible to study the three dimensions of sustainability (economic, environmental, and social/societal). Since the examined problem is NP-hard, it was solved using four metaheuristic approaches that minimize either the logistics costs or the CO2 emissions; the social/societal aspect was evaluated through the accident rate and the noise level generated by freight transport. Four algorithms were developed to achieve our objectives: a genetic algorithm (GA), a simulated annealing (SA) algorithm, a particle swarm optimization (PSO) algorithm, and a vibration damping optimization (VDO) algorithm. On a French distribution network, these algorithms overcame the limits of the exact solution method by obtaining (near-)optimal solutions within reasonable execution times.


Introduction
In 2019, 374 billion ton-kilometers of goods were transported in the French metropolitan territory (including 11.8 billion by pipelines), a growth of 2.5% compared to 2018 [1]. These statistics highlight the importance of freight transport. However, freight transport negatively influences the three main sustainability levels by increasing logistics costs, CO2 emissions, the depletion of nonrenewable resources, and the degradation of human life. Improving the effectiveness of logistics operations by optimizing distribution networks has therefore proved to be one of the best ways to face companies' difficulties, and such effectiveness can hardly be attained without collaboration between companies. This strategy is the cornerstone of managing an efficient supply chain: it reduces, on the one hand, greenhouse gases, pollution, and congestion and, on the other hand, costs, and it also improves the agility and resilience of a company [2]. According to Aloui et al. [3], collaboration can be vertical (VC), horizontal (HC), or lateral. VC is defined as a commercial agreement between two or more partners from different levels of the network who work together to take advantage of cooperation [4]. These partners (e.g., suppliers and clients) share important information about demands, deliveries, capacities, etc. HC, in contrast, is based on sharing logistical resources between competing partners who are at the same level of the network but belong to different logistics chains [5]. This type of collaboration improves the sustainability aspects through the reduction in logistics costs and CO2 emissions and the enhancement of the well-being of citizens and inhabitants [6,7]. Finally, lateral collaboration can be considered as the combination of the two previous types. The authors of [39] addressed the same problem but with the two allocation strategies (single and multiple).
Mathematical formulations and algorithms based on SA were developed to study the related problem. Bashiri et al. [40] presented a mathematical model to deal with a multiperiodic HLP in a dynamic environment. The authors developed a GA with an LS method and an SA algorithm, and found that the GA was more efficient than the SA in dealing with real-world applications where demand changes rapidly. A mathematical model was also proposed by Lüer-Villagra et al. [41] for a USAp-HLP. This model was difficult to solve, even for small instances; to overcome this weakness, the authors designed a GA that solved the problem effectively and quickly. At the same time, Özgün-Kibiroglu et al. [42] suggested a mixed integer nonlinear programming (MINLP) formulation for an uncapacitated multiple allocation hub location problem (UMAHLP), together with a PSO algorithm that was applied to solve the model effectively. In addition, an HLP was studied by Shang et al. [43] with a stochastic programming model formulated as a MILP to minimize distribution and opening costs. Due to the complexity of the computation, a memetic algorithm (MA) was introduced to solve the considered problem. This algorithm combined genetic search and LS to find optimal solutions within a reasonable computational time. Under these conditions, comparative results proved that the MA was more efficient than the GA in terms of solution quality and computation speed. The same problem was addressed by Shang et al. [44] using a robust optimization model that enabled the attainment of economic goals. The authors proposed a VNS algorithm and a population-and-searching (PAS) heuristic to find the optimal solutions. The obtained findings showed that the PAS algorithm was the most efficient in dealing with large instances, compared to the CPLEX software and the VNS algorithm. In addition, a GA was introduced by Wang et al.
[45] to select the optimal hub locations, ensuring that the total network cost was minimized and products were delivered on time. The results demonstrated that the proposed algorithm could reduce the total cost and improve the efficiency of the network. The reviewed studies are summarized in Table 1.

Table 1. Summary of the reviewed studies (Hm: homogeneous fleet; Ht: heterogeneous fleet).

Ref.  Problem            Period(s)  Fleet  Allocation  Method
[25]  USAp-HLP           Single     Hm     Single      GA
[24]  CMAp-HLP           Single     Hm     Single      SA
[26]  CSAHLP             Single     Hm     Single      ACO
[28]  CSAp-HLP           Single     Hm     Single      GA
[27]  USAp-HLP           Single     Hm     Single      GVNS
[29]  CMAp-HLP           Single     Hm     Single      GA
[30]  USAp-HLP           Single     Hm     Single      GA-LS
[31]  USAp-HLP           Single     Hm     Single      PSO-LS
[33]  SAHLP              Single     Hm     Single      SA, ILS
[32]  USAp-HLP           Single     Hm     Single      PSO-TS, VNS-TS
[34]  USAp-HLP           Single     Hm     Single      GA
[35]  CSAp-HLP           Single     Ht     Single      VNS
[36]  USAp-HLP           Single     Hm     Single      GA
[37]  CMAHLP             Single     Hm     Single      ACO-SA
[38]  CMAp-HLP           Single     Ht     Multiple    LS-LR
[39]  U(SA and MA)p-HLP  Single     Hm     Single      SA
[40]  USAp-HLP           Multiple   Hm     Single      GA, SA
[41]  USAp-HLP           Single     Hm     Single      GA
[42]  UMAHLP             Single     Hm     Single      PSO
[43]  USAp-HLP           Single     Hm     Single      MA
[45]  USAp-HLP           Single     Hm     Single      GA
[44]  SAp

From this table, we can see that, in most of the studies dealing with HLP, the objective function of the proposed metaheuristics was to minimize logistics costs (transportation, storage, opening, etc.). However, none of these research works considered the three sustainability levels when designing the distribution networks, and not all the introduced models were flexible in terms of delivery time. For these reasons, we present, in this study, a model that addresses either the economic or the environmental level and evaluates the social/societal level. On the other hand, the platforms' capacities, in most of the existing studies, were predefined, which led to a predefined opening cost that did not depend on the warehouse area.
Thus, it is preferable to consider the capacity of each hub as a decision variable determined by the model, depending on the quantity of goods entering each warehouse. In addition, the homogeneous fleet of vehicles assumed in most studies represents a weakness, because the variability of demand requires a flexible (heterogeneous) fleet with several types of vehicles to ensure customer satisfaction. In this study, we use a heterogeneous fleet of vehicles, which enables economies of scale to be modelled. As stated in [35], the main reason for employing different types of vehicles in distribution networks is to achieve economies of scale and reduce the unit transportation cost by using larger-capacity vehicles. Therefore, we introduce, in this paper, metaheuristic algorithms, namely a GA, an SA, a PSO, and a VDO algorithm, to effectively solve the studied problem. The GA and the PSO are population-based, which gives a larger search space, while the SA and the VDO are single-solution algorithms and offer low execution times.

Description and Formulation of the Problem
In this section, we examine a collaborative distribution network design problem that addresses the three levels of sustainability. The targeted distribution network, shown in Figure 1, contains suppliers who collaborate to deliver multiple products to retailers through shared warehouses and distribution centers. The distribution of these products is carried out in a multiperiodic way and with several types of vehicles, which makes the vehicle fleet heterogeneous. The problem is similar to the CS/MAp-HLP (capacitated single/multiple allocation p-hub location problem), which is solved in the strategic decision context. The problem has two main objectives. The first is to determine the number and locations of the two types of hubs (warehouses and distribution centers); the second is to assign suppliers and distribution centers to warehouses, and retailers to distribution centers. Moreover, the mathematical model determines the capacities of the hubs, which are limited and depend on the quantities of goods entering each hub, the quantities delivered in each period in the three parts of the network (upstream, intermediate, and downstream), the stock levels in the warehouses, the delayed quantities, and the number and types of the vehicles used. The upstream hubs have storage times, so their role is to store goods. The downstream hubs, in contrast, act like cross-docking platforms in the sense that they have no storage time; their main functions are to sort goods and distribute them to retailers.
Today, sustainability has become one of the most important challenges. Collaboration is one of the solutions that has proven its efficiency in achieving the main goals of sustainability. It allows partners to benefit from certain advantages that contribute to a green and sustainable distribution network by ensuring the massification of flows, grouping goods together, and sharing resources and means. The three levels of sustainability are assessed, in this study, by minimizing the logistics costs or the CO2 emissions and evaluating the noise levels and the risk of accidents. The evaluated sustainability indicators are shown in Figure 2.
The examined mathematical model was first proposed by Mrabti et al. [46]. It considered two scenarios (see Figure 3). The first scenario (Sc1) imposed that each supplier was assigned to a single warehouse (single allocation), which could deliver to only one distribution center (single allocation), while each retailer could be assigned to several distribution centers (multiple allocation). The second scenario (Sc2) differed from the first one in the assignments of warehouses to distribution centers, on the one hand, and of retailers to distribution centers, on the other hand. In Sc2, each warehouse can deliver to multiple distribution centers (multiple allocation) and a retailer can only be assigned to one distribution center (single allocation).
To formally introduce the studied problem, the following notations are used in the remainder of this study.

Decision Variables

Q^{v,p}_{i,j,t}: quantity of product p transported from origin i to destination j by a v-type vehicle at period t
I^p_{m,t}: inventory level of product p in hub m at period t
C_m: capacity of hub m
N^v_{i,j,t}: number of v-type vehicles used between origin i and destination j at period t
Wd^p_t: quantity of product p delayed in period t
A_m: area of hub m
y_m: binary variable equal to 1 if warehouse m is open, and 0 otherwise
x_{ij}: binary variable equal to 1 if there is a link between nodes i and j, and 0 otherwise
g_{kj}: binary variable equal to 1 if retailer j is assigned to distribution center k, and 0 otherwise

Considering the above notations, the problem dealing with scenario Sc1 was formulated as follows, with the simplified noise level expression given by

NL_{i,j,t} = 2 · 19.5 + 10 · log(4 · Q^{HW}_{i,j,t})    (19)

The economic objective function (1) minimized the logistics costs linked to transportation, storage, late delivery penalties, the establishment of hubs, and handling, represented respectively by (2), (3), (5), (7), and (9). The inventory level in warehouse m was given by (4). The delayed amount of product p at period t was given by (6); the initial delayed quantity was equal to zero. The area of a hub m to be built was given by (8). The environmental objective function (10) minimized the various CO2 emissions due to vehicles (their manufacture and operation), represented by (11), and to hubs (their construction and operation), represented by (12) and (13). The accident rate per thousand kilometers was given by (14). The multiplication by two in (15) accounted for the empty return trip. Since an accident injury can be fatal or nonfatal, the two corresponding accident levels were also assessed: the fatal injury rate was given by (16), with the mortality rate determined from statistical reports on road safety in France, and the nonfatal accident rate was given by (17). The noise level was obtained by (18).
The acoustic equivalence changes depending on the slope of the ramp and the type of road. The route chosen by the authors was a motorway, i.e., a road with separated lanes reserved for the rapid circulation of motor vehicles (cars, motorcycles, and heavy goods vehicles). The motorway was made up of four express lanes (two per direction), each with a width of 3.5 m. The distance to the platform edge, measured in meters, was defined as the distance between the edge of the road platform and the receiver (persons, buildings, etc.); the receiving point was assumed to be 30 m from the road. The speed was fixed at an average of 80 km/h. After simplification, (19) was used to evaluate the noise level generated by the distribution of goods. The flows of heavy goods vehicles were equal to the number of vehicles, i.e., the number of trips, and were given by formula (20). Constraints (21) and (22) ensured the satisfaction of the retailers. Constraint (21) guaranteed that the sum of the quantities delivered by all the suppliers in period t plus the stock level of the previous period was greater than or equal to the quantity of goods delivered between the hubs; the initial inventory level was equal to zero. Constraint (22) ensured that the quantity of inter-hub goods delivered from the warehouse to the distribution center in each period was exactly equal to the quantity delivered to the retailers. Constraint (23) guaranteed the satisfaction of the requests during the authorized periods: for each product, the demand of the retailers in each period must be delivered at the latest in period t + a_p, where a_p designates the number of periods authorized to deliver product p, i.e., the level of flexibility of each supplier in terms of delivery time.
Constraint (24) indicated that the stock level of each product in the upstream hubs must not fall below the safety stock. Constraint (25) imposed that the incoming quantities be less than the capacities of the hubs. Constraints (26) and (27) ensured that a warehouse was open only if at least one supplier was assigned to it, and that a platform was open only if at least one warehouse was assigned to it; parameter Z was a large constant. Constraint (28) guaranteed that the number of platforms to be opened was always less than the number of candidates. Constraint (29) limited the flows on the arcs: no quantity of goods circulated between two nodes unless there was a link between them. Constraint (30) ensured that a supplier was assigned to a single warehouse. Constraint (31) imposed that a warehouse was assigned to a single distribution center. Constraint (32) determined the number of vehicles required to transport the goods on the three parts of the distribution network. Constraint (33) limited the number of vehicles used on each arc. Constraint (34) was employed to conserve the flows. The last constraints, (35)-(41), provided the definition domains of the decision variables.
The difference between the problem dealing with Sc2 and that dealing with Sc1 was summarized by the following equations: Equation (42) ensured that each retailer was delivered by a single distribution center, and (43) guaranteed that each shared warehouse could deliver goods to several distribution centers.
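To make the difference between the two allocation rules concrete, the following minimal sketch (with hypothetical costs and demands, not taken from the paper's instances) enumerates retailer-to-distribution-center assignments on a toy downstream network and confirms that multiple allocation can never cost more than single allocation:

```python
from itertools import product

# Toy instance (hypothetical data): transport cost from each distribution
# center (DC) to each retailer, and each retailer's demand.
cost = {("DC1", "R1"): 4, ("DC1", "R2"): 7,
        ("DC2", "R1"): 6, ("DC2", "R2"): 3}
demand = {"R1": 10, "R2": 8}
dcs, retailers = ["DC1", "DC2"], ["R1", "R2"]

def single_allocation_cost():
    """Each retailer is assigned to exactly one DC (as in constraint (42)).
    Enumerate every assignment and keep the cheapest."""
    best = float("inf")
    for assign in product(dcs, repeat=len(retailers)):
        total = sum(cost[(dc, r)] * demand[r]
                    for dc, r in zip(assign, retailers))
        best = min(best, total)
    return best

def multiple_allocation_cost():
    """A retailer may be served by several DCs, so each unit of demand
    simply takes the cheapest DC."""
    return sum(min(cost[(dc, r)] for dc in dcs) * demand[r]
               for r in retailers)

# Relaxing single allocation can only decrease (or preserve) the cost.
assert multiple_allocation_cost() <= single_allocation_cost()
print(multiple_allocation_cost(), single_allocation_cost())
```

On this toy instance both rules happen to reach the same optimum, because the cheapest split assignment is itself a single assignment; in general, the single-allocation optimum is an upper bound on the multiple-allocation optimum.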

Solution Approach
Since the considered problem is NP-hard, it cannot be solved with traditional exact solution approaches for large-sized instances, hence the importance of applying metaheuristic approaches. In the following, we introduce the proposed metaheuristics and describe their common encoding plan.

Encoding Plan
Our studied problem cannot be handled with an off-the-shelf encoding. Generally, each problem has its own data structure, and individuals, in the literature, are represented either by vectors [39] or by tables [47,48]. In this work, a candidate individual was represented by a set of tables, each of which corresponded to a given period. There were two types of tables. The first type, Af, represented the links in the studied network. It was a table containing four rows associated respectively with the indices of the four network sets, while the number of columns was the product of the number of suppliers and the number of retailers. The first row gave the supplier indices, the second and third rows described warehouses and distribution centers, respectively, and the last row gave the retailers. This table was represented only once, since the assignments did not change over the periods. As an example, consider a simple case with two suppliers, three warehouses, three distribution centers, and three retailers. These elements are shown in Figure 4, and the links of the example are described by part (A). The second type, Qty, which represented the product quantities to be transported between the suppliers and the retailers, was a table with the same size as Af. Each period had its own table, since the requests were multiperiodic. It was assumed that the three retailers had the respective demands of products P1 and P2: {20 P1, 0 P2}, {18 P1, 0 P2}, and {12 P1, 80 P2}. Part (B) of Figure 4 shows the Qty table of this solution.
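The encoding described above can be sketched as plain tables. The concrete index and quantity values below are illustrative (Figure 4 is not reproduced here), but the shapes and the demand totals follow the example:

```python
# Sketch of the solution encoding for the small example:
# 2 suppliers, 3 warehouses, 3 DCs, 3 retailers.
n_suppliers, n_retailers = 2, 3
n_cols = n_suppliers * n_retailers  # one column per supplier-retailer pair

# Af: 4 rows = (supplier, warehouse, DC, retailer) index of each link.
# Columns 0-2 belong to supplier 1, columns 3-5 to supplier 2.
Af = [
    [1, 1, 1, 2, 2, 2],   # supplier of the column
    [2, 2, 2, 3, 3, 3],   # warehouse the supplier is assigned to
    [1, 1, 1, 2, 2, 2],   # DC the warehouse delivers to
    [1, 2, 3, 1, 2, 3],   # retailer of the column
]

# Qty: one table per period (the paper keeps it the same size as Af;
# here one row per product for readability). The retailer demands
# {20 P1}, {18 P1}, {12 P1, 80 P2} are split between the two suppliers.
Qty = {1: [[10, 9, 6, 10, 9, 6],    # product P1 per column
           [0, 0, 40, 0, 0, 40]]}   # product P2 per column

# Sanity checks: consistent shapes and satisfied retailer demands.
assert all(len(row) == n_cols for row in Af)
for r in range(n_retailers):
    p1 = Qty[1][0][r] + Qty[1][0][r + n_retailers]
    print(f"retailer {r + 1}: {p1} units of P1")
```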

Genetic Algorithm
The GA was first proposed in [49] and has been successfully used in various optimization problems. It applies three genetic operations (selection, crossover, and mutation) to a basic population in order to combine the best characteristics of different individuals. Genetic algorithms have been widely applied in the literature [30,34,36,40,41,50] and have great potential for practical applications: they provide good performance at low cost and make it possible to explore regions of the search space containing many solutions in order to find the extrema of a function defined on a data space. The pseudocode of the GA is presented in Figure 5.
When starting the genetic algorithm, the programmer must provide an initial population that respects the constraints of the studied problem. The algorithm starts by randomly generating a set of initial solutions, which form the initial population, from the search space. These solutions are represented as chromosomes composed of several genes. The initial population plays an important role in the convergence to the final solution [51]. After generating the initial population, a score is assigned to each individual to distinguish the most promising ones, which will participate in the improvement of this population in future generations. At each iteration, the algorithm selects, from the current population, the individuals that will survive and reproduce to create an intermediate population. The selection phase is crucial in determining the performance and quality of the new generations. To diversify the population and explore the search space, diversification operators, namely crossover and mutation, are applied across the succeeding generations. The purpose of crossover is to enrich the diversity of the population by exchanging parts of the chromosomes of two individuals, which allows the genetic mixing of the population; in general, a crossover combines two parents to produce two offspring. The mutation operation allows the genetic algorithm to better scan the search space by modifying a gene. In our case, the mutation operator used various modifications.
Initialization: select the population size nPop, the crossover probability pc, the mutation probability pm, and the termination criterion
nc = nPop × pc; nm = nPop × pm
Create an initial population
While termination criterion is not met
    For k = 1 : nc/2
        Select two parents
        Apply the crossover operator
    End
    For k = 1 : nm
        Select a parent
        Apply the mutation operator
    End
    Add the offspring to the population
    Evaluate the fitness of the offspring
    Select the new population
End
Return the best solution S
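A minimal runnable sketch of this pseudocode, applied to a toy continuous objective (sum of squares); all parameter values and the crossover/mutation operators are illustrative, not the paper's tuned settings:

```python
import random

random.seed(0)

def fitness(x):
    # Toy objective to minimize: sum of squared genes.
    return sum(v * v for v in x)

def crossover(a, b):
    # Single-point crossover producing two offspring.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(x):
    # Perturb one randomly chosen gene.
    y = list(x)
    y[random.randrange(len(y))] += random.uniform(-1.0, 1.0)
    return y

n_pop, p_c, p_m, dim, iters = 20, 0.8, 0.3, 5, 200
n_c, n_m = int(n_pop * p_c), int(n_pop * p_m)
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_pop)]

for _ in range(iters):
    offspring = []
    for _ in range(n_c // 2):                 # crossover phase
        a, b = random.sample(pop, 2)
        offspring.extend(crossover(a, b))
    for _ in range(n_m):                      # mutation phase
        offspring.append(mutate(random.choice(pop)))
    # Elitist survivor selection: keep the n_pop fittest individuals.
    pop = sorted(pop + offspring, key=fitness)[:n_pop]

best = min(pop, key=fitness)
print(round(fitness(best), 4))
```

The elitist truncation used for survivor selection is one of several possible selection schemes; roulette-wheel or tournament selection would fit the same pseudocode.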

Simulated Annealing
The SA, proposed by Kirkpatrick et al. [52], is a technique for solving optimization problems. It is a single-solution metaheuristic whose idea stems from the physical annealing process, in which the temperature of a metal is raised until it melts and then lowered until the minimum energy is reached, so as to obtain a stable crystalline structure; if the cooling rate is too fast, the metal becomes brittle. The SA consists in generating a random initial solution S, as explained previously, and looking, at each iteration, for a feasible neighboring solution S′. If the new solution improves the objective, the current solution is updated (S = S′). If not, the new solution can still be accepted, with a certain acceptance probability represented by (44), in order not to be trapped in a local minimum.
where f is the objective function and T denotes the temperature parameter. A uniform random number p in the interval [0, 1] is generated; if p is less than the acceptance probability, the new solution is accepted. The chance of accepting poor-quality solutions diminishes gradually: at each iteration, the temperature decreases regularly according to a cooling coefficient α, such that T = α·T with α ∈ ]0, 1[. The loop is repeated until a termination criterion is met. The pseudocode of the simulated annealing algorithm is shown in Figure 6.
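The acceptance rule and geometric cooling described above can be sketched as follows. The Boltzmann form exp(−Δ/T) is assumed for the acceptance probability (44), which the text references without reproducing; the toy objective and parameter values are illustrative:

```python
import math
import random

random.seed(1)

def f(x):
    # Toy objective to minimize; optimum at x = 3.
    return (x - 3.0) ** 2

def neighbour(x):
    # Random feasible neighbouring solution.
    return x + random.uniform(-0.5, 0.5)

x, T, alpha = 10.0, 100.0, 0.95   # initial solution, temperature, cooling
best = x
for _ in range(2000):
    y = neighbour(x)
    delta = f(y) - f(x)
    # Accept improving moves always; accept worsening moves with
    # probability exp(-delta / T) (assumed form of rule (44)).
    if delta <= 0 or random.random() < math.exp(-delta / T):
        x = y
    best = min(best, x, key=f)
    T *= alpha                     # geometric cooling, alpha in ]0, 1[
print(round(best, 2))
```

As T shrinks, exp(−Δ/T) vanishes for any Δ > 0, so the search degenerates into a pure descent, exactly the behaviour described in the text.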

Particle Swarm Optimization
The PSO, introduced by Eberhart and Kennedy [53], has been applied to solve several real-world engineering problems. This optimization method is based on collaboration between individuals. It relies on the intelligence of swarms to globally explore the search space in order to find the optimum or, in the worst case, a good approximate solution, in a faster and less expensive way. It is inspired by the interaction and communication between animals (a group of birds, fish, or bees) in search of food [48]. Each particle of the population is viewed as a feasible solution that moves through the problem space, and the swarm ultimately converges towards the best particle found. An initial position and velocity are considered for each particle. In each iteration t, the position x_i and the velocity v_i of particle i are modified and updated by the following equations:

v_i(t + 1) = w·v_i(t) + ∅1·ρ1·(P_ib − x_i(t)) + ∅2·ρ2·(P_gb − x_i(t))
x_i(t + 1) = x_i(t) + v_i(t + 1)

where P_gb and P_ib denote the global best position and the best position found by particle i, respectively, w is the inertia weight, ρ1 and ρ2 are randomly chosen parameters in the interval [0, 1], and ∅1 and ∅2 are weights associated with P_ib and P_gb, respectively. The pseudocode of the PSO is represented by Figure 7.
Figure 7. Pseudocode of the used particle swarm optimization algorithm.
Initialization: select the population size nPop, the personal learning factor ∅1, the social learning factor ∅2, the inertia weight w, the inertia damping ratio wd, and the termination criterion
t = 1
For k = 1 : nPop
    Initialize the position x_k of particle k
    Initialize the best position of particle k: P_kb = x_k
    If f(P_kb) < f(P_gb) Then P_gb = P_kb End
    Initialize the velocity v_k of particle k
End
While termination criterion is not met
    For k = 1 : nPop
        Update the velocity v_k and the position x_k
        If f(x_k) < f(P_kb) Then P_kb = x_k End
        If f(P_kb) < f(P_gb) Then P_gb = P_kb End
    End
    w = w × wd; t = t + 1
End
Return the best position P_gb
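The velocity and position updates can be sketched as a minimal runnable loop on a toy objective; the parameter values are illustrative, not the paper's tuned settings:

```python
import random

random.seed(2)

def f(x):
    # Toy objective to minimize: sum of squares, optimum at the origin.
    return sum(v * v for v in x)

n_pop, dim, iters = 15, 3, 300
w, wd = 0.9, 0.99            # inertia weight and damping ratio
phi1, phi2 = 1.5, 1.5        # personal and social learning factors

X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_pop)]
V = [[0.0] * dim for _ in range(n_pop)]
P = [list(x) for x in X]                 # personal best positions P_ib
g = list(min(X, key=f))                  # global best position P_gb

for _ in range(iters):
    for k in range(n_pop):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + cognitive + social components.
            V[k][d] = (w * V[k][d]
                       + phi1 * r1 * (P[k][d] - X[k][d])
                       + phi2 * r2 * (g[d] - X[k][d]))
            X[k][d] += V[k][d]           # position update
        if f(X[k]) < f(P[k]):            # refresh personal best
            P[k] = list(X[k])
            if f(P[k]) < f(g):           # refresh global best
                g = list(P[k])
    w *= wd                              # damp the inertia weight
print(round(f(g), 6))
```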

Vibration Damping Optimization
The VDO algorithm is a metaheuristic originally introduced by Mehdizadeh and Tavakkoli-Moghaddam [54]. It is inspired by the SA algorithm and imitates the process of vibration damping in physics. In the analogy between the vibration damping process and optimization, the states of the oscillating system represent the feasible solutions of the optimization problem, their energies correspond to the values of the objective function at these solutions, and the minimum-energy state represents the optimal solution of the problem.
According to the explanation of Mehdizadeh and Tavakkoli-Moghaddam [54], a set of principal choices must be made when using a VDO algorithm. The algorithm starts with a random choice of an initial solution. Then, its parameters, including the damping coefficient γ, the initial amplitude A1, the standard deviation σ, and the number of neighborhoods at each amplitude n_max, are initialized. In each iteration, a new solution S′ is created by a heuristic perturbation of the current solution S. The neighborhood solution S′ becomes the new current solution if the objective function improves (i.e., ∆ = f(S′) − f(S) ≤ 0 in our case, since it is a minimization problem). If the change is not an improvement, the neighborhood solution still becomes the new current solution if r, a random number in the interval [0, 1], is less than or equal to an acceptance probability p_acc (r ≤ p_acc) (represented by (47)). Thus, it is possible to escape a local optimum and reach a global one. For high amplitudes (at the beginning of the search), there is some flexibility to move from a good solution to a worse one; for the lowest amplitudes (later in the search), this flexibility decreases. As the procedure proceeds, the amplitude is reduced using (48). The loop is repeated until a stopping criterion is satisfied, as shown in the pseudocode of the VDO algorithm in Figure 8.
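The acceptance rule and amplitude decay can be sketched as follows. Since equations (47) and (48) are not reproduced here, the sketch uses the acceptance probability p_acc = 1 − exp(−A²/(2σ²)) and the decay A_t = A1·exp(−γt/2) from the usual VDO formulation of Mehdizadeh and Tavakkoli-Moghaddam; these forms, and all parameter values below, are assumptions for illustration.

```python
import math
import random

def vdo(f, neighbor, s0, a1=1.0, gamma=0.05, sigma=1.0, n_max=20, a_min=1e-3):
    """Minimize f from s0 with a vibration-damping-style search.

    The acceptance probability and amplitude decay follow the usual VDO
    formulation; the paper's equations (47)-(48) are assumed to match.
    """
    s, fs = s0, f(s0)
    best, f_best = s, fs
    t, a = 0, a1
    while a > a_min:                                # stopping criterion: tiny amplitude
        for _ in range(n_max):                      # neighborhoods per amplitude level
            s_new = neighbor(s)
            fn = f(s_new)
            if fn - fs <= 0:                        # improving move: always accept
                s, fs = s_new, fn
                if fs < f_best:
                    best, f_best = s, fs
            else:                                   # worsening move: accept with p_acc
                p_acc = 1.0 - math.exp(-a * a / (2.0 * sigma * sigma))
                if random.random() <= p_acc:
                    s, fs = s_new, fn
        t += 1
        a = a1 * math.exp(-gamma * t / 2.0)         # amplitude damping, eq. (48) analog
    return best, f_best

# toy 1-D usage: minimize (x - 3)^2 with Gaussian perturbations
best, f_best = vdo(lambda x: (x - 3.0) ** 2,
                   lambda x: x + random.gauss(0.0, 0.5), s0=0.0)
```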

Computational Results
In this section, a case study, including the parameter settings of the proposed algorithms and a comparison of the performance and effectiveness of these algorithms, is presented. First, an illustrative example of a French distribution network is presented. The network contains seven suppliers that collaborate to satisfy 13 retailers via shared warehouses and distribution centers. For each set, the maximum number of warehouses and distribution centers to be opened is seven. However, the number of hubs and their storage capacities will be determined by applying the proposed model. The attributed distribution network is shown in Figure 9.

The objective of upstream and downstream collaboration is to consolidate flows by grouping goods, facilitating the transportation of a large amount of freight. For this reason, two types of vehicles (with capacities of 33 and 39 pallets) were used in the upstream and downstream parts of the network. As retailers are increasingly demanding in terms of delivery frequency to reduce their stocking levels, a third type of vehicle with a capacity of 15 pallets was employed. Therefore, the capacity of the utilized vehicles varies between 15, 33, and 39 pallets. The required data for the used vehicles are represented in Table 2.

The used distances, obtained using Google Maps, are given in Appendix A. The authorized delivery time for each partner is shown in Table 3. To satisfy the retailers' demands, we used the same method applied in [35]. In fact, for each product, the retailer demands in each period were randomly selected using uniform distributions in the interval [0, 50] pallets. Some additional data about the used parameters are presented in Table 4. The used data were the same for all scenarios, i.e., they do not vary from one scenario to another.
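The demand-generation step can be reproduced with a short sketch. The 13 retailers and the [0, 50]-pallet uniform interval come from the text; the numbers of products and periods are illustrative assumptions, and integer pallet counts are assumed.

```python
import random

def generate_demands(n_retailers=13, n_products=2, n_periods=4, low=0, high=50):
    """Draw one integer pallet demand per (retailer, product, period) triple,
    uniformly in [low, high], as in the method of [35]."""
    return {(r, p, t): random.randint(low, high)
            for r in range(n_retailers)
            for p in range(n_products)
            for t in range(n_periods)}

demands = generate_demands()
```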

Parameters Setting
To ensure an efficient implementation of the proposed algorithms and optimize their performance, their parameters should be well chosen and adequately set. Moreover, the choice of the optimal parameters considerably affects the algorithms' exploration. To optimize this choice, several methods can be applied. The classical technique consists in running the proposed algorithms several times with different parameters and keeping the parameters associated with the best solutions. This method was used in the performed experiments since it yields good results. The different values or levels for all the parameters are shown in Appendix A.

For example, the results of the population size setting (n_Pop), obtained by the genetic algorithm in both scenarios, are given in Appendix A (each algorithm was run five times independently), and their representation is provided in Figure 10. p_c and p_m were set to 0.85 and 0.3, respectively, and uniform crossover and swap mutation were used to determine the best population size among the levels listed in Appendix A (population size between 50 and 300). As demonstrated in Figure 10, the best performance of the genetic algorithm is obtained for a population size of 150 (n_Pop = 150).
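The classical tuning procedure described above (run each candidate level several times, keep the level with the best average objective) can be sketched as follows. The candidate levels and the five-run averaging mirror the text; `run_algorithm` is a hypothetical stand-in for any of the four metaheuristics.

```python
import random

def tune_parameter(run_algorithm, levels, n_runs=5):
    """Return the level with the lowest average objective over n_runs runs,
    plus the per-level averages."""
    averages = {}
    for level in levels:
        results = [run_algorithm(level) for _ in range(n_runs)]
        averages[level] = sum(results) / n_runs
    best = min(averages, key=averages.get)
    return best, averages

# toy stand-in objective: cost is minimized near a population size of 150
fake_run = lambda n_pop: abs(n_pop - 150) + random.random()
best_level, avgs = tune_parameter(fake_run, levels=[50, 100, 150, 200, 250, 300])
```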

Similarly, the best levels for the parameters of all the proposed algorithms (GA, SA, PSO, and VDO), helping to achieve the best solutions, were obtained by following the steps explained above. These parameters are shown in Table 5.

Numerical Results
In this section, we solve the single-objective model (economic or environmental) using the IBM CPLEX 12.10 solver to obtain the exact optimal solution. Due to the variation in the solutions given by the algorithms, we computed the average value of the objective function over five runs (e.g., to determine the population size). We used MATLAB on a computer with an Intel® Core™ i5-8300HQ CPU (2.30 GHz) and 8 GB of RAM. The results of the Sc1_eco, Sc1_env, Sc2_eco, and Sc2_env scenarios are summarized in Appendix B. Sc1_eco and Sc2_eco denote, respectively, the first and second scenarios solved with an economic objective function that reduces the logistics costs. On the other hand, Sc1_env and Sc2_env designate, respectively, the first and second scenarios solved with an environmental objective function reducing CO2 emissions.
The GAP line in the tables represents the relative percentage difference between the approximate solution and the optimal solution. The GAP was obtained by the following equation:

GAP (%) = (Alg_sol − Min_sol) / Min_sol × 100

where Alg_sol and Min_sol are, respectively, the objective value obtained by the considered algorithm and the minimum objective value found by the linear program solved with CPLEX for each scenario. Obviously, a minimum GAP value is desirable. The accident risk was calculated relative to the accident risk caused by a reference distance D_ref equal to 2 × 10^5 km, from which the reference accident risk was computed. To evaluate our study, we compared the performance of the different scenarios, based on several sustainability indicators, and that of the four used algorithms in solving the studied problem.
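A minimal helper for the GAP definition above, which is how the per-scenario values in the tables can be reproduced from the algorithm and CPLEX objectives:

```python
def gap_percent(alg_sol, min_sol):
    """GAP (%) = (Alg_sol - Min_sol) / Min_sol * 100, where Min_sol is the
    exact optimum found by CPLEX and Alg_sol a metaheuristic objective."""
    return (alg_sol - min_sol) / min_sol * 100.0

# e.g., an algorithm objective 3% above the CPLEX optimum yields a GAP of 3%
gap = gap_percent(103.0, 100.0)
```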
Based on the results provided by the genetic algorithm, shown in Figure 11, we note that the Sc2_eco scenario was the best scenario from the economic point of view. For example, the total logistics costs for the Sc2_eco and Sc1_eco scenarios were 14.321 M€ and 14.322 M€, respectively. However, the Sc2_env scenario was the most suitable to protect the environment. It gave better values than Sc1_env for all resolution methods, as shown in Figure 11. For example, the total CO2 emissions obtained with simulated annealing were 3.23 × 10^9 g CO2 for the Sc2_env scenario against 3.26 × 10^9 g CO2 for the Sc1_env scenario. From the results of the two scenarios, we can conclude that scenario Sc2 was better than Sc1 because it offers a good compromise between the economic and environmental objectives.
The results of the social/societal aspect of sustainability are represented in Figure 12. The distribution of goods via the Sc1_eco scenario generated the lowest accident rate and the lowest fatal accident rate. In addition, the Sc1_env scenario showed the best noise results compared to the other scenarios. Therefore, from a social/societal point of view, the Sc1 scenario was the best.
Based on the findings shown in Figure 13, we noticed that the genetic algorithm could provide the best solutions and even almost the exact optimal values with a GAP lower than 1% for all scenarios. It proved its excellent performance in solving the problem from the economic and environmental point of view. Moreover, comparing all the obtained results, we concluded that the two best metaheuristics were the GA and the SA with average GAP values of 0.73% and 2.47%, respectively, which was valid for most of the economic and environmental scenarios.
Regarding the resolution time, the average required times of the GA, SA, PSO, and VDO were respectively equal to 34.42, 16.81, 45.90, and 18.94 s, as shown in Figure 13. Moreover, the highest convergence speeds were obtained by the SA and VDO algorithms, with running times of 16.81 s and 18.94 s, respectively. In addition, the average running times show that all the algorithms were much faster than the exact resolution of the linear program (CPLEX), while ensuring a very fast convergence for all scenarios. For example, using CPLEX, the Sc2_eco scenario required 21,600 s against 36 s when applying the genetic algorithm. Therefore, the total resolution time was reduced by 99.84%. We present the obtained optimal network configurations of the Sc2 scenario using the GA in Figure 14.

Sensitivity Analysis
In this section, we study the sensitivity analysis of the performance of the proposed algorithms for the two scenarios in several cases by varying the number of suppliers, warehouses, distribution centers (DCs), retailers, and vehicles. Table 6 shows the ten used instances. The results of the linear program solved by the commercial solver CPLEX and the proposed algorithms programmed using MATLAB for both economic and environmental scenarios are shown in Appendix C.
To verify the effectiveness of the introduced algorithms on large instances, we represent the results of Appendix C by the curves in Figures 15 and 16.

These curves show that the GA had the best performance, with a GAP less than 2% for the economic scenario and less than 4% for the environmental one, for all instances. Sometimes, the SA gave better GAPs than all the other algorithms; the I5 instance was such an example for the economic and environmental cases of scenario 2, because of the oscillating nature of this algorithm. However, the worst values were obtained by applying the PSO in both economic and environmental scenarios.
Using the curves in Figures 15 and 16, it was obvious that the execution time increased with the rise of the size of the studied instance (i.e., the size of the examined problem). In fact, the SA algorithm was the most efficient in terms of the execution time as it enabled reaching the optimum faster than the other metaheuristics. Therefore, we concluded that the best metaheuristics were the GA and the SA.
The results provided for the different instances were consistent with those of the illustrative example. Indeed, scenario 2 remained the best option, and the proposed metaheuristics gave good solutions with a reasonable execution time.

Conclusions
To solve problems caused by the pandemic, the scarcity of natural resources, and rising pollution, the actors of a logistics system are forced to find effective solutions. In this context, collaboration is the best alternative to improve logistics operations and deal with issues related to the different decision levels. In our study, we examined the problem of designing a multiproduct, multiperiod, three-echelon distribution network with a heterogeneous fleet of vehicles. This problem belongs to the strategic decision level, which is critical because it shapes the general orientation of companies.
In this study, we proposed several metaheuristics to solve our model, which minimized the logistics costs or CO2 emissions and evaluated the noise level as well as the accident rate through the incorporation of two different scenarios. These metaheuristics were the genetic algorithm, simulated annealing, particle swarm optimization, and vibration damping optimization. In addition, to show the performance of the proposed methods, we developed an illustrative example of a distribution network located in France. To calibrate the parameters of the introduced metaheuristics, a typical experimental study was carried out. The analysis of the obtained results demonstrated that the genetic algorithm was the best in terms of GAP in both scenarios. However, in terms of computational time, simulated annealing gave the best values within the shortest convergence time.
For future work, it would be interesting to integrate artificial intelligence algorithms to improve the results, since they enable processing particularly large amounts of data. It is also essential to present other scenarios that take into account the problem of collaborative vehicle routing. Moreover, considering uncertainty in the problem parameters is very important to imitate reality. Finally, it is necessary to combine the economic objective with the environmental one to select solutions that offer a good compromise between the two objectives. For this reason, we intend to propose a multiobjective simulated annealing (MOSA), a multiobjective particle swarm optimization (MOPSO), and a non-dominated sorting genetic algorithm II (NSGA-II).

Acknowledgments: The authors would like to thank the anonymous referees from different fields and disciplines for their helpful suggestions.

Conflicts of Interest:
The authors declare no conflict of interest.

Table A2. Distances between warehouses (m) and distribution centers (k) in km.