In order to properly test the efficiency of the proposed model, a large number of tests had to be carried out on instances of varying sizes and structures. While single instances can be acquired by monitoring a collection center over a given period, solving only a handful of these does not demonstrate the general usefulness of the model, only its functionality on a few dedicated use cases. Existing large-scale information about waste generation, collection, and processing is mostly published as statistical data, usually aggregated yearly [19]. Such data cannot be decomposed into actual real-life scenarios, but information about the general structure of real-world instances can be derived from them. Because of this limitation, the instances used for testing the model were randomly generated based on studies about the characteristics and distributions of wood waste. Randomly generated instances can turn out to be infeasible, usually when a larger number of deliveries arriving around the same time cannot be processed by their deadlines due to the limited capacity of the available machines. However, such inputs are also useful for testing purposes, as they show how efficiently infeasibility can be reported. Proving the infeasibility of a scenario is not always trivial, but it should be within the capabilities of the model.
The total mass of deliveries was determined based on the potential payloads of biomass transport trucks [20]. Two different payload ranges were considered. Instances with small payloads contained deliveries of 19–23 t, which could hypothetically be processed on their arrival day, while the 31–49 t deliveries of instances with large payloads could take more than one day to process. Information on the machines (throughput and energy use) for the different tasks was based on real machinery [21]. Three machines with different properties were available for every machine task, each additional option offering higher throughput in exchange for higher energy use. Only a single crew was provided for each manual step of the process, and their throughput was intentionally set significantly lower than that of the machine steps. This made inspection and sorting the bottleneck of the scheduling process, as every delivery had to go through this step and there were no machine alternatives for it. For this reason, tests were run both with the original 10 t/h throughput and with this throughput doubled.
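As a rough illustration, the parameter structure described above can be represented as follows. The payload ranges and the inspection throughputs are taken from the text; the task names and the machine throughput/energy figures are invented placeholders, since the actual machinery data comes from [21].

```python
# Illustrative parameter structure for the generated instances.
# Payload ranges (t) and inspection throughputs (t/h) are from the
# text; task names and machine figures are placeholder assumptions.

PAYLOAD_RANGES_T = {
    "S": (19, 23),   # small: hypothetically processable on arrival day
    "L": (31, 49),   # large: may take more than one day to process
}

# Three alternatives per machine task, each trading higher throughput
# (t/h) for higher energy use (illustrative units per tonne).
MACHINE_OPTIONS = {
    "task_A": [(20.0, 1.0), (30.0, 1.6), (40.0, 2.5)],  # placeholder values
    "task_B": [(25.0, 0.8), (35.0, 1.3), (45.0, 2.0)],  # placeholder values
}

# A single manual crew for inspection and sorting (the bottleneck):
# 10 t/h in the base setting, doubled to 20 t/h in the faster tests.
INSPECTION_THROUGHPUT_TPH = {"slow": 10.0, "fast": 20.0}
```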
A large number of tests were performed, optimizing for one of the objective functions or solving the combined bi-objective optimization problem. The proposed model was solved with the Gurobi 9.1 solver for every instance. Tests were run on a PC with an Intel Core i7-5820K 3.30 GHz CPU and 32 GB of memory.
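A minimal sketch of the per-instance solution protocol is given below, assuming a gurobipy model built from the proposed MILP. The helper name is ours; the outcome classification mirrors how optimal, infeasible, and time-limited runs are reported in the tables that follow.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_instance(model: gp.Model, time_limit: float = 3600.0):
    """Solve one instance under a time limit and classify the outcome."""
    model.Params.TimeLimit = time_limit  # one hour in the single-objective runs
    model.optimize()
    if model.Status == GRB.OPTIMAL:
        return "optimal", model.ObjVal, model.Runtime
    if model.Status == GRB.INFEASIBLE:
        return "infeasible", None, model.Runtime
    if model.Status == GRB.TIME_LIMIT and model.SolCount > 0:
        # A suboptimal solution was found within the limit: report its
        # MIP gap, as done for the runs marked with * in the tables.
        return "suboptimal", model.MIPGap, model.Runtime
    return "no_solution", None, model.Runtime
```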
4.1. Single-Objective Optimization
The first set of input instances was generated over a one-week horizon. The arrival day of each delivery was chosen uniformly at random from this period, and the deadlines were also uniformly random: 1–2 days from arrival for small deliveries and 2–3 days for large deliveries. Various instance sets were generated with delivery numbers between 5 and 20, and the average running times of instances of the same set are presented in Table 2 for the slower inspection throughput and in Table 3 for the faster one. Each row of the tables gives the number of deliveries and the delivery size (S—small, L—large) of the instance group, and provides the average optimization time over a set of 10 randomized instances when minimizing lateness or energy use. The optimization time corresponds to the time needed either to achieve an optimal solution or to determine the infeasibility of the instance under the current parameter settings. A running time limit of one hour (3600 s) was enforced on the solution process. In some cases, non-optimal solutions were found within the time limit; these are explained in detail when discussing the corresponding tables. Such occurrences are marked with an * next to the average running time.
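The generation procedure just described can be sketched as follows; the day indexing and the helper name are our assumptions, not taken from the paper.

```python
import random

def generate_instance(n_deliveries: int, size: str, horizon_days: int = 7):
    """Sketch of the random instance generation described above.

    `size` is "S" (19-23 t payloads, 1-2 day deadlines) or "L"
    (31-49 t payloads, 2-3 day deadlines); days are indexed from 0.
    """
    mass_lo, mass_hi = {"S": (19, 23), "L": (31, 49)}[size]
    dl_lo, dl_hi = {"S": (1, 2), "L": (2, 3)}[size]
    deliveries = []
    for _ in range(n_deliveries):
        arrival = random.randrange(horizon_days)            # uniform arrival day
        deadline = arrival + random.randint(dl_lo, dl_hi)   # uniform deadline offset
        mass = random.uniform(mass_lo, mass_hi)             # payload in tonnes
        deliveries.append({"arrival": arrival, "deadline": deadline, "mass": mass})
    return deliveries
```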
In the case of the one-week horizon and the slower inspection throughput, the running time limit was reached for two inputs of the (15,L) instance set when minimizing energy use. Both instances were infeasible. However, this infeasibility was found in under 10 s when optimizing for minimal lateness, so these running times were used when calculating the energy-use average. Results for the (20,L) instances are not provided, as they were infeasible in every case: the planning horizon was too short to schedule every delivery on the available machines.
Doubling the speed of the inspection and sorting step results in a more efficient solution process, the results of which are presented in Table 3.
Optimal solutions were found for all instances with 5, 10 and 15 deliveries except one: in the (15,L) instance set, a single instance reached the time limit and produced a non-optimal result with a 2.18% gap. While solutions were found for all instances of the (20,L) set, only 5 were optimal when optimizing for lateness, and 7 were optimal when optimizing for energy. The suboptimal solutions found for the remaining instances had an optimality gap greater than 50% in every case, so they are not considered in the averages of Table 3. As poor-quality suboptimal solutions were found for multiple inputs in the (20,L) instance set, this size appears to be the limit of the model under the given time limit. Efficient (optimal or close-to-optimal) solutions were found for all other problem classes with short average running times.
The model was tested for larger delivery numbers as well. However, as the slower inspection resulted in infeasibility for the (20,L) instances, these larger inputs were tested over a two-week planning horizon. The instance sets were generated under the same conditions as the one-week sets. Their average running times can be seen in Table 4 for the slower inspection throughput and in Table 5 for the faster one.
When using the slower throughput for inspection and sorting, two instances in the (20,L) set did not yield any result within the one-hour limit when optimizing for energy. However, these instances were shown to be infeasible in under 60 s when optimizing for lateness, so these values were used for calculating the average runtime. Another instance did not provide an optimal solution in one hour, the best feasible solution having a 3.79% gap. The (25,L) set had one instance where no optimal solution was found for either objective function within an hour, the optimality gaps of the best solutions being 83.88% and 17.29%, respectively. Another three instances gave optimal solutions when minimizing lateness, but only near-optimal solutions when minimizing energy, with 3.65%, 5.69% and 2.57% gaps. One such instance was also present for (30,S), where only a solution with a 2.33% gap was found in one hour. Two inputs of this set did not yield any result within the one-hour limit when minimizing energy, and were therefore not considered in the averages; minimizing lateness for the same instances yielded optimal results in under 60 s. The instances of the (30,L) set, however, were either infeasible or no solution was found for them within the one-hour limit, so their results are not presented in the table.
Again, transitioning to the faster inspection and sorting throughput provides more efficient results, which can be seen in Table 5.
The table shows that optimal solutions were found for all instances with small deliveries, regardless of problem size. In the case of the (25,L) instances, the time limit was reached for a single input when optimizing for lateness. The suboptimal solution had a 44.06% optimality gap and is not included in the average. The time limit was reached in five cases when minimizing energy consumption for the same instance set. However, these solutions were much closer to the optimum (with respective gaps of 4.74%, 1.82%, 5.31%, 2.87% and 0.93%), so their running times were included in the average. Solving the (30,L) instances, however, reached the time limit on every occasion, and while solutions were found, their optimality gap was above 50% in every case. This clearly shows that this instance set is not solvable within the given time limit, and its results are not presented for this reason.
If the goal is the optimization of a single objective, the above results show that the model can efficiently schedule a large number of smaller deliveries over both a one- and a two-week horizon, and has no problem with larger deliveries up to a certain problem size. While solutions could not be acquired for some instances within the given time limit, or their quality was not good enough, this can be remedied by increasing the running time available to the solution process. Solving the model is significantly easier when optimizing for lateness, which was expected, as the energy minimization objective adds a large number of extra binary decision variables.
4.2. Bi-Objective Optimization
When considering both objectives at the same time, a bi-objective optimization problem has to be solved. One option for this is the augmented ε-constraint method introduced in [22]. This method yields multiple non-dominated solutions for the problem, meaning that there is no single obvious best solution among them. Such a set of solutions, in which no solution can be improved in one objective without worsening the other, is called a Pareto front.
The solutions of this front are obtained by solving a series of optimization problems based on the original model. First, the lexicographic method is applied: the objectives are assigned a hierarchical ordering, and the model is solved considering all objectives in this order. Two hierarchical optimization problems are solved, one with the lateness objective at the top of the hierarchy and the other with the energy objective. Using the objective values of these solutions, the possible value range can be determined for each objective. This value range is then divided along multiple grid points, and an optimization problem is solved for every region of this division. The number of problems to be solved depends on the chosen number of grid points, which acts as a parameter of the ε-constraint method. Using G grid points results in G + 1 regions.
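A simplified sketch of this procedure is shown below, assuming gurobipy linear expressions `lateness` and `energy` on an already-built, feasible model. The lexicographic helper and the augmentation coefficient are schematic stand-ins, not the exact formulation of [22].

```python
import gurobipy as gp
from gurobipy import GRB

def lex_solve(model, primary, secondary):
    """Hierarchical solve: minimize `primary`, then minimize
    `secondary` while holding `primary` at its optimal value."""
    model.setObjective(primary, GRB.MINIMIZE)
    model.optimize()
    p_opt = model.ObjVal
    fix = model.addConstr(primary <= p_opt)
    model.setObjective(secondary, GRB.MINIMIZE)
    model.optimize()
    s_opt = model.ObjVal
    model.remove(fix)
    return p_opt, s_opt

def pareto_front(model, lateness, energy, G=5, aug=1e-3):
    # Lexicographic end points give the attainable energy range.
    _, e_hi = lex_solve(model, lateness, energy)  # lateness first
    e_lo, _ = lex_solve(model, energy, lateness)  # energy first

    # G grid points split the range into G + 1 regions; one
    # subproblem is solved per region (6 or 11 solves in the tests).
    front, step = [], (e_hi - e_lo) / (G + 1)
    for k in range(G + 1):
        slack = model.addVar(lb=0.0, name=f"eps_slack_{k}")
        bound = model.addConstr(energy + slack == e_hi - k * step)
        # The augmentation term rewards slack in the energy bound,
        # steering the solver toward non-dominated points.
        model.setObjective(lateness - aug * slack, GRB.MINIMIZE)
        model.optimize()
        if model.Status == GRB.OPTIMAL:
            front.append((lateness.getValue(), energy.getValue()))
        model.remove(bound)
        model.remove(slack)
    return front  # may contain duplicate points; filtering omitted
```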
Based on the experience from the single-objective cases, bi-objective optimization was carried out for problem sizes of 5–25 deliveries. Inputs with 5–15 deliveries were generated over a one-week horizon, while inputs with 20 and 25 deliveries were generated over a two-week horizon. For each delivery number, 10 instances were generated with small deliveries and 10 with large deliveries. Bi-objective optimization was carried out for all of these instances using both 5 and 10 grid points, resulting in 6 or 11 problems to be solved. A time limit of 30 min (1800 s) was set for the individual solution processes. Instance sets were generated with both the slower and the faster inspection and sorting throughput, similarly to the single-objective case. Aggregated results can be seen in Table 6 for the slower and Table 7 for the faster throughput. Both tables present the number and size of deliveries in the given instance set, as well as the number of instances where the solution process was terminated due to reaching the time limit. For instances where solutions were achieved, the average number of solutions in the Pareto front and the average required solution time are presented for both the 5- and the 10-grid-point divisions.
In the case of the slower inspection throughput, the finer division of 10 grid points usually produced the same number of solutions for the front as the 5-grid-point division, or provided at most one additional solution. However, having 10 grid points led to an average 63% increase in running time for the larger instances. Instances of the (25,L) set were not solvable within the given time limit, so no solutions are presented for them.
Instances with the faster inspection throughput behaved similarly to the previous instance sets. The finer 10-grid-point division again produced at most one additional solution compared to the 5-grid-point division, with an average 64% increase in running time for the larger instances. A notable exception was one instance in the (15,S) set, where the 10-point division resulted in 9 solutions as opposed to the 6 solutions of the 5-point division. Instances of the (25,L) set were not solvable within the given time limit, and their solutions are not presented.
The above tables show that efficient bi-objective optimization of the model is also possible: multiple non-dominated solutions can be found for the problems in an acceptable time. The results show that increasing the division from 5 to 10 grid points usually yields the same number of solutions, or provides only one additional result, at the cost of a significant increase in running time. There was only a single instance where the finer division provided three additional solutions to the front.