Article

An Artificial Bee Colony Algorithm for Static and Dynamic Capacitated Arc Routing Problems

Department of Electrical Engineering and Information Systems, Faculty of Information Technology, University of Pannonia, Egyetem St. 10, 8200 Veszprém, Hungary
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2205; https://doi.org/10.3390/math10132205
Submission received: 27 May 2022 / Revised: 20 June 2022 / Accepted: 22 June 2022 / Published: 24 June 2022

Abstract:
The Capacitated Arc Routing Problem (CARP) is a combinatorial optimization problem that requires finding route plans for a number of vehicles on a given graph such that the total cost is minimized. The Dynamic CARP (DCARP) is a variation of the CARP that considers dynamic changes in the problem. The Artificial Bee Colony (ABC) algorithm is an evolutionary optimization algorithm that has been shown to outperform many other evolutionary algorithms, but it had not been applied to the CARP before. For this reason, in this study, an ABC algorithm for the CARP (CARP-ABC) was developed, along with a new move operator for the CARP, the sub-route plan operator. The CARP-ABC algorithm was tested both as a CARP and as a DCARP solver, and its performance was compared with other existing algorithms. The results showed that it excels at finding a relatively good quality solution in a short amount of time, which makes it a competitive solution. The efficiency of the sub-route plan operator was also tested, and the results showed that it is more likely to find better solutions than other operators.

1. Introduction

The Capacitated Arc Routing Problem (CARP) is an NP-hard combinatorial optimization problem that was first introduced by Golden and Wong in [1]. The CARP requires determining the least-cost route plans on a graph of a road network for vehicles subject to some constraints. It has many applications in real life, for instance, in winter gritting [2,3] or in urban solid waste collection [4,5]. Since the CARP is an NP-hard problem, instead of exact methods, mainly heuristics and meta-heuristics (e.g., [6,7,8,9,10]) are considered in the literature to find solutions. The existing methods are either too slow or do not provide solutions of sufficient quality, so there is still room for improvement.
The standard CARP assumes a static problem, which does not reflect real life closely, since changes may happen during the execution of the solution. These changes modify the instance and, thus, may affect the feasibility and optimality of the current solution [11]. For this reason, the Dynamic CARP (DCARP), which is a variation of the CARP that takes dynamic changes into account, is a better approach. To make the model of the problem under consideration closer to the real-life problem, the changes should be made based on information collected about the vehicles and the roads. For instance, information can be provided by the drivers of the vehicles about the executed tasks, by the GPS of the vehicles about their current position, and (indirectly) by traffic patrol drones [12] about the current state of the roads.
The Artificial Bee Colony (ABC) algorithm is a swarm intelligence-based algorithm for optimization problems [13]. It has been successfully applied to multiple combinatorial optimization problems that are similar to the CARP [14,15,16,17,18,19], and it was shown that the ABC algorithm provides better performance than most evolutionary computation-based optimization algorithms [16]. However, the ABC algorithm has never been applied to either the CARP or the DCARP before.
In our previous work [20], we collected all the possible events and analyzed their effects on the model. Based on the results, a data-driven DCARP framework with three event handling algorithms and a rerouting algorithm (RR1) was developed. The framework uses the "virtual task" strategy [21] to be able to use static CARP solvers for DCARP instances.
The contributions of this work are as follows:
  • The definition of the first ABC algorithm for the CARP (CARP-ABC).
  • The definition of a new small step-size move operator for the CARP, the sub-route plan operator, which is utilized in the CARP-ABC algorithm.
  • The definition of a new method for creating an initial population, the Random Solution Generation (RSG) algorithm. The purpose of the RSG is to quickly create random but feasible solutions for the CARP.
  • Numerical experiments to test the CARP-ABC algorithm on a variety of CARP and DCARP instances. The same experiments were performed with other algorithms for CARP, then the results were compared. The results showed that for both CARP and DCARP instances, the CARP-ABC algorithm excels in finding a relatively good quality solution in a short amount of time.
  • Numerical experiments to test the efficiency of the sub-route plan operator within the CARP-ABC algorithm, on a variety of CARP instances. The results showed that the sub-route plan operator is more likely to find a better solution than the other operators, especially when a greater modification is needed on the current solution (since it is a randomly generated solution and/or it is a solution of a larger CARP instance).
The rest of the paper is structured as follows. In Section 2, the related works are presented. In Section 3, the basic concepts related to the proposed CARP-ABC algorithm are introduced. In Section 4 and Section 5, the algorithm and the sub-route plan move operator are formulated in detail, respectively. In Section 6, the experiments and their results are discussed. The paper is concluded in Section 7.

2. Related Works

In this section, the related works are introduced. In Section 2.1, the algorithms that were developed for CARP are presented. In Section 2.2, the approaches for DCARP are summarized. In Section 2.3, the ABC algorithms that were developed for problems that are similar to the CARP are presented.

2.1. Algorithms for the CARP

As it was mentioned in Section 1, there are mainly approximate approaches (i.e., heuristics and metaheuristics) for the CARP. For this reason, only the methods that belong to that category are mentioned in this subsection.

2.1.1. Heuristics

Golden et al. developed the first heuristic algorithms for the CARP, namely, the path-scanning and the augment-merge [22]. Other notable heuristics for the CARP are the parallel-insert method [23], Ulusoy’s tour splitting method [24], the augment-insert method [25], the path-scanning with ellipse rule [26] and the path-scanning with efficiency rule [27].

2.1.2. Metaheuristics

The metaheuristic algorithms for the CARP can be divided into two main categories (with some exceptions): trajectory-based and population-based.
From the trajectory-based algorithms, the notable ones are the guided local search algorithm [28], the tabu search algorithms [29,30], the variable neighborhood search algorithm [31], and the greedy randomized adaptive search procedure with evolutionary path relinking [32]. It must be mentioned that in [29], two versions of the tabu search algorithm (TSA) were proposed (TSA1 and TSA2), from which the latter performed better. In [33], a global repair operator was developed and embedded into the TSA, creating the repair-based tabu search (RTS), which outperforms the TSA.
From the population-based algorithms, the notable ones are the genetic algorithm [34], the memetic algorithms [6,35], and the ant colony optimization algorithms [8,36,37]. From these, the Memetic Algorithm with Extended Neighborhood Search (MAENS) [6] is the most popular one, even though it only gives relatively good quality solutions and also has slow runtime. There are multiple solutions that try to improve some parts of the MAENS (e.g., [9,10]), but these improvements do not really increase the overall performance of it. The Ant Colony Optimization Algorithm with Path Relinking (ACOPR) [8] gives only relatively good quality solutions, but currently it has the fastest runtime on most of the CARP instances from the benchmark test sets.
The Hybrid Metaheuristic Approach (HMA) [7] is a population-based algorithm that utilizes a randomized tabu thresholding procedure as a part of its local refinement procedure. The HMA gives the best quality solutions among all existing algorithms and has a faster runtime than MAENS, but it is still relatively slow on some real-life based CARP instances.

2.2. Approaches for the DCARP

Despite the importance of the DCARP, the number of studies about the CARP (or ARP) that consider dynamic changes in the problem during the execution of the solution is relatively small [20,21,38,39,40,41,42,43,44,45]. Moreover, there are only three studies that consider more than two types of changes [20,21,42], and only two of them (including our previous work) consider all the critical changes that can happen [20,21]. (For a more detailed comparison, see [11,20].) Critical changes or events may change the problem to such an extent that the current solution is not feasible anymore, so handling them is essential. Both [20,21] propose a framework for the DCARP that, instead of using complex specialized algorithms, allows the use of any static CARP solver for solving a DCARP instance.
To the best of our knowledge, the data-driven solution for the DCARP introduced in [20] is the only data-driven approach for DCARP or even CARP.

2.3. The ABC Algorithm and Its Applications

The original ABC algorithm was proposed by Karaboga in [13]. In [46], Karaboga and Görkemli proposed a new definition for the search behavior of the onlooker bees, which improved the convergence performance of the algorithm. For this reason, the new version of the ABC algorithm was named quick ABC (qABC).
The ABC algorithm was introduced as an algorithm for multivariable and multi-modal continuous function optimization, but it was later successfully applied to other types of optimization problems as well. Karaboga and Görkemli introduced an ABC and a qABC algorithm for combinatorial problems (CABC and qCABC, respectively) and applied them to the Traveling Salesman Problem (TSP) [14,15]. Both algorithms use the Greedy Sub Tour Mutation (GSTM) operator [47], which was developed to increase the performance of a genetic algorithm (GA) that solves the TSP. It was shown that the GSTM is significantly faster and more accurate than other existing mutation operators [47]. Furthermore, it was shown that the ABC and the qABC algorithms provide better performance than many evolutionary computation-based optimization algorithms [16]. Since the TSP is similar to the CARP, in the hope that an ABC algorithm with a mutation operator such as the GSTM will perform well, we developed the CARP-ABC algorithm (Section 4) with the sub-route plan operator (Section 5).
There are also ABC algorithms for the Vehicle Routing Problem (VRP) [48] and its variations [17,18]. However, there is only one ABC algorithm for the CARP, and even that is for just a variation of the CARP, the undirected CARP with profits [19]. Therefore, to the best of our knowledge, there are currently no ABC algorithms for either the standard CARP or the DCARP.

3. Problem Formulations

This section introduces basic concepts related to the proposed CARP-ABC algorithm to help understand how it works. The concepts are introduced only briefly; for more detailed descriptions, the reader is referred to the corresponding works.
In this section, first, the static CARP, then the (data-driven) DCARP is formulated. It is followed by the introduction of the basic ABC algorithm and the existing move operators for CARP (which are used in the proposed CARP-ABC solution). The notations used for the CARP and the DCARP are collected in Table A1 in Appendix A.

3.1. The CARP

In the existing works, some assume an undirected input graph [7], others assume a directed graph [8,49], and others a mixed graph [6,34]. In this work, a directed graph is assumed, in which undirected edges are regarded as two oppositely directed edges.
The (directed) graph of the CARP can be described as follows: G = (V, A), with a set of vertices V and a set of arcs (directed edges) A. A set of tasks T ⊆ A is also given, which defines the arcs that have tasks assigned to them. If the graph of a CARP instance contains (undirected) edges, then an edge is added to A as a pair of arcs, one for each direction. For instance, if (v_i, v_j) is an edge and v_i, v_j ∈ V, then the arcs (v_i, v_j) and (v_j, v_i) are added to A. Similarly, if (v_i, v_j) is an edge with a task assigned to it and v_i, v_j ∈ V, then the arcs (v_i, v_j) and (v_j, v_i) are added to T. The graph also has a special vertex v_0 (v_0 ∈ V), the depot, and a dummy task t_0 = (v_0, v_0), the significance of which is explained later.
The tasks are performed by a fleet of w homogeneous vehicles of capacity q. Every vehicle starts and ends its route at the depot ( v 0 ). Each task must be performed in a single operation, and each vehicle can satisfy at most as many demands as its maximum capacity.
The graph can be mapped to a road network where the arcs are road segments. Some of the road segments have tasks. To fulfill the tasks, different amounts but the same type of demand must be served. Each arc is characterized by the following functions:
  • head: the head vertex of the arc;
  • tail: the tail vertex of the arc;
  • dc: the dead-heading or traversing cost, i.e., the cost of crossing the arc.
In addition, each task is characterized by the following functions:
  • id: the unique identifier of the task, which is a positive integer;
  • dem: the (positive) demand, which indicates the load necessary to serve the task;
  • sc: the service cost, which is the cost of executing the task and crossing the arc (i.e., dc is included in sc).
Although an edge is regarded as two oppositely directed arcs, if a task is assigned to it, then the task should be executed only once, in either direction. Let t ∈ T be a task of one of the arcs of an edge; then let inv(t) denote the inversion of t, the other task of the edge. If head(t) and tail(t) are the head and tail vertices of t, then head(inv(t)) = tail(t) and tail(inv(t)) = head(t) are the head and tail vertices of inv(t). The dc, dem, and sc values are the same for t and inv(t).
Let the total number of tasks that have to be executed by at least one of the vehicles be denoted by n. The value of n depends on the composition of T: if T only contains arc tasks originating from edges, then n = |T|/2; if T only contains arc tasks from true arcs, then n = |T|.
The minimal total dead-heading cost between two vertices is provided by the function mdc: V × V → ℕ, which uses Dijkstra's algorithm as the search algorithm. For instance, mdc(v_i, v_j) denotes the minimal total dead-heading cost of traversing from vertex v_i to vertex v_j, where v_i, v_j ∈ V.
A CARP instance (I) is defined as follows:
I = (V, v_0, A, T, n, w, q, head, tail, dc, id, dem, sc, inv, mdc)
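To make the notation concrete, the sketch below shows one possible, hypothetical dictionary-based encoding of a tiny CARP instance in Python. It is not the authors' data structure; the field names are illustrative, and head, tail, and dc are keyed by task id here for brevity. The mdc table would normally be filled by running Dijkstra's algorithm between all vertex pairs.

```python
# Hypothetical, minimal CARP instance: one undirected edge (1, 2) with a task,
# represented as the two inverse arc tasks 1 and 2, plus an unserved edge (2, 3).
instance = {
    "v0": 1,                                   # depot vertex
    "V": {1, 2, 3},
    "A": [(1, 2), (2, 1), (2, 3), (3, 2)],     # each undirected edge becomes two arcs
    "T": {1: (1, 2), 2: (2, 1)},               # task id -> arc; tasks 1 and 2 are inverses
    "inv": {1: 2, 2: 1},                       # id of the inverse task (0 if none)
    "n": 1,                                    # only one of the two edge tasks must be served
    "w": 2,                                    # number of vehicles
    "q": 10,                                   # vehicle capacity
    "head": {1: 1, 2: 2}, "tail": {1: 2, 2: 1},
    "dc": {1: 3, 2: 3}, "dem": {1: 4, 2: 4}, "sc": {1: 5, 2: 5},
    "mdc": {},                                 # (v_i, v_j) -> minimal dead-heading cost
}
```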

3.1.1. Solution Representation

A solution for a CARP instance is expressed as a set of route plans. The route plans are sequences of tasks t ∈ T that need to be executed in the given order. Consecutive tasks are connected by the shortest paths, which are provided by the mdc function. Therefore, a solution S for a CARP instance can be expressed as follows:
S = \{ r_1, r_2, \ldots, r_{|S|} \}
where |S| is the number of route plans and r_k (k ∈ {1, 2, …, |S|}) is the k-th route plan within the solution S. The k-th route plan can be expressed as follows:
r_k = ( t_0, t_{k,1}, t_{k,2}, \ldots, t_{k,l_k}, t_0 )
where l_k is the number of (not dummy) tasks and t_{k,i} is the i-th task within the k-th route plan. It must be noted that k is an index that is only used to identify a specific route plan in the solution. The order of the route plans within the solution has no effect on the quality of the solution.
Since every route starts and ends at the depot, the dummy task t_0, which represents the vehicle being at the depot, is also added as the first and the last element of each route plan sequence. Its id, dc, dem, and sc are set to 0, and both its head and tail vertices are the depot vertex.
For the solution representation of the CARP, a natural encoding approach can be used, just like in most vehicle routing problems. This means that each route plan can be encoded as an ordered list of the ids of its tasks, so a solution can be represented as the concatenation of these lists. However, every route plan starts and ends with the dummy task t_0, so if the encoded route plans are concatenated, there are consecutive dummy tasks in the resulting list. For the sake of simplicity, only one of each pair of consecutive dummy tasks is kept in the encoded solution. Figure 1 shows an example of a solution representation.
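A possible implementation of this natural encoding is sketched below (hypothetical helper names, not the authors' code); it simply flattens the route plans into one id list with "0" separators and splits the list back again.

```python
def encode(solution):
    """solution: list of route plans, each a list of task ids (without dummy tasks)."""
    encoded = [0]
    for route in solution:
        encoded.extend(route)
        encoded.append(0)          # consecutive dummy tasks collapse into a single 0
    return encoded

def decode(encoded):
    """Split the flat id list back into route plans at every 0."""
    routes, current = [], []
    for tid in encoded:
        if tid == 0:
            if current:
                routes.append(current)
                current = []
        else:
            current.append(tid)
    return routes

# Example: encode([[3, 7, 2], [5, 1]]) == [0, 3, 7, 2, 0, 5, 1, 0]
```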

3.1.2. Objective and Constraints

The objective of the CARP is to minimize the total cost of the solution S subject to some constraints, which are defined in this section. The total cost of a solution S, TC(S), is calculated with the following formula (Equations (4)–(6)):
TC(S) = \sum_{k=1}^{|S|} \left( DC(r_k) + SC(r_k) \right)
DC(r_k) = mdc\big(v_0, head(t_{k,1})\big) + \sum_{i=1}^{l_k - 1} mdc\big(tail(t_{k,i}), head(t_{k,i+1})\big) + mdc\big(tail(t_{k,l_k}), v_0\big)
SC(r_k) = \sum_{i=1}^{l_k} sc(t_{k,i})
where DC(r_k) and SC(r_k) are the total dead-heading cost and the total service cost of the route plan r_k, respectively.
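As an illustration of Equations (4)–(6), the following Python sketch (an assumption-based helper, not the authors' code) computes the cost of one route plan and of a whole solution, given id-indexed head, tail, and sc dictionaries and an mdc table of shortest dead-heading costs.

```python
def route_cost(route, head, tail, sc, mdc, v0):
    """DC(r_k) + SC(r_k) for one route plan given as a list of task ids."""
    if not route:
        return 0
    dc = mdc[(v0, head[route[0]])]                       # depot -> first task
    for a, b in zip(route, route[1:]):
        dc += mdc[(tail[a], head[b])]                    # shortest path between consecutive tasks
    dc += mdc[(tail[route[-1]], v0)]                     # last task -> depot
    return dc + sum(sc[t] for t in route)                # DC(r_k) + SC(r_k)

def total_cost(solution, head, tail, sc, mdc, v0):
    """Equation (4): TC(S) as the sum of the route plan costs."""
    return sum(route_cost(r, head, tail, sc, mdc, v0) for r in solution)
```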
The solution S has to satisfy the following constraints. First, each route plan starts and ends at the depot. Second, each task is executed exactly once. Therefore, the total number of tasks executed over all route plans (excluding the dummy task t_0) is equal to n:
\sum_{k=1}^{|S|} l_k = n
Moreover, a task cannot be executed more than once, neither in the same route nor in another route:
t_{a,i} \neq t_{b,j}, \quad \forall (a,i) \neq (b,j)
where r_a and r_b are route plans within S, t_{a,i} is the i-th task in the route plan r_a, and t_{b,j} is the j-th task in the route plan r_b. If a task t has an inverse (i.e., inv(t)), then either t or inv(t) is executed; both cannot be executed in the same solution. Third, the total demand served by each route plan does not exceed the capacity q of the vehicle:
\sum_{i=1}^{l_k} dem(t_{k,i}) \leq q, \quad \forall k \in \{1, 2, \ldots, |S|\}
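These constraints can be checked mechanically. The sketch below is a hedged illustration (hypothetical helper, assuming route plans are lists of task ids without the dummy task, and dem and inv are id-indexed dictionaries), not the authors' implementation.

```python
def is_feasible(solution, dem, inv, q, n):
    """Check the capacity constraint per route plan and that each of the n tasks
    (or its inverse, but not both) is served exactly once."""
    served = set()
    for route in solution:
        if sum(dem[t] for t in route) > q:               # capacity constraint (Eq. (9))
            return False
        for t in route:
            if t in served or inv.get(t, 0) in served:   # a task or its inverse repeated
                return False
            served.add(t)
    return len(served) == n                              # all n tasks covered (Eq. (7))
```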

3.2. The Data-driven DCARP

There are various approaches for DCARP, but in this work, the data-driven version of DCARP is considered, which was recently formulated in [20].
In this problem, instead of one static CARP instance, there is a series of DCARP instances (i.e., a DCARP scenario [21]) that needs to be solved. A DCARP scenario is denoted by I = (I_0, I_1, …, I_i, …, I_{m−1}), where m is the number of DCARP instances within the scenario (i.e., the number of dynamic events that occurred and changed the previous DCARP instance is m − 1). Each DCARP instance I_i of the scenario contains all the information about the current problem. The previous DCARP instance I_{i−1}, the execution of the accepted solution for I_{i−1}, and the event(s) that occurred define the next DCARP instance I_i, where 0 < i < m. The initial instance (I_0) can be viewed as a static (data-driven) CARP instance, since initially every vehicle is at the depot (in good state) and no task has been executed yet.
For a data-driven DCARP instance, information needs to be stored about all the vehicles and route plans. For each vehicle, the current location and state have to be known. Furthermore, identifiers need to be used for the vehicles and the route plans, since a vehicle may follow multiple route plans (one after another) and it is important to know for each route plan whether a vehicle has already executed it, its execution is still in progress, or a vehicle still needs to be assigned to it to start its execution.
Instead of the number of vehicles (w), a set of identifiers of all the vehicles needs to be defined, which is denoted by H. The set of identifiers of the (currently) free vehicles is denoted by H_f (H_f ⊆ H), which is initially equal to H. The identifier of a vehicle is added to H_f if the vehicle finishes the execution of a route plan, and the identifier is removed when a new route plan is assigned to the vehicle. If the execution of all the tasks is finished and all the vehicles have returned to the depot (i.e., there are no broken down vehicles out on the roads), then H_f = H; otherwise, H_f ⊂ H.
The set of identifiers of all the route plans is denoted by R, and the set of identifiers of the route plans that currently cannot be modified and are not executed by any vehicle is denoted by R_e (R_e ⊆ R). When a new route plan is created, its identifier is added to R, and when its execution is finished or suspended (due to vehicle breakdown), its identifier is added to R_e. If the execution of all the route plans is finished and there are no more tasks to execute, then R_e = R; otherwise, R_e ⊂ R. An identifier is removed from R_e only if the vehicle assigned to it broke down but has been fixed and can continue the execution of the plan. The function that defines which vehicle is assigned to a specific route plan is denoted by rv: R → H.
To store the current location of the vehicles in the instance, the virtual task strategy introduced in [21] is used, which replaces the executed tasks in each route plan with "virtual tasks". A "virtual task" is an arc whose head is the depot vertex v_0 and whose tail is the current location of the vehicle, vertex v (v ∈ V). For the sake of simplicity, it is assumed that when an unexpected event occurs, every vehicle is located exactly at a vertex. Since this task is "virtual", it cannot be traversed; for this reason, it has an infinite traversing cost (i.e., dc = ∞). Furthermore, since it is a "task", a demand and a service cost are assigned to it, which are calculated from the collected data: the service cost is the total cost produced by the vehicle so far (i.e., the sum of the traversing and serving costs of the arcs that were crossed or served by the vehicle), and the demand is the total demand served by the vehicle so far. A route plan can have at most one virtual task. Therefore, if a route plan already has a virtual task, it is updated taking into account the arcs traversed and the tasks executed by the corresponding vehicle since then. The set of all virtual tasks is denoted by T_v (T_v ⊆ T), and the function that defines which virtual task belongs to a specific route plan is denoted by rt: R → T_v.
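A minimal sketch of how such a virtual task could be added to the dictionary-based instance introduced earlier; this is an illustrative assumption rather than the framework's actual code.

```python
import math

def add_virtual_task(instance, task_id, current_vertex, cost_so_far, demand_so_far):
    """Register a virtual task from the depot to current_vertex in the instance dicts."""
    v0 = instance["v0"]
    instance["head"][task_id] = v0
    instance["tail"][task_id] = current_vertex
    instance["dc"][task_id] = math.inf        # a virtual task can never be traversed
    instance["dem"][task_id] = demand_so_far  # total demand served by the vehicle so far
    instance["sc"][task_id] = cost_so_far     # total traversing + serving cost so far
    instance["inv"][task_id] = 0              # a virtual task has no inverse
    return task_id
```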
The set of arc tasks that need to be executed is denoted by T. If, according to the gathered information, a task t (t ∈ T) was executed by the vehicle h (h ∈ H), then in the new DCARP instance t needs to be removed from T. Furthermore, the virtual task of the route plan of the vehicle (e.g., t_{k,v}, where rv(k) = h and t_{k,v} ∈ T_v) needs to be updated. The new virtual task is generated in such a way that t is included in it (along with the other tasks the vehicle executed and the arcs the vehicle traversed). If t has an inverse (i.e., inv(t)), then it is removed from T as well. Accordingly, the total number of tasks that have to be executed (n) is decreased by one or two.
The initial DCARP instance (I_0) is similar to a static CARP instance. The set of route plan identifiers (R) and the function rv are created and filled only after a solution is found for I_0. The set H_f is initially equal to H; then, based on rv, all the vehicle identifiers that are assigned to a route plan are removed from H_f. At this stage, the sets R_e and T_v are empty sets, and therefore the function rt is an empty function as well. Accordingly, the initial DCARP instance (I_0) is defined as follows:
I_0 = (V, v_0, A, T, n, w, q, H, head, tail, dc, inv, dem, sc, mdc)
The subsequent DCARP instances (I_i, where 0 < i < m) are defined as follows:
I_i = (V, v_0, A, T, T_v, n, q, H, H_f, R, R_e, rt, rv, head, tail, dc, inv, dem, sc, mdc)

3.2.1. Structure of a Scenario

A new DCARP instance is constructed and added to the DCARP scenario, when an unexpected event happens that changes the current problem to such an extent that it has effect on the currently executed solution. In [21] all the possible events (based on realistic assumptions) were collected and analyzed based on their effect.
It is assumed that the roadmap, the number of vehicles, and the maximum capacity of the vehicles cannot change (at least during the execution of the solution). Therefore, V, v_0, A, head, tail, inv, q, and H are the same in all the DCARP instances of a DCARP scenario.
It is assumed that roads can become closed/opened (this changes dc and thus mdc, too), traffic can decrease/increase (this changes dc and, in some cases, sc, and thus mdc, too), tasks can get cancelled/added (this changes T, n, dem, and sc), and vehicles can break down/restart (this changes R_e); these are the unexpected events. The expected events are the events that normally occur during the execution of the solution: a task is executed (this changes T), a vehicle moves (this changes T_v and thus rt, too), or a vehicle returns to the depot (this changes H_f and, in some cases, rv). The affected components are updated only when a new instance is constructed. If rerouting is performed, then R and rv may change, but the changes are visible only in the next DCARP instance. Therefore, T, T_v, n, H_f, R, R_e, rt, rv, dem, sc, dc, and mdc may differ among the DCARP instances of a DCARP scenario.
Since some components of the DCARP instance change due to the unexpected events, the optimal solution may change, too. It is one's choice to construct a new DCARP instance and reroute when a better solution might be available but the current solution is still feasible. However, constructing a new DCARP instance and rerouting is necessary when the current solution is not feasible anymore.

3.2.2. Solution Representation

For each DCARP instance, the solution representation is mostly the same as for static CARP instances. The only difference is that if a route plan has a virtual task assigned to it, then the virtual task is the second task within the route plan (since the first task is always the dummy task t_0). For instance, if the route plan r_k = ( t_0, t_{k,1}, t_{k,2}, …, t_{k,l_k}, t_0 ) has the identifier k (k ∈ R) and there is a virtual task t_{v,k} assigned to it (i.e., rt(k) = t_{v,k}), then t_{k,1} = t_{v,k}.

3.2.3. Objective and Constraints

For each DCARP instance, the objective and the constraints are mostly the same as for static CARP instances. The only difference is in the second constraint, which requires that the total number of tasks in the solution S (excluding the dummy task t_0) be equal to the sum of the number of tasks that still need to be executed (n) and the total number of virtual tasks (|T_v|):
\sum_{k=1}^{|S|} l_k = n + |T_v|
The attributes of a virtual task guarantee that a solver will always place the virtual task right after the dummy task within a route plan of a (nearly optimal) solution, so there is no need to add a constraint regarding it.

3.2.4. Finding a Solution

The data-driven DCARP framework allows rerouting when a critical event (i.e., an unexpected event that may change the feasibility of the current solution) occurs. These events are task appearance, demand increase, and vehicle breakdown.
The data-driven DCARP framework allows the use of static CARP solvers by converting the current data-driven DCARP instance into a static CARP instance. After a (sufficiently good) solution is found by the CARP solver, the solution is converted into a data-driven DCARP solution.
Converting a data-driven DCARP instance into a static CARP instance works as follows: the sets of vehicle and route plan identifiers (i.e., H, H_f, R, and R_e) are omitted, along with the related functions (rt and rv). Furthermore, all virtual tasks related to finished and suspended route plans are removed from T. For instance, if rt(k) = t_{v,k} (t_{v,k} ∈ T) is the virtual task of the route plan with identifier k (k ∈ R) and k ∈ R_e, then t_{v,k} is removed from T (i.e., T := T \ {t_{v,k}}).
Converting a static CARP solution into a data-driven DCARP solution works as follows: the virtual tasks that are related to finished and suspended route plans are added to the solution in separate route plans to keep track of the total cost of the DCARP scenario. Furthermore, if there are any new route plans within the solution, the framework gives them identifiers and also attempts to assign each of them to a free vehicle. For the other route plans, it can easily be determined which route plan identifier belongs to which route plan, based on the virtual task within them.

3.3. The Basic ABC Algorithm

This section introduces the basic ABC algorithm for combinatorial problems, based on [16]. Just like in the original ABC algorithm [13], the artificial bees are classified into three groups:
  • employed bees, who are exploiting the food sources;
  • onlooker bees, who are making the decision about which food source to select;
  • scout bees, who are randomly choosing a new food source.
In the ABC algorithm, a food source corresponds to a solution, and the nectar amount of a food source corresponds to the fitness of a solution.
The ABC algorithm is an iterative process with four phases in total. It begins with the initialization phase and then iterates the three bee phases (always in the same order) until a predefined termination criterion is met. In the initialization phase, the population is initialized with randomly generated food sources. In the first phase, the employed bee phase, the employed bees are sent to the food sources, where they determine the nectar amounts of the food sources. In the second phase, the onlooker bee phase, the probability values of the sources are calculated based on their nectar amounts, and then the onlooker bees are sent to the preferred food sources to find neighboring food sources and determine their nectar amounts. In the third phase, the scout bee phase, the exploitation of the sources exhausted by the bees is stopped, and the scout bees are sent out to randomly discover new food sources within the search area. In each phase, the best food source found so far is memorized. The phases are described in more detail in the subsections below.
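To make the flow of the four phases concrete, the following problem-independent sketch outlines the basic ABC loop in Python. The callables generate_random_solution, local_search, and fitness are placeholders of our own, not part of the original algorithm description, and larger fitness is assumed to be better.

```python
import random

def basic_abc(generate_random_solution, local_search, fitness, sn, limit, max_iter):
    """Schematic ABC loop with sn food sources and an abandonment limit."""
    foods = [generate_random_solution() for _ in range(sn)]       # initialization phase
    trials = [0] * sn
    best = max(foods, key=fitness)
    for _ in range(max_iter):
        # Employed bee phase: one neighborhood move per food source, greedy acceptance.
        for i in range(sn):
            cand = local_search(foods[i])
            if fitness(cand) >= fitness(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Onlooker bee phase: fitness-proportional choice of a source, then local search.
        weights = [fitness(f) for f in foods]
        for _ in range(sn):
            i = random.choices(range(sn), weights=weights)[0]
            cand = local_search(foods[i])
            if fitness(cand) >= fitness(foods[i]):
                foods[i], trials[i] = cand, 0
                weights[i] = fitness(cand)
            else:
                trials[i] += 1
        # Scout bee phase: abandon at most one exhausted source per cycle.
        worst = max(range(sn), key=lambda i: trials[i])
        if trials[worst] > limit:
            foods[worst], trials[worst] = generate_random_solution(), 0
        best = max(foods + [best], key=fitness)
    return best
```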

3.3.1. Initialization Phase

In the initialization phase, the parameters and the population are initialized. The parameters of the ABC algorithm can be defined as follows:
  • sn: the number of food sources, which is also the number of employed bees and onlooker bees (i.e., for every food source, there is only one employed bee);
  • limit: the number of trials after which a food source is assumed to be abandoned;
  • a termination criterion.
The population is initialized by randomly generating sn food sources and assigning one employed bee to each of them. The employed bees evaluate the fitness of these solutions.

3.3.2. Employed Bee Phase

In this phase, each employed bee generates a new food source x_new in the neighborhood of its current food source x_i. Once x_new is obtained, it is evaluated and compared to x_i. If the nectar amount of x_new is equal to or higher than that of x_i, x_new replaces x_i and becomes a new member of the population; otherwise, x_i is retained. In other words, a greedy selection mechanism is employed between the old and the new candidate solutions.

3.3.3. Onlooker Bee Phase

An onlooker bee evaluates the nectar information taken from all the employed bees and selects a food source x_i depending on its probability value p_i, which is calculated by the following expression:
p_i = \frac{fit_i}{\sum_{j=1}^{sn} fit_j}
where fit_i is the nectar amount (i.e., the fitness value) of the i-th food source x_i. The higher the value of fit_i, the higher the probability that the i-th food source is selected.
Once an onlooker has selected her food source x_i, she produces a modification on x_i by using a local search operator. The local search operator randomly selects a position in the neighborhood of x_i. As in the case of the employed bees, if the modified food source has a nectar amount better than or equal to that of x_i, the modified food source replaces x_i and becomes a new member of the population.
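As a small illustration of Equation (13), the sketch below (hypothetical helper names) computes the selection probabilities and performs one roulette-wheel pick of a food source index.

```python
import random

def selection_probabilities(fitness_values):
    """Equation (13): p_i = fit_i / sum_j fit_j."""
    total = sum(fitness_values)
    return [f / total for f in fitness_values]

def roulette_select(fitness_values):
    """Pick a food source index with probability proportional to its fitness."""
    p = selection_probabilities(fitness_values)
    return random.choices(range(len(fitness_values)), weights=p)[0]
```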

3.3.4. Scout Bee Phase

If a food source x_i cannot be further improved through a predetermined number of trials (limit), the food source is assumed to be abandoned, and the corresponding employed bee becomes a scout. The scout produces a new food source randomly.
In the basic ABC algorithm, in each cycle, at most one scout bee goes outside to search for a new food source.

3.4. Move Operators for the CARP

In population-based evolutionary algorithms, move operators with different step sizes are utilized to generate new, neighboring solutions and thus enrich the diversity of the population. These move operators can be divided into two main categories: small step-size operators and large step-size operators. Small step-size operators can modify the position and/or the direction of the tasks within one or two route plans. In contrast, large step-size operators are able to modify more than two route plans. The most commonly used small step-size operators in the literature, which are used in this work as well, are the inversion, (single) insertion, swap, and two-opt operators [6,7,8]. A novel small step-size operator, the sub-route plan operator, is also used; it is introduced in Section 5. The only large step-size operator used in this work is merge-split, which was introduced in [6].
The inversion and the sub-route plan operators can only change the direction and the order of the tasks within one route plan, so they do not change the feasibility of the solution. In contrast, the insertion, swap, and two-opt operators may change the amount of demand that needs to be served in some of the route plans, so the feasibility of the solution may change, too. For this reason, depending on the settings, the output solution of these operators can differ. If infeasible solutions are not accepted and the calculated output solution is infeasible, then the operator returns the original input solution instead (assuming that the input solution is feasible).

3.4.1. Inversion Operator

The inversion operator randomly selects a task t ∈ T within the input solution. If this task has an inverse (i.e., inv(t) ∈ T), then the operator replaces t with inv(t) within the solution; otherwise, it returns the input solution.

3.4.2. Insertion Operator

The insertion operator randomly selects a task t_1 ∈ T, then moves (inserts) it before or after another randomly selected task t_2 ∈ T within the input solution. The selected tasks can be in different route plans or in the same route plan, but they cannot be the same task (i.e., t_1 ≠ t_2).
It creates two potential output solutions, based on where t_1 is inserted (before or after t_2). If t_1 has an inverse, then the operator creates further potential output solutions, which contain the inverse of the task (i.e., inv(t_1)) instead of t_1. It selects as the output the potential solution with the smallest total cost.

3.4.3. Swap Operator

The swap operator randomly selects two tasks (t_1 and t_2, where t_1, t_2 ∈ T), then exchanges (swaps) them. Similarly to the insertion operator, the selected tasks can be from the same route plan or from different route plans, but they cannot be the same task.
It also creates potential output solutions that contain the inverse of one or both of the selected tasks instead of the task(s); all four possible combinations are considered. It selects as the output the potential solution with the smallest total cost.

3.4.4. Two-Opt Operator

The two-opt operator randomly selects two route plans (e.g., r_1 and r_2) of the solution. Based on the selected route plans, two cases exist for this move operator. If the selected route plans are the same (i.e., r_1 = r_2), then a sub-route plan (i.e., a part of the route plan) is selected randomly and its direction is reversed. If the selected route plans are different (i.e., r_1 ≠ r_2), then these two route plans are randomly cut into four sub-route plans, two new potential output solutions are generated by reconnecting the four sub-route plans, and the better one of them is selected. For example, r_1 and r_2 are cut into the sub-route plans r_{11}, r_{12} and r_{21}, r_{22}, respectively. Two new solutions are generated by connecting them in the following ways: (1) r_{11} with r_{22} and r_{21} with r_{12}, and (2) r_{11} with the reversed r_{21} and the reversed r_{12} with r_{22}.

3.4.5. Merge-Split Operator

As mentioned before, the merge-split operator can make large changes in the solution (e.g., it can modify the order of all the tasks within one or more route plans), so it is considered a large step-size operator. This operator randomly selects x different route plans in the input solution, where x is a random number (1 ≤ x ≤ |S|). It obtains an unordered list of tasks by merging the tasks of the selected route plans into one list, and then sorts this unordered list with a path scanning heuristic (e.g., [22], which is used in this work as well). The obtained ordered list is then optimally split into new route plans using Ulusoy's splitting procedure [24].
The ordered list is constructed by the path scanning heuristic in the following way. First, an empty path is initialized; then, the affected tasks are added one by one to the current path until no tasks are left in the unordered list. In each iteration, only those tasks are taken into account that can be added to the current path without breaking the capacity constraint. If there are no such tasks, then the depot is added to the current path and a new path is initialized (which becomes the current path). When a task or the depot is added to the current path, the task/depot is connected to the end of the current path by the shortest path between them. If there are multiple tasks that can be added, the one that is closest to the end of the current path is added. If there are multiple tasks that are closest to the end of the current path, then one of the following rules is applied to determine which task should be added next:
1. maximize the distance between head(t) and v_0;
2. minimize the distance between head(t) and v_0;
3. maximize the term dem(t)/sc(t);
4. minimize the term dem(t)/sc(t);
5. use rule 1 if the vehicle is less than half-full, otherwise use rule 2.
In the rules above, t (t ∈ T) is a task and v_0 (v_0 ∈ V) is the depot. In one run, only one of the rules can be used. Therefore, the path scanning heuristic is run five times, which results in five ordered lists.
Ulusoy's splitting procedure creates five new candidate output solutions from the five ordered lists by splitting the lists into route plans. How the procedure works is well summarized in [50]. The procedure starts with constructing a Directed Acyclic Graph (DAG) from the ordered list; the arcs of the DAG represent feasible sub-tours of one giant tour. Next, the shortest path through the graph is calculated, which gives the optimal partition of the giant tour into feasible route plans. As the final step, a new candidate solution is created from the untouched route plans of the input solution and the route plans returned by the procedure. From the five candidate solutions, the best one is chosen and returned by the operator.
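The splitting step can be implemented as a shortest-path dynamic program over the DAG. The sketch below is a condensed, assumption-based illustration (dem is an id-to-demand dictionary and route_cost maps a list of task ids to the cost of serving them as one depot-to-depot route), not the authors' implementation.

```python
def ulusoy_split(ordered_tasks, dem, route_cost, q):
    """Optimally partition a giant tour (list of task ids) into capacity-feasible routes."""
    n = len(ordered_tasks)
    INF = float("inf")
    dist = [0] + [INF] * n          # dist[j]: best cost of serving the first j tasks
    pred = [0] * (n + 1)
    for i in range(n):
        load = 0
        for j in range(i + 1, n + 1):
            load += dem[ordered_tasks[j - 1]]
            if load > q:            # sub-tour i+1..j would violate the capacity
                break
            cost = dist[i] + route_cost(ordered_tasks[i:j])
            if cost < dist[j]:
                dist[j], pred[j] = cost, i
    routes, j = [], n               # walk the predecessors back into route plans
    while j > 0:
        routes.append(ordered_tasks[pred[j]:j])
        j = pred[j]
    return list(reversed(routes)), dist[n]
```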

4. The Proposed ABC Algorithm for CARP (CARP-ABC Algorithm)

In this chapter, the ABC algorithm developed for CARP (CARP-ABC algorithm) is presented. The notations used for the CARP-ABC algorithm are collected in Table A2 in Appendix A.
The algorithmic description of the main CARP-ABC algorithm can be seen in Algorithm 1. The main algorithm can be divided into four main phases: initialization, employed bee, onlooker bee, and scout bee phases. The algorithm begins with the initialization phase, then enters a cycle, where it repeats the mentioned phases in the respective order until the termination criterion is satisfied (line 4). In the initialization phase (line 2), the colony (C), the age of the solutions within the colony ( A ), the global best solution ( S * ), and its age ( α * ) are initialized. In the employed bee phase (line 6), local search is performed around the members of the colony. In the onlooker bee phase (line 7), a more in-depth local search is performed around one solution from the colony. In the scout bee phase (line 8), global search is performed.
Algorithm 1: Main CARP-ABC algorithm.
The parameters of the algorithm are the following:
  • I: a CARP instance;
  • n_cs: the size of the colony, i.e., the number of solutions in the population;
  • n_mi: the maximum number of iterations of the algorithm;
  • n_gsl: the global search limit, the maximum allowed number of consecutive iterations in which the currently known global best solution is not improved;
  • n_lsl: the local search limit, the maximum allowed number of consecutive iterations in which the currently known local best solution is not improved in the employed bee phase and the onlooker bee phase of the algorithm;
  • n_sal: the solution age limit, the maximum allowed number of consecutive iterations of the algorithm for which a solution is kept in the population;
  • a termination criterion, which by default is that either n_mi or n_gsl is reached.

4.1. Initialization Phase

The algorithmic description of the initialization phase of the CARP-ABC algorithm can be seen in Algorithm 2. In this phase, the algorithm first initializes the sets for the solutions S_i (i = 1, 2, …, n_cs) and their ages (lines 2–3). To guarantee an initial population of a certain quality and diversity, the solutions are generated randomly by using the Random Solution Generation (RSG) algorithm for the CARP (line 6), which is introduced in this work (in Section 4.1.1). Other population-based evolutionary algorithms usually use the Randomized Path-Scanning Heuristic (RPSH) [22] to generate initial solutions. However, our experiments showed that, in the case of the CARP-ABC algorithm, it does not improve the convergence speed, so only the RSG is used.
After the initialization of the colony, the algorithm selects the solution S_i ∈ C with the best (highest) fitness value by using the selectBestSolution function (line 11). The fitness of a solution S_i is determined by its total cost TC(S_i) (which needs to be as small as possible). Therefore, the fitness value of a solution S_i is computed by the following fit function:
fit(S_i) = \frac{LB}{TC(S_i)}
where LB is the lower bound of the solution (i.e., the total service cost of all the tasks, counting only one task of each pair that has an inverse). Its value ranges between 0 and 1. Solutions with greater fitness values are preferred, since a greater fitness value means that the total cost of the solution is closer to the lower bound.
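A direct sketch of Equation (14) and of the selectBestSolution step, under the assumption that a total-cost function for solutions is available (hypothetical helper names, not the authors' code):

```python
def fit(solution_cost, lb):
    """Equation (14): fitness of a solution with total cost solution_cost,
    given the lower bound lb (total service cost of all tasks)."""
    return lb / solution_cost           # value in (0, 1], larger is better

def select_best_solution(colony, total_cost, lb):
    """Pick the colony member with the highest fitness; total_cost maps a
    solution to its TC(S) value."""
    return max(colony, key=lambda s: fit(total_cost(s), lb))
```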
Algorithm 2: initializationP (Initialization Phase of the CARP-ABC algorithm).

4.1.1. Random Solution Generation Algorithm

As the first step, the algorithm generates n_cs random permutations, each containing positive integers from 1 to n (i.e., the id of every arc task and one of the two ids of every edge task) in random order. As the next step, the algorithm reads the ids in the permutation one by one from left to right, while summing up the demands of the corresponding tasks. If the task assigned to the currently read id would break the capacity constraint of the current route plan, the algorithm inserts a "0" (the id of the dummy task t_0) before the id of the task in the sequence (i.e., the task is added to a new route plan). After it has finished separating the ids of the tasks into route plans, the algorithm checks each task in the solution and, if it has an inverse task, randomly (e.g., with 0.5 probability) replaces the id of the task with the id of its inverse. As the final step, the algorithm inserts a "0" as the first and last element of the solution to make it a valid solution.
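The following Python sketch illustrates one pass of the RSG steps described above for a single permutation (hypothetical helpers; dem and inv are id-indexed dictionaries, and the inverse id is 0 when a task has no inverse).

```python
import random

def rsg(task_ids, dem, inv, q):
    """Generate one random, capacity-feasible encoded solution (id list with 0 separators)."""
    perm = task_ids[:]
    random.shuffle(perm)                      # random permutation of the n task ids
    encoded, load = [0], 0
    for tid in perm:
        if load + dem[tid] > q:               # task would break the capacity constraint
            encoded.append(0)                 # start a new route plan
            load = 0
        load += dem[tid]
        encoded.append(tid)
    for i, tid in enumerate(encoded):         # randomly replace tasks by their inverses
        if tid != 0 and inv.get(tid, 0) and random.random() < 0.5:
            encoded[i] = inv[tid]
    encoded.append(0)                         # close the last route plan
    return encoded
```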

4.2. Employed Bee Phase

The algorithmic description of the employed bee phase of the CARP-ABC algorithm can be seen in Algorithm 3. In this phase, for each employed bee, the algorithm generates new candidate solutions in the neighborhood of S_i with each small step-size operator, then evaluates them and selects the best solution (lines 2–11). In this phase, only the inversion operator (line 6) and the sub-route plan operator (line 7) are used, because only these operators guarantee that the new candidate solution will be feasible. This is repeated until the known best local solution S_i^* cannot be improved within the defined number of iterations (i.e., its age reaches n_lsl). If the fitness value of the new candidate solution S_i^* is greater than or equal to the fitness value of the current solution S_i, the new solution replaces the current one in the population (lines 14–15).
As the next step, the algorithm calculates the winning probability values p_i for the solutions S_i (lines 16–19). The probability values p_i are calculated with the same function as in the basic ABC algorithm (Equation (13)).
Algorithm 3: employedBP (Employed Bee Phase of the CARP-ABC algorithm).

4.3. Onlooker Bee Phase

The algorithmic description of the onlooker bee phase of the CARP-ABC algorithm can be seen in Algorithm 4. In this phase, depending on the p_i values, the algorithm selects a solution S_i with the selectSolution function. This function first performs a roulette selection Int(n_cs) times to select Int(n_cs) solutions from the colony. Next, it compares the selected solutions to each other and selects the best one of them (i.e., the one with the greatest fitness value).
As the next step, the algorithm generates n_cs new candidate solutions S_{i,j} in the neighborhood of S_i (i.e., one solution for each onlooker bee) with the merge-split operator (lines 5–6). It then generates new candidate solutions in the neighborhood of these solutions with the small step-size operators, until the known best local solution S_{i,j}^* cannot be improved within the defined number of iterations (i.e., the age of the solution, α_{i,j}^*, reaches n_lsl) (lines 7–21). In this phase, all the small step-size operators (i.e., inversion, insertion, swap, two-opt, and sub-route plan) are applied to S_{i,j}, which is the best solution found in the previous iteration (lines 11–15). From the resulting solutions, the best one is chosen with the selectBestSolution function as the new S_{i,j} (line 16). If the new S_{i,j} is better than the currently known best solution in the neighborhood of S_{i,j} (i.e., S_{i,j}^*), then it is set as the new best solution S_{i,j}^* (lines 17–19). Otherwise, the age of S_{i,j}^* (i.e., α_{i,j}^*) is increased by one (lines 20–21). After the search ends in the neighborhood of S_{i,j}, it is checked whether the best solution found (i.e., S_{i,j}^*) is better than the best solution found in the whole neighborhood of S_i (i.e., S_i^*). If S_{i,j}^* has a higher fitness value and is also feasible (its total excess demand is zero), then it is set as the new S_i^* (lines 22–23).
If the best solution found in this phase (i.e., S_i^*) is better than the current solution S_i, then S_i is replaced by S_i^* in the colony (lines 25–26).
Algorithm 4: onlookerBP (Onlooker Bee Phase of the CARP-ABC algorithm).

4.4. Scout Bee Phase

The algorithmic description of the scout bee phase of the CARP-ABC algorithm can be seen in Algorithm 5. In this phase, the algorithm increases the age of the unchanged solutions (lines 3–4) and sets the age of the new solutions to zero (lines 8–9) within the colony. Furthermore, if there is an abandoned solution (i.e., a solution that could not be improved through a predetermined number of trials, n_sal), the algorithm replaces it with a new solution (lines 5–7), which is generated by using the RSG algorithm (as in the initialization phase).
In this phase, the algorithm also updates the global best solution, S^*. First, the best solution of the new colony is selected with the selectBestSolution function as the solution S (line 10). If S is better than S^*, then S is set as the new global best solution (lines 11–13). Otherwise, the age of S^* (i.e., α^*) is increased by one (lines 14–15).
Algorithm 5: scoutBP (Scout Bee Phase of the CARP-ABC algorithm).

4.5. Computational Complexity Analysis

In this section, the computational complexity of the proposed CARP-ABC algorithm is discussed. The computational complexity is expressed using big-O notation. For the sake of simplicity, approximations are used and constant factors are omitted. The computational complexity of the whole algorithm depends on the given parameter values and on the complexity of the input CARP instance, mainly on n (i.e., the number of tasks that have to be executed).

4.5.1. Initialization Phase

The computational complexity of the initialization phase is O(n_cs · n + n_cs), in which O(n) is the complexity of the RSG algorithm and O(n_cs) is the complexity of selecting the best solution. O(n) is multiplied by n_cs because the RSG is executed n_cs times to create the initial population.
Within the RSG algorithm, the complexity of generating a random permutation of the task identifiers is O(n), assuming that the Fisher–Yates shuffle algorithm [51] is used. After a permutation is generated, the algorithm iterates over each element, which also has O(n) complexity. Therefore, the complexity of the RSG algorithm is around O(2n), which is O(n) if the constant multiplier is omitted.

4.5.2. Employed Bee Phase

The computational complexity of the employed bee phase is O(n_cs · n_lsl · (n + log n + n_max) + n_cs), in which the complexity of the local search is O(n_lsl · (log n + n + n_max)). The complexity of the probability calculation is O(n_cs) (assuming that the sum of the fitness values is calculated only once). O(n_lsl · (log n + n + n_max)) is multiplied by n_cs because the local search is executed for all the n_cs members of the population.
Within the local search, the complexity of the inversion operator is O(n) and the complexity of the sub-route plan operator is O(log n + n + n_max). Within the sub-route plan operator, the complexity of selecting a route plan is O(log n), since in the worst case |S| = n (i.e., every task is on a separate route). After a route plan is selected, one of the methods of the operator is executed. Among the methods, the sub-route plan rotation method has the greatest complexity, which is O(n + n_max), since in the worst case l_k = n (i.e., there is only one route plan in the solution).

4.5.3. Onlooker Bee Phase

The computational complexity of the onlooker bee phase is O(n_cs + n_cs · (log n + n + n_lsl · (n + log n + n_max))), in which the complexity of selecting a solution from the colony is O(k · n_cs), or O(n_cs) if the constant k is omitted. The complexity of the main search is O(n_cs · (log n + n + n_lsl · (n + log n + n_max))).
Within the main search, the complexity of the merge-split operator is O(log n + n), because the complexity of selecting the number of route plans is O(log n) and the complexity of the other components of the operator (i.e., selecting the route plans, collecting the affected tasks, and executing the RPSH) is O(n). The complexity of the RPSH is O(n), since in the worst case all n tasks are affected in the solution. After the merge-split operator returns a solution, a search is performed around this solution. The complexity of this search is O(n_lsl · (n + log n + n_max)), because the complexity of the sub-route plan operator is O(log n + n + n_max) and the complexity of the other operators (i.e., inversion, insertion, swap, and two-opt) is O(n).

4.5.4. Scout Bee Phase

The computational complexity of the scout bee phase is O(n_cs · n + n_cs), because in the worst case all the solutions in the colony have to be replaced for exceeding n_sal, so the RSG is executed n_cs times. Next, the best solution is chosen from the colony, which has O(n_cs) complexity.

4.5.5. Whole Algorithm

The computational complexity of the whole CARP-ABC algorithm is composed of the complexity of the initialization phase and the complexities of the other phases multiplied by n_mi, since in the worst case the algorithm runs until the maximum number of iterations is reached. If the duplications are removed, it is the following: O(n_cs · n + n_cs + n_mi · (n_cs · n_lsl · (n + log n + n_max) + n_cs + n_cs · (log n + n + n_lsl · (n + log n + n_max)))).
If the parameters of the CARP-ABC algorithm and the sub-route plan operator are set to fixed values, then the computational complexity is O(n + log n). Therefore, the time complexity of the CARP-ABC algorithm is mostly linear, but it contains components with logarithmic time complexity (e.g., when a route plan is selected).

5. Sub-Route Plan Operator

The sub-route plan operator is based on the GSTM operator for TSP [47]. The main differences between the modified version and the original version are due to the differences between the TSP and the CARP. Therefore, the modified version works with arcs instead of nodes. Furthermore, since the solution for a TSP is always one route plan, while the solution for a CARP (usually) consists of more than one route plan, the modified version takes into account only a part of the solution (one route plan) instead of the whole solution.
The sub-route plan operator is a complex move operator that consists of two different greedy search methods (greedy reconnection and sub-route plan rotation) and a method that provides distortion. In all three methods, the inversion of the affected tasks is considered. The inversion of tasks is especially important when a sequence of tasks is inverted in the sub-route plan rotation method, because when the execution order of the tasks changes, the direction in which the tasks are executed should be changed too, to keep the traveling cost minimal. The notations used within this section are collected in Table A3 in Appendix A.

5.1. The Main Algorithm

The algorithmic description of the main algorithm of the sub-route plan operator can be seen in Algorithm 6. As input, a CARP instance I, a solution S of I, and the parameters of the algorithm are expected. The parameters are the following:
  • the reconnection probability (p_rc);
  • the correction and perturbation probability (p_cp);
  • the linearity probability (p_l);
  • the minimum length of the sub-route plan (l_min);
  • the (maximal) size of the neighborhood of a task arc that is considered (n_max).
Algorithm 6: subRoutePlan (main algorithm of the sub-route plan operator).
The maximal length of the sub-route plan (l_max) is determined after the route plan is selected. In the proposed CARP-ABC algorithm, the parameters of this algorithm are given as constant values, so only I and S are expected.
In the first step of the algorithm, a route plan r k is selected from the solution S (line 2), then, if the number of (not dummy) tasks within r k is sufficient (i.e., l k is greater than or equal to l m i n , line 3), the algorithm proceeds to the next step. Otherwise, it returns the input solution S unchanged (line 21).
In the following step of the algorithm, the parameters are initialized and the (sub-)route plans are generated. The maximum length of the sub-route plan ( l m a x ) is determined based on the number of tasks within r k ( l k ) and the predefined minimum length of the sub-route plan ( l m i n ) (line 4), and the new route plan ( r k ) is initialized (line 5). The length of the sub-route plan l is determined randomly based on l m i n and l m a x (line 6). The position index of the starting task of the sub-route plan (s) is randomly selected taking into account l (line 7). The position index of the ending task of the sub-route plan (e) is determined by s and l (line 8). The sub-route plan r k * is constructed by taking the sub-route plan that is enclosed by the tasks t k , s and t k , e from r k (line 9). The route plan without r k * is denoted by r k # (line 10).
As the next step of the algorithm, a random number is generated ( p r n d ) between 0 and 1 (line 11), which determines the operation of the operator. If p r n d is less than or equal to the predefined reconnection probability p r c (i.e., p r n d p r c , line 12), then the greedy reconnection method is executed (line 13, Section 5.2), otherwise, a new random number is generated (line 15). If the new value of p r n d is less than or equal to the predefined correction and perturbation probability p c p (i.e., p r n d p c p , line 16), then distortion is added to r k (line 17, Section 5.3), otherwise, the sub-route plan rotation method is executed (line 19, Section 5.4). As the final step, the solution S is updated by removing the old route plan r k and adding the new one, r k (line 20), then the updated solution is returned (line 21).
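To make the control flow concrete, a minimal Python sketch of the main algorithm is given below. The list-based route representation (a route plan is a list of tasks enclosed by the dummy task), the rule used to derive l_max, and the helper functions greedy_reconnection, distortion, and sub_route_plan_rotation (sketched in the following subsections) are illustrative assumptions, not the published implementation.

```python
import random

def sub_route_plan(instance, solution, p_rc=0.5, p_cp=0.8, p_l=0.2,
                   l_min=2, n_max=5):
    k = random.randrange(len(solution))      # select a route plan r_k from S
    r_k = solution[k]
    l_k = len(r_k) - 2                       # number of non-dummy tasks in r_k
    if l_k < l_min:
        return solution                      # r_k is too short: return S unchanged

    # assumption: l_max is derived from l_k and l_min (Algorithm 6, line 4);
    # here it is simply capped by the number of tasks in the route plan
    l_max = max(l_min, l_k)
    l = random.randint(l_min, l_max)         # length of the sub-route plan
    s = random.randint(1, l_k - l + 1)       # start position (1 = first non-dummy task)
    e = s + l - 1                            # end position
    r_star = r_k[s:e + 1]                    # sub-route plan r_k*
    r_hash = r_k[:s] + r_k[e + 1:]           # r_k# = r_k without r_k*

    if random.random() <= p_rc:
        new_r_k = greedy_reconnection(instance, r_k, r_star, r_hash)
    elif random.random() <= p_cp:
        new_r_k = distortion(instance, r_star, r_hash, s, p_l)
    else:
        new_r_k = sub_route_plan_rotation(instance, r_k, s, e, n_max)

    new_solution = list(solution)            # update S with the new k-th route plan
    new_solution[k] = new_r_k
    return new_solution
```

Note that, as in the algorithm, a second random number is only drawn when the greedy reconnection branch is not taken, so the three branches are selected with probabilities p_rc, (1 - p_rc) p_cp, and (1 - p_rc)(1 - p_cp), respectively.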

5.2. Greedy Reconnection Method

The greedy reconnection method inserts r k * into the position within r k # that generates the least amount of increase in the total cost of the route plan.

5.2.1. Algorithm

The algorithmic description of the greedy reconnection method within the sub-route plan operator can be seen in Algorithm 7. As input, the CARP instance I, the original route plan r k of the solution S, the selected sub-route plan r k * , and the truncated route plan r k # (i.e., r k without r k * ) are expected.
Algorithm 7: greedyReconnection (algorithm of the greedy reconnection method within the sub-route plan operator).
In the first step of the algorithm, the new route plan r k is initialized with the current route plan r k (line 2). The position index i that is used to find the best position for inserting r k * into r k # is initialized as well (line 3). The value “1” refers to the first (not dummy) task within r k # (i.e., t k , 1 # ). The value “0” would refer to the first dummy task ( t 0 ) and “ l k # + 1 ” to the last dummy task within r k # (assuming l k # is the number of not dummy tasks within r k # ). In the following step, the algorithm checks each position within r k # to find the best one to insert r k * into (lines 4–8). In each iteration, r k * is inserted before task t k , i # of r k # with the insertSubRoutePlan function (line 5). The total cost of the resulting route plan ( r k , i ) is then compared with the total cost of r k (line 6). If r k , i is better than r k (i.e., it has a lower total cost), then it becomes the new value of r k (line 7).
In the final step of the algorithm, r k is returned by the function (line 9).
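A minimal Python sketch of this method is shown below. It assumes the list-based route representation used above and a helper route_cost(instance, route) that returns the total (service plus dead-heading) cost of a route plan; for brevity, the inversion of the affected tasks is not considered here.

```python
def greedy_reconnection(instance, r_k, r_star, r_hash):
    best_route = r_k                                  # start from the original route plan
    best_cost = route_cost(instance, best_route)
    # try inserting r_star before every task of r_hash, including the closing dummy task
    for i in range(1, len(r_hash)):
        candidate = r_hash[:i] + r_star + r_hash[i:]  # insert r_star before position i
        cost = route_cost(instance, candidate)
        if cost < best_cost:                          # keep the cheapest reconnection found so far
            best_route, best_cost = candidate, cost
    return best_route
```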

5.2.2. Example

For a better understanding of the method, see the following example. Let the selected route plan be r k = t 0 , t k , 1 , t k , 2 , , t k , 13 , t 0 and the length of the sub-route plan be l = 3 . Based on these, let the selected starting and ending task be t k , s = t k , 5 and t k , e = t k , 7 , then the selected sub-route plan is r k * = t k , 5 , t k , 6 , t k , 7 and the route plan r k without r k * is r k # = t 0 , t k , 1 , t k , 2 , t k , 3 , t k , 4 , t k , 8 , t k , 9 , t k , 10 , t k , 11 , t k , 12 , t k , 13 , t 0 . Let us assume that inserting r k * between t k , 8 and t k , 9 in r k # results in the least amount of increase in the total cost of the solution, then r k = t 0 , t k , 1 , t k , 2 , t k , 3 , t k , 4 , t k , 8 , t k , 5 , t k , 6 , t k , 7 , t k , 9 , t k , 10 , t k , 11 , t k , 12 , t k , 13 , t 0 will be the new k-th route plan in the solution.
The example discussed in the previous paragraph is depicted in Figure 2 and Figure 3. In both figures, the arc tasks served within a route plan are depicted with solid lines, and the other arcs, which are only traversed, are depicted with dashed lines. The original route plan r k can be seen in Figure 2. In Figure 3, the sub-route plan r k * and the truncated route plan r k # are shown, highlighted with red and blue colors, respectively. The selected starting and ending tasks ( t k , s and t k , e ) are depicted with thicker lines.

5.3. Distortion Method

The distortion method takes the tasks of r k * and inserts them one by one into r k # , starting from the position index s, selecting each task either by rolling or by mixing according to the predefined linearity probability ( p l ). Rolling means selecting the currently last task of r k * , and mixing means selecting a random task of r k * .

5.3.1. Algorithm

The algorithmic description of the distortion method within the sub-route plan operator can be seen in Algorithm 8. As input, the CARP instance I, the selected sub-route plan r k * , the truncated route plan r k # (i.e., r k without r k * ), the position index s of the starting task of r k * within the original route plan r k , and the linearity probability p l are expected.
Algorithm 8: Distortion (algorithm of the distortion method within the sub-route plan operator).
In the first step of the algorithm, the position index i is initialized with s (line 2) and the new route plan r k is initialized with the truncated route plan r k # (line 3). While there are tasks in r k * , the algorithm executes the following steps (lines 4–12). First, a random number is generated ( p r n d ) between 0 and 1 (line 5) and the last task is selected from r k * into t (line 6). If p r n d is less than or equal to p l , then the value of t is changed into a random task from r k * (lines 7–9). The selected task t is then inserted into the position i in r k with the i n s e r t T a s k function (line 10) and removed from r k * with the r e m o v e T a s k function (line 11). The task is always inserted right after the previously inserted task. When r k * runs out of tasks (i.e., all the tasks within it have been inserted into r k ), the algorithm returns r k (line 13).
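A minimal Python sketch of the distortion method, under the same list-based representation, is the following (the insertion via list.insert is an implementation choice of the sketch, not the paper's insertTask function):

```python
import random

def distortion(instance, r_star, r_hash, s, p_l):
    remaining = list(r_star)                     # tasks of r_k* still to be reinserted
    new_route = list(r_hash)                     # start from the truncated route plan r_k#
    i = s                                        # first insertion position
    while remaining:
        idx = len(remaining) - 1                 # rolling: take the currently last task
        if random.random() <= p_l:
            idx = random.randrange(len(remaining))   # mixing: take a random task
        task = remaining.pop(idx)
        new_route.insert(i, task)                # insert right after the previously inserted task
        i += 1
    return new_route
```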

5.3.2. Example

For a better understanding of the method, see the following example. Let the selected route plan be r k = t 0 , t k , 1 , t k , 2 , , t k , 13 , t 0 and the length of the sub-route plan be l = 3 . Based on these, let the selected starting and ending task be t k , s = t k , 5 and t k , e = t k , 7 , then the selected sub-route plan is r k * = t k , 5 , t k , 6 , t k , 7 and the route plan r k without r k * is r k # = t 0 , t k , 1 , t k , 2 , t k , 3 , t k , 4 , t k , 8 , t k , 9 , t k , 10 , t k , 11 , t k , 12 , t k , 13 , t 0 . The length of r k * is three, so in this case the algorithm has three iterations. Let the linearity probability be p l = 0.2 .
In the first iteration, let the random number be p r n d = 0.1 . Since it is smaller than p l , a random task is selected from r k * . Let the selected task be t k , 6 . In this case, the new route plan is r k = t 0 , t k , 1 , t k , 2 , t k , 3 , t k , 4 , t k , 6 , t k , 8 , (i.e., t k , 6 is inserted between t k , 4 and t k , 8 ) and r k * = t k , 5 , t k , 7 (i.e., t k , 6 is removed).
In the second iteration, let the random number be p r n d = 0.8 . Since it is greater than p l , the currently last task is selected from r k * , which is t k , 7 . In this case, the new route plan is r k = t 0 , t k , 1 , t k , 2 , t k , 3 , t k , 4 , t k , 6 , t k , 7 , t k , 8 , (i.e., t k , 7 is inserted between t k , 6 and t k , 8 ) and r k * = t k , 5 .
In the third iteration, since only one task is left in r k * , t k , 5 is selected regardless of the value of p r n d . Therefore, r k = t 0 , t k , 1 , t k , 2 , t k , 3 , t k , 4 , t k , 6 , t k , 7 , t k , 5 , t k , 8 , … and r k * becomes empty, so the algorithm returns r k .

5.4. Sub-Route Plan Rotation Method

The sub-route plan rotation method selects one neighbor task randomly from the neighbors of t k , s and t k , e ( t k , s * and t k , e * , respectively), then inverts the sequence of tasks enclosed by t k , i and t k , i * (including t k , i * in the sequence), where ( i , i * ) { ( s , s * ) , ( e , e * ) } . The inversion of the sequence is performed in such a manner, that t k , i and t k , i * (or i n v ( t k , i * ) ) become direct neighbors in the new route plan r k .

5.4.1. Algorithm

The algorithmic description of the sub-route plan rotation method within the sub-route plan operator can be seen in Algorithm 9. As input, the CARP instance I, the original route plan r k , the position index s of the starting task and the position index e of the ending task of r k * within the original route plan r k , and the size of the neighborhood n m a x are expected.
Algorithm 9: subRoutePlanRotation (algorithm of the sub-route plan rotation method within the sub-route plan operator).
In the first step of the algorithm, one position index of the n m a x closest neighbor tasks is selected randomly for both t k , s and t k , e ( s * and e * , respectively) with the s e l e c t N e i g h b o r T a s k function (lines 2–3) and the new route plan r k is initialized with the original route plan r k (line 4). In the next step, for all ( i , i * ) { ( s , s * ) , ( e , e * ) } , the following steps are executed (lines 5–22). First, the potential new route plan r k , i is initialized with an empty sequence (line 6), then, based on the relationship between i and i * , a sub-route plan is selected and inverted. If t k , i is before t k , i * (i.e., i < i * ) in r k , then t k , i is directly followed by t k , i * (or i n v ( t k , i * ) ) in r k (lines 7–8). Otherwise, if t k , i * is before t k , i in r k , then t k , i * (or i n v ( t k , i * ) ) is directly followed by t k , i in r k (lines 14–15). In both cases, each task that has an inverse and is within the inverted sub-sequence, is replaced by its inverse task in the new route plan r k with the r e p l a c e T a s k function (lines 9–13, lines 16–20). From the sub-route plan rotations, the one that has the lowest total cost is chosen (lines 21–22) and returned by the algorithm (line 23).
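A minimal Python sketch of the rotation method is given below. The helpers select_neighbor_task (returning the position index of a random task among the n_max nearest neighbors of the task at position i, see Section 5.4.2), inverse (returning the inverse task or None), and route_cost are assumed placeholders.

```python
def sub_route_plan_rotation(instance, r_k, s, e, n_max):
    best_route, best_cost = None, float("inf")
    for i in (s, e):
        i_star = select_neighbor_task(instance, r_k, i, n_max)  # random neighbor position
        if i_star < i:
            # the neighbor precedes t_{k,i}: invert the tasks from position i_star
            # up to (excluding) i, so inv(t_{k,i*}) ends up directly before t_{k,i}
            candidate = r_k[:i_star] + invert_segment(r_k[i_star:i]) + r_k[i:]
        else:
            # the neighbor follows t_{k,i}: invert the tasks from position i + 1
            # up to (including) i_star, so inv(t_{k,i*}) ends up directly after t_{k,i}
            candidate = r_k[:i + 1] + invert_segment(r_k[i + 1:i_star + 1]) + r_k[i_star + 1:]
        cost = route_cost(instance, candidate)
        if cost < best_cost:                     # keep the cheaper of the two rotations
            best_route, best_cost = candidate, cost
    return best_route

def invert_segment(segment):
    # reverse the execution order and replace every task that has an inverse by its inverse
    return [inverse(t) if inverse(t) is not None else t for t in reversed(segment)]
```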

5.4.2. Determining the Neighborhood

The neighbors are determined according to the predefined size of the neighborhood (n_max). The distance between arc task t_{k,i} and another arc task is calculated based on their order within r_k and on whether the other arc task has an inverse task (i.e., it is from an edge task) or not. Let t_{k,j} be an arc task in r_k that is not t_{k,i} (i.e., t_{k,i} ≠ t_{k,j}). If t_{k,j} is before t_{k,i} in r_k (i.e., j < i) and has an inverse task (i.e., inv(t_{k,j}) ∈ T), then the distance between the two arc tasks is the shortest path between the head vertices of t_{k,j} and t_{k,i}, so it can be calculated with the expression mdc(head(t_{k,j}), head(t_{k,i})). The shortest path is calculated starting from head(t_{k,j}), because during the sub-route plan rotation t_{k,j} gets reversed (i.e., it gets replaced by its inverse in the route plan) and the head vertex of the task is the same as the tail vertex of the inverse task (i.e., tail(inv(t_{k,j})) = head(t_{k,j})). If the task does not have an inverse, then the shortest path is calculated starting from tail(t_{k,j}). If t_{k,j} is after t_{k,i} in r_k (i.e., j > i) and has an inverse task, then mdc(tail(t_{k,i}), tail(t_{k,j})) is calculated. If the task does not have an inverse, then the shortest path is calculated ending at head(t_{k,j}). The reasoning behind which expression to use in each case is summarized in Table 1.
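The case analysis of Table 1 can be restated compactly as a sketch; the task attributes head and tail, the helper inverse, and a dictionary-of-dictionaries shortest-path matrix mdc are assumptions of this snippet, not the paper's data structures.

```python
def neighbor_distance(mdc, r_k, i, j):
    # distance between the arc task at position j and the arc task at position i in r_k,
    # following the four cases of Table 1
    t_i, t_j = r_k[i], r_k[j]
    if j < i:                                    # t_{k,j} is before t_{k,i} in r_k
        if inverse(t_j) is not None:             # t_{k,j} would be reversed by the rotation
            return mdc[t_j.head][t_i.head]
        return mdc[t_j.tail][t_i.head]
    else:                                        # t_{k,j} is after t_{k,i} in r_k
        if inverse(t_j) is not None:
            return mdc[t_i.tail][t_j.tail]
        return mdc[t_i.tail][t_j.head]
```

The n_max positions with the smallest such distance form the neighborhood from which select_neighbor_task draws.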

5.4.3. Example

For a better understanding of the method, see the following example. Let the selected route plan be r k = t 0 , t k , 1 , t k , 2 , , t k , 13 , t 0 , the length of the sub-route plan be l = 3 , and the size of the neighborhood be n m a x = 5 . (In this method, l only defines the distance between the two selected arc tasks; it has no effect on the length of the rotated sub-route plans.) Based on these, let the two selected arc tasks be t k , s = t k , 6 and t k , e = t k , 8 . Let us assume that all the arc tasks in r k are from edge tasks, so they all have an inverse task.
The route plan, the selected arc tasks, and their neighborhoods are illustrated in Figure 4. The arc tasks served within the route plan are depicted with solid lines, and the other arcs, which are only traversed, are depicted with dashed lines. The arc tasks t k , s and t k , e and their neighborhoods are highlighted with red and blue color, respectively. Only the arc tasks that are completely covered by the ellipses are part of the neighborhood. It must be noted that the ellipses serve only representational purposes: since the neighborhood is identified by the distance calculation described above, the shape that covers exactly the neighbor arc tasks varies. Based on this, for t k , s , the set of the neighbor arc tasks is { t k , 2 , t k , 3 , t k , 7 , t k , 8 , t k , 11 } , and for t k , e , it is { t k , 4 , t k , 6 , t k , 7 , t k , 9 , t k , 10 } .
For t k , s , let us assume that the selected neighbor task is t k , 2 (i.e., t k , s * = t k , 2 ), then the sub-route plan is r k * = t k , 2 , t k , 3 , t k , 4 , t k , 5 (Figure 5a). Since t k , s * precedes t k , s (i.e., s * < s ), the sub-route plan is reversed in a manner that in the new route plan i n v ( t k , s * ) is directly followed by t k , s . The reversed sub-route plan is i n v ( t k , 5 ) , i n v ( t k , 4 ) , i n v ( t k , 3 ) , i n v ( t k , 2 ) , therefore the new route plan is r k = t 0 , t k , 1 , i n v ( t k , 5 ) , i n v ( t k , 4 ) , i n v ( t k , 3 ) , i n v ( t k , 2 ) , t k , 6 , (Figure 5b).
For t k , e , let us assume that the selected neighbor task is t k , 10 (i.e., t k , e * = t k , 10 ), then the sub-route plan is r k * = t k , 9 , t k , 10 (Figure 6a). Since t k , e * is after t k , e (i.e., e < e * ), the sub-route plan is reversed in a manner that in the new route plan i n v ( t k , e * ) directly follows t k , e . The reversed sub-route plan is i n v ( t k , 10 ) , i n v ( t k , 9 ) , therefore the new route plan is r k = t 0 , t k , 1 , , t k , 8 , i n v ( t k , 10 ) , i n v ( t k , 9 ) , t k , 11 , , t k , 13 , t 0 (Figure 6b).

6. Experiments

The proposed CARP-ABC algorithm (ABC algorithm from now on in this section) along with the sub-route plan operator was implemented in Python (3.6) in the Spyder (4.2.1) development environment. To compare the ABC algorithm with other static CARP solvers, the HMA and the ACOPR were also implemented. To test the ABC algorithm and the HMA as complete rerouting algorithms within the DCARP framework and compare them to the minimal rerouting algorithm RR1, the implementations from our previous work [20] were used. The Python programming language was chosen for the implementation because the DCARP framework will be supported in the future by the PM4Py process mining platform, which is written in Python. The experiments were performed on a laptop PC with the Windows 10 operating system, equipped with an Intel(R) Core(TM) i5-3320M 2.60 GHz 2-core CPU and 8 GB of RAM.
It must be noted that the HMA and the ACOPR were implemented based on their algorithmic descriptions (in [7] and in [8], respectively), because the authors' original implementations are not available. Therefore, the implementations used in this work might contain errors that decrease the effectiveness of these algorithms.
In this section, first, the setups of the experiments are specified (Section 6.1). Next, the results of the CARP experiments (Section 6.2), the results of the operator experiments (Section 6.3), and the results of the DCARP experiments (Section 6.4) are discussed, in this order.

6.1. Experimental Setups

For the CARP experiments, five CARP instances of different sizes were used. The ABC algorithm, the HMA, and the ACOPR were run 30 times with a time limit set to 10 min and applied to the CARP instances, independently, then the recorded outputs were compared and analyzed.
For the DCARP experiments, one CARP instance of medium size was used. Since the initial DCARP instance is fundamentally a static CARP instance and the HMA is currently the most accurate known metaheuristic for the CARP, the HMA was used to obtain the initial solution. The travel and service logs and the events were generated with the algorithms introduced in [20]. For each event type, 15 events were independently generated, then executed on the initial instance, creating new DCARP instances. The RR1 algorithm, the ABC algorithm, and the HMA were run with a time limit set to 1 min and applied to these instances, independently, then the recorded outputs were compared and analyzed.
For the operator experiments, three CARP instances of different sizes were used. The sub-route plan operator and the other small step-size operators for CARP (i.e., inversion, insertion, swap, and two-opt) were used as local search operators within the employed bee phase of the ABC algorithm. The employed bee phase was chosen instead of the onlooker bee phase so the efficiency of the operators can be measured on solutions of different qualities, not only on good quality solutions. The (modified) ABC algorithm was run 30 times with a time limit set to 10 min and applied to the CARP instances, independently, then the recorded outputs were compared and analyzed.
In the CARP and the DCARP experiments, during the execution of each algorithm, the new global best solution and the time it took for the algorithm to find it (i.e., the elapsed time since the beginning of the execution of the algorithm) were recorded and analyzed. In the operator experiments, during the execution of the algorithm, the number of search trials in which the move operators found a better local best solution (i.e., S i * ) was recorded and analyzed.
The used instances and the parameter settings of the used algorithms are specified in the subsections (in Section 6.1.1 and in Section 6.1.2, respectively).

6.1.1. Used Instances

To test the CARP solvers, benchmark test sets are often used in the literature. These test sets can be divided into two main categories: synthetic (e.g., containing randomly generated instances) [52,53,54] and real-life based (containing examples based on real road networks and tasks) [2,28,29].
Since testing an algorithm on all the instances would be time-consuming, only the following five instances were selected for the CARP experiments:
  • “kshs1” from the KSHS set [54];
  • “egl-e1-A” and “egl-s1-A” from the EGL set [2];
  • “egl-g1-A” and “egl-g2-A” from EGL-Large set [29].
For the DCARP experiments, only the “egl-e1-A” instance was used. For the operator experiments, the “egl-e1-A”, “egl-s1-A”, and “egl-g1-A” instances were used.
The EGL and the EGL-Large sets originate from the data of a winter gritting application in Lancashire (UK). The EGL set contains 24 instances in which two different graphs are combined with various attribute values. The instances “egl-e1-A” and “egl-s1-A” were selected to represent one of each graph. The EGL-Large set contains 10 instances in which the graph is the same but the number of task edges is 347 in 5 instances and 375 in the other 5 instances. The instances “egl-g1-A” and “egl-g2-A” were selected to represent one of each kind.
The attributes of all five selected instances are briefly summarized in Table 2. These CARP instances were selected to represent CARPs of very different sizes, thus requiring very different complexity levels to solve them. It can be seen that the “kshs1” instance is a small synthetic CARP instance, which means a small search space for the algorithm, so in terms of problem difficulty it can be put into the easy category. The EGL instances are real-life based CARP instances, so they are naturally more complex. Based on their size, the difficulty of the “egl-e1-A” instance is medium, the difficulty of the “egl-s1-A” instance is hard, and the difficulty of the “egl-g1-A” and “egl-g2-A” instances is very hard.

6.1.2. Parameter Settings

The ABC algorithm was tested with multiple parameter settings. Based on the results, the following settings provided the best quality results without unnecessarily increasing the running time of the algorithm, so these were used in the experiments:
  • n c s : 10;
  • n m i : 10,000;
  • n g s l : 100;
  • n l s l : 20;
  • n s a l : 20.
According to the investigation in [47], the ideal parameter values for the GSTM operator are the following: p r c = 0.5 , p c p = 0.8 , p l = 0.2 , l m i n = 2 , l m a x = I n t ( n ) , and n m a x = 5 , where n is the number of cities. In the experiments, the same parameter values are used for the sub-route plan move operator. The only difference is that n is the number of tasks within the selected route plan.
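For reference, the parameter values listed above can be collected into a single configuration; the dictionary form and key names below are only illustrative, not the actual interface of the implementation.

```python
# Parameter settings used in the experiments (values taken from the lists above).
ABC_PARAMS = {
    "colony_size": 10,            # n_cs
    "max_iterations": 10_000,     # n_mi
    "global_search_limit": 100,   # n_gsl
    "local_search_limit": 20,     # n_lsl
    "solution_age_limit": 20,     # n_sal
}

SUB_ROUTE_PLAN_PARAMS = {
    "p_rc": 0.5,    # reconnection probability
    "p_cp": 0.8,    # correction and perturbation probability
    "p_l": 0.2,     # linearity probability
    "l_min": 2,     # minimum length of the sub-route plan
    "n_max": 5,     # neighborhood size
    # l_max is derived from the number of tasks in the selected route plan
}
```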
For the HMA and the ACOPR, the optimal parameter settings defined by the corresponding works [7,8] were used. For the RR1, no parameters are needed.

6.2. Results of the CARP Experiments

The charts in Figure 7, Figure 8 and Figure 9 show the convergence speed of 30 independent runs of the algorithms for the selected instances. As mentioned before, for these, the new global best solution and the time it took for the algorithm to find it were recorded. The y-axis shows the total cost of the solution, and the x-axis shows the elapsed time since the algorithm started running, in seconds. The different colors indicate the outputs of the different algorithms. The colored lines indicate the average convergence speed and the colored areas cover all the values that were recorded (i.e., the areas are enclosed by the minimum and maximum values). The closer a line is to the intersection of the axes, the better the convergence speed of the algorithm.
In the case of the “kshs1” instance (Figure 7), it can be seen that the convergence speed of the ACOPR and the ABC algorithm is around twice as fast as the speed of the HMA. However, even though the speed of the ACOPR and the ABC algorithm is nearly the same, the ACOPR algorithm has failed to find the best solution in 30 runs, thus making the ABC algorithm the best solver for CARPs of small size like the “kshs1” instance.
In the case of the “egl-e1-A” instance (Figure 8), the differences between the convergence speeds of the algorithms start to show. It can be seen that in all cases, the ABC algorithm provides better solutions, and faster, than the ACOPR algorithm. The HMA has a very long cycle time, thus it has a very slow convergence speed as well. If time is not taken into account, the HMA can generally provide better solutions than the other algorithms.
In Table 3, the total cost of the globally best solution within different time limits is examined, based on 30 independent runs of the ABC algorithm and the HMA. The calculated statistics are the following: minimum (Min.), maximum (Max.), average (Avg.), and standard deviation (Std.). It can be seen that within 1 min, the ABC algorithm always provided better solutions. Within 5 min, in some cases, the HMA found better results (it has a smaller Min. value), but on average the ABC algorithm still performed better (it has smaller Max. and Avg. values). Nevertheless, within 10 or more minutes, the HMA provided better solutions. Regardless of the time limit, the ABC algorithm is slightly more stable than the HMA, in terms of how much the solution varies between runs (it has smaller Std. values).
In the case of the “egl-s1-A” instance (Figure 9 and Table 4), the relationship between the convergence speeds of the HMA and the ABC algorithm is more complex. It can be seen that before 200 s, the ABC algorithm performs better; between 200 and 400 s, they perform about the same; after 400 s, the HMA performs better.
The results were similar for the “egl-g1-A” and “egl-g2-A” instances (Figure 10 and Table 5 and Table 6). In most of the runs, the set time limit was not enough for the HMA to improve its initial solution, so only its initial solution was recorded. That is why the graph for the HMA looks like a straight line in Figure 10. As a result of this, in the measured time period, the ABC algorithm performed better than the HMA after around 100 s.
Based on the results, it can be concluded that the ABC algorithm can provide a good enough solution within a short amount of time. Since it has a short cycle time, the best global solution can be updated more frequently. The ABC algorithm is better than the ACOPR algorithm in all aspects: it has a faster convergence speed and finds better quality solutions. Furthermore, it is competitive with the HMA when the running time of the algorithms is limited to a short time interval.

6.3. Results of the Operator Experiments

The results of the operator experiments are summarized in Table 7. Each row shows the percentage of trials in which the operator (specified by the column header) found a better solution, compared to the total number of trials in which a better solution was found for the instance (specified in the first column). For the sake of simplicity, let us call this measure efficiency. It can be seen that the sub-route plan operator has the highest efficiency in all three cases; thus, among the examined operators, it has the highest chance to improve the current solution, regardless of the problem size.
A correlation can be observed between the size and complexity of the CARP instance and the efficiency of the operators. As mentioned before, the complexity of the “egl-e1-A” instance is medium, the “egl-s1-A” instance is hard, and the “egl-g1-A” instance is the hardest. As the size of the problem increases, the efficiency of the inversion and the sub-route plan operators increases relative to the other operators, and as the size of the problem decreases, the efficiency of the insertion, the swap, and the two-opt operators increases relative to the other operators.
Based on the results, it can be concluded that the sub-route plan operator is more likely to find a better solution than the other operators, especially when a greater modification is needed on the current solution (since it is a randomly generated solution and/or it is a solution of a larger CARP instance).

6.4. Results of the DCARP Experiments

The results for the task appearance events, the demand increase events, and the vehicle breakdown events for the “egl-e1-A” instance can be seen in Table 8, Table 9, and Table 10, respectively. In all three tables, the first few columns contain the parameter values that can be used to reconstruct the event data components of the problem:
  • the travel and service log (with crc);
  • the task appearance event (with nt_arc, nt_dem, and nt_sc);
  • the demand increase event (with dit_arc, dit_dem_inc, and dit_sc_inc);
  • the vehicle breakdown event (with vb_id).
The best total costs calculated by the RR1 rerouting algorithm, the ABC algorithm, and the HMA are contained in the last three columns. For each run, the best output is the lowest of the three total costs.
The results of all the events are summarized in Table 11. It can be seen that, in total, the ABC algorithm performed better than the other examined algorithms (RR1 and HMA). The HMA performed better only for the vehicle breakdown events, and even there the difference is negligible. For the task appearance and vehicle breakdown events, the RR1 algorithm gave the best result nearly as many times as the ABC algorithm.
It is not shown in the tables, but the RR1 algorithm has the shortest run time (in the test cases, it was always less than one second). The run time of the other algorithms (ABC and HMA) is approximately the same whether a DCARP or a CARP instance is used as input, since it is the complexity of the problem that mainly defines the convergence speed.
Based on the results, similar conclusions can be made as in the CARP experiments. The ABC algorithm outperforms the HMA for a certain period of time, but then the HMA slowly takes over the lead. If time is the priority, then in the case of task appearance and vehicle breakdown events, the RR1 algorithm should be used. If time and the quality of the solution are equally important, then the ABC algorithm should be used for all events. If the quality of the solution is the priority, then the HMA should be used.

7. Conclusions and Future Work

In this study, an ABC algorithm for the CARP (CARP-ABC) was developed along with a new move operator, the sub-route plan operator, which is utilized by the proposed CARP-ABC algorithm. The CARP-ABC algorithm was tested both as a CARP and a DCARP solver, then, its performance was compared with other algorithms. The results showed that for both CARP and DCARP instances, the CARP-ABC algorithm excels in finding a relatively good quality solution in a short amount of time. It makes the algorithm highly competitive with the currently most accurate CARP solver, the HMA, when the running time of the algorithms is limited to around one minute.
In the future, the CARP-ABC algorithm will be improved upon, to increase the accuracy of the algorithm without increasing its runtime. The goal is to make the algorithm better than the HMA, even when the running time is unlimited.

Author Contributions

Conceptualization, Z.N.; Formal analysis, Z.N.; Investigation, Z.N.; Methodology, Z.N.; Software, Z.N.; Supervision, Á.W.-S.; Validation, Z.N.; Visualization, Z.N.; Writing—original draft, Z.N.; Writing—review & editing, Z.N., Á.W.-S. and T.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Széchenyi 2020 under the EFOP-3.6.1-16-2016-00015.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The used CARP datasets are available at https://www.uv.es/belengue/carp.html (accessed on 21 June 2022). The output data of the experiments discussed in the paper are available at https://drive.google.com/file/d/1LFgct7Z8_W_yx_CppVmN1kAYry3VGM8Q/ (accessed on 21 June 2022).

Acknowledgments

The authors acknowledge support from the Slovenian–Hungarian bilateral project “Optimization and fault forecasting in port logistics processes using artificial intelligence, process mining and operations research”, grant 2019-2.1.11-TÉT-2020-00113, and from the National Research, Development and Innovation Office–NKFIH under the grant SNN 129364.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABC: Artificial Bee Colony
ACOPR: Ant Colony Optimization Algorithm with Path Relinking
CARP: Capacitated Arc Routing Problem
DAG: Directed Acyclic Graph
DCARP: Dynamic Capacitated Arc Routing Problem
GA: Genetic Algorithm
GPS: Global Positioning System
GSTM: Greedy Sub Tour Mutation
HMA: Hybrid Metaheuristic Approach
MAENS: Memetic Algorithm with Extended Neighborhood Search
NP-hard: Non-deterministic Polynomial-time hard
RPSH: Randomized Path-Scanning Heuristic
RSG: Random Solution Generation
TSP: Traveling Salesman Problem
VRP: Vehicle Routing Problem

Appendix A

This appendix shows all the notations that are used in this work, categorized by the context where they appear and along with their meaning.
Table A1. Notations used in the CARP and the DCARP.
G: graph G = (V, A)
V: set of vertices
v_0: the depot (v_0 ∈ V)
A: set of arcs
T: set of tasks (T ⊆ A)
t_0: the dummy task t_0 = (v_0, v_0)
n: the number of tasks that have to be executed
inv(a): the inverse of arc a ∈ A
head(a): the head vertex of arc a ∈ A
tail(a): the tail vertex of arc a ∈ A
dc(a): the dead-heading cost of arc a ∈ A
id(t): the identifier of the task t ∈ T
dem(t): the demand of task t ∈ T
sc(t): the serving cost of task t ∈ T
mdc(v_i, v_j): the minimal total dead-heading cost from vertex v_i to v_j (v_i, v_j ∈ V)
w: the number of vehicles
q: the maximum capacity of a vehicle
r_k: the k-th route plan
l_k: the number of tasks on the k-th route plan
t_{k,i}: the i-th task in the k-th route plan
DC(r_k): the total dead-heading cost of the route plan r_k
SC(r_k): the total service cost of the route plan r_k
I: a CARP instance
S: a solution for the CARP instance I
TC(S): the total cost of solution S
T_v: set of virtual tasks (T_v ⊆ T)
H: set of identifiers of all the vehicles (|H| = w)
H_f: set of the identifiers of the (currently) free vehicles (H_f ⊆ H)
R: set of identifiers of all the route plans
R_e: set of identifiers of the route plans whose execution stopped (R_e ⊆ R)
rt(k): the virtual task of the k-th route plan (k ∈ R, rt(k) ∈ T_v)
rv(k): the identifier of the vehicle that is executing the k-th route plan (k ∈ R, rv(k) ∈ H)
m: the number of DCARP instances within a DCARP scenario
ℐ: a DCARP scenario, a set of DCARP instances (|ℐ| = m)
I_i: the i-th DCARP instance within a DCARP scenario (0 ≤ i < m)
S_i: the accepted solution for the i-th DCARP instance
TC(S_i): the total cost of solution S_i
Table A2. Notations used in the CARP-ABC algorithm.
n_cs: the size of the colony
n_mi: the maximum number of iterations
n_gsl: the global search limit
n_lsl: the local search limit
n_sal: the solution age limit (within the population)
C: the current colony, a set of solutions (C = {S_1, S_2, …}, |C| = n_cs)
C̄: the previous colony
A: set of the ages of the solutions within C (A = {α_1, α_2, …}, |A| = n_cs)
P: set of probability values of the solutions within C (P = {p_1, p_2, …}, |P| = n_cs)
I: a CARP instance
S*: the currently known globally best solution
S′: the best solution found within one iteration
S_i*: the best solution found in the neighborhood of S_i ∈ C
S_i′: the best solution found within one iteration, in the neighborhood of S_i ∈ C
S_{i,j}*: the best solution found in the neighborhood of S_{i,j} (S_{i,j} is a neighbor of S_i ∈ C)
S_{i,j}′: the best solution found within one iteration, in the neighborhood of S_{i,j} (S_{i,j} is a neighbor of S_i ∈ C)
α*: the current age of S*
α′: the current age of S′
α_i*: the current age of S_i*
α_i′: the current age of S_i′
Table A3. Notations used in the description of the sub-route plan operator.
p_rc: the reconnection probability
p_cp: the correction and perturbation probability
p_l: the linearity probability
p_rnd: a random number between 0 and 1
l_min: the minimum length of the sub-route plan
l_max: the maximal length of the sub-route plan
l: the selected length of the sub-route plan (l_min ≤ l ≤ l_max)
t_{k,s}: the selected starting task of the sub-route plan
t_{k,e}: the selected ending task of the sub-route plan
n_max: the (maximal) size of the neighborhood of a task arc
t_{k,s}*: the selected task from the neighborhood of t_{k,s}
t_{k,e}*: the selected task from the neighborhood of t_{k,e}
r_k*: the selected sub-route plan within a route plan r_k
r_k#: the route plan r_k without the sub-route plan r_k*
r_k′: the resulting route plan

References

  1. Golden, B.L.; Wong, R.T. Capacitated arc routing problems. Networks 1981, 11, 305–315. [Google Scholar] [CrossRef]
  2. Eglese, R.W. Routeing winter gritting vehicles. Discret. Appl. Math. 1994, 48, 231–244. [Google Scholar] [CrossRef] [Green Version]
  3. Fink, J.; Loebl, M.; Pelikánová, P. Arc-routing for winter road maintenance. Discret. Optim. 2021, 41, 100644. [Google Scholar] [CrossRef]
  4. Maniezzo, V. Algorithms for Large Directed CARP Instances: Urban Solid Waste Collection Operational Support; Technical Report; University of Bologna: Bologna, Italy, 2004. [Google Scholar]
  5. Babaee Tirkolaee, E.; Mahdavi, I.; Seyyed Esfahani, M.M.; Weber, G.W. A hybrid augmented ant colony optimization for the multi-trip capacitated arc routing problem under fuzzy demands for urban solid waste management. Waste Manag. Res. 2020, 38, 156–172. [Google Scholar] [CrossRef]
  6. Tang, K.; Mei, Y.; Yao, X. Memetic algorithm with extended neighborhood search for capacitated arc routing problems. IEEE Trans. Evol. Comput. 2009, 13, 1151–1166. [Google Scholar] [CrossRef] [Green Version]
  7. Chen, Y.; Hao, J.K.; Glover, F. A hybrid metaheuristic approach for the capacitated arc routing problem. Eur. J. Oper. Res. 2016, 253, 25–39. [Google Scholar] [CrossRef]
  8. Ting, C.J.; Tsai, H.S. Ant Colony Optimization with Path Relinking for the Capacitated Arc Routing Problem. Asian Transp. Stud. 2018, 5, 362–377. [Google Scholar]
  9. Fu, H.; Mei, Y.; Tang, K.; Zhu, Y. Memetic algorithm with heuristic candidate list strategy for capacitated arc routing problem. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; IEEE: New York, NY, USA, 2010; pp. 1–8. [Google Scholar]
  10. Chen, X. Maens+: A divide-and-conquer based memetic algorithm for capacitated arc routing problem. In Proceedings of the 2011 Fourth International Symposium on Computational Intelligence and Design, Hangzhou, China, 28–30 October 2011; IEEE: New York, NY, USA, 2011; pp. 83–88. [Google Scholar]
  11. Corberán, Á.; Eglese, R.; Hasle, G.; Plana, I.; Sanchis, J.M. Arc routing problems: A review of the past, present, and future. Networks 2021, 77, 88–115. [Google Scholar] [CrossRef]
  12. Wu, G.; Zhao, K.; Cheng, J.; Ma, M. A Coordinated Vehicle–Drone Arc Routing Approach Based on Improved Adaptive Large Neighborhood Search. Sensors 2022, 22, 3702. [Google Scholar] [CrossRef]
  13. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report; Erciyes University: Kayseri, Turkey, 2005. [Google Scholar]
  14. Karaboga, D.; Görkemli, B. A combinatorial artificial bee colony algorithm for traveling salesman problem. In Proceedings of the 2011 International Symposium on Innovations in Intelligent Systems and Applications, Istanbul, Turkey, 15–18 June 2011; IEEE: New York, NY, USA, 2011; pp. 50–53. [Google Scholar]
  15. Görkemli, B.; Karaboga, D. Quick combinatorial artificial bee colony -qCABC- optimization algorithm for TSP. In Proceedings of the 2nd International Symposium on Computing in Informatics and Mathematics (ISCIM 2013), Tirana, Albania, 26 September 2013; Epoka University: Tirana, Albania, 2013; pp. 97–101. [Google Scholar]
  16. Karaboga, D.; Gorkemli, B. Solving traveling salesman problem by using combinatorial artificial bee colony algorithms. Int. J. Artif. Intell. Tools 2019, 28, 1950004. [Google Scholar] [CrossRef]
  17. Kantawong, K.; Pravesjit, S. An Enhanced ABC algorithm to Solve the Vehicle Routing Problem with Time Windows. ECTI Trans. Comput. Inf. Technol. (ECTI-CIT) 2020, 14, 46–52. [Google Scholar] [CrossRef]
  18. Mortada, S.; Yusof, Y. A Neighbourhood Search for Artificial Bee Colony in Vehicle Routing Problem with Time Windows. Int. J. Intell. Eng. Syst. 2021, 14, 255–266. [Google Scholar] [CrossRef]
  19. Cura, T. An artificial bee colony approach for the undirected capacitated arc routing problem with profits. Int. J. Oper. Res. 2013, 17, 483–508. [Google Scholar] [CrossRef]
  20. Nagy, Z.; Werner-Stark, A.; Dulai, T. A Data-driven Solution for The Dynamic Capacitated Arc Routing Problem. In Proceedings of the IAC in Budapest 2021, Budapest, Hungary, 26–27 November 2021; Kratochvílová, H., Kratochvíl, R., Eds.; Czech Institute of Academic Education z.s.: Prague, Czech Republic, 2021; pp. 64–83. [Google Scholar]
  21. Tong, H.; Minku, L.L.; Menzel, S.; Sendhoff, B.; Yao, X. A Novel Generalised Meta-Heuristic Framework for Dynamic Capacitated Arc Routing Problems. arXiv 2022, arXiv:2104.06585. [Google Scholar] [CrossRef]
  22. Golden, B.L.; DeArmon, J.S.; Baker, E.K. Computational experiments with algorithms for a class of routing problems. Comput. Oper. Res. 1983, 10, 47–59. [Google Scholar] [CrossRef]
  23. Chapleau, L.; Ferland, J.A.; Lapalme, G.; Rousseau, J.M. A parallel insert method for the capacitated arc routing problem. Oper. Res. Lett. 1984, 3, 95–99. [Google Scholar] [CrossRef]
  24. Ulusoy, G. The fleet size and mix problem for capacitated arc routing. Eur. J. Oper. Res. 1985, 22, 329–337. [Google Scholar] [CrossRef]
  25. Pearn, W.L. Augment-insert algorithms for the capacitated arc routing problem. Comput. Oper. Res. 1991, 18, 189–198. [Google Scholar] [CrossRef]
  26. Santos, L.; Coutinho-Rodrigues, J.; Current, J.R. An improved heuristic for the capacitated arc routing problem. Comput. Oper. Res. 2009, 36, 2632–2637. [Google Scholar] [CrossRef] [Green Version]
  27. Arakaki, R.K.; Usberti, F.L. An efficiency-based path-scanning heuristic for the capacitated arc routing problem. Comput. Oper. Res. 2019, 103, 288–295. [Google Scholar] [CrossRef]
  28. Beullens, P.; Muyldermans, L.; Cattrysse, D.; Van Oudheusden, D. A guided local search heuristic for the capacitated arc routing problem. Eur. J. Oper. Res. 2003, 147, 629–643. [Google Scholar] [CrossRef]
  29. Brandão, J.; Eglese, R. A deterministic tabu search algorithm for the capacitated arc routing problem. Comput. Oper. Res. 2008, 35, 1112–1126. [Google Scholar] [CrossRef] [Green Version]
  30. Hertz, A.; Laporte, G.; Mittaz, M. A tabu search heuristic for the capacitated arc routing problem. Oper. Res. 2000, 48, 129–135. [Google Scholar] [CrossRef]
  31. Polacek, M.; Doerner, K.F.; Hartl, R.F.; Maniezzo, V. A variable neighborhood search for the capacitated arc routing problem with intermediate facilities. J. Heuristics 2008, 14, 405–423. [Google Scholar] [CrossRef] [Green Version]
  32. Usberti, F.L.; França, P.M.; França, A.L.M. GRASP with evolutionary path-relinking for the capacitated arc routing problem. Comput. Oper. Res. 2013, 40, 3206–3217. [Google Scholar] [CrossRef] [Green Version]
  33. Mei, Y.; Tang, K.; Yao, X. A global repair operator for capacitated arc routing problem. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2009, 39, 723–734. [Google Scholar] [CrossRef] [Green Version]
  34. Lacomme, P.; Prins, C.; Ramdane-Chérif, W. A genetic algorithm for the capacitated arc routing problem and its extensions. In Proceedings of the Applications of Evolutionary Computing (EvoWorkshops 2001: EvoCOP, EvoFlight, EvoIASP, EvoLearn, and EvoSTIM, Como, Italy, 18–20 April 2001 Proceedings); Boers, E.J.W., Ed.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 473–483. [Google Scholar]
  35. Lacomme, P.; Prins, C.; Ramdane-Cherif, W. Competitive memetic algorithms for arc routing problems. Ann. Oper. Res. 2004, 131, 159–185. [Google Scholar] [CrossRef] [Green Version]
  36. Lacomme, P.; Prins, C.; Tanguy, A. First competitive ant colony scheme for the CARP. In Ant Colony Optimization and Swarm Intelligence (4th International Workshop, ANTS 2004, Brussels, Belgium, 5–8 September 2004); Dorigo, M., Birattari, M., Blum, C., Gambardella, L.M., Mondada, F., Stützle, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 426–427. [Google Scholar]
  37. Santos, L.; Coutinho-Rodrigues, J.; Current, J.R. An improved ant colony optimization based algorithm for the capacitated arc routing problem. Transp. Res. Part B 2010, 44, 246–266. [Google Scholar] [CrossRef]
  38. Tagmouti, M.; Gendreau, M.; Potvin, J.Y. A dynamic capacitated arc routing problem with time-dependent service costs. Transp. Res. Part C Emerg. Technol. 2011, 19, 20–28. [Google Scholar] [CrossRef]
  39. Archetti, C.; Guastaroba, G.; Speranza, M.G. Reoptimizing the rural postman problem. Comput. Oper. Res. 2013, 40, 1306–1313. [Google Scholar] [CrossRef]
  40. Mei, Y.; Tang, K.; Yao, X. Evolutionary computation for dynamic capacitated arc routing problem. In Evolutionary Computation for Dynamic Optimization Problems; Yang, S., Yao, X., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 377–401. [Google Scholar]
  41. Yazici, A.; Kirlik, G.; Parlaktuna, O.; Sipahioglu, A. A dynamic path planning approach for multirobot sensor-based coverage considering energy constraints. IEEE Trans. Cybern. 2013, 44, 305–314. [Google Scholar] [CrossRef] [PubMed]
  42. Liu, M.; Singh, H.K.; Ray, T. A memetic algorithm with a new split scheme for solving dynamic capacitated arc routing problems. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: New York, NY, USA, 2014; pp. 595–602. [Google Scholar]
  43. Monroy-Licht, M.; Amaya, C.A.; Langevin, A.; Rousseau, L.M. The rescheduling arc routing problem. Int. Trans. Oper. Res. 2017, 24, 1325–1346. [Google Scholar] [CrossRef]
  44. Padungwech, W. Heuristic algorithms for dynamic capacitated arc routing. Ph.D. Thesis, Cardiff University, Cardiff, UK, 2018. [Google Scholar]
  45. Padungwech, W.; Thompson, J.; Lewis, R. Effects of update frequencies in a dynamic capacitated arc routing problem. Networks 2020, 76, 522–538. [Google Scholar] [CrossRef]
  46. Karaboga, D.; Gorkemli, B. A quick artificial bee colony -qABC- algorithm for optimization problems. In Proceedings of the 2012 International Symposium on Innovations in Intelligent Systems and Applications, Trabzon, Turkey, 2–4 July 2012; IEEE: New York, NY, USA, 2012; pp. 1–5. [Google Scholar]
  47. Albayrak, M.; Allahverdi, N. Development a new mutation operator to solve the traveling salesman problem by aid of genetic algorithms. Expert Syst. Appl. 2011, 38, 1313–1320. [Google Scholar] [CrossRef]
  48. Bhagade, A.S.; Puranik, P.V. Artificial bee colony (ABC) algorithm for vehicle routing optimization problem. Int. J. Soft Comput. Eng. 2012, 2, 329–333. [Google Scholar]
  49. Consoli, P.; Yao, X. Diversity-driven selection of multiple crossover operators for the capacitated arc routing problem. In Evolutionary Computation in Combinatorial Optimisation (14th European Conference, EvoCOP 2014, Granada, Spain, 23–25 April 2014, Revised Selected Papers); Blum, C., Ochoa, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 97–108. [Google Scholar]
  50. Willemse, E.J.; Joubert, J.W. Splitting procedures for the mixed capacitated arc routing problem under time restrictions with intermediate facilities. Oper. Res. Lett. 2016, 44, 569–574. [Google Scholar] [CrossRef] [Green Version]
  51. Durstenfeld, R. Algorithm 235: Random permutation. Commun. ACM 1964, 7, 420. [Google Scholar] [CrossRef]
  52. DeArmon, J.S. A Comparison of Heuristics for the Capacitated Chinese Postman Problem. Master’s Thesis, University of Maryland, College Park, MD, USA, 1981. [Google Scholar]
  53. Benavent, E.; Campos, V.; Corberán, A.; Mota, E. The capacitated arc routing problem: Lower bounds. Networks 1992, 22, 669–690. [Google Scholar] [CrossRef]
  54. Kiuchi, M.; Shinano, Y.; Hirabayashi, R.; Saruwatari, Y. An exact algorithm for the capacitated arc routing problem using parallel branch and bound method. In Spring National Conference of the Operational Research Society of Japan; INFORMS: Catonsville, MD, USA, 1995; pp. 28–29. [Google Scholar]
Figure 1. An example of a solution representation for a CARP instance with 10 required tasks, where 0 is the i d of the dummy task. In this example, there are 3 routes. The first route services the tasks with i d s 1, 4 and 10. The second services the tasks with i d s 8, 2, 7 and 3. The third services the tasks with i d s 6, 9 and 5.
Figure 2. Example route plan r k .
Figure 3. Greedy reconnection method: (a) sub-route plan r k * subtracted from route plan r k ; (b) sub-route plan r k * connected to the route plan r k # .
Figure 4. Nearest neighbors of the arc tasks t k , s ( t k , 6 ) and t k , e ( t k , 8 ) within the route plan r k , when n m a x = 5 .
Figure 5. Rotation of the sub-route plan enclosed by t k , s * and t k , s : (a) An arc task (here t k , 2 ) is randomly selected as t k , s * from the neighbor list of arc task t k , s ( t k , 6 ), thus a sub-route plan is obtained; (b) The sub-route plan is inverted, so i n v ( t k , s * ) will be directly followed by t k , s in the new route plan r k .
Figure 6. Rotation of the sub-route plan enclosed by t k , e and t k , e * : (a) An arc task ( t k , 10 ) is randomly selected as t k , e * from the neighbor list of arc task t k , e ( t k , 8 ), thus, a sub-route plan is obtained; (b) The sub-route plan is inverted, so t k , e will be directly followed by i n v ( t k , e * ) in the new route plan r k .
Figure 7. The convergence speed of 30 independent runs of HMA, ACOPR, and ABC algorithms on the “kshs1” instance, plotted on one chart.
Figure 8. The convergence speed of 30 independent runs of HMA, ACOPR, and ABC algorithm on the “egl-e1-A” instance, plotted on one chart, with time limit (t ≤ 600 s).
Figure 9. The convergence speed of 30 independent runs of HMA, ACOPR, and ABC algorithm on the “egl-s1-A” instance, plotted on one chart with time limit (t ≤ 600 s).
Figure 10. The convergence speed of 30 independent runs of the HMA and ABC algorithms on the “egl-g1-A” instance, plotted on one chart with time limit (t ≤ 600 s).
Table 1. Summary table about which expression to use to calculate the distance between an arbitrary arc task t_{k,j} from the route plan r_k and t_{k,i} (i ∈ {s, e}).
Is t_{k,j} before or after t_{k,i} in r_k? | Does t_{k,j} have an inverse task? | Expression
before | yes | mdc(head(t_{k,j}), head(t_{k,i}))
before | no | mdc(tail(t_{k,j}), head(t_{k,i}))
after | yes | mdc(tail(t_{k,i}), tail(t_{k,j}))
after | no | mdc(tail(t_{k,i}), head(t_{k,j}))
Table 2. Attributes of the used CARP instances.
Name | kshs1 [54] | egl-e1-A [2] | egl-s1-A [2] | egl-g1-A [29] | egl-g2-A [29]
Number of vertices | 8 | 77 | 140 | 255 | 255
Number of task edges | 15 | 51 | 75 | 347 | 375
Number of other edges | 0 | 47 | 115 | 28 | 0
Number of vehicles | 4 | 5 | 7 | 20 | 22
Capacity of the vehicles | 150 | 305 | 210 | 28,600 | 28,000
Lower bound of the total cost | 8705 | 1468 | 1394 | 553,696 | 604,228
Total cost of the best solution 1 | 14,661 | 3548 | 5018 | 992,045 | 1,088,040
1 Based on the literature.
Table 3. Statistics of the total cost of the globally best solution of the HMA and the ABC algorithms on the "egl-e1-A" instance, within different time limits.
Algorithm | Statistic | ≤1 min | ≤5 min | ≤10 min
ABC | Min. | 3835 | 3651 | 3651
ABC | Max. | 4021 | 3887 | 3872
ABC | Avg. | 3894.9 | 3812.5 | 3796.23
ABC | Std. | 38.02 | 68.90 | 65.70
HMA | Min. | 3731 | 3582 | 3582
HMA | Max. | 4357 | 4133 | 4133
HMA | Avg. | 4091.6 | 3821.5 | 3802.4
HMA | Std. | 184.92 | 140.69 | 148.10
Table 4. Statistics of the total cost of the globally best solution of the HMA and the ABC algorithms on the "egl-s1-A" instance, within different time limits.
Algorithm | Statistic | ≤1 min | ≤5 min | ≤10 min
ABC | Min. | 5635 | 5398 | 5398
ABC | Max. | 5977 | 5903 | 5883
ABC | Avg. | 5876.53 | 5752.87 | 5708.5
ABC | Std. | 66.73 | 131.41 | 130.64
HMA | Min. | 5507 | 5235 | 5235
HMA | Max. | 6628 | 6611 | 6401
HMA | Avg. | 6176.57 | 5738.67 | 5459.87
HMA | Std. | 355.71 | 440.63 | 209.63
Table 5. Statistics of the total cost of the globally best solution of the HMA and the ABC algorithms on the "egl-g1-A" instance, within different time limits.
Algorithm | Statistic | ≤1 min | ≤5 min | ≤10 min
ABC | Min. | 1,272,733 | 1,224,289 | 1,222,579
ABC | Max. | 2,069,358 | 1,299,609 | 1,274,297
ABC | Avg. | 1,370,687.2 | 1,266,411.97 | 1,244,482.33
ABC | Std. | 161,727.50 | 19,121.17 | 13,385.36
HMA | Min. | 1,245,358 | 1,245,358 | 1,245,358
HMA | Max. | 1,380,727 | 1,380,727 | 1,380,727
HMA | Avg. | 1,323,545.83 | 1,323,544.97 | 1,323,519.2
HMA | Std. | 32,452.85 | 32,452.53 | 32,443.37
Table 6. Statistics of the total cost of the globally best solution of the HMA and the ABC algorithms on the "egl-g2-A" instance, within different time limits.
Algorithm | Statistic | ≤1 min | ≤5 min | ≤10 min
ABC | Min. | 1,389,720 | 1,349,032 | 1,343,764
ABC | Max. | 1,897,513 | 1,416,208 | 1,385,876
ABC | Avg. | 1,527,504.47 | 1,386,266.17 | 1,366,140.43
ABC | Std. | 163,125.66 | 7587.12 | 11,848.98
HMA | Min. | 1,356,204 | 1,356,204 | 1,356,204
HMA | Max. | 1,478,279 | 1,478,279 | 1,478,279
HMA | Avg. | 1,429,353 | 1,425,706.43 | 1,424,908.2
HMA | Std. | 30,287.61 | 33,358.28 | 34,299.91
Table 7. The efficiency of the move operators compared with each other within the CARP-ABC algorithm, measured on instances of different sizes.
Instance | Inversion | Insertion | Swap | Two-Opt | Sub-Route Plan
egl-e1-A | 15% | 26% | 17% | 16% | 27%
egl-s1-A | 16% | 23% | 16% | 16% | 29%
egl-g1-A | 20% | 19% | 13% | 8% | 40%
Table 8. The best total costs of 15 independent runs, calculated by the RR1, ABC, and HMA within one minute, after the occurrence of a random task appearance event in the "egl-e1-A" instance.
# | crc | nt_arc | nt_dem | nt_sc | RR1 | ABC | HMA
1 | 315 | (25, 75) | 58 | 16 | 3889 | 3702 | 3941
2 | 315 | (46, 45) | 67 | 12 | 3700 | 3705 | 3799
3 | 356 | (43, 42) | 11 | 11 | 3720 | 3645 | 3883
4 | 379 | (42, 57) | 20 | 14 | 3704 | 3647 | 4091
5 | 406 | (2, 1) | 62 | 32 | 3642 | 3658 | 3671
6 | 409 | (73, 74) | 40 | 25 | 4077 | 3991 | 4202
7 | 419 | (9, 8) | 37 | 26 | 3693 | 3709 | 3709
8 | 427 | (32, 31) | 55 | 8 | 4248 | 4227 | 4235
9 | 436 | (24, 22) | 64 | 4 | 3667 | 3667 | 3667
10 | 439 | (15, 14) | 99 | 7 | 3872 | 3905 | 3905
11 | 457 | (6, 5) | 46 | 8 | 3714 | 3714 | 3714
12 | 490 | (41, 40) | 20 | 79 | 3764 | 3764 | 3764
13 | 517 | (22, 75) | 66 | 24 | 3866 | 3866 | 3841
14 | 520 | (21, 51) | 89 | 2 | 3767 | 3767 | 3767
15 | 522 | (39, 35) | 67 | 7 | 3622 | 3622 | 3622
Table 9. The best total costs of 15 independent runs, calculated by the RR1, ABC, and HMA within one minute, after the occurrence of a random demand increase event in the "egl-e1-A" instance.
# | crc | dit_arc | dit_dem_inc | dit_sc_inc | RR1 | ABC | HMA
1 | 326 | (32, 34) | 36 | 36 | 4318 | 3931 | 4171
2 | 344 | (54, 52) | 11 | 11 | 3749 | 3648 | 3907
3 | 345 | (50, 52) | 15 | 15 | 3753 | 3636 | 3678
4 | 374 | (52, 54) | 9 | 9 | 3747 | 3630 | 3814
5 | 376 | (68, 66) | 32 | 32 | 4036 | 3762 | 3930
6 | 384 | (44, 45) | 18 | 18 | 3905 | 3668 | 3905
7 | 415 | (46, 47) | 9 | 9 | 3975 | 3590 | 3590
8 | 431 | (44, 59) | 11 | 11 | 3885 | 3632 | 3766
9 | 449 | (32, 35) | 12 | 12 | 4280 | 3636 | 3683
10 | 468 | (35, 32) | 65 | 65 | 4334 | 4326 | 4326
11 | 490 | (44, 46) | 2 | 2 | 3636 | 3550 | 3550
12 | 490 | (32, 33) | 28 | 28 | 4289 | 3760 | 3674
13 | 493 | (59, 44) | 5 | 5 | 3879 | 3626 | 3553
14 | 516 | (35, 32) | 24 | 24 | 4293 | 3756 | 3572
15 | 545 | (35, 41) | 13 | 13 | 3775 | 3657 | 3559
Table 10. The best total costs of 15 independent runs, calculated by the RR1, ABC, and HMA within one minute, after the occurrence of a random vehicle breakdown event in the "egl-e1-A" instance.
# | crc | vb_id | RR1 | ABC | HMA
1 | 305 | 2 | 4124 | 4217 | 4737
2 | 311 | 1 | 4060 | 4012 | 4365
3 | 342 | 2 | 4204 | 4126 | 4585
4 | 344 | 0 | 4096 | 4112 | 4303
5 | 364 | 0 | 4096 | 4112 | 4241
6 | 399 | 4 | 4004 | 4020 | 4308
7 | 430 | 1 | 3966 | 3966 | 3966
8 | 451 | 2 | 4282 | 4261 | 4261
9 | 463 | 0 | 3718 | 3652 | 3639
10 | 490 | 2 | 4282 | 4261 | 4261
11 | 495 | 2 | 4282 | 4261 | 4261
12 | 506 | 0 | 3621 | 3621 | 3585
13 | 507 | 1 | 3874 | 3736 | 3572
14 | 523 | 2 | 4261 | 4261 | 4261
15 | 540 | 2 | 4282 | 4282 | 4282
Table 11. The total number of the best outputs (and their percentage compared to the total number of outputs) of the algorithms RR1, ABC, and HMA summarized for the "egl-e1-A" instance, for each event type.
Event Type | RR1 | ABC | HMA
Task appearance | 9 (60%) | 10 (67%) | 6 (40%)
Demand increase | 0 (0%) | 11 (73%) | 7 (47%)
Vehicle breakdown | 7 (47%) | 8 (53%) | 9 (60%)
Total | 16 (36%) | 29 (64%) | 22 (49%)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
