Article

Opposition-Based Ant Colony Optimization Algorithm for the Traveling Salesman Problem

Zhaojun Zhang, Zhaoxiong Xu, Shengyang Luan, Xuanyu Li and Yifei Sun
1 School of Electrical Engineering and Automation, Jiangsu Normal University, Xuzhou 221116, China
2 School of Computer Science & School of Physics and Information Technology, Shaanxi Normal University, Xi’an 710119, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2020, 8(10), 1650; https://doi.org/10.3390/math8101650
Submission received: 18 August 2020 / Revised: 22 September 2020 / Accepted: 22 September 2020 / Published: 24 September 2020
(This article belongs to the Special Issue Evolutionary Computation 2020)

Abstract
Opposition-based learning (OBL) has been widely used to improve many swarm intelligence (SI) optimization algorithms for continuous problems during the past few decades. When SI optimization algorithms apply OBL to discrete problems, the key issue is how to construct and exploit the opposite solution. Ant colony optimization (ACO), generally used to solve combinatorial optimization problems, is a classical SI optimization algorithm. In this paper, an opposition-based ACO that incorporates OBL is proposed to solve the symmetric traveling salesman problem (TSP). Two strategies for constructing the opposite path by OBL, based on the solution characteristics of TSP, are also proposed. Then, in order to use the information of the opposite path to improve the performance of ACO, three different strategies for the pheromone update rule (direct, indirect, and random) are discussed individually. According to how the opposite solution is constructed and how it is used in pheromone updating, three improved ant colony algorithms are proposed. To verify the feasibility and effectiveness of these strategies, two kinds of ACO algorithms are employed to solve TSP instances. The results demonstrate that the performance of opposition-based ACO is better than that of ACO without OBL.

1. Introduction

As an important branch of computational intelligence, swarm intelligence (SI) [1] provides a competitive approach for dealing with large-scale, nonlinear, and complex problems and has become an important research direction of artificial intelligence. In the SI model, individuals collectively form an organic whole by simulating the behavior of natural biological groups. Although each individual is very simple, the group exhibits complex emergent behavior. In particular, SI does not require prior knowledge of the problem and is inherently parallel, so it has significant advantages on problems that are difficult for traditional optimization algorithms. With the deepening of research, more and more swarm intelligence algorithms have been proposed, such as the ant colony optimization (ACO) algorithm [2], particle swarm optimization (PSO) [3], the artificial bee colony (ABC) algorithm [4], the firefly algorithm (FA) [5], the cuckoo search algorithm (CS) [6], the krill herd algorithm [7], monarch butterfly optimization (MBO) [8], and the moth search algorithm [9].
ACO, one of the typical SI algorithms, was first proposed by Marco Dorigo [2] based on observations of the group behavior of ants in nature. During food searching, ants release pheromones along the paths they pass through. These pheromones can be detected by other ants and affect their subsequent path choices. Generally, the shorter the path, the more intense the pheromone becomes, which means the shortest path is chosen with the highest probability, while the pheromone on other paths evaporates over time. Therefore, given enough time, the optimal path accumulates the most pheromone, and the ants eventually find the shortest path from their nest to the food source.
ACO has advantages such as good robustness, distributed parallel computation, and easy combination with other algorithms. It has been successfully applied in many fields, including the traveling salesman problem (TSP) [10,11], the satellite control resource scheduling problem [12], the knapsack problem [13,14], the vehicle routing problem [15,16], and continuous function optimization [17,18,19]. However, conventional ACO is still far from perfect due to issues such as premature convergence and long search time [20].
Many scholars have made substantial contributions to improving ACO, focusing mainly on two perspectives: model modification and algorithm combination. For example, in the line of model improvement, the ant colony system (ACS) [21] employs a pseudo-random proportional rule, which leads to faster convergence; in ACS, only the pheromone of the optimal path is increased after each iteration. To prevent premature convergence caused by excessive pheromone concentration on some paths, the max-min ant system (MMAS) [22] modifies the ant system (AS) with three main pheromone strategies: trail limits, initialization at the maximum value, and modified updating rules. To avoid blind search in the early stage, an improved ACO algorithm with unequally allocated initial pheromone is proposed in [23]; path selection is based on a pseudo-random state transition rule whose probability depends on the number of iterations and the current optimal solution. By introducing a penalty function into the pheromone update, a novel ACO algorithm is proposed in [24] to improve solution accuracy.
Considering the other primary kind of modification to the original ACO, algorithm combination, several approaches have been proposed as well. A multi-type ant system (MTAS) [25] combines ACS and MMAS, inheriting advantages from both algorithms. Combining particle swarm optimization (PSO) with ACO, a new ant colony algorithm named PS-ACO was proposed in [26]; PS-ACO simultaneously employs the pheromone update rules of ACO and the search mechanism of PSO to keep a trade-off between exploitation and exploration. Combining a decomposition-based multi-objective evolutionary algorithm with ACO, MOEA/D-ACO [27] decomposes a multi-objective optimization problem into a series of single-objective optimization subproblems. Executing ACO in combination with a genetic algorithm (GA), a new hybrid algorithm is proposed in [28]; by embedding GA into ACO, this method improves ACO in convergence speed and GA in search ability.
Besides the above primary improvement strategies considering model modification and algorithm combination, approaches based on machine learning have also been proposed in recent decades [29]. On the one hand, swarm intelligence can be used to solve optimization problems arising in deep learning. In deep neural networks, for example, convolutional neural networks (CNNs), hyperparameter optimization is an NP-hard problem, and SI methods can solve this kind of problem well. PSO, CS, and FA were employed to properly select dropout parameters for CNNs in [30]. A hybridized algorithm [31] based on the original MBO together with ABC and FA was proposed to solve CNN hyperparameter optimization. On the other hand, ideas from machine learning can be borrowed to improve the performance of SI. For example, information feedback models have been used to enhance the ability of algorithms [32,33,34]. In addition, opposition-based learning (OBL) [35], first proposed by Tizhoosh, is a well-known scheme. Its main idea is to calculate the opposite of every solution after the current iteration and then select the best among the generated solutions and their opposites for the next iteration. OBL has been widely adopted in SI, including ABC [36], differential evolution (DE) [37,38,39], and PSO [40,41], leading to good performance.
Since opposite solutions are convenient to construct for continuous problems, OBL has been used for continuous problems, as above, much more commonly than for discrete problems. As an example for discrete problems, OBL is combined with ACS in [42] and applied to the TSP to acquire better solutions; the solution construction phase and the pheromone update phase of ACS are the primary foci of that hybrid approach. Besides TSP, the graph coloring problem is also considered as a discrete optimization problem in [43], where an improved biogeography-based optimization algorithm based on OBL is proposed, introducing two different methods of opposition. In [44], a pretreatment step was added in the initial stage when a two-membered evolution strategy was used to solve the total rotation minimization problem; the opposite solutions generated by OBL are compared with the randomly generated initial solutions, and the better solutions are selected for the subsequent optimization process.
Inspired by the idea of OBL, a series of methods focusing on opposite solution construction and the pheromone update rule are proposed in this paper. Aiming to solve TSP, our proposed methods introduce OBL into ACO so that ACO is no longer limited to local optimal solutions, avoiding premature convergence and improving its performance.
The rest of this paper is organized as follows. In Section 2, the background knowledge of ACO and OBL is briefly reviewed. In Section 3, the opposition-based extensions to ACO are presented. In Section 4, the effectiveness of the improvements is verified through experiments. Section 5 presents the conclusions of this paper.

2. Background

In this section, we take the ant system (AS) as an example to introduce the main process of the ACO algorithm. At the same time, some necessary explanations of OBL are also given.

2.1. Ant System

TSP can be described as finding the shortest route for a salesman who needs to visit each city exactly once [45]. TSP is a classical combinatorial optimization problem commonly employed to test ACO algorithms and is therefore used here as well. TSP includes the symmetric TSP and the asymmetric TSP; we only discuss the symmetric TSP in this paper.
There are two primary steps in the AS algorithm: path construction and pheromone updating [2]. During the first step, a solution is constructed according to the random proportional rule, which can be described in detail as follows.
In the beginning, m ants are randomly assigned to n cities. At the t-th iteration, the probability, called the state transition probability, for the k-th ant to travel from city i to city j is

$$p_{ij}^{k}(t)=\begin{cases}\dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}(t)]^{\beta}}{\sum_{s\in J_k(i)}[\tau_{is}(t)]^{\alpha}\,[\eta_{is}(t)]^{\beta}}, & \text{if } j\in J_k(i)\\ 0, & \text{otherwise}\end{cases} \qquad (1)$$

where $\tau_{ij}$ is the pheromone trail and $\eta_{ij}$ is the heuristic information, while $\alpha$ and $\beta$ are parameters deciding their relative influence, respectively. Generally, $\eta_{ij}=1/d_{ij}$, where $d_{ij}$ is the length of edge $(i,j)$. $J_k(i)$ is the feasible neighborhood of the k-th ant at city i.
When all the ants have finished their tours, the pheromone is updated as follows:

$$\tau_{ij}(t+1)=(1-\rho)\,\tau_{ij}(t)+\sum_{k=1}^{m}\Delta\tau_{ij}^{k} \qquad (2)$$

where $\rho$ ($0<\rho\le 1$) is the evaporation rate and $\Delta\tau_{ij}^{k}$ represents the extra pheromone deposited on edge $(i,j)$ by the k-th ant, which is given by

$$\Delta\tau_{ij}^{k}(t)=\begin{cases}Q/L_{k}, & \text{if ant } k \text{ passes edge } (i,j)\\ 0, & \text{otherwise}\end{cases} \qquad (3)$$

where Q is the pheromone enhancement coefficient and $L_{k}$ is the total path length of the k-th ant.
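To make these update rules concrete, the following minimal Python sketch (our own illustrative reimplementation, not the authors' code) shows one way Equations (1)-(3) could be realized; cities are indexed from 0 here, and all function and variable names are ours.

```python
import numpy as np

def transition_probabilities(i, feasible, tau, eta, alpha=1.0, beta=2.0):
    """Equation (1): probabilities of moving from city i to each feasible city."""
    weights = (tau[i, feasible] ** alpha) * (eta[i, feasible] ** beta)
    return weights / weights.sum()

def update_pheromone(tau, paths, lengths, rho=0.05, Q=1.0):
    """Equations (2) and (3): evaporate, then deposit Q/L_k on each edge of each tour."""
    tau *= (1.0 - rho)                                 # evaporation term (1 - rho) * tau
    for path, L in zip(paths, lengths):
        for a, b in zip(path, path[1:] + path[:1]):    # edges of the closed tour
            tau[a, b] += Q / L                         # symmetric TSP: both directions
            tau[b, a] += Q / L
    return tau
```

In a full AS implementation, transition_probabilities would be sampled repeatedly to build each ant's tour, and update_pheromone would then be called once per iteration.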

2.2. Opposition-Based Learning

In the continuous domain, OBL is employed to evaluate the current solutions together with their opposite solutions; among them, the better ones are selected to boost the search capability [46]. The relevant definitions are given as follows.
Definition 1. Let $x \in \mathbb{R}$ be a real number defined on the interval $[a, b]$. The opposite number $\tilde{x}$ is defined according to the following formula

$$\tilde{x} = a + b - x \qquad (4)$$

Definition 2. Let $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ be a point in D-dimensional space, with $x_{ij} \in [a_{ij}, b_{ij}]$, $j = 1, 2, \ldots, D$. The opposite point $\tilde{X}_i = (\tilde{x}_{i1}, \tilde{x}_{i2}, \ldots, \tilde{x}_{iD})$ is defined by

$$\tilde{x}_{ij} = a_{ij} + b_{ij} - x_{ij} \qquad (5)$$
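As a quick numerical illustration of Definitions 1 and 2 (a sketch with made-up bounds, unrelated to any specific algorithm in this paper):

```python
import numpy as np

def opposite_point(x, a, b):
    """Equation (5): component-wise opposite of point x within the bounds [a, b]."""
    return a + b - x

x = np.array([0.2, -1.0, 3.5])
a = np.array([0.0, -2.0, 0.0])
b = np.array([1.0,  2.0, 5.0])
print(opposite_point(x, a, b))   # -> [0.8 1.  1.5]
```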
Experiments show that, if there is no prior knowledge of the optimization problem, the probability that the opposite solution reaches the global optimum is higher than that of a random solution [47]. Based on OBL, quasi-opposition-based learning [48] and quasi-reflection-based learning [49] were proposed later. In this paper, we only consider OBL.
Taking TSP as an example, its solution is a sequence of numbers representing the indices of cities. Unlike the continuous domain, it is challenging to construct opposite solutions for TSP because of its discrete nature. Therefore, only a few scholars have contributed to this topic, and Ergezer is one of them. In [43], Ergezer defines opposite paths according to the moving direction. For example, the initial path for n cities is given by
$$P = (1, 2, \ldots, n) \qquad (6)$$

where the entries stand for the order of the cities that the salesman travels through. Then, the corresponding opposite path in the clockwise (CW) direction is given by

$$P_{\mathrm{CW}} = \left(1,\; 1+\tfrac{n}{2},\; 2,\; 2+\tfrac{n}{2},\; \ldots,\; \tfrac{n}{2}-1,\; n-1,\; \tfrac{n}{2},\; n\right) \qquad (7)$$
where n is even.
When n is odd, an auxiliary city is appended to the end of the path to make n even; the opposite path is then found according to Equation (7), and the auxiliary city is removed afterwards. Since different moving directions may lead to different opposite paths, moving counterclockwise (CCW) will result in a different opposite solution from $P_{\mathrm{CW}}$.
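A small sketch of the clockwise opposition of Equation (7), including the auxiliary-city handling for odd n, might look as follows (illustrative code of ours, with a None placeholder standing in for the auxiliary city):

```python
def opposite_path_cw(path):
    """Equation (7): interleave the first and second halves of the path.
    For odd n, an auxiliary city is appended and removed afterwards."""
    p = list(path)
    odd = len(p) % 2 == 1
    if odd:
        p.append(None)                       # auxiliary city
    half = len(p) // 2
    opposite = []
    for a, b in zip(p[:half], p[half:]):
        opposite.extend([a, b])
    if odd:
        opposite.remove(None)                # drop the auxiliary city again
    return opposite

print(opposite_path_cw([1, 2, 3, 4, 5, 6]))  # -> [1, 4, 2, 5, 3, 6]
```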

3. Opposition-Based ACO

The method of constructing the opposite path based on OBL is given in this section. At the same time, in order to use the opposite path information, three frameworks of opposition-based ACO algorithms, namely ACO-Index, ACO-MaxIt, and ACO-Rand, are proposed. For clarity, each construction method of the opposite path is presented together with the corresponding algorithm. Details are given in the following subsections.

3.1. ACO-Index

According to the definition given in Equation (7), the same route may lead to different opposite paths. Taking a TSP with six cities as an example, path (1, 2, 3, 4, 5, 6) and path (2, 3, 4, 5, 6, 1) describe the same tour; however, their opposite paths, (1, 4, 2, 5, 3, 6) and (2, 5, 3, 6, 4, 1), are different.
In addition, the initialization procedure of ACO is not random, unlike that of DE, but closer to a greedy algorithm, since a nearby city is selected according to the state transition rule. Therefore, opposite paths are generally longer than the original ones and cannot be used directly for pheromone updating. To address these shortcomings, a novel ACO algorithm, named ACO-Index, is proposed based on a modified strategy of opposite path construction.
Opposite path construction is mainly composed of two steps: path sorting and determination of the opposite path. Suppose the number of cities n is even. During path sorting, the path P is arranged as a cycle, and a particular city A is appointed as the starting city with index 1; the rest of the cities are given indices according to their position in this cycle. In this way, we obtain the indices P_ind = (1, 2, ..., n).
During the second step, the indices of the opposite path, P_ind^CW, are found through Equation (7), and the opposite path P_CW is then recovered from these indices.
Moreover, when the number of cities n is odd, an auxiliary index is added to the end of P_ind, giving P_aux. According to Equation (7), we obtain the opposite indices P_aux^CW, whose last index is the auxiliary index itself. Removing this last index yields P_ind^CW, from which the opposite path P_CW is determined.
In this way, opposite paths for different paths that share the same cyclic route are the same. The pseudocode for opposite path construction is given in Algorithm 1.
Algorithm 1 Constructing the opposite path
Input: original path P
1: Put the path back into a cycle
2: Appoint a specific city A with index 1
3: Appoint the other cities in this cycle with indices 2, 3, ..., n, and get the indices P_ind = (1, 2, ..., n)
4: if n is even then
5:    Calculate the opposite indices P_ind^CW according to Equation (7)
6: else
7:    Add an auxiliary index at the end of P_ind and get P_aux
8:    Calculate the opposite indices P_aux^CW according to Equation (7)
9:    Delete the final index from P_aux^CW and get P_ind^CW
10: end if
11: Calculate P_CW based on P_ind^CW
Output: opposite path P_CW
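Algorithm 1 could be rendered in Python roughly as below (our sketch, reusing opposite_path_cw from the earlier snippet; fixing city A as the smallest city label is our own arbitrary but rotation-invariant choice):

```python
def opposite_path_index(path, anchor=None):
    """Algorithm 1: rotate the cyclic path so that a fixed anchor city A gets index 1,
    apply the clockwise opposition of Equation (7) to the indices, and map back."""
    p = list(path)
    if anchor is None:
        anchor = min(p)                      # any fixed choice of city A works
    start = p.index(anchor)
    rotated = p[start:] + p[:start]          # city A now occupies index 1
    return opposite_path_cw(rotated)         # Equation (7) on the re-indexed path

print(opposite_path_index([1, 2, 3, 4, 5, 6]))   # -> [1, 4, 2, 5, 3, 6]
print(opposite_path_index([2, 3, 4, 5, 6, 1]))   # same cycle -> same opposite path
```

Both calls return the same opposite path, which is exactly the rotation invariance that Algorithm 1 is designed to provide.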
Although some opposite paths may be longer than the optimal path, they still contain useful information, which inspires us to use them to modify the pheromone reasonably. For ACO algorithms, if the number of ants is m, the number of paths used for pheromone updating should also be m. In the proposed ACO-Index, the top m_1 shortest original paths and the top m_2 shortest opposite paths are chosen to form the m = m_1 + m_2 paths. Algorithm 2 presents the pseudocode for pheromone updating.
Algorithm 2 Updating pheromone
Input: original paths and opposite paths
1: Sort original paths and opposite paths by length
2: Select the top m_1 shortest original paths and the top m_2 shortest opposite paths
3: Construct m = m_1 + m_2 new paths
4: Update pheromone according to Equation (2)
Output: pheromone trail on each path
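Under the same naming assumptions as the earlier snippets (length_of is an assumed tour-length function, and update_pheromone is the Equation (2) sketch above), Algorithm 2 might look like:

```python
def update_pheromone_obl(tau, paths, opp_paths, length_of,
                         m1=40, m2=10, rho=0.05, Q=1.0):
    """Algorithm 2: keep the m1 shortest original paths and the m2 shortest
    opposite paths, then apply the standard update of Equation (2)."""
    selected = (sorted(paths, key=length_of)[:m1]
                + sorted(opp_paths, key=length_of)[:m2])   # m = m1 + m2 paths
    lengths = [length_of(p) for p in selected]
    return update_pheromone(tau, selected, lengths, rho=rho, Q=Q)
```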
Algorithm 3 shows the pseudocode for the primary steps of ACO-Index, with N_max total iterations.
Algorithm 3 ACO-Index algorithm
Input: parameters m, n, α, β, ρ, Q, m_1, m_2, N_max
1: Initialize pheromone and heuristic information
2: for iteration index N_c = 1 to N_max do
3:    for k = 1 to m do
4:       Construct paths according to Equation (1)
5:       Construct opposite paths through Algorithm 1
6:    end for
7:    Update pheromone according to Algorithm 2
8: end for
Output: the optimal path
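Putting the pieces together, the main loop of ACO-Index could be sketched as follows; construct_path stands for the probabilistic tour construction of Equation (1) and is assumed rather than shown, and the whole snippet is our illustration, not the authors' implementation:

```python
import numpy as np

def aco_index(dist, construct_path, m=50, m1=40, m2=10,
              n_max=2000, rho=0.05, Q=1.0):
    """Algorithm 3: each iteration builds m tours, derives their opposite tours
    via Algorithm 1, and updates pheromone from the combined pool (Algorithm 2)."""
    n = len(dist)
    tau = np.ones((n, n))                        # pheromone initialization
    eta = 1.0 / (np.asarray(dist) + np.eye(n))   # heuristic information for Equation (1)
    length_of = lambda p: sum(dist[a][b] for a, b in zip(p, p[1:] + p[:1]))
    best = None
    for _ in range(n_max):
        paths = [construct_path(tau, eta) for _ in range(m)]        # Equation (1)
        opp_paths = [opposite_path_index(p) for p in paths]         # Algorithm 1
        tau = update_pheromone_obl(tau, paths, opp_paths, length_of,
                                   m1, m2, rho, Q)                  # Algorithm 2
        it_best = min(paths + opp_paths, key=length_of)
        if best is None or length_of(it_best) < length_of(best):
            best = it_best
    return best
```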

3.2. ACO-MaxIt

Although ACO-Index improves ACO with a better opposite path construction strategy, it inherits a similar opposite path generation method from [43]. In this section, a novel opposite path generation method, together with a novel pheromone update rule, is proposed as an improved ACO algorithm named ACO-MaxIt, which is described in detail as follows.
The mirror point M is defined by
$$M = \left\lceil \frac{1+n}{2} \right\rceil \qquad (8)$$
where · denotes the ceiling operator.
Considering the case when n is odd, the opposite city C ˜ for the current city C could be defined as follows:
$$\tilde{C} = \begin{cases} C, & \text{if } C = M \\ C + M, & \text{if } C < M \\ C - M, & \text{if } C > M \end{cases} \qquad (9)$$
Considering the case when n is even, the opposite city C ˜ for the current city C could be defined as follows
$$\tilde{C} = \begin{cases} C, & \text{if } C = n/2 \text{ or } C = n/2+1 \\ C + M, & \text{else if } C < M \\ C - M, & \text{else if } C > M \end{cases} \qquad (10)$$
The pseudocode for opposite path construction is shown in Algorithm 4.
Algorithm 4 Constructing the opposite path based on the mirror point
Input: original path
1: Determine the mirror point M according to Equation (8)
2: for C = 1 to n do
3:    Calculate C̃ through Equation (9) or Equation (10), according to the parity of n
4: end for
Output: opposite path
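A compact sketch of the mirror-point opposition of Equations (8)-(10) and Algorithm 4 (city labels assumed to run from 1 to n, as in the paper; the code is our illustration):

```python
import math

def mirror_city(c, n):
    """Equations (8)-(10): map city label c (1..n) to its mirror-point opposite."""
    m = math.ceil((1 + n) / 2)                   # Equation (8)
    if n % 2 == 1:                               # Equation (9), n odd
        if c == m:
            return c
        return c + m if c < m else c - m
    if c in (n // 2, n // 2 + 1):                # Equation (10), n even:
        return c                                 # the two central cities stay fixed
    return c + m if c < m else c - m

def opposite_path_mirror(path):
    """Algorithm 4: replace every city of the tour by its mirror-point opposite."""
    n = len(path)
    return [mirror_city(c, n) for c in path]

print(opposite_path_mirror([1, 2, 3, 4, 5, 6]))  # -> [5, 6, 3, 4, 1, 2]
```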
The pheromone update process consists of two stages. In the first stage, when $N_c \le g N_{\max}$ with $0 < g < 1$, opposite paths are constructed through Algorithm 4 and the pheromone is updated according to Algorithm 2. In the later stage, when $N_c > g N_{\max}$, no more opposite paths are calculated, and the pheromone is updated according to Equation (2) only.
The pseudocode of ACO-MaxIt is presented in Algorithm 5.
Algorithm 5 ACO-MaxIt algorithm
Input: parameters m, n, α, β, ρ, Q, g, m_1, m_2, N_max
1: Initialize pheromone and heuristic information
2: for iteration index N_c = 1 to N_max do
3:    for k = 1 to m do
4:       Construct paths according to Equation (1)
5:    end for
6:    if N_c ≤ g N_max then
7:       Construct opposite paths according to Algorithm 4
8:       Update pheromone according to Algorithm 2
9:    else
10:      Update pheromone according to Equation (2)
11:   end if
12: end for
Output: the optimal solution
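The distinguishing step of ACO-MaxIt is the stage switch at g·N_max. A minimal sketch of that part of Algorithm 5, reusing the helper functions from the earlier snippets, could be:

```python
def aco_maxit_update(tau, paths, length_of, nc, n_max, g=0.5,
                     m1=40, m2=10, rho=0.05, Q=1.0):
    """Algorithm 5, pheromone stage: opposite paths are used only while Nc <= g*Nmax."""
    if nc <= g * n_max:
        opp_paths = [opposite_path_mirror(p) for p in paths]        # Algorithm 4
        return update_pheromone_obl(tau, paths, opp_paths, length_of,
                                    m1, m2, rho, Q)                 # Algorithm 2
    lengths = [length_of(p) for p in paths]
    return update_pheromone(tau, paths, lengths, rho, Q)            # Equation (2)
```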

3.3. ACO-Rand

In the pheromone update stage of ACO-Index and ACO-MaxIt, the decision of when to calculate the opposite paths is made based on experience. Therefore, in this section, another strategy to update the pheromone is presented; the resulting algorithm is named ACO-Rand, since whether or not to construct the opposite path is decided by two random variables.
The whole procedure of ACO-Rand is much like that of ACO-MaxIt; however, two random variables $R_0$ and R are introduced. $R_0$ is chosen randomly but fixed once generated, while R is randomly drawn during each iteration. The pseudocode of ACO-Rand is given in Algorithm 6.
Algorithm 6 ACO-Rand algorithm
Input: parameters m, n, α, β, ρ, Q, m_1, m_2, N_max, R_0
1: Initialize pheromone and heuristic information
2: for iteration index N_c = 1 to N_max do
3:    for k = 1 to m do
4:       Construct paths according to Equation (1)
5:    end for
6:    Generate a random variable R
7:    if R < R_0 then
8:       Construct opposite paths according to Algorithm 4
9:       Update pheromone according to Algorithm 2
10:   else
11:      Update pheromone according to Equation (2)
12:   end if
13: end for
Output: the optimal solution
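In ACO-Rand only the trigger changes: a fresh random number R is drawn every iteration and compared with the fixed threshold R_0. A sketch of that decision, under the same assumptions as before:

```python
import random

def aco_rand_update(tau, paths, length_of, r0=0.6,
                    m1=40, m2=10, rho=0.05, Q=1.0):
    """Algorithm 6, pheromone stage: opposite paths are used when R < R0."""
    if random.random() < r0:                                        # draw R each iteration
        opp_paths = [opposite_path_mirror(p) for p in paths]        # Algorithm 4
        return update_pheromone_obl(tau, paths, opp_paths, length_of,
                                    m1, m2, rho, Q)                 # Algorithm 2
    lengths = [length_of(p) for p in paths]
    return update_pheromone(tau, paths, lengths, rho, Q)            # Equation (2)
```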

3.4. Time Complexity Analysis

The main steps of the three improved ant colony algorithms are initialization, solution construction, and pheromone updating. The time complexity of initialization is $O(n^2 + m)$, that of solution construction is $O(mn^2)$, and that of pheromone updating is $O(n^2)$. In addition, the time complexity of constructing and sorting the opposite solutions is $O(n^2)$. Therefore, the overall complexity of each algorithm is $O(N_{\max} m n^2)$, which is the same as that of the basic ant colony algorithm. Hence, the improved algorithms do not significantly increase the running time.

4. Experiments and Results

AS and PS-ACO are employed as the base ACO algorithms to verify the feasibility of the three opposition-based ACO algorithms. The experiments were performed in the following hardware and software environment: the CPU is a Core [email protected] GB, the RAM is 16 GB, and the operating system is Windows 10. TSP instances are taken from TSPLIB (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/tsp/).

4.1. Parameter Setting

In the following experiments, the parameters are set as m = 50, m_1 = 40, m_2 = 10, α = 1, β = 2, ρ = 0.05, Q = 1, N_max = 2000, g = 0.5 for ACO-MaxIt, and R_0 = 0.6 with R ∈ (0, 1) for ACO-Rand. Twenty independent runs are carried out for each instance. The minimum solution S_min, maximum solution S_max, average solution S_avg, standard deviation Std, and average runtime T_avg over the 20 runs are given in the tables, where S_min, S_max, and S_avg are reported as the percentage deviation from the known optimal solution. The best value in each comparison is shown in bold in the tables.
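For reference, the reported statistics can be computed from the tour lengths of the 20 runs as in the sketch below (our own code; taking Std over the raw tour lengths rather than the percentage deviations is an assumption on our part, consistent with the magnitudes in the tables):

```python
import numpy as np

def summarize(run_lengths, optimum):
    """Percentage deviations from the known optimum over independent runs."""
    runs = np.asarray(run_lengths, dtype=float)
    dev = (runs - optimum) / optimum * 100.0
    return {"S_min": dev.min(), "S_max": dev.max(),
            "S_avg": dev.mean(), "Std": runs.std()}
```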

4.2. Experimental Results Comparison Based on AS

First, we apply AS within the three opposition-based frameworks, yielding AS-Index, AS-MaxIt, and AS-Rand, to verify the effectiveness of the improvements. The twenty-six TSP instances are divided into three categories, small-scale, medium-scale, and large-scale, according to the number of cities.
Small-scale city example sets are selected from TSPLIB, including eil51, st70, pr76, kroA100, eil101, bier127, pr136, pr152, u159, and rat195. The results are shown in Table 1.
From Table 1, the proposed AS-Index, AS-MaxIt, and AS-Rand show superior performance over AS for st70, kroA100, eil101, bier127, pr136, and u159. For the other instances, the proposed algorithms generally outperform AS, except on eil51. Meanwhile, stability as measured by the standard deviation is not the primary concern when evaluating an algorithm. Among the three proposed algorithms, AS-MaxIt shows superior performance in most cases.
To show more details of the evolutionary process, curves for different stages are given in Figure 1 for kroA100.
According to Figure 1, AS shows a faster convergence speed than the three proposed methods in early iterations, while AS-Index, AS-MaxIt, and AS-Rand all surpass AS in average path length in later iterations. Meanwhile, AS-MaxIt performs best among all these algorithms, which also confirms the results in Table 1.
In the early stage, the opposite path information introduced by OBL has a negative impact on the convergence speed of all three proposed algorithms; however, it provides extra information that boosts accuracy in the later stage. This is because the extra information from opposite paths helps to increase the diversity of the population, which balances the exploration and exploitation of the solution space.
Medium-scale city example sets are selected from TSPLIB, including kroA200, ts225, tsp225, pr226, pr299, lin318, fl417, pr439, pcb442, and d493. The results are shown in Table 2.
From Table 2, it can be seen that the proposed algorithms outperform AS in all cases except ts225. Among the three algorithms, AS-Index and AS-MaxIt perform similarly and generally better than AS-Rand. These results show that, with the help of the extra information from opposite paths, all three proposed methods improve the solution accuracy of the original AS.
Taking fl417 as an example, detailed evolutionary curves for different iteration stages are given in Figure 2. According to Figure 2, AS again converges faster than the three proposed methods in early iterations, for example, before 1000 iterations, while in later iterations the three proposed methods all exceed AS in average path length. This further validates the conclusions drawn from Figure 1.
Large-scale city example sets are selected from TSPLIB, including att532, rat575, d657, u724, vm1084, and rl1304. The results are shown in Table 3.
From Table 3, it can be seen that AS-Index shows obviously superior performance over all the other algorithms, which indicates that the advantage of AS-Index becomes more pronounced as the scale of the instance increases.
Taking vm1084 as an example, detailed evolutionary curves for different iteration stages are given in Figure 3. According to Figure 3, AS still shows a faster convergence speed than the three proposed methods in early iterations, but AS-Index outperforms all the others in the end.
Based on all the tables and figures, it can be found that, in most scenarios, at least one of AS-Index, AS-MaxIt, and AS-Rand outperforms AS in average path length. For small-scale instances, AS-MaxIt shows better performance; for medium-scale instances, AS-Index and AS-MaxIt perform similarly and better than the others; and for large-scale instances, AS-Index is the best algorithm, while AS-Rand ranks in the middle in most cases, illustrating its stability to some extent. Therefore, it can be concluded that introducing OBL into AS provides more information, namely better exploration capability, which explains the superiority of the proposed methods over the original AS. By comparing the running times in Table 1, Table 2 and Table 3, we can also see that the running times of the three improved algorithms are not significantly longer than that of AS, which validates the previous discussion of time complexity.

4.3. Experimental Results Comparison Based on PS-ACO

To further verify the effectiveness of the proposed strategies, we applied PS-ACO within the three opposition-based frameworks, yielding PS-ACO-Index, PS-ACO-MaxIt, and PS-ACO-Rand. The number of ants is 50, and the other parameters are the same as in [26]. The twelve TSP instances are eil51, st70, kroA100, pr136, u159, rat195, tsp225, pr299, lin318, fl417, att532, and d657. The results are given in Table 4.
From Table 4, the proposed PS-ACO-Index, PS-ACO-MaxIt, and PS-ACO-Rand show superior performance over PS-ACO for eil51, st70, rat195, tsp225, and pr299. For the other instances, the proposed algorithms generally outperform PS-ACO, except on lin318 and fl417. Among the three proposed algorithms, PS-ACO-Rand shows superior performance in most cases. Comparing the running times, we can also see that the running times of the three improved algorithms are not significantly longer than that of PS-ACO.

5. Conclusions

Swarm optimization algorithms based on OBL have shown advantages when handling continuous optimization problems; however, only a few approaches have been proposed for discrete optimization problems, and the difficulty of constructing the opposite solution is considered one of the main reasons. To address this problem, two different strategies for constructing opposite paths, a direct one and an indirect one, are presented in this paper. The indirect strategy, rather than operating on the order of cities in the current solution directly, considers the positions of the cities (their indices) rearranged in a cycle and then calculates the opposite indices, whereas the direct strategy carries out the opposite operation directly on the cities in each path.
To use the information of the opposite path, three frameworks of opposition-based ACO, called ACO-Index, ACO-MaxIt, and ACO-Rand, are also proposed; in all three frameworks, all ants contribute to the pheromone increment. Among the three, ACO-Index employs the indirect strategy to construct the opposite path and introduces it into the pheromone update throughout the run. ACO-MaxIt employs the direct strategy to obtain the opposite path but only adopts it in the early update period. Similar to ACO-MaxIt in opposite path construction, ACO-Rand randomly decides, throughout the run, whether to employ the opposite path in each pheromone update. To verify the effectiveness of the improvement strategies, AS and PS-ACO are used in the three frameworks, respectively. Experiments demonstrate that all three methods, AS-Index, AS-MaxIt, and AS-Rand, outperform the original AS for small-scale and medium-scale instances, while AS-Index performs best for large-scale instances. The three improved PS-ACO variants also show good performance.
The opposite path construction proposed in this paper is only suitable for the symmetric TSP, mainly because the path (solution) of the problem is a permutation that does not consider direction. For the asymmetric TSP, the method would need to be modified, and for more general combinatorial optimization problems, how to construct the opposite solution must be restudied according to the characteristics of the problem. Therefore, our current method of constructing opposite solutions is not universal, which is one limitation of this study. At the same time, the improved algorithms require all ants to participate in pheromone updating in order to use the information of the opposite path, whereas many algorithms use only the best ant to update the pheromone, so the method in this paper has some limitations when extended to other ant colony algorithms. Nevertheless, we find that applying opposition-based learning to combinatorial optimization problems is effective. Therefore, our future research will proceed along two lines. On the one hand, we plan to study more general construction methods of opposite solutions for combinatorial optimization problems, so as to improve their generality, and to apply them to practical problems such as path optimization to further expand the scope of application. On the other hand, applying OBL to more widely used algorithms is also an interesting and promising topic; we plan to study more effective uses of the opposite solution and extend it to more widely used ACO variants, such as MMAS and ACS, and even to other optimization algorithms such as PSO and ABC, to solve more combinatorial optimization problems effectively.

Author Contributions

Conceptualization, Z.Z.; methodology, Z.Z. and Z.X.; software, Z.X. and X.L.; formal analysis, Z.Z. and Z.X.; resources, Z.Z.; writing (original draft preparation), Z.X.; writing (review and editing), Z.Z. and S.L.; supervision, Z.Z. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61703256, 61801197), Jiangsu Natural Science Foundation (Grant No. BK20181004), Natural Science Basic Research Plan In Shaanxi Province of China (Program No. 2017JQ6070), and the Fundamental Research Funds for the Central Universities (Grant No. GK201603014, GK201803020).

Acknowledgments

The authors are grateful to the anonymous reviewers and the editor for the constructive comments and valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Karaboga, D.; Akay, B. A survey: Algorithms simulating bee swarm intelligence. Artif. Intell. Rev. 2009, 31, 61–85.
2. Dorigo, M.; Maniezzo, V.; Colorni, A. The ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 1996, 26, 29–41.
3. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
4. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132.
5. Yang, X.S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84.
6. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35.
7. Wang, G.G.; Guo, L.G.; Gandomi, A.H.; Hao, G.S.; Wang, H. Chaotic krill herd algorithm. Inf. Sci. 2014, 274, 17–34.
8. Wang, G.G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014.
9. Wang, G.G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memet. Comput. 2018, 10, 151–164.
10. Mollajafari, M.; Shahhoseini, H.S. An efficient ACO-based algorithm for scheduling tasks onto dynamically reconfigurable hardware using TSP-likened construction graph. Appl. Intell. 2016, 45, 695–712.
11. Elloumi, W.; El Abed, H.; Abraham, A.; Alimi, A.M. A comparative study of the improvement of performance using a PSO modified by ACO applied to TSP. Appl. Soft Comput. 2014, 25, 234–241.
12. Zhang, Z.Z.; Hu, F.N.; Zhang, N. Ant colony algorithm for satellite control resource scheduling problem. Appl. Intell. 2018, 48, 3295–3305.
13. Rahim, S.; Javaid, N.; Ahmad, A.; Khan, S.A.; Khan, Z.A.; Alrajeh, N.; Qasim, U. Exploiting heuristic algorithms to efficiently utilize energy management controllers with renewable energy sources. Energy Build. 2016, 129, 452–470.
14. Bhattacharjee, K.K.; Sarmah, S.P. Modified swarm intelligence based techniques for the knapsack problem. Appl. Intell. 2017, 46, 158–179.
15. Huang, S.H.; Huang, Y.H.; Blazquez, C.A.; Paredes-Belmar, G. Application of the ant colony optimization in the resolution of the bridge inspection routing problem. Appl. Soft Comput. 2018, 65, 443–461.
16. Lee, C.Y.; Lee, Z.J.; Lin, S.W.; Ying, K.C. An enhanced ant colony optimization (EACO) applied to capacitated vehicle routing problem. Appl. Intell. 2010, 32, 88–95.
17. Kumar, A.; Thakur, M.; Mittal, G. A new ants interaction scheme for continuous optimization problems. Int. J. Syst. Assur. Eng. Manag. 2018, 9, 784–801.
18. Yang, Q.; Chen, W.N.; Yu, Z.; Gu, T.; Li, Y.; Zhang, H.; Zhang, J. Adaptive multimodal continuous ant colony optimization. IEEE Trans. Evol. Comput. 2017, 21, 191–205.
19. Liao, T.J.; Stützle, T.; Oca, M.A.M.; Dorigo, M. A unified ant colony optimization algorithm for continuous optimization. Eur. J. Oper. Res. 2014, 234, 597–609.
20. Dorigo, M.; Blum, C. Ant colony optimization theory: A survey. Theor. Comput. Sci. 2005, 344, 243–278.
21. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66.
22. Stützle, T.; Hoos, H.H. Max-min ant system. Future Gener. Comput. Syst. 2000, 16, 889–914.
23. Luo, Q.; Wang, H.; Zheng, Y.; He, J. Research on path planning of mobile robot based on improved ant colony algorithm. Future Gener. Comput. Syst. 2020, 32, 1555–1566.
24. Huang, M.; Ding, P. An improved ant colony algorithm and its application in vehicle routing problem. Future Gener. Comput. Syst. 2013, 2013, 1–9.
25. Deng, Y.; Zhu, W.; Li, H.; Zheng, Y.H. Multi-type ant system algorithm for the time dependent vehicle routing problem with time windows. J. Syst. Eng. Electron. 2018, 29, 625–638.
26. Shuang, B.; Chen, J.; Li, Z. Study on hybrid PS-ACO algorithm. Appl. Intell. 2011, 34, 64–73.
27. Ke, L.J.; Zhang, Q.F.; Battiti, R. MOEA/D-ACO: A multiobjective evolutionary algorithm using decomposition and ant colony. IEEE Trans. Cybern. 2013, 43, 1845–1859.
28. Akpınar, S.; Bayhan, G.M.; Baykasoglu, A. Hybridizing ant colony optimization via genetic algorithm for mixed-model assembly line balancing problem with sequence dependent setup times between tasks. Appl. Soft Comput. 2013, 13, 574–589.
29. Ting, T.O.; Yang, X.S.; Cheng, S.; Huang, K. Hybrid metaheuristic algorithms: Past, present, and future. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Yang, X.S., Ed.; Springer International Publishing: Cham, Switzerland, 2015; pp. 71–83.
30. Rosa, G.H.D.; Papa, J.P.; Yang, X.S. Handling dropout probability estimation in convolution neural networks using meta-heuristics. Soft Comput. 2018, 22, 6147–6156.
31. Bacanin, N.; Bezdan, T.; Tuba, E.; Strumberger, I.; Tuba, M. Monarch butterfly optimization based convolutional neural network design. Mathematics 2020, 8, 936.
32. Wang, G.G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2019, 49, 542–555.
33. Gao, D.; Wang, G.G.; Pedrycz, W. Solving fuzzy job-shop scheduling problem using DE algorithm improved by a selection mechanism. IEEE Trans. Fuzzy Syst. 2020.
34. Li, W.; Wang, G.G.; Alavi, A.H. Learning-based elephant herding optimization algorithm for solving numerical optimization problems. Knowl. Based Syst. 2020, 195, 105675.
35. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2017, 39, 1–23.
36. Wang, B. A novel artificial bee colony algorithm based on modified search strategy and generalized opposition-based learning. J. Intell. Fuzzy Syst. 2015, 28, 1023–1037.
37. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
38. Chen, J.; Cui, G.; Duan, H. Multipopulation differential evolution algorithm based on the opposition-based learning for heat exchanger network synthesis. Numer. Heat Transf. Part A Appl. 2017, 72, 126–140.
39. Park, S.Y.; Lee, J.J. Stochastic opposition-based learning using a beta distribution in differential evolution. IEEE Trans. Cybern. 2016, 46, 2184–2194.
40. Dong, W.; Kang, L.; Zhang, W. Opposition-based particle swarm optimization with adaptive mutation strategy. Soft Comput. 2017, 21, 5081–5090.
41. Kang, Q.; Xiong, C.; Zhou, M.; Meng, L. Opposition-based hybrid strategy for particle swarm optimization in noisy environments. IEEE Access 2018, 6, 21888–21900.
42. Malisia, A.R.; Tizhoosh, H.R. Applying opposition-based ideas to the ant colony system. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS), Honolulu, HI, USA, 1–5 April 2007; pp. 182–189.
43. Ergezer, M.; Simon, D. Oppositional biogeography-based optimization for combinatorial problems. In Proceedings of the 2011 IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 1496–1503.
44. Srivastava, G.; Singh, A. Boosting an evolution strategy with a preprocessing step: Application to group scheduling problem in directional sensor networks. Appl. Intell. 2018, 48, 4760–4774.
45. Venkatesh, P.; Alok, S. A swarm intelligence approach for the colored traveling salesman problem. Appl. Intell. 2018, 48, 4412–4428.
46. Sarkhel, R.; Das, N.; Saha, A.K.; Nasipuri, M. An improved harmony search algorithm embedded with a novel piecewise opposition based learning algorithm. Eng. Appl. Artif. Intell. 2018, 67, 317–330.
47. Wang, H.; Wu, Z.; Rahnamayan, S. Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems. Soft Comput. 2011, 15, 2127–2140.
48. Guha, D.; Roy, P.K.; Banerjee, S. Load frequency control of large scale power system using quasi-oppositional grey wolf optimization algorithm. Eng. Sci. Technol. Int. J. 2016, 19, 1693–1713.
49. Ewees, A.A.; Elaziz, M.A.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172.
Figure 1. Evolutionary curves for different iteration periods based on kroA100.
Figure 2. Evolutionary curves for different iteration periods based on fl417.
Figure 3. Evolutionary curves for different iteration periods based on vm1084.
Table 1. Results comparison for small-scale example sets.
Instance | Algorithm | S_min (%) | S_max (%) | S_avg (%) | Std | T_avg
eil51AS2.586.13.564.9465.21
AS-Index2.817.514.275.2664.73
AS-MaxIt2.587.044.256.6663.1
AS-Rand2.825.633.864.6763.76
st70AS5.037.416.225.1690.38
AS-Index3.857.566.037.05104.02
AS-MaxIt4.596.815.834.6389.87
AS-Rand4.598.156.255.4585.29
pr76AS6.039.117.49897.25101.6
AS-Index6.229.837.81926.53118.32
AS-MaxIt5.048.466.661151.295.9
AS-Rand4.718.787.18133695.72
kroA100AS4.866.785.3294.22146.29
AS-Index4.296.364.93111.14163.71
AS-MaxIt4.355.664.7980.21123.84
AS-Rand4.535.85.1156.40124.99
eil101AS8.1112.19.995.78140.25
AS-Index6.210.818.559.55140.2
AS-MaxIt7.1511.89.36.86128.16
AS-Rand7.1512.49.968.23145.56
bier127AS4.757.156.05828.19195.33
AS-Index4.076.755.32870.88176.32
AS-MaxIt3.346.825.03904.32173.58
AS-Rand3.526.85.05961.25175.14
pr136AS9.8313.511.82924.44189.68
AS-Index9.6412.4711.47832.93209.51
AS-MaxIt8.1312.3410.73976.21190.95
AS-Rand10.6812.6210.95890.29191.23
pr152AS4.37.345.79552.91214.30
AS-Index3.568.036.25804.54238.72
AS-MaxIt3.496.835.33739.761982
AS-Rand4.166.955.14537.49220.52
u159AS7.6710.36.86348.22219.92
AS-Index6.318.977.44349.95223.39
AS-MaxIt3.858.326.28479.93206.23
AS-Rand4.888.557.25412.44229.64
rat195AS3.929.387.4335.28286.38
AS-Index3.498.526.4737.02283.31
AS-MaxIt3.837.585.5937.69278.63
AS-Rand4.006.545.3117.01280.39
Table 2. Results comparison for medium-scale example sets.
Instance | Algorithm | S_min (%) | S_max (%) | S_avg (%) | Std | T_avg
kroA200AS10.3316.5412.36454.71330.43
AS-Index10.1713.0611.12243.66324.495
AS-MaxIt8.0913.4811.27367.58270.07
AS-Rand6.6913.0110.66485.83292.98
ts225AS3.534.273.89275.55345.62
AS-Index3.134.743.91488.06329.03
AS-MaxIt3.354.944.01560.16329.35
AS-Rand3.635.524.23607.16325.54
tsp225AS9.412.0310.3730.67346.81
AS-Index8.0212.169.938.02345.35
AS-MaxIt7.5112.8710.9146.8319.75
AS-Rand9.512.3910.8836.13355.0
pr226AS5.57.946.76632.0327.3
AS-Index4.967.286.47458.47348.97
AS-MaxIt4.297.246.34631.53335.54
AS-Rand4.397.55.92578.15325.31
pr299AS13.3119.4817.03732.30512.6
AS-Index9.4317.4114.951193.8495.6
AS-MaxIt15.5318.2416.91419.72487.8
AS-Rand11.3518.7316.41836.03500.35
lin318AS12.5717.1515.33517.72508.6
AS-Index11.817.1614.97544.18554.6
AS-MaxIt13.9316.8915.59376.06542.2
AS-Rand14.2817.7416.22415.81542.55
fl417AS8.1612.5510.61132.22811.45
AS-Index7.7912.579.91141.28804.6
AS-MaxIt8.3511.5510.3108.13773
AS-Rand8.6713.2410.39146.72795.35
pr439AS9.513.7411.561410.4881.25
AS-Index8.3811.7510.221010.9845.75
AS-MaxIt9.3415.9611.81555.1841.9
AS-Rand11.7315.9113.761205.6859.8
pcb442AS13.5718.8716.84610.98933.5
AS-Index11.6216.3714.22640.69930.7
AS-MaxIt13.6819.2716.66656.89899.35
AS-Rand14.5418.8516.68644.38888.45
d493AS13.419.1816.26459.031032.25
AS-Index12.2316.3114.71412.281043.25
AS-MaxIt14.0817.0115.7280.32969.5
AS-Rand13.4619.2116.4449.971063.8
Table 3. Results comparison for large-scale example sets.
Instance | Algorithm | S_min (%) | S_max (%) | S_avg (%) | Std | T_avg
att532AS13.7920.2117.31415.41407.05
AS-Index13.3719.4915.641203.41436.4
AS-MaxIt14.331816.15766.91384.65
AS-Rand14.6317.9816.34803.931460.35
rat575AS18.6922.5220.5.375.221651.5
AS-Index16.3120.7118.9461.31661.7
AS-MaxIt20.2424.3222.2380.881602.55
AS-Rand22.8826.9325.3686.121598
d657AS17.623.8821.97823.362685.45
AS-Index16.3221.519.7636.02673.65
AS-MaxIt19.424.0921.69526.442114.15
AS-Rand18.2123.8221.24615.172138.1
u724AS20.926.4324.19625.62630.95
AS-Index15.5323.4520.46871.172624.7
AS-MaxIt21.626.7224.05469.32651.65
AS-Rand19.7427.1324.3825.262609.7
vm1084AS22.6527.7825.763490.56722
AS-Index17.6924.7321.147006726.5
AS-MaxIt19.9126.5923.294476.66594
AS-Rand20.7225.7123.0431116695.5
rl1304AS19.5124.3721.763633.67436.5
AS-Index16.6521.6318.953632.37383
AS-MaxIt18.4524.1920.633709.88767.5
AS-Rand17.0922.8220.124359.28652.5
Table 4. Comparisons of PS-ACO, PS-ACO-Index, PS-ACO-MaxIt, and PS-ACO-Rand.
Instance | PS-ACO | PS-ACO-Index | PS-ACO-MaxIt | PS-ACO-Rand (for each algorithm: S_min (%), S_max (%), S_avg (%), Std, T_avg)
eil5100.70.520.8977.400.70.421.1581.5900.70.461.1578.0600.70.460.9478.58
st700.152.671.075.8499.3502.670.884.67121.440.152.960.935.6199.310.156922.523.7999.02
kroA1000.061.160.5458.85163.60.122.20.72107.74176.340.0610.5559.94151.0601.110.5479.47146.32
pr1367.6412.210.081200206.417.9813.1210.191452.7230.688.0311.979.961116.4203.957.4812.519.811274.4204.24
u15900.880.12112.74886.2500.880.17117.7875.500.220.0228.93860.800.750.0998.07868.45
rat1950.431.330.75.86882.250.431.210.64.24875.650.431.330.66.45907.850.430.990.573.88906.4
tsp2253.75.984.9323.781087.353.55.364.2121.141140.753.735.874.6625.181158.853.525.574.7620.511150.8
pr2999.7814.0811.68605.791927.89.0312.7810.63450.921919.157.814.7411.14885.131882.49.1714.4911.6792.391881.45
lin31810.2915.5912.53523.89571.911.4215.1413.16417.59651.49.6615.6412.85600.33608.7510.6415.7113.52498.17607.8
fl41710.5114.5612.43155.86782.611.117.4514.7226.592778.9511.4517.7714.17216.73714.310.3517.5614.45232.11724.6
att53219.925.0622.821106.81518.4519.5825.2622.421372.81529.2520.0225.322.31113.11342.420.5926.1723.2511141363.95
d65722.6727.0624.66534.72247.421.0427.9124.57780.932202.422.0825.7424.32547.22002.523.1728.1625.31697.212175.95
