Sensors
  • Article
  • Open Access

17 April 2019

Dynamic Flying Ant Colony Optimization (DFACO) for Solving the Traveling Salesman Problem

1 Department of Information System, College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
2 Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Computational Intelligence for Smart Sensor Networks: Possibilities and Prospects for Soft Computing-Based Sensors

Abstract

This paper presents an adaptation of the flying ant colony optimization (FACO) algorithm to solve the traveling salesman problem (TSP). This new modification is called dynamic flying ant colony optimization (DFACO). FACO was originally proposed to solve the quality of service (QoS)-aware web service selection problem. Many researchers have addressed the TSP, but most solutions could not avoid the stagnation problem. In FACO, a flying ant deposits pheromone by injecting it from a distance; therefore, not only the nodes on the path but also the neighboring nodes receive the pheromone. The amount of pheromone a neighboring node receives is inversely proportional to the distance between it and the node on the path. In this work, we modified the FACO algorithm to make it suitable for the TSP in several ways. For example, the number of neighboring nodes that received pheromones varied depending on the quality of the solution compared to the rest of the solutions. This helped to balance the exploration and exploitation strategies. We also embedded the 3-Opt algorithm to improve the solution by mitigating the effect of the stagnation problem. Moreover, the colony contained a combination of regular and flying ants. These modifications aim to help the DFACO algorithm obtain better solutions in less processing time and avoid getting stuck in local minima. This work compared DFACO with (1) ACO and five different methods using 24 TSP datasets and (2) parallel ACO (PACO)-3Opt using 21 TSP datasets. The empirical results showed that DFACO achieved the best results compared with ACO and the five different methods for most of the datasets (23 out of 24) in terms of the quality of the solutions. Further, it achieved better results compared with PACO-3Opt for most of the datasets (20 out of 21) in terms of solution quality and execution time.

1. Introduction

The traveling salesman problem (TSP) [1] involves finding the shortest tour for a salesperson who must visit each city in a group of fully connected cities exactly once. The TSP is a discrete optimization problem and a classic example of the category of computing problems known as NP-hard problems [2,3]. Although there are simple algorithms for solving these problems, they require exponential time, which makes them impractical for large problem instances. Hence, metaheuristic optimization algorithms are usually applied to find good solutions, although these solutions may not be optimal.
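For concreteness, the quantity being minimized is simply the length of a closed tour that visits every city exactly once. The following Java sketch (class and method names are illustrative, not part of the authors' implementation) evaluates this objective for a symmetric distance matrix:

```java
/** Illustrative helper: length of a closed TSP tour over a symmetric distance matrix. */
public final class TourLength {
    /**
     * @param dist dist[i][j] = distance between city i and city j
     * @param tour a permutation of 0..n-1 giving the visiting order
     * @return total length of the tour, including the return to the start city
     */
    public static double of(double[][] dist, int[] tour) {
        double length = 0.0;
        for (int i = 0; i < tour.length; i++) {
            int from = tour[i];
            int to = tour[(i + 1) % tour.length]; // wrap around to close the tour
            length += dist[from][to];
        }
        return length;
    }
}
```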
The TSP can be used to model several wireless sensor network (WSN) problems [4]. In a WSN, sensors are deployed in a sensing field to collect data and send these data wirelessly to the source node. There are two ways to increase the lifetime of the sensors: first, by reducing the size and amount of the data [5,6], and second, by reducing the cost of transferring the data [4]. For example, a good TSP tour can serve as an efficient dissemination route that reduces this transfer cost.
Many methods, both heuristic and hybrid, have been proposed for solving the TSP, but most of them either cannot avoid the stagnation problem or obtain good solutions only after a long execution time [7]. In this work, we enhanced the ant colony optimization (ACO) algorithm based on imaginary ants that can fly. These ants deposit their pheromones on neighboring nodes while flying by injecting them from a distance. This allows not only the nodes on a good path but also their neighboring nodes to receive some pheromone. The algorithm also makes use of the 3-Opt algorithm to help avoid getting trapped in a local minimum.
The main contributions of this work with respect to the flying ACO (FACO) algorithm are as follows: (1) proposing a dynamic neighbor-selection mechanism to balance exploration and exploitation, (2) reducing the execution time of FACO by making only half of the ants flying ants, and (3) adapting the flying process to work with the TSP.
The main contributions of this work in general are as follows: (1) obtaining better-quality solutions, (2) significantly reducing the execution time, and (3) avoiding getting stuck at a local minimum.
The paper is organized as follows: Section 2 discusses related works; Section 3 presents the proposed enhancement of the ACO algorithm; Section 4 shows the experimental results; and Section 5 concludes the paper.

3. Dynamic Flying Ant Colony Optimization (DFACO) Algorithm

Many methods, both heuristic and hybrid, have been proposed for solving the TSP, but most of them cannot avoid the stagnation problem, or they obtain good solutions only after a long execution time [7]. In this work, we propose an enhanced ACO algorithm that finds better solutions in less computation time and includes a mechanism to avoid the stagnation problem.
In this section, we present the modifications that make the FACO algorithm suitable for the TSP.
First, in FACO, the number of neighboring nodes was static, determined empirically. In DFACO, the number of neighbors that could be injected with pheromones was dynamic: it varied in each iteration depending on the quality of the best solution reached so far compared to the other solutions. The number of neighbors was determined based on two cases: (1) if the best solution was only slightly better than the other solutions, the number of neighbors was made large so that more nodes received pheromone, encouraging exploration in future iterations; (2) if the best solution was considerably better than the other solutions, the number of neighbors was made small, encouraging exploitation in future iterations.
The number of nearest neighbors, NS, was determined according to the following formula:
$$ NS = \left( \frac{L^{gb}(t)}{\sum_{k=1}^{S} L^{k}(t)} \right) \times N $$
where L^gb(t) is the tour length of the global best tour at time t, L^k(t) is the tour length of ant k at time t, S is the number of ants, and N is the number of cities.
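A minimal Java sketch of this step, assuming the formula as reconstructed above (the identifiers and the lower clamp to one neighbor are illustrative assumptions, not taken from the paper):

```java
/**
 * Illustrative sketch: dynamic neighbor count NS.
 * Assumes NS = (globalBestLength / sum of all ants' tour lengths) * numCities,
 * rounded and clamped to at least one neighbor.
 */
public static int dynamicNeighborCount(double globalBestLength,
                                        double[] antTourLengths,
                                        int numCities) {
    double sum = 0.0;
    for (double length : antTourLengths) {
        sum += length;                      // total tour length over all S ants
    }
    int ns = (int) Math.round(globalBestLength / sum * numCities);
    return Math.max(1, ns);                 // assumption: inject at least one neighbor
}
```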
The second modification aimed to reduce the execution time. If all ants were flying ants (such as in FACO), we would have to determine many neighbors for each node on the best path, which may substantially increase the execution time. Therefore, here, only 50% of the ants were flying ants and the rest were normal walking ants.
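As a trivial illustration of this split (hypothetical names, not the authors' code), the colony can be built with a flag marking which ants fly:

```java
// Illustrative fragment only: mark the first half of the colony as flying ants.
boolean[] isFlyingAnt = new boolean[numAnts];
for (int k = 0; k < numAnts; k++) {
    isFlyingAnt[k] = k < numAnts / 2;   // flying ants inject neighbors; the rest walk normally
}
```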
The third modification aimed to encourage exploration in the early stages and exploitation in the later stages. This modification is similar to the FACO algorithm, but we adapted the process to the TSP. The intuition is that in the early stages, we have no idea where the best solution lies in the search space, and therefore ants should be encouraged to explore it. In later iterations, the region containing the best solution is more likely to have been located, and therefore exploitation should be encouraged. This was achieved by injecting pheromones at farther neighbors during early iterations, while at later iterations, the pheromones were injected at only the nearest neighbors. Equation (6) was used to determine the amount of pheromone for each neighbor.
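Since Equation (6) is not reproduced in this excerpt, the sketch below only illustrates the general mechanism under stated assumptions: pheromone is injected on the nearest neighbors of each node on the best tour, the injected amount decays with the neighbor's distance, and the neighbor radius shrinks as iterations progress. The 1/(1 + d) weighting and the linear shrinking schedule are assumptions, not the paper's exact formula:

```java
/**
 * Illustrative flying-ant deposit step (not the paper's exact Equation (6)).
 * Assumptions: the amount decays as 1/(1 + d) with the distance d to the
 * neighbor, and the neighbor radius shrinks linearly over the iterations.
 */
public static void flyingDeposit(double[][] pheromone, double[][] dist,
                                 int[][] nearestNeighbors, int[] bestTour,
                                 double baseDeposit, int ns,
                                 int iteration, int maxIterations) {
    // Inject fewer neighbors in later iterations (exploration -> exploitation).
    int radius = Math.max(1,
            (int) Math.round(ns * (1.0 - (double) iteration / maxIterations)));
    for (int node : bestTour) {
        int limit = Math.min(radius, nearestNeighbors[node].length);
        for (int j = 0; j < limit; j++) {
            int neighbor = nearestNeighbors[node][j];        // j-th nearest city to 'node'
            double amount = baseDeposit / (1.0 + dist[node][neighbor]);
            pheromone[node][neighbor] += amount;
            pheromone[neighbor][node] += amount;             // symmetric TSP
        }
    }
}
```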
As a final modification, we embedded the 3-Opt algorithm in FACO to reduce the chances of getting stuck at a local minimum.
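For reference, a reduced 3-Opt pass is sketched below in Java. It evaluates, for each triple of cut points, only the reversal-based reconnections (a full 3-Opt also considers moves that reorder the three segments) and applies the first improving move it finds; calling it repeatedly until it returns false gives a simple local search. The code is a self-contained illustration, not the embedded implementation used in DFACO:

```java
/**
 * Illustrative, reduced 3-Opt pass: for each triple of cut points it evaluates
 * only the reversal-based reconnections. The first improving move is applied in place.
 */
public final class ThreeOptSketch {

    /** Returns true if an improving move was found and applied. */
    public static boolean improveOnce(int[] tour, double[][] dist) {
        int n = tour.length;
        for (int i = 0; i < n - 2; i++) {
            for (int j = i + 1; j < n - 1; j++) {
                for (int k = j + 1; k < n; k++) {
                    int a = tour[i], b = tour[i + 1];
                    int c = tour[j], d = tour[j + 1];
                    int e = tour[k], f = tour[(k + 1) % n];           // wraps at the tour end
                    double d0 = dist[a][b] + dist[c][d] + dist[e][f];
                    double d1 = dist[a][c] + dist[b][d] + dist[e][f]; // reverse (i+1..j)
                    double d2 = dist[a][b] + dist[c][e] + dist[d][f]; // reverse (j+1..k)
                    double d3 = dist[a][c] + dist[b][e] + dist[d][f]; // reverse both segments
                    if (d1 < d0 - 1e-10) { reverse(tour, i + 1, j); return true; }
                    if (d2 < d0 - 1e-10) { reverse(tour, j + 1, k); return true; }
                    if (d3 < d0 - 1e-10) { reverse(tour, i + 1, j); reverse(tour, j + 1, k); return true; }
                }
            }
        }
        return false; // no improving reversal-based move remains
    }

    private static void reverse(int[] tour, int from, int to) {
        while (from < to) {
            int tmp = tour[from]; tour[from] = tour[to]; tour[to] = tmp;
            from++; to--;
        }
    }
}
```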
Figure 4 shows the complete algorithm in detail, and Figure 5 illustrates the process of the DFACO algorithm.
Figure 4. The dynamic FACO (DFACO) algorithm.
Figure 5. DFACO flowchart.

4. Experimental Results

We conducted empirical experiments using TSP datasets from TSPLIB [34] to test the performance of the proposed algorithm. We performed three comparisons. First, we compared DFACO combined with the 3-Opt algorithm with ACO combined with the 3-Opt algorithm using 24 datasets. Second, we compared DFACO performance with PACO-3Opt [7] in detail using 21 datasets. Third, we compared DFACO performance with five more recent methods [8,9,18,20,21] using 24 datasets.
We implemented DFACO in Java and used the ACOTSPJava (http://adibaba.github.io/ACOTSPJava/) implementation of ACO.
We compared the methods with respect to the average of the best solutions over all runs (Mean), the standard deviation (SD), and the overall best solution found (Best). We also compared DFACO, PACO-3Opt, and ACO with respect to execution time in seconds.
Table 1 lists the parameter values of both algorithms, which were empirically determined. These values were also used to compare DFACO to all of the other methods.
Table 1. Control parameters for ACO and DFACO.
Both ACO and DFACO were run for 100 iterations (Z) and each dataset was used in 30 independent experiments. Table 2 lists the comparison results for DFACO and ACO. The first column shows the name of the TSP datasets. The second column shows the best-known solution (BKS) as reported on the TSPLIB website (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/STSP.html). Bold font indicates the best results.
Table 2. Experimental results of ACO and DFACO.
Table 2 indicates that DFACO was able to find the BKS in all runs with zero SD for 16 datasets, while ACO was able to do so for 15 datasets. Moreover, DFACO obtained the BKS in all runs faster than ACO for four datasets (bier127, ch130, kroB150, and kroA200), while ACO obtained the BKS faster than DFACO for two datasets (eil101 and ch150).
Figure 6 shows the average of the best solutions, while Figure 7 presents the execution time in seconds for all datasets in the table. As can be seen in Figure 6, DFACO was better than ACO in terms of the average of the best solutions, obtaining a shorter distance on average for all datasets. In addition, DFACO was faster than ACO by 54.7 s over all datasets.
Figure 6. DFACO and ACO comparison in terms of Mean as shown in Table 2.
Figure 7. DFACO and ACO comparison in terms of execution time in seconds as shown in Table 2.
For the remaining eight datasets, DFACO found the best Mean solution for seven datasets, while ACO found the best Mean solution for one dataset (fl1400). These results were statistically significant according to the Wilcoxon signed-rank test, with N = 8 and p ≤ 0.05. A t-test was then used to check whether the differences over the 30 independent runs were statistically significant for each of the eight datasets. The results indicate that the proposed method's improvements were statistically significant for four of the eight datasets, namely rat575, rat783, rl1323, and d1655. We did not perform a t-test for the remaining 16 datasets because both algorithms achieved the BKS.
For the second set of comparisons, we followed the comparison methodology of PACO-3Opt [7]. In the PACO-3Opt experiments, the TSP datasets were divided by problem size into small-scale and large-scale groups (the size of the large-scale datasets was between 400 and 600). The small-scale group contained 10 TSP datasets (Table 3), and the large-scale group contained 11 TSP datasets (Table 4). Table 3 presents the experimental results of comparing DFACO with PACO-3Opt for the small-scale TSP instances, showing the best, worst, and average values of both algorithms; boldface indicates the better result. The results reveal that DFACO obtained the optimum distances for all datasets in terms of best, worst, and average values, whereas PACO-3Opt obtained the optimum distances for only six, one, and one dataset(s), respectively. With regard to execution time, Table 3 shows that DFACO significantly reduced the execution time, obtaining better results faster than PACO-3Opt for all TSP instances.
Table 3. DFACO and PACO-3Opt comparison on small-scale datasets.
Table 4. DFACO and PACO-3Opt comparison for large-scale datasets.
Table 4 presents the results of comparing DFACO with PACO-3Opt for large-scale TSP instances. It also shows the best, worst, and average values of both algorithms for comparison. Boldface text indicates the better results for both algorithms. The results reveal that DFACO obtained better distances for all datasets in terms of best, worst, and average values, except for rat783. With regard to execution time, Table 4 shows that DFACO significantly reduced the execution time and obtained better results faster than PACO-3Opt for all TSP instances except for rat783, where DFACO was faster than PACO-3Opt but did not obtain better results.
Table 5 shows the third type of comparison, between DFACO and five recent methods [8,9,18,20,21]. In this set of experiments, we used 24 TSP datasets and 30 independent runs.
Table 5. Experimental results of proposed method compared with recent research (NA indicates that results were not reported in the original source).
The table reveals that the proposed algorithm achieved the best results for all datasets except one (rat783), for which Deng’s method [9] achieved the best result. Also, DFACO found the BKS for 18 datasets, and for one dataset (rat575), it found an even better solution than the BKS.
Figures 8–31 show the average of the best solutions for each TSP instance obtained by the different algorithms. These figures visualize the results for the 24 datasets shown in Table 5. From these figures, it is clear that DFACO performed better than the other algorithms for all datasets except rat783.
Figure 8. Average of best solutions obtained for instance eil51 by all algorithms.
Figure 9. Average of best solutions obtained for instance eil76 by all algorithms.
Figure 10. Average of best solutions obtained for instance eil101 by all algorithms.
Figure 11. Average of best solutions obtained for instance berlin52 by all algorithms.
Figure 12. Average of best solutions obtained for instance bier127 by all algorithms.
Figure 13. Average of best solutions obtained for instance ch130 by all algorithms.
Figure 14. Average of best solutions obtained for instance ch150 by all algorithms.
Figure 15. Average of best solutions obtained for instance rd100 by all algorithms.
Figure 16. Average of best solutions obtained for instance lin105 by all algorithms.
Figure 17. Average of best solutions obtained for instance lin318 by all algorithms.
Figure 18. Average of best solutions obtained for instance kroA100 by all algorithms.
Figure 19. Average of best solutions obtained for instance kroA150 by all algorithms.
Figure 20. Average of best solutions obtained for instance kroA200 by all algorithms.
Figure 21. Average of best solutions obtained for instance kroB100 by all algorithms.
Figure 22. Average of best solutions obtained for instance kroB150 by all algorithms.
Figure 23. Average of best solutions obtained for instance kroB200 by all algorithms.
Figure 24. Average of best solutions obtained for instance kroC100 by all algorithms.
Figure 25. Average of best solutions obtained for instance kroD100 by all algorithms.
Figure 26. Average of best solutions obtained for instance kroE100 by all algorithms.
Figure 27. Average of best solutions obtained for instance rat575 by all algorithms.
Figure 28. Average of best solutions obtained for instance rat783 by all algorithms.
Figure 29. Average of best solutions obtained for instance rl1323 by all algorithms.
Figure 30. Average of best solutions obtained for instance fl1400 by all algorithms.
Figure 31. Average of best solutions obtained for instance d1655 by all algorithms.
Table 6 compares DFACO with five recent methods with respect to the percentage deviation of the average solution to the BKS value (PDav) and the percentage deviation of the best solution to the BKS value (PDbest) in the experimental results. PDav and PDbest were calculated using Equations (9) and (10), respectively:
$$ PD_{av} = \frac{Mean - BKS}{BKS} \times 100, $$
$$ PD_{best} = \frac{Best - BKS}{BKS} \times 100. $$
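Both measures are simple relative deviations; the following Java helper (illustrative, not from the paper) computes them:

```java
// Illustrative: percentage deviation of a solution value from the best-known solution (BKS).
public static double percentDeviation(double value, double bks) {
    return (value - bks) / bks * 100.0;
}

// Usage: PDav uses the mean tour length over all runs, PDbest uses the best tour length.
// double pdAv   = percentDeviation(meanTourLength, bks);
// double pdBest = percentDeviation(bestTourLength, bks);
```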
Table 6. Results of PDav and PDbest for DFACO compared with recent methods.
The results revealed that the values of PDav and PDbest for DFACO were better than those for the other methods on all datasets except one (rat783), for which Deng’s method [9] was better.

5. Conclusions

This paper proposed a modified FACO algorithm for the TSP. We modified FACO in several ways to reduce the execution time and to better balance exploration and exploitation. For example, the number of neighboring nodes receiving pheromones varied depending on the quality of the solution compared to the other solutions. This helped to balance the exploration and exploitation strategies. We also embedded the 3-Opt algorithm to improve the solution by mitigating the effect of the stagnation problem. Moreover, the colony contained a combination of normal and flying ants. These modifications aimed to achieve better solutions with less processing time and to avoid getting stuck in local minima.
DFACO was compared with (1) ACO and five recent methods for the TSP [8,9,18,20,21] using 24 TSP datasets and (2) PACO-3Opt using 21 TSP datasets. Our empirical results showed that DFACO achieved the best results compared to ACO and the five different methods for most datasets (23 out of 24) in terms of solution quality. Also, DFACO achieved the best results compared with PACO-3Opt for most datasets (20 out of 21) in terms of solution quality and execution time. Furthermore, for one dataset (rat575), DFACO found a solution better than the best-known solution.

Author Contributions

F.D. and K.E.H. proposed the idea and wrote the manuscript. All authors took part in designing the solution. F.D. implemented the algorithms and compared them. H.M. and H.A. validated the implementation and analyzed the results. All authors reviewed and proofread the final manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group No. (RG-1439-35).

Acknowledgments

The authors thank the Deanship of Scientific Research and RSSU at King Saud University for their technical support.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; and in the decision to publish the results.

References

  1. Lenstra, J.K.; Kan, A.R.; Lawler, E.L.; Shmoys, D. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization; John Wiley & Sons: Hoboken, NJ, USA, 1985. [Google Scholar]
  2. Papadimitriou, C.H. The Euclidean travelling salesman problem is NP-complete. Theor. Comput. Sci. 1977, 4, 237–244. [Google Scholar] [CrossRef]
  3. Garey, M.R.; Graham, R.L.; Johnson, D.S. Some NP-complete geometric problems. In Proceedings of the 8th Annual ACM Symposium on Theory of Computing, Hershey, PA, USA, 3–5 May 1976; pp. 10–22. [Google Scholar]
  4. Tito, J.E.; Yacelga, M.E.; Paredes, M.C.; Utreras, A.J.; Wójcik, W.; Ussatova, O. Solution of travelling salesman problem applied to Wireless Sensor Networks (WSN) through the MST and B&B methods. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments, Wilga, Poland, 3–10 June 2018; p. 108082F. [Google Scholar]
  5. Kravchuk, S.; Minochkin, D.; Omiotek, Z.; Bainazarov, U.; Weryńska-Bieniasz, R.; Iskakova, A. Cloud-based mobility management in heterogeneous wireless networks. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High Energy Physics Experiments, Wilga, Poland, 28 May–6 June 2017; p. 104451W. [Google Scholar]
  6. Farrugia, L.I. Wireless Sensor Networks; Nova Science Publishers, Inc.: New York, NY, USA, 2011. [Google Scholar]
  7. Gülcü, Ş.; Mahi, M.; Baykan, Ö.K.; Kodaz, H. A parallel cooperative hybrid method based on ant colony optimization and 3-Opt algorithm for solving traveling salesman problem. Soft Comput. 2018, 22, 1669–1685. [Google Scholar] [CrossRef]
  8. Chen, S.-M.; Chien, C.-Y. Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques. Expert Syst. Appl. 2011, 38, 14439–14450. [Google Scholar] [CrossRef]
  9. Deng, W.; Zhao, H.; Zou, L.; Li, G.; Yang, X.; Wu, D. A novel collaborative optimization algorithm in solving complex optimization problems. Soft Comput. 2017, 21, 4387–4398. [Google Scholar] [CrossRef]
  10. Eskandari, L.; Jafarian, A.; Rahimloo, P.; Baleanu, D. A Modified and Enhanced Ant Colony Optimization Algorithm for Traveling Salesman Problem. In Mathematical Methods in Engineering; Springer: Berlin/Heidelberg, Germany, 2019; pp. 257–265. [Google Scholar]
  11. Hasan, L.S. Solving Traveling Salesman Problem Using Cuckoo Search and Ant Colony Algorithms. J. Al-Qadisiyah Comput. Sci. Math. 2018, 10, 59–64. [Google Scholar]
  12. Mavrovouniotis, M.; Müller, F.M.; Yang, S. Ant colony optimization with local search for dynamic traveling salesman problems. IEEE Trans. Cybern. 2017, 47, 1743–1756. [Google Scholar] [CrossRef]
  13. de S Alves, D.R.; Neto, M.T.R.S.; Ferreira, F.d.S.; Teixeira, O.N. SIACO: A novel algorithm based on ant colony optimization and game theory for travelling salesman problem. In Proceedings of the 2nd International Conference on Machine Learning and Soft Computing, Phu Quoc Island, Viet Nam, 2–4 February 2018; pp. 62–66. [Google Scholar]
  14. Han, X.-C.; Ke, H.-W.; Gong, Y.-J.; Lin, Y.; Liu, W.-L.; Zhang, J. Multimodal optimization of traveling salesman problem: A niching ant colony system. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Kyoto, Japan, 15–19 July 2018; pp. 87–88. [Google Scholar]
  15. Pintea, C.-M.; Pop, P.C.; Chira, C. The generalized traveling salesman problem solved with ant algorithms. Complex Adapt. Syst. Model. 2017, 5, 8. [Google Scholar] [CrossRef]
  16. Xiao, Y.; Jiao, J.; Pei, J.; Zhou, K.; Yang, X. A Multi-strategy Improved Ant Colony Algorithm for Solving Traveling Salesman Problem. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Shanxi, China, 18–20 May 2018; p. 042101. [Google Scholar]
  17. Zhou, Y.; He, F.; Qiu, Y. Dynamic strategy based parallel ant colony optimization on GPUs for TSPs. Sci. China Inf. Sci. 2017, 60, 068102. [Google Scholar] [CrossRef]
  18. Mahi, M.; Baykan, Ö.K.; Kodaz, H. A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem. Appl. Soft Comput. 2015, 30, 484–490. [Google Scholar] [CrossRef]
  19. Khan, I.; Maiti, M.K. A swap sequence based Artificial Bee Colony algorithm for Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 428–438. [Google Scholar] [CrossRef]
  20. Ouaarab, A.; Ahiod, B.; Yang, X.-S. Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput. Appl. 2014, 24, 1659–1669. [Google Scholar] [CrossRef]
  21. Osaba, E.; Yang, X.-S.; Diaz, F.; Lopez-Garcia, P.; Carballedo, R. An improved discrete bat algorithm for symmetric and asymmetric Traveling Salesman Problems. Eng. Appl. Artif. Intell. 2016, 48, 59–71. [Google Scholar] [CrossRef]
  22. Choong, S.S.; Wong, L.-P.; Lim, C.P. An artificial bee colony algorithm with a modified choice function for the Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 622–635. [Google Scholar] [CrossRef]
  23. Civicioglu, P.; Besdok, E. A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif. Intell. Rev. 2013, 39, 315–346. [Google Scholar] [CrossRef]
  24. Lin, S. Computer solutions of the traveling salesman problem. Bell Syst. Tech. J. 1965, 44, 2245–2269. [Google Scholar] [CrossRef]
  25. Dorigo, M.; Stützle, T. Ant colony optimization: Overview and recent advances. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 311–351. [Google Scholar]
  26. Reinelt, G. The Traveling Salesman: Computational Solutions for TSP Applications; Springer: Berlin/Heidelberg, Germany, 1994. [Google Scholar]
  27. Blazinskas, A.; Misevicius, A. Combining 2-opt, 3-opt and 4-opt with k-swap-kick perturbations for the traveling salesman problem. In Proceedings of the 17th International Conference on Information and Software Technologies, Kaunas, Lithuania, 27–29 April 2011; pp. 50–401. [Google Scholar]
  28. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef]
  29. Dorigo, M.; Gambardella, L.M. Ant colonies for the travelling salesman problem. BioSystems 1997, 43, 73–81. [Google Scholar] [CrossRef]
  30. Gambardella, L.M.; Dorigo, M. Solving Symmetric and Asymmetric TSPs by Ant Colonies. In Proceedings of the International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 622–627. [Google Scholar]
  31. Deepa, O.; Senthilkumar, A. Swarm intelligence from natural to artificial systems: Ant colony optimization. Networks (Graph-Hoc) 2016, 8, 9–17. [Google Scholar]
  32. Aljanaby, A. An Experimental Study of the Search Stagnation in Ants Algorithms. Int. J. Comput. Appl. 2016, 148. [Google Scholar] [CrossRef]
  33. Dahan, F.; El Hindi, K.; Ghoneim, A. An Adapted Ant-Inspired Algorithm for Enhancing Web Service Composition. Int. J. Semant. Web Inf. Syst. 2017, 13, 181–197. [Google Scholar] [CrossRef]
  34. Reinelt, G. TSPLIB—A traveling salesman problem library. Orsa J. Comput. 1991, 3, 376–384. [Google Scholar] [CrossRef]
