Toward a Robust Multi-Objective Metaheuristic for Solving the Relay Node Placement Problem in Wireless Sensor Networks

During the last decade, Wireless Sensor Networks (WSNs) have attracted interest due to their excellent monitoring capabilities. However, WSNs present shortcomings, such as energy cost and reliability, which hinder real-world applications. As a solution, Relay Node (RN) deployment strategies can improve WSNs; finding the best RN deployment is known as the Relay Node Placement Problem (RNPP), an NP-hard optimization problem. This paper addresses two Multi-Objective (MO) formulations of the RNPP. The first one optimizes average energy cost and average sensitivity area. The second one optimizes the two previous objectives and network reliability. The authors propose to solve the two problems through a wide range of MO metaheuristics from the three main groups in the field: evolutionary algorithms, swarm intelligence algorithms, and trajectory algorithms. These algorithms are the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Strength Pareto Evolutionary Algorithm 2 (SPEA2), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Multi-Objective Artificial Bee Colony (MO-ABC), Multi-Objective Firefly Algorithm (MO-FA), Multi-Objective Gravitational Search Algorithm (MO-GSA), and Multi-Objective Variable Neighbourhood Search Algorithm (MO-VNS). The results obtained are statistically analysed to determine whether there is a robust metaheuristic that can be recommended for solving the RNPP independently of the number of objectives.


Introduction
In recent years, Wireless Sensor Networks (WSNs) have attracted great interest in both industry and academia. This interest stems from the possibilities they open for monitoring systems and environments with reduced deployment and maintenance costs. As a result, many practical applications have been rolled out with the advancement of the technology. For instance, WSNs have been successfully applied to smart grids, smart farming, and smart buildings [1][2][3].
A Traditional Wireless Sensor Network (T-WSN) is composed of many sensors capturing information and a single sink node (or several ones) collecting all data. The sensors have some

• A formal statement is provided for the two formulations of the RNPP addressed. The first formulation optimizes Average Energy Cost (AEC) and Average Sensitivity Area (ASA). The second formulation optimizes the two previous objectives and Network Reliability (NR).

• The previously introduced MO metaheuristics are applied to solve the two formulations of the RNPP while optimizing a freely available dataset that we proposed in [21]. This is because standard datasets do not exist for this problem definition; instead, authors in the recent scientific literature consider non-public or randomly generated datasets. Thus, the results obtained in this paper can be replicated or improved by other authors in future works.

• The results obtained are analysed through an accepted statistical methodology to determine whether any metaheuristic provides a significantly better performance for each formulation. As a result, we can conclude whether any metaheuristic provides a robust performance independently of the problem complexity and the number of objectives.
The rest of this paper is structured as follows. Section 2 discusses the related work. Section 3 describes the WSN model considered. Section 4 defines the two optimization problems. Section 5 discusses the MO metaheuristics considered. Section 6 includes the solving strategy. Section 7 exposes the experimental results. Finally, conclusions and future lines of research are left for Section 8.

Background
This section reviews related work on the RNPP in WSNs, focusing on the two sets of techniques mainly considered in the literature: heuristics and metaheuristics.
Starting with heuristics, we may cite the following relevant works. Hou et al. [7] deployed RNs in TT-WSNs while minimizing the network geometric deficiencies and maximizing the network lifetime. Tang et al. [22] applied two polynomial-time approximation algorithms to determine the minimum number of RNs needed to ensure fault-tolerance and connectivity in TT-WSNs. Wang et al. [23] optimized the network cost in TT-WSNs with constraints on lifetime and connectivity, while considering two models: one where all nodes were energy-limited and one where the RNs were not. Lloyd and Xue [24] optimized the network lifetime and preserved connectivity in ST-WSNs by following two approaches: in the first, between each pair of sensors there was a path composed of RNs and/or sensors; in the second, the path included only RNs. Han et al. [25] maximized fault-tolerance in ST-WSNs while assuming sensors with different transmission radii. Xu et al. [26] studied the impact of random device placement on connectivity and lifetime in TT-WSNs. Misra et al. [27] ensured connectivity in ST-WSNs by deploying a minimum number of RNs, where the RNs were constrained to be placed at a subset of candidate locations; this is the so-called Constrained RNPP (C-RNPP). Nigam and Agarwal [28] designed a branch-and-cut algorithm to deploy the minimum number of RNs in ST-WSNs such that a prespecified delay bound held between the sensors and the sink node. Misra et al. [29] addressed the C-RNPP in ST-WSNs by minimising the number of RNs needed, while keeping connectivity, in an energy-harvesting network in which the energy-harvesting potential of the candidate locations was known a priori. Ma et al. [30] proposed a connectivity-aware algorithm for RN placement in TT-WSNs; specifically, the authors considered a local search approximation algorithm to solve the problem.
Djenouri and Bagaa [31] proposed a heuristic method to prolong the network lifetime by deploying additional sensors and RNs in both ST and TT-WSNs following a C-RNPP. Ranga et al. [32] proposed a method to heal the network partition problem in ST-WSNs, focused on connectivity. Izadi et al. [33] proposed a fuzzy-based self-healing coverage scheme for randomly deployed mobile sensor nodes in ST-WSNs. The proposed scheme determined the uncovered sensing areas and then selected the best mobile nodes to be moved to minimize the coverage hole. Sitanayah et al. [34] proposed two algorithms (Greedy-MSP and GRASP-MSP) for solving the problem of multiple sink placement, minimising the deployment cost while ensuring that each sensor node in the network was double-covered. They also proposed two algorithms (Greedy-MSRP and GRASP-MSRP) for solving the problem of deploying sinks and RNs to minimise the deployment cost while guaranteeing that all sensor nodes in the network were double-covered. Bagaa et al. [35] tackled the deployment of RNs in ST-WSNs following a C-RNPP. The authors focused on minimising the outage probabilities when constructing the routing tree by adding a minimum number of RNs that guaranteed connectivity.
Turning to metaheuristics, we may cite the following relevant works. Zhao and Chen [36] implemented a Particle Swarm Optimization (PSO) to minimize the energy expenditure in ST-WSNs. Perez et al. [37] employed an MO-EA to optimize the number of RNs deployed and the energy cost in ST-WSNs following a C-RNPP. Peiravi et al. [38] considered an MO-GA to optimize the network lifetime for several delay values in TT-WSNs. Gupta et al. [39] proposed two algorithms for relay node placement in ST-WSNs providing k-connectivity of the sensor nodes, where the first algorithm is a Genetic Algorithm (GA) and the second one is based on a greedy approach. Hashim et al. [40] proposed an enhanced deployment algorithm based on Artificial Bee Colony (ABC) to extend the lifetime in TT-WSNs. George and Sharma [41] considered a GA to deploy RNs in ST-WSNs following a C-RNPP approach to minimise the number of RNs while providing maximum connectivity. Yu et al. [42] studied how to deploy RNs in ST-WSNs while optimizing energy cost and reliability, using three MO metaheuristics to this end.
The proposal in this work differs from the papers introduced before in the following aspects. (i) The network model considers a Single-Tiered (ST) approach, which is usual in medium-size networks assuming low-cost devices, as occurs in intensive agriculture [43]. (ii) In this model, the RNs are similar to sensors, but without the electronics for capturing data; instead, they are specialized in communication. Moreover, RNs have harvesting capabilities and thus can be deployed without taking into account the existence of power sources, such as plugs. This results in an unconstrained RNPP, meaning that the search space is greater than in the C-RNPP focus. (iii) We address the RNPP through MO metaheuristics, providing a trade-off between the conflicting objectives (unlike heuristics), which is useful for the decision-making of network designers. As introduced before, many heuristics have been proposed for addressing the ST-RNPP [24,25,[27][28][29][31][32][33]35]. However, only a few papers considered metaheuristics to this end [36,37,39,41,42]. Regarding these latter works, the authors in [36,39,41] considered a single-objective focus, whereas our proposal is MO. The authors in [37] considered an MO focus for the number of RNs and energy cost, but following a C-RNPP. Finally, the authors in [42] considered a focus similar to ours, but assuming three MO metaheuristics and an optimization problem with two objectives (reliability and energy cost). In our proposal, we consider a wide range (seven) of MO metaheuristics to solve two formulations of the RNPP, with two and three objectives, to identify a robust metaheuristic for solving the problem.
This work is partially inspired by some earlier papers. The bi-objective RNPP was addressed in [44][45][46] using NSGA-II, SPEA2, MO-ABC, MO-GSA, MO-VNS, and MO-VNS*. The three-objective RNPP was addressed in [47] using NSGA-II, SPEA2, MOEA/D, MO-ABC, MO-VNS, and MO-FA. This paper presents a more complete development, with an intensive statistical study comparing the performance of the metaheuristics solving the two RNPPs. Accordingly, this work includes experimental results never published before, such as the performance of MO-FA and MOEA/D solving the bi-objective problem and of MO-GSA and MO-VNS* for the three-objective case.

Network Model
This section describes the WSN model considered, including general assumptions and specific details about energy cost, sensitivity area, network lifetime, and network reliability. For clarity, the mathematical notation used in this work is described in the abbreviation part of this document.

General Assumptions of the WSN Model
1. The network is composed of a sink node, s_s sensors, and s_r RNs, which can be linked following an ST approach if they are at a distance lower than the communication radius r_c. All these devices are placed on the same unrestricted outdoor 2D surface of size d_x × d_y, free of relevant obstacles and external interference.
2. Sensors are powered by batteries. The sink node and RNs are energy-harvesting devices, having enough energy capacity to operate over the network lifetime.
3. Initially, at time t = 0, all sensors start with the same energy capacity iec in their batteries. If a sensor is exhausted during operation (t > 0), it cannot be linked again, i.e., lifetime after battery replacement is not considered.
4. Sensors capture information about the environment on a regular basis with a sensitivity radius r_s, i.e., a sensor covers a circle of radius r_s. All information captured is immediately sent to the sink node.
5. The sink node is the only connection point of the network to the outside.
6. RNs are low-cost devices with a conception similar to that of sensors, but they do not capture data; they only forward the information received towards the sink node.
7. RNs have the computational resources necessary to manage network traffic while maintaining a low power consumption, which facilitates the harvesting design and reduces the cost of the solution. This is usually addressed by using a low-power microprocessor, which generally has a higher computing capacity than the one used in the sensors.
8. The routing protocol for all devices is the one provided by Dijkstra's Algorithm [48] for the minimum path length among devices.
9. Perfect synchronization among devices and an efficient MAC protocol are assumed, reducing the energy cost due to retransmissions and idle time.
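Assumption 8 (minimum-path routing) can be sketched with a standard Dijkstra pass over the connectivity graph induced by r_c. The hop-count metric, coordinates, and function names below are our assumptions for illustration:

```python
import heapq
import math

def dijkstra_paths(nodes, sink, r_c):
    """Shortest hop-count paths from every node to the sink, linking
    devices only when within communication radius r_c."""
    n = len(nodes)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Adjacency induced by the communication radius (assumption 1).
    adj = {i: [j for j in range(n) if j != i and dist(nodes[i], nodes[j]) <= r_c]
           for i in range(n)}
    hops = {sink: 0}
    nxt = {}                      # next device towards the sink (w_i^c in the model)
    pq = [(0, sink)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > hops.get(u, float('inf')):
            continue              # stale queue entry
        for v in adj[u]:
            if d + 1 < hops.get(v, float('inf')):
                hops[v] = d + 1
                nxt[v] = u        # relay towards the sink through u
                heapq.heappush(pq, (d + 1, v))
    return hops, nxt
```

Nodes absent from the returned `hops` dictionary have no path to the sink and are therefore disconnected.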

Energy Cost
We consider the energy model proposed in [49] to simulate the energy cost of sensors during operation time. Note that sensors are the only devices subject to energy limitations in our model. In this formulation, a sensor i = (x, y) ∈ S_s(t), where x ∈ [0, d_x] and y ∈ [0, d_y], sends a number of packets P_i(t) at time t > 0 given by

P_i(t) = 1 + R^p_i(t),

which accounts for the packet captured by i at each time instant, plus the number of packets R^p_i(t) relayed by the sensor i because of the multi-hop routing protocol, given by

R^p_i(t) = Σ_{j ∈ S_s(t), j ≠ i} z^c_{j,i}(t),

where z^c_{j,i}(t) is a variable assuming 1 if i ∈ S_s(t) is in the minimum path between j ∈ S_s(t) and the sink node c at t > 0, and 0 otherwise.
The energy cost Ee_i(t) of a sensor i at time t is given by

Ee_i(t) = P_i(t) · β · amp · k · ||i − w^c_i(t)||^α_d,

where ||·||_d is the Euclidean distance between two points, β > 0 is the transmission quality parameter, amp > 0 is the energy cost per bit of the power amplifier, k > 0 is the information packet size in bits, w^c_i(t) is the variable providing the next device in the minimum path between i ∈ S_s(t) and the sink node at time t > 0, and α ∈ [2, 4] is the path loss exponent. Thus, the energy charge Ec_i(t) of a sensor i at time t is given by

Ec_i(t) = max(0, iec − Σ_{t'=1}^{t} Ee_i(t')),

where iec denotes the initial energy charge of the sensors, with iec > 0. Hence, if Ec_i(t) equals zero, the sensor is out of energy and cannot be linked; otherwise, it is active.
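A sketch of the per-period cost and charge bookkeeping follows. The distance^α dependence and the parameters β, amp, k, iec are taken from the definitions above; the exact transmission-cost expression of [49] may differ, and all names and values are illustrative:

```python
def transmission_cost(packets, d, beta=1.0, amp=100e-12, k=128, alpha=2.0):
    """Energy spent by a sensor forwarding `packets` packets of k bits
    to its next hop at Euclidean distance d (path loss exponent alpha)."""
    return packets * beta * amp * k * d ** alpha

def remaining_charge(iec, costs):
    """Charge left after a sequence of per-period costs; floors at zero,
    at which point the sensor can no longer be linked."""
    c = iec
    for e in costs:
        c = max(0.0, c - e)
    return c
```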

Sensitivity Area
As stated before, a sensor covers a circle of radius r_s and area πr_s². Therefore, at time t, the WSN sensitivity area is calculated as the union of the areas of the active sensors at time t with a path to the sink node, S_s(t). The exact calculation of this union is a known complex problem, whose computational effort increases exponentially with the number of circles [50]. As a usual approach to approximate this calculation, a set of binary demand points D_p(t) is uniformly distributed on the surface [51][52][53]. Then, the number of demand points d_p(t) with an active sensor at a distance lower than r_s is counted. According to this approximation, the sensitivity area A(t) provided by a WSN at time t > 0 is given by

A(t) = d_p(t) / ||D_p(t)|| = (1 / ||D_p(t)||) Σ_{p ∈ D_p(t)} a_p(t),

where a_p(t) is the indicator function defined as

a_p(t) = 1 if ∃ i ∈ S_s(t) : ||i − p||_d < r_s, and 0 otherwise,

i.e., a_p(t) equals 1 if there is an active sensor i at a distance lower than r_s from the demand point p.
Within this approach, we consider that the demand points follow a grid distribution with a distance d_pn between neighbouring points. Hence, the number of demand points at time t > 0 is

||D_p(t)|| = (⌊d_x / d_pn⌋ + 1) · (⌊d_y / d_pn⌋ + 1).
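The grid approximation above can be sketched as follows. Function and parameter names are ours, and the spacing and bounds in the usage below are illustrative:

```python
import math

def sensitivity_area(active_sensors, d_x, d_y, r_s, d_pn):
    """Fraction of grid demand points (spacing d_pn over a d_x by d_y
    surface) that lie within r_s of at least one active sensor."""
    covered = total = 0
    y = 0.0
    while y <= d_y + 1e-9:        # epsilon keeps the boundary row
        x = 0.0
        while x <= d_x + 1e-9:
            total += 1
            # Indicator a_p: is any active sensor closer than r_s?
            if any(math.hypot(x - sx, y - sy) < r_s for sx, sy in active_sensors):
                covered += 1
            x += d_pn
        y += d_pn
    return covered / total
```

For example, a single sensor at the centre of a small surface with a very large sensitivity radius yields an area of 1.0, while an empty network yields 0.0.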

Network Lifetime
The network lifetime t_n is defined as the number of time periods over which the information provided by the network is useful for an application. We formulate this concept based on a threshold sensitivity area co_th, that is,

t_n = ||{t ∈ τ : A(t) ≥ co_th}||,

where ||·|| is the cardinal of a set and τ is the set of time periods.
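A minimal sketch of this definition, assuming a non-strict threshold comparison (whether the bound is inclusive is our assumption):

```python
def network_lifetime(areas, co_th):
    """Number of time periods whose sensitivity area stays at or above
    the threshold co_th (cardinality of the useful-period set)."""
    return sum(1 for a in areas if a >= co_th)
```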

Network Reliability
We consider the network reliability formulation in [54], which is based on the number of disjoint paths between a given sensor and the sink node, obtained through Suurballe's Algorithm [55]. Thus, the reliability re_i of a sensor i of the initial sensor set S_s is

re_i = 1 − Π_{l=1}^{djp^c_i} (1 − (1 − err)^{h^l_{i,c}}),

where djp^c_i denotes the number of disjoint paths between the sensor i ∈ S_s and the sink node, err ∈ [0, 1] is the local channel error, and h^l_{i,c} is the number of hops in the l-th disjoint path between i ∈ S_s and the sink node.
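One common way to combine disjoint-path reliabilities, consistent with the quantities defined above, is that a sensor fails only if every disjoint path fails; whether this matches the exact expression in [54] is an assumption, and the names below are ours:

```python
def sensor_reliability(hops_per_path, err):
    """Probability that at least one disjoint path delivers a packet,
    given the per-hop channel error err and the hop count of each
    disjoint path to the sink."""
    fail_all = 1.0
    for h in hops_per_path:
        # A path of h hops succeeds with probability (1 - err)^h.
        fail_all *= 1.0 - (1.0 - err) ** h
    return 1.0 - fail_all
```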

Optimization Problems
Let f_1 ∈ R+ be the AEC of the sensors over the network lifetime, defined as

f_1 = (1 / (t_n · s_s)) Σ_{t=1}^{t_n} Σ_{i ∈ S_s} Ee_i(t).

This objective is related to the energy-efficiency problem in WSNs [56], whose goal is to reduce the energy cost while balancing the energy distribution and increasing the network lifetime.
Let f_2 ∈ [0, 1] be the ASA provided by the WSN over the network lifetime, defined as

f_2 = (1 / t_n) Σ_{t=1}^{t_n} A(t).

This objective is related to the coverage problem in WSNs [57], whose goal is to optimize the diversity and amount of information provided by the network.
Let f_3 ∈ [0, 1] be the NR based on the connectivity of the sensors, defined as

f_3 = (1 / s_s) Σ_{i ∈ S_s} re_i.

This objective is related to the reliability problem in WSNs [58], whose goal is to obtain trustable networks.
On this basis, the authors define the bi-objective optimization problem as follows. Given a previously deployed T-WSN, i.e., s_s initial sensors and a sink node, the objective is to deploy s_r RNs assuming an ST network model so as to

minimize f_1 and maximize f_2,

subject to, ∀r ∈ S_r, r = (x, y):

0 ≤ x ≤ d_x, 0 ≤ y ≤ d_y.

That means all the RNs deployed lie within the limits of the scenario. The three-objective optimization problem is similar to the bi-objective one stated before, but includes an additional objective while maintaining the same constraints. That is, the goal is to

minimize f_1 and maximize f_2 and f_3.

Note that f_1, f_2, and f_3 are in conflict with each other, as shown in [47]. As is well known, this is a fundamental requirement that any MO optimization problem should fulfil.
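Ranking solutions of either formulation rests on a Pareto-dominance test over objective vectors with mixed minimization/maximization directions. A minimal sketch, with a per-objective flag and names of our choosing:

```python
def dominates(a, b, maximize):
    """True if objective vector a Pareto-dominates b, where maximize[k]
    says whether objective k is maximized (e.g. (False, True) for
    minimizing AEC while maximizing ASA)."""
    better_somewhere = False
    for ak, bk, mx in zip(a, b, maximize):
        if (ak < bk) if mx else (ak > bk):
            return False          # strictly worse in one objective
        if ak != bk:
            better_somewhere = True
    return better_somewhere
```

Dominance is strict: a vector never dominates itself, and two incomparable vectors dominate neither each other.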

Metaheuristics
This section describes the MO metaheuristics considered for solving the two problems. As introduced before, some of the metaheuristics were presented in prior works by the authors; hence, we do not repeat the full implementation details here to avoid duplication. Instead, we detail some key aspects of adapting the metaheuristics to the problem; for further information, we refer readers to the specific works [44][45][46][47]:
• NSGA-II: It considers two populations P^g_n and Q^g_n with ps_n individuals each, where P^g_n saves the parents of the current iteration g and Q^g_n saves the offspring generated based on P^g_n. The individuals in both populations follow the same chromosome structure, where each individual has as many genes as RNs to be deployed in the solution. Note that a gene includes the 2D coordinates of an RN. This structure is considered for the remaining algorithms exposed in this section. Each individual in Q^g_n is generated by applying crossover and mutation operators to two previously selected solutions from P^g_n. The crossover operator is the usual one-point crossover with a crossover probability cro_n. The mutation operator applies random changes to the genes of the solution generated by the crossover operator according to a mutation probability mut_n. The populations for the next iteration are generated as follows: P^{g+1}_n is generated by selecting the best ps_n solutions combining P^g_n and Q^g_n according to the crowded-comparison operator ≺_n [14], and Q^{g+1}_n is initialized to empty.
• SPEA2: It considers a regular population P^g_s of size ps_s and an auxiliary population of the same size, saving the best individuals found so far. The methodology followed by SPEA2 is similar to NSGA-II, but with a selection strategy different from ≺_n when generating P^{g+1}_s. We consider the same crossover and mutation operators as for NSGA-II, with probabilities cro_s and mut_s.
• MO-ABC: It is an MO approach of ABC, adapted by the authors based on the ≺_n concept. The algorithm considers a population P^g_a of size ps_a. The parameter se_a determines the percentage of solutions in P^g_a managed by employed forager bees; the remaining ones are managed by onlooker bees. An employed forager bee tries to improve its solution by looking in its surroundings, i.e., RN coordinates are lightly modified. Note that the new solution is only accepted if it is better in fitness value; otherwise, it is discarded. If an employed forager bee tries to improve its solution limit_a times without any improvement in fitness value, the solution is considered exhausted and must be managed by a scout bee. An onlooker bee tries to improve its solution by looking in the surroundings of a randomly selected employed forager bee. As before, the solution is only accepted if it is better in fitness value. A scout bee generates a new solution based on a randomly selected onlooker-bee solution from the first two Pareto fronts in P^g_a. Next, the Euclidean distance between the selected solution and all other solutions in P^g_a is calculated. The new solution is obtained by combining the k_a nearest solutions to the selected one, k_a being a random value in {2, ..., 11}. Unlike for employed forager and onlooker bees, the solution generated by the scout bee is accepted directly, without analyzing the improvement in fitness value. P^{g+1}_a is generated by including the solutions generated by the corresponding bees.
• MO-FA: It is an MO approach of the Firefly Algorithm (FA), adapted by the authors based on the ≺_n concept. In this algorithm, a firefly is a possible solution to the problem and its brightness is defined by its solution quality. The attractiveness that a brighter firefly causes in a less bright one implies a movement of its RNs towards the brighter solution.
• MO-VNS: It is an MO approach of the Variable Neighbourhood Search, which iteratively applies a set of neighbourhood structures to the solutions in the population P^g_v. Each structure determines how different the new solution can be compared to the initial one in terms of the maximum displacement of the RNs, which is limited by the ns_v ∈ [1, ∞) parameter. Thus, the neighbourhood structures are iteratively applied from higher to lower displacement, generating new solutions to be included in P^{g+1}_v if they fulfil the non-domination requirement.
• MO-VNS*: It considers the same focus as MO-VNS but includes a perturbation mechanism at the end of each iteration. This mechanism is applied to each solution in P^{g+1}_v through the mutation operator discussed for NSGA-II and SPEA2, with mutation probability mut_v.
• MO-GSA: It is an MO approach of the Gravitational Search Algorithm (GSA), adapted by the authors based on the ≺_n concept. In this algorithm, an object is a possible solution to the problem and its mass is defined by its solution quality. All objects are mutually attracted by the Newtonian gravity force, causing a global movement of all objects towards heavier masses, which correspond to the better solutions. The algorithm considers two populations P^g_ga and Q^g_ga with ps_ga individuals, where P^g_ga saves the objects at the beginning of g, before the gravitational forces act, and Q^g_ga contains the resulting objects after applying the forces to P^g_ga. P^{g+1}_ga is generated by selecting the best ps_ga solutions combining P^g_ga and Q^g_ga. Finally, if the percentage of non-dominated solutions in Q^g_ga with respect to P^g_ga is lower than when_sc_ga, the mutation operator of NSGA-II is applied with mutation probability mut_ga to each solution in P^{g+1}_ga.
• MOEA/D: It decomposes the MO problem into ps_m single-objective sub-problems, while a population saves the non-dominated solutions found until g. Over the generations, and for each solution in P^g_m, two neighbouring solutions are selected (based on the neighbouring structure previously generated) to produce a solution through the crossover and mutation operators defined for NSGA-II, with probabilities cro_m and mut_m, respectively. The generated solution replaces the previous one only if it is better in fitness value for the corresponding single-objective sub-problem.
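The one-point crossover and mutation operators reused across several of the algorithms above can be sketched as follows. The choice of resetting a mutated gene to fresh random coordinates, and all names, are our assumptions:

```python
import random

def one_point_crossover(p1, p2, cro):
    """One-point crossover over two chromosomes (lists of (x, y) RN
    genes), applied with probability cro."""
    if random.random() >= cro or len(p1) < 2:
        return list(p1)
    cut = random.randrange(1, len(p1))   # cut point between genes
    return p1[:cut] + p2[cut:]

def mutate(chromosome, mut, d_x, d_y):
    """Replace each gene by fresh random coordinates with probability
    mut, keeping RNs inside the d_x by d_y scenario."""
    return [(random.uniform(0, d_x), random.uniform(0, d_y))
            if random.random() < mut else gene for gene in chromosome]
```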
Some of the previously exposed metaheuristics were not considered in prior works for solving either of the two problems addressed in the present proposal. These algorithms are:
• MO-FA and MOEA/D for the bi-objective approach.
• MO-GSA and MO-VNS* for the three-objective approach.
Notably, MO-FA, MO-GSA, and MO-VNS* can be implemented with minimal changes independently of the number of objectives. Most changes are related to the implementation of ≺_n, as detailed in [45][46][47].
On the contrary, the implementation of MOEA/D for the bi-objective case requires further explanation. Suppose a bi-objective optimization problem maximizing f_1 and f_2. Let F^1 = (maxF(f_1), minF(f_2)) and F^2 = (minF(f_1), maxF(f_2)) be the two extreme points delimiting the objective space, where maxF(·) and minF(·) denote the upper and lower bounds of a fitness function. Let Υ = {r^1, ..., r^{ps_m}} be a set of points evenly distributed on the plane I, with r^m = (r^m_1, r^m_2) ∈ Υ, m ∈ {1, ..., ps_m}, where ps_m is the cardinal of Υ and F^1, F^2 ∈ Υ. Let v_n = (n_1, n_2) be a normal vector to the plane I. On this basis, the bi-objective optimization problem is decomposed into ps_m single-objective minimization subproblems, where the m-th subproblem optimizes the function g(x : r^m, v_n), with x a solution to the optimization problem.
In addition to how the optimization problem is decomposed, another important aspect of MOEA/D is the distribution of the reference points Υ on the plane I. Algorithm 1 shows the procedure considered for distributing such points for the bi-objective approach by following the straight line defined by F^1 and F^2. This algorithm takes as input F^1, F^2, crow_m, and CHIMinc_m. Initially, at step 0 (lines 1-3), the new extreme points E^1 and E^2, which depend on F^1 and F^2, are calculated, where d(·) provides the Euclidean distance between two points. At step 1 (line 4), the number of divisions n_{E^1,E^2} in the segment E^1E^2 is calculated. Finally, at step 2 (lines 5-8), the reference points in the segment E^1E^2 are generated. From this point, MOEA/D can be implemented with minimal changes independently of the number of objectives, as detailed in [47].
Algorithm 1 Distribution of the reference points for two objectives.
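The even distribution of reference points along the segment E^1E^2 can be sketched as simple linear interpolation. This is a simplification of Algorithm 1, which additionally computes the extreme points themselves; the function name and fixed point count are our assumptions:

```python
def reference_points(E1, E2, n):
    """n points evenly spaced on the segment E1E2, endpoints included."""
    (x1, y1), (x2, y2) = E1, E2
    return [(x1 + (x2 - x1) * i / (n - 1),
             y1 + (y2 - y1) * i / (n - 1)) for i in range(n)]
```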

Solving Strategy
This section discusses the dataset used for analyzing the algorithms, the experimental methodology, and the parametric sweep task.

Dataset Description
We consider the freely available dataset in [21], composed of four scenarios. For each of them, a T-WSN was deployed considering that (i) the sink node was placed at the centre of the surface and (ii) the number of sensors was the lower bound needed to cover the whole surface. In this dataset, two r_c values, 30 m and 60 m, are considered to simulate different communication conditions. Hence, two homogeneous instances are defined for each scenario, following the notation d_x × d_y_r_c. Table 1 shows additional details about the dataset, including scenario size, fitness values without deploying any RN (s_r = 0) for both r_c values, hypervolume reference points, and test cases (RNs to be deployed using the metaheuristics, while keeping a ratio of RNs to the number of devices lower than 20%). Note that for r_c = 60 m, reliability does not need to be optimized because the number of disjoint paths between any sensor and the sink node is high. Thus, instances with r_c = 60 m will not be optimized using the three-objective approach. The remaining parameters of the network model take the values α = 2.00, β = 1.00, co_th = 0.70, k = 128 KB, r_s = 15 m, iec = 5 J, and amp = 100 pJ/bit/m² from the literature [51,60].

Experimental Methodology
The dataset in Section 6.1 is optimized using the previously introduced metaheuristics for each of the two optimization problems. To this end, we perform 31 independent runs for each metaheuristic, instance, test case, and optimization problem. Five stop conditions based on the number of evaluations are considered to study convergence, i.e., 50,000, 100,000, 200,000, 300,000, and 400,000 evaluations. The results obtained are evaluated using the hypervolume and set coverage metrics.
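For the bi-objective case, the hypervolume of a front with respect to a reference point can be computed with a simple sweep. The sketch below assumes both objectives are maximized and the front is mutually non-dominated, with every point dominating the reference; names are ours:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a bi-objective front w.r.t. a reference point,
    assuming maximization of both objectives."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front, reverse=True):   # f1 descending, f2 ascending
        # Each point adds the rectangle above the previous f2 level.
        hv += (f1 - ref[0]) * (f2 - prev_f2)
        prev_f2 = f2
    return hv
```

Larger values indicate that the front dominates a bigger portion of the objective space, which is why the stop-condition comparison above is based on this metric.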

Parametric Sweep
The metaheuristics were configured for solving each problem as follows: (i) starting from default values, (ii) a parameter of the algorithm is selected to be tuned; (iii) then, 31 independent runs are performed for each value of the parameter in a range, while the others remain fixed; (iv) the configuration providing the best average behaviour based on hypervolume is selected, overwriting the default parameter value; (v) next, a non-configured parameter is selected, returning to step (ii). This configuration methodology ends when all parameters have been tuned. Table 2 shows the configurations selected (2Obj and 3Obj fields for the bi-objective and three-objective approaches, respectively) and the ranges studied.
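The one-parameter-at-a-time sweep described above can be sketched as follows, where `run_fn` stands for one independent run returning a hypervolume; all names are our assumptions:

```python
import statistics

def parametric_sweep(run_fn, param_ranges, defaults, runs=31):
    """One-at-a-time tuning: for each parameter, try every candidate
    value (others fixed) and keep the value with the best mean
    hypervolume, overwriting the default."""
    config = dict(defaults)
    for name, values in param_ranges.items():
        best_val, best_hv = config[name], float('-inf')
        for v in values:
            trial = dict(config, **{name: v})
            hv = statistics.mean(run_fn(trial) for _ in range(runs))
            if hv > best_hv:
                best_hv, best_val = hv, v
        config[name] = best_val          # freeze before tuning the next one
    return config
```

Note that this greedy procedure ignores parameter interactions, which is the usual trade-off of one-at-a-time tuning.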

Experimental Results
This section includes the experimental results obtained by solving both the bi-objective and the three-objective RNPPs. Table 3 shows the average hypervolume for MO-VNS*, MO-ABC, MO-VNS, MO-FA, MO-GSA, and MOEA/D solving the bi-objective RNPP for each test case and stop condition. In this table, hypervolumes in bold correspond to results never published before. Note that the hypervolumes for NSGA-II and SPEA2 are not shown, to simplify the table, because they were significantly lower than those of the other algorithms; instead, we refer readers to [46]. If we analyze how the hypervolumes change over the stop conditions, we see that most algorithms show a homogeneous growth, reaching an asymptotic trend at 400,000 evaluations. This means that the set of stop conditions selected is representative for studying convergence. On the other hand, if we analyze the tables focusing on the shaded cells, which show the higher (better) hypervolumes, we note that some algorithms seem to outperform others. To check whether the differences are significant, we analyze the data using the statistical methodology described next.

Bi-Objective Approach
First, we remove possible outliers from the hypervolume distributions. Next, we analyze whether the data follow a normal distribution through the Kolmogorov-Smirnov-Lilliefors and Shapiro-Wilk tests, with hypotheses H_0: the data follow a normal distribution, and H_1: otherwise. As both tests provided p-values lower than 0.05 in all cases, we should consider a non-parametric test to compare the algorithms two by two. Specifically, as the samples are independent, we consider the Wilcoxon-Mann-Whitney test with hypotheses H_0: Hyp_i ≤ Hyp_j and H_1: Hyp_i > Hyp_j for each pair of algorithms (i, j). Table 4 shows the percentage of test cases where a metaheuristic is significantly better than any other, based on the p-values obtained, analyzed with a significance level of 0.05, and according to instance size.
Table 3. Median hypervolume obtained solving the bi-objective RNPP.
Analyzing Table 4, for 50 × 50 instances, we check that MO-ABC, MO-FA, MO-GSA, and MOEA/D provide a similar behavior with up to 9.71%, followed by MO-VNS and MO-VNS* with up to 6.31% and 4.85%, respectively. For 100 × 100 instances, MO-VNS provides the best behavior with up to 11.37%, followed by MO-FA and MO-ABC with up to 9.51% and 9.41%, respectively. For 200 × 200 instances, MO-FA provides the best behavior with up to 10.77%, followed by MO-VNS and MO-VNS* with up to 10.03%. For 300 × 300 instances, MO-FA provides the best behavior with up to 13.71%, followed by MO-ABC and MO-VNS* with up to 8.75% and 7.45%, respectively. Over all instances, MO-FA provides the best behavior with up to 11.49%, followed by MO-ABC and MO-VNS* with up to 8.88% and 8.40%, respectively. From this hypervolume analysis, we conclude that MO-FA provides the best behavior solving the problem in general. Focusing on instance size, MO-FA is also the best algorithm for small and large instances (50 × 50, 200 × 200, and 300 × 300), and MO-VNS is better suited for medium-size instances (100 × 100).
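The core of the Wilcoxon-Mann-Whitney comparison is the U statistic, i.e., the number of pairwise wins of one sample over the other. A pure-Python sketch (a statistics package would additionally supply the p-value; names are ours):

```python
def mann_whitney_u(x, y):
    """U statistic of the Wilcoxon-Mann-Whitney test for two
    independent samples: the count of pairs (a in x, b in y) where
    a beats b, with ties counted as 1/2."""
    u = 0.0
    for a in x:
        for b in y:
            u += 1.0 if a > b else 0.5 if a == b else 0.0
    return u
```

A U close to len(x) * len(y) supports H_1 (the first sample tends to yield larger hypervolumes), while a U near zero supports the opposite.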

In Table 4, the behavior of MO-VNS and MO-VNS* is significantly different although their focus is almost the same: MO-VNS is better for small instances, while MO-VNS* is better for large ones. This is because, for small instances, the perturbation mechanism in MO-VNS* reduces the number of evaluations available for exploitation, while the search space is not a concern, resulting in a better performance of MO-VNS. On the other hand, the perturbation mechanism in MO-VNS* is useful for exploring the bigger search spaces of large instances, where the evaluations consumed by the process are not a concern, resulting in a better performance of MO-VNS*. Table 5 shows the average set coverage metric of an algorithm compared to the others according to instance size. The metric was calculated using the median Pareto front obtained by each algorithm solving a given instance. Analyzing this table, for 50 × 50 instances, we check that MO-ABC, MO-FA, and MOEA/D provide a similar behavior with up to 100.00%, followed by MOEA/D and MO-VNS with up to 90.00% and 88.28%, respectively. For 100 × 100 instances, MO-VNS provides the best behavior with up to 87.56%, followed by MO-VNS* and MO-FA with up to 83.11% and 75.93%, respectively. For 200 × 200 instances, MO-FA provides the best behavior with up to 77.52%, followed by MO-VNS* and MO-VNS with up to 68.92% and 66.62%, respectively. For 300 × 300 instances, MO-FA provides the best behavior with up to 86.47%, followed by MO-VNS* and SPEA2 with up to 51.86% and 45.46%, respectively. Over all instances, MO-FA is the best algorithm with up to 81.47%, followed by MO-VNS* and MO-ABC with up to 63.82% and 58.92%, respectively. From this set coverage analysis, we reach similar conclusions as for the hypervolume: MO-FA provides the best behavior solving the problem on average. If we focus on instance size, MO-VNS is the best algorithm for medium-size instances and MO-FA is better suited for small and large ones.
Table 6 shows the average hypervolume for MO-VNS*, MO-VNS, MO-FA, MO-GSA, and MOEA/D when solving the three-objective RNPP for each test case and stop condition. In this table, hypervolumes in bold correspond to results not previously published. The hypervolumes for NSGA-II, SPEA2, and MO-ABC are omitted to simplify the table because they were significantly lower than those of the other algorithms; we refer readers to [47] instead.
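For context, the hypervolume indicator measures the objective-space region dominated by a front and bounded by a reference point. A minimal sketch for the bi-objective case is shown below (assuming minimization and a reference point dominated by every front point; the three-objective tables require a 3-D computation, e.g. by slicing):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a bi-objective non-dominated front (minimization):
    the area dominated by the front and bounded by the reference point,
    computed with a single sweep after sorting by the first objective."""
    pts = sorted(front)            # ascending f1 => descending f2 on a clean front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangle added by this point
        prev_f2 = f2
    return hv
```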

Three-Objective Approach
Analyzing Table 6, we verify that the algorithms show a homogeneous growth with an asymptotic trend at 400,000 evaluations, as in the bi-objective approach. Hence, the stop condition is representative for analyzing the performance of the algorithms. The highest hypervolumes in Table 6 are shaded, suggesting that some algorithms outperform others. The differences observed are analyzed following the same methodology as in the bi-objective study. After removing possible outliers, we checked that the data do not follow a normal distribution, and we therefore considered the same hypotheses as before for the Wilcoxon-Mann-Whitney test. As a result, Table 7 shows the percentage of test cases where a metaheuristic is significantly better than any other according to the p-values obtained with a significance level of 0.05.

Analyzing Table 7, for 50 × 50 instances, MO-FA provides the best behavior with up to 13.36%, followed by MOEA/D and MO-ABC with up to 9.92% and 9.54%, respectively. For 100 × 100 instances, MO-VNS provides the best behavior with up to 11.55%, followed by MO-VNS* and MO-FA with up to 10.92% and 8.61%, respectively. For 200 × 200 instances, MO-FA provides the best behavior with up to 12.50%, followed by MOEA/D and SPEA2 with up to 11.00% and 7.00%, respectively. For 300 × 300 instances, MO-FA provides the best behavior with up to 11.90%, followed by MOEA/D and MO-ABC with up to 11.17% and 8.46%, respectively. Over all instances, MO-FA provides the best behavior with up to 11.62%, followed by MOEA/D and MO-VNS with up to 9.74% and 6.37%, respectively. From this hypervolume-based analysis, MO-FA is the best algorithm in general, as well as for small and large instances, while MO-VNS is the best algorithm for medium-size instances. Table 8 shows the average set coverage of each algorithm compared to the others for the median Pareto front.
Analyzing this table, for 50 × 50 instances, MO-GSA is the best algorithm with up to 81.78%, followed by MO-FA and MO-VNS* with up to 72.00% and 67.86%, respectively. For 100 × 100 instances, MO-VNS* is the best algorithm with up to 73.19%, followed by MO-VNS and MO-FA with up to 72.36% and 65.68%, respectively. For 200 × 200 instances, MO-FA is the best algorithm with up to 83.71%, followed by MO-VNS* and MO-VNS with up to 49.37% and 48.77%, respectively. For 300 × 300 instances, MO-ABC is the best algorithm with up to 63.12%, followed by MO-FA and MO-VNS with up to 51.12% and 40.80%, respectively. Over all instances, MO-FA is the best algorithm with up to 67.52%, followed by MO-VNS and MO-VNS* with up to 50.80% and 49.61%, respectively. From this study, we check that MO-FA is the best algorithm on average, followed by MO-VNS and MO-VNS*. Focusing on instance size, MO-GSA is the best algorithm for small instances, MO-VNS* for medium-size instances, and MO-FA for large instances.
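The significance analysis above relies on the Wilcoxon-Mann-Whitney test. A self-contained sketch using the normal approximation (reasonable for the sample sizes typical of metaheuristic comparisons) is given below; the experiments presumably used a statistical package, and this version omits the tie correction in the variance for brevity:

```python
import math

def mann_whitney_two_sided(x, y):
    """Two-sided Wilcoxon-Mann-Whitney (rank-sum) test via the normal
    approximation. Returns (U1, p_value)."""
    n1, n2 = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):                      # assign midranks to tie groups
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1          # average 1-based rank
        i = j + 1
    r1 = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))        # = 2 * (1 - Phi(|z|))
    return u1, p
```

With a significance level of 0.05, a p-value below 0.05 rejects the hypothesis that the two samples of hypervolumes come from the same distribution.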

Bi-Objective vs. Three-Objective Approaches
Comparing the results obtained for the bi-objective and three-objective approaches in the two previous subsections, we verify that MO-FA could be recommended as a general solving method for both. Focusing on instance size, MO-FA provides robust performance on small and large instances, while MO-VNS provides robust performance on medium ones. This conclusion is supported by Figure 1, which shows the average performance of each metaheuristic according to the set of instances addressed. The figure was generated by combining hypervolume and set coverage values through a 1-2 standardization. Thus, Figure 1a-d shows the average performance of the metaheuristics for small, medium, large, and all instances, respectively.
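The exact 1-2 standardization formula is not spelled out here; a plausible reading is a per-metric min-max rescaling into [1, 2] followed by averaging across the two metrics, sketched below (the function names and the tie handling are assumptions):

```python
def standardize_1_2(values):
    """Min-max rescaling of a metric into [1, 2], so that metrics with very
    different scales (hypervolume vs. set coverage) can be averaged fairly."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.5] * len(values)       # all algorithms tie: neutral score
    return [1.0 + (v - lo) / (hi - lo) for v in values]

def combined_score(hypervolumes, coverages):
    """Per-algorithm average of the two standardized metrics (higher is better)."""
    hv = standardize_1_2(hypervolumes)
    cov = standardize_1_2(coverages)
    return [(h + c) / 2 for h, c in zip(hv, cov)]
```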
The difference in performance between these two metaheuristics, as well as with respect to the others, could be due to the movement operator used in MO-FA. This operator fits the RNPP well, producing new solutions by moving the RNs of an individual toward promising solutions, so that the population evolves toward better regions of the search space. This is why MO-FA provides robust performance independently of the instance size.
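The movement operator described above can be sketched as follows. This is an illustration of the standard firefly update applied to lists of RN coordinates, not the paper's exact operator; beta0, gamma, and alpha are illustrative values rather than the tuned settings:

```python
import math
import random

def firefly_move(x_i, x_j, beta0=1.0, gamma=0.01, alpha=0.5, rng=random):
    """Move solution x_i toward a brighter (better) solution x_j with an
    attraction that decays with their squared distance, plus a random step.
    x_i, x_j: lists of (x, y) relay-node positions."""
    r2 = sum((a - b) ** 2 + (c - d) ** 2
             for (a, c), (b, d) in zip(x_i, x_j))   # squared Euclidean distance
    beta = beta0 * math.exp(-gamma * r2)            # attractiveness toward x_j
    return [(a + beta * (b - a) + alpha * (rng.random() - 0.5),
             c + beta * (d - c) + alpha * (rng.random() - 0.5))
            for (a, c), (b, d) in zip(x_i, x_j)]
```

Because the attraction pulls every RN of a solution toward the corresponding RN of a non-dominated solution, the whole population drifts toward promising deployments regardless of the instance size.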
In the case of MO-VNS, the algorithm generates new solutions through an incremental local search, i.e., a search in the vicinity of the current solution. This local search fits the RNPP well, improving a previous solution by moving the RNs within a bounded region. However, the performance of MO-VNS rests mainly on this local search. Consequently, MO-VNS is less competitive than other algorithms on large instances, where the search space is large and the algorithm falls short in exploration. On small instances, MO-VNS is also less competitive because the search space is reduced, so it is relatively easy for any metaheuristic to find good solutions. As a result, MO-VNS performs best on medium instances.
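The bounded local move described above can be illustrated as follows. This is a hypothetical neighbourhood structure N_k, where larger k allows larger displacements; `step` and `area` are made-up parameters, not the paper's:

```python
import random

def vns_neighbor(solution, k, step=5.0, area=(200.0, 200.0), rng=random):
    """Neighbourhood structure N_k for the RNPP: displace k randomly chosen
    relay nodes by at most k*step per coordinate, clamped to the deployment
    area, so the perturbation strength grows with the neighbourhood index."""
    new = list(solution)
    for idx in rng.sample(range(len(new)), min(k, len(new))):
        x, y = new[idx]
        x = min(max(x + rng.uniform(-k * step, k * step), 0.0), area[0])
        y = min(max(y + rng.uniform(-k * step, k * step), 0.0), area[1])
        new[idx] = (x, y)
    return new
```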

Final Remarks
This work addresses the RNPP in WSNs through two MO formulations, with the purpose of identifying robust methods for solving this deployment problem from a realistic perspective. The first formulation includes two objectives, energy cost and average sensitivity, and the second includes three objectives, the two previous ones plus network reliability. On this basis, the authors study how a wide range of MO metaheuristics from the three main groups in the field performs: evolutionary algorithms (NSGA-II, SPEA2, and MOEA/D), swarm intelligence algorithms (MO-ABC, MO-FA, and MO-GSA), and trajectory algorithms (MO-VNS and MO-VNS*).
The eight MO metaheuristics were applied to four deployment scenarios of increasing complexity, in both optimization problems, considering different numbers of RNs and communication conditions. The experimental results were analyzed through an accepted statistical methodology based on two standard MO metrics, hypervolume and set coverage. As a result, we concluded that MO-FA provides robust performance independently of the number of objectives and the instance size, so MO-FA could be recommended as a general solving method for this problem. Additionally, focusing on instance size, MO-FA provides robust performance on small and large instances, while MO-VNS provides the best performance on medium instances.
As future lines of research, it could be interesting to extend the network model considered, for instance, by simulating additional MAC and routing protocols. Moreover, it would be interesting to transfer the results obtained to a real WSN deployment.

Conflicts of Interest:
The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Notation
iec: initial energy charge of the sensors, iec > 0.
k: information packet size in bits, k > 0.
k_a: random value representing the number of nearest solutions used to obtain a new solution by a scout bee in the MO-ABC algorithm, k_a = 2, ..., 11.
λ_f: parameter determining the performance of the attractiveness-based movement operator in MO-FA, λ_f ∈ (0, ∞).
limit: threshold determining when a solution managed by an employed bee should be handed over to a scout bee in MO-ABC.
maxF(·): upper bound of a fitness function.
minF(·): lower bound of a fitness function.
mut_f: mutation probability in the MO-FA algorithm.
mut_ga: mutation probability in the MO-GSA algorithm.
mut_m: mutation probability in the MOEA/D algorithm.
mut_n: mutation probability in the NSGA-II algorithm.
mut_s: mutation probability in the SPEA2 algorithm.
mut_v: mutation probability in the MO-VNS* algorithm.
n_1, n_2: coordinates of the normal vector v_n to the plane I.
n_{E_1,E_2}: number of divisions on the segment E_1E_2.
neigh_m: number of neighbouring reference points associated with a reference point in MOEA/D.
neigh_v: number of neighbourhood structures in the MO-VNS algorithm.
ns_v: parameter limiting how different a solution can be according to the neighbourhood structure selected in MO-VNS, ns_v ∈ [1, ∞).
P_i(t): number of packets sent by the sensor i ∈ S_s(t) at time t > 0.
S_s(t): set of sensor coordinates such that the energy charge is greater than 0 and there is a path to the sink node, both at time t > 0; S_s(t) ⊆ S̄_s, ∀i ∈ S_s(t), i = (x, y), where x ∈ [0, d_x] and y ∈ [0, d_y].
s_s(t): number of sensors whose energy charge is greater than 0 and that have a path to the sink node at time t > 0; it is the cardinality of S_s(t), s_s(t) ≤ s̄_s.
S_v^g: population of unlimited size saving the solutions from P_v^g considered to explore the search space during iteration g in the MO-VNS algorithm.
τ: set of time periods, τ = {0, 1, 2, ...}.
t: time instant, t > 0.
t_n: network lifetime of the WSN based on the coverage threshold co_th.
v_n: normal vector (n_1, n_2) to the plane I.
when_sc_f: threshold defining when the anti-stagnation mechanism is performed in MO-FA, when_sc_f ∈ [0, 1].
when_sc_ga: threshold defining when the anti-stagnation mechanism is performed in MO-GSA, when_sc_ga ∈ [0, 1].
w_i^c(t): variable providing the next device in the minimum path between i ∈ S_s(t) and the sink node at t > 0, w_i^c(t) ∈ {S_s(t) ∪ S̄_r} + c − i.
Y: set of points evenly distributed in the plane I, Y = {r_1, ..., r_{ps_m}}.
z_{j,i}^c(t): variable assuming 1 if i ∈ S_s(t) is in the minimum path between j ∈ S_s(t) and the sink node c at t > 0, and 0 otherwise.