A Population-Based Iterated Greedy Algorithm for Maximizing Sensor Network Lifetime

Finding dominating sets in graphs is very important in the context of numerous real-world applications, especially in the area of wireless sensor networks. This is because network lifetime in wireless sensor networks can be prolonged by assigning sensors to disjoint dominating node sets. The nodes of these sets are then used by a sleep–wake cycling mechanism in a sequential way; that is, at any moment in time, only the nodes from exactly one of these sets are switched on while the others are switched off. This paper presents a population-based iterated greedy algorithm for solving a weighted version of the maximum disjoint dominating sets problem for energy conservation purposes in wireless sensor networks. Our approach is compared to the ILP solver CPLEX, to an existing local search technique, and to our earlier greedy algorithm. This is performed through its application to 640 random graphs from the literature and to 300 newly generated random geometric graphs. The results show that our algorithm significantly outperforms the competitors.


Introduction
The field of wireless sensor networks (WSNs) has been enjoying a lot of attention in the last 20 years, both in research and in industry. This is certainly due to a multitude of different applications, including environmental monitoring, medical and health applications, security surveillance, transportation applications, structural applications, and emergency operations [1,2], just to name a few. WSNs are generally composed of a number of small devices equipped with one or more sensors, limited storage capacity, a limited power supply, and a radio communication system. As the weight of sensor devices often plays an important role, the power supply, for example by means of a battery, is generally limited, and battery-saving techniques are often used. The lifetime of a sensor device (in hours) may be computed by dividing the battery capacity (in watt-hours) by the average power drain (in watts). However, the estimation of the lifetime of a sensor node is not a trivial task (see [3]) because energy consumption is the result of various factors, including, for example, the environmental temperature.
For these reasons, one of the principal research topics concerning WSNs is about network lifetime extension, while at the same time, providing sufficient communication reliability and sensing coverage. Note that in this context, the term network lifetime refers to the time during which the network is fully operational with respect to its tasks. In other words, the network lifetime is the time duration in which the overall sensing coverage is maintained. The lifetime of a WSN, therefore, heavily depends on the energy consumption of the individual sensor devices. Real-world examples of mechanisms for maximizing the network lifetime are manifold. They include, but are not limited to, smart agriculture monitoring [4], structural health monitoring [5], human activity monitoring [6], and road traffic monitoring [7]. Power-saving strategies such as the ones found in these examples can, according to [8], be classified as belonging to one of the following groups:
• Sleep–wake cycling, also referred to as duty cycling: sensor devices alternate between active and sleep mode;
• Power control through the adjustment of the transmission range of the radio communication systems;
• Routing and data gathering in an energy-efficient way;
• Reduction of the amount of data transmissions and avoidance of useless activity.
In this paper, we will provide a technique for WSN lifetime extension that falls into the first category. More precisely, our technique makes use of the so-called communication graph. The nodes of this graph are the sensor devices belonging to the network (together with their locations). Two such nodes are connected by an edge if the corresponding sensor devices can communicate with each other via their radio communication systems. Note that sensor nodes have at least two tasks, also known as functionalities: (1) sensing data and (2) processing data and forwarding them to a base station. Of these two tasks, the latter is by far the more energy-intensive. In order to keep energy expenditure to a minimum, the nodes in a sensor network may be organized in terms of dominating sets in which the dominators (that is, the sensor nodes that form part of the dominating set) assume the task of cluster heads that take care of data processing and forwarding. Note that sensor nodes are never switched off in this model: those sensor nodes that do not form part of the (current) dominating set save energy by not having to perform data processing and forwarding, while data sensing is performed by all sensor nodes at all times. Such a model (or similar models) has been used in a wide range of papers in the literature; for example, see refs. [9][10][11]. For the above-mentioned reasons, our technique organizes the sensor nodes into a number of disjoint dominating sets, which are used, one after the other, for data processing and data forwarding.

Necessary Graph Theoretic Concepts
This paper makes use of some basic definitions and notations from graph theory. The most important ones are outlined in the following. (For a more profound introduction, the interested reader may refer to [12].) First, the communication graph is modeled by means of an undirected graph, G = (V, E), where V is the set of nodes and E is the set of edges connecting (some of) the nodes. Two nodes, v ≠ u ∈ V, which are connected by an edge, (v, u) ∈ E, are called neighbors. They may also be called adjacent nodes. The open neighborhood of a node, v ∈ V, is defined as N(v) := {u ∈ V | (v, u) ∈ E}. Sometimes, however, it is necessary to refer to the closed neighborhood N[v] of a node, v ∈ V, which is defined by adding v to N(v), that is, N[v] := N(v) ∪ {v}. Next, the degree deg(v) of a node, v ∈ V, is defined as the cardinality of N(v), that is, deg(v) := |N(v)|. The concept of neighborhood can also be extended from single nodes to sets of nodes; in particular, the open neighborhood of a set, S ⊆ V, is defined as N(S) := (∪_{v ∈ S} N(v)) \ S. In this context, we also formally introduce the terms dominating set, domatic partition, and domatic number of a graph. First, a subset, D ⊆ V, in which each node, v ∈ V \ D, has at least one neighbor that forms part of D is called a dominating set of G. A node, v ∈ D, where D is a dominating set, is said to cover all its neighbors, in addition to itself. A trivial example of a dominating set of an undirected graph, G = (V, E), is V itself. Next, a set D = {D_1, D_2, ..., D_k} of subsets D_i ⊆ V is called a domatic partition of a given undirected graph, G = (V, E), if the following two conditions are fulfilled: (1) each D_i (i = 1, ..., k) is a dominating set of G, and (2) all sets of D are pairwise disjoint, that is, D_i ∩ D_j = ∅ for all i ≠ j. Note that, in case the set of remaining vertices, V \ ∪_{D_i ∈ D} D_i, is not a dominating set, a domatic partition D of G covering all of V can easily be obtained by adding all vertices from V \ ∪_{D_i ∈ D} D_i, for example, to D_k.
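These two defining conditions of a domatic partition can be illustrated with a short sketch. This is not part of the algorithms discussed in this paper; the adjacency-list representation and all helper names are our own choices.

```python
# Sketch: checking the two conditions that define a domatic partition of an
# undirected graph G = (V, E), represented as an adjacency-list dictionary.

def is_dominating_set(adj, D):
    """D ⊆ V dominates G iff every node outside D has a neighbor in D."""
    return all(adj[v] & D for v in adj if v not in D)

def is_domatic_partition(adj, sets):
    """All sets must be dominating sets of G and pairwise disjoint."""
    if not all(is_dominating_set(adj, D) for D in sets):
        return False
    seen = set()
    for D in sets:
        if seen & D:          # overlap with a previously seen set
            return False
        seen |= D
    return True

# Toy example: the 4-cycle 0-1-2-3 admits the domatic partition {{0,2}, {1,3}}.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_domatic_partition(adj, [{0, 2}, {1, 3}]))  # True
```

Note that the 4-cycle example also matches the δ + 1 upper bound discussed next: its minimum degree is 2, and it indeed admits no more than δ + 1 = 3 disjoint dominating sets.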
The domatic number of an undirected graph, G = (V, E), is defined as the size of the largest domatic partition of G; that is, the domatic number of G is |D*|, where D* := argmax{|D| : D is a domatic partition of G}. It was shown in the literature that the domatic number of a graph, G, can be at most δ + 1, where δ := min{deg(v) | v ∈ V}. The problem of identifying a domatic partition of maximum size in an undirected graph, G, is sometimes called the maximum disjoint dominating sets (MDDS) problem.

Graph Problems Used to Model WSN Lifetime Maximization
The related literature offers different approaches for the maximization of the sensor network lifetime. Most of these approaches have modeled this problem either in terms of the set K-cover problem (also known as the target coverage problem) or as the MDDS problem. Modeling the problem as a K-cover problem was performed for the first time in [13]. In the same work, the problem was shown to be NP-hard. In this context, note that the set K-cover problem is defined on the basis of a bipartite graph in which the first set of nodes are the sensor devices and the second set of nodes are the sensing targets. The aim of the problem is to partition the sensor devices into a maximum number of disjoint sets, with each one covering all targets. As mentioned before, these disjoint sets are then activated one after the other in order to keep the network alive for the maximum period of time. Due to the NP-hardness of the problem, a range of approximate algorithms have been proposed in the literature in order to solve it. Examples, which also include algorithms for closely related problem formulations, are a greedy heuristic [13], some memetic algorithms [14][15][16], a cuckoo search approach [17], and finally, a genetic algorithm [18].
As already indicated above, the problem of maximizing sensor network lifetime is also frequently modeled as an MDDS problem, the goal of which is to identify a partition of the sensor devices into a maximum number of disjoint dominating sets of the corresponding communication graph. The MDDS problem, which belongs to the important family of dominating set problems [19][20][21], was shown to be NP-hard for general graphs [22]. Cardei et al. [23] proved the NP-completeness of a special case of the MDDS problem known as the 3-disjoint dominating sets problem. This variant deals with the question of whether or not a given graph contains three disjoint dominating sets. Nguyen and Huynh [9] proved that the 3-disjoint dominating sets problem remains NP-complete even for the special case of planar unit disk graphs. Moreover, they introduced and evaluated the performance of four greedy heuristics for the general MDDS problem. In [23], it was also proved that, unless P = NP, the MDDS problem admits no polynomial-time approximation with a performance guarantee better than 1.5. Finally, the same authors introduced a graph coloring-based heuristic approach. Next, Feige et al. [24] showed that any graph with n nodes, a maximum degree of ∆, and a minimum degree of δ has a domatic partition of size (1 − o(1))(δ + 1)/ln ∆. Note that the term o(1) tends to zero with increasing n. Moreover, the same authors were able to show the non-existence of an approximation algorithm with an approximation ratio of (1 + o(1)) ln n unless NP ⊆ DTIME(n^{O(log log n)}). Finally, they also introduced a centralized algorithm generating a domatic partition of size Ω(δ/ln ∆). Moscibroda and Wattenhofer [25] regarded the MDDS problem as one of maximizing the cluster lifetime. They introduced a randomized, distributed algorithm having, with high probability, a performance ratio of O(log n).
Finally, a greedy heuristic for the MDDS problem, with a time complexity of O(n^3), was described in [26].

Existing Work for the MWDDS Problem and Our Contribution
Recently, a weighted variant of the MDDS problem, in which the weights of the nodes of a given undirected graph, G = (V, E), indicate the remaining lifetime of single sensor devices, was introduced in [27]. The authors labeled this problem as the maximum weighted disjoint dominating sets (MWDDS) problem. The lifetime of a dominating set in G is hereby defined as the minimum of the lifetimes of the nodes that form part of the set. The MWDDS problem asks to identify a domatic partition that maximizes the sum of the lifetimes of the corresponding dominating sets.
In addition, three algorithms based on a local search were provided in [27]. Each of these local search methods takes a solution generated by a greedy heuristic from [26] as the initial solution. The proposed local search methods make use of swap neighborhoods, trying, for example, to exchange nodes between different dominating sets and to incorporate nodes not belonging to any dominating set of the current solution. The three local search methods differ in the type of swaps that are considered. The current state-of-the-art approach for the MWDDS problem is, surprisingly, a greedy heuristic that was introduced in [28]. This algorithm generates one dominating set after the other by adding one node at a time to the partial dominating set under construction.
Given that the greedy heuristic from [28] seems to be very powerful, in this work, we propose a metaheuristic extension of this greedy heuristic. More specifically, we propose a population-based iterated greedy (PBIG) algorithm along the lines of [29,30]. Just as with any other iterated greedy (IG) approach [31], our algorithm iteratively generates new solutions to the problem by partially destroying incumbent solutions and reconstructing the obtained partial solutions by means of a randomized greedy technique. As we will show in the section on the experimental results, our proposed approach clearly outperforms all existing algorithms for the MWDDS problem. In addition to 640 problem instances from the related literature, we likewise evaluate our algorithm, in comparison to the competitors, on 300 random geometric graphs.

Paper Organization
The remainder of this paper is organized as follows. The MWDDS problem is formally introduced, together with notations and basic definitions, in Section 2. This also includes a graphical example and an ILP model for the MWDDS problem. In Section 3, we introduce our algorithmic proposal, a population-based iterated greedy algorithm for solving the MWDDS problem. Finally, Section 4 presents a comprehensive experimental evaluation and a comparison to the current state of the art, while Section 5 summarizes our paper and offers directions for future lines of work.

The MWDDS Problem
Let G = (V, E, lifetime) be an undirected, node-weighted graph. As already mentioned before, V refers to the set of nodes, while E ⊆ V × V is the set of edges. Moreover, lifetime : V → R+ is a weight function defined over the set of nodes, assigning a positive weight, lifetime(v) > 0, to each node, v ∈ V. The maximum weighted disjoint dominating sets (MWDDS) problem is then defined such that any domatic partition D of G is a valid solution. The objective function value of a valid solution, D = {D_1, ..., D_|D|}, is defined as follows:

f(D) := Σ_{i=1,...,|D|} min{lifetime(v) | v ∈ D_i}    (1)

In other words, the quality of a subset, D_i, is determined as the minimum lifetime of all its nodes. The objective is to find a valid solution, D*, that maximizes objective function f. Figure 1 shows a graphical example of the MWDDS problem. In particular, Figure 1a shows an undirected graph on seven nodes. The labels printed within the nodes have the format x, y, where x is the node's ID and y is the node's lifetime. These lifetime values are normalized to the range [0, 1] for the sake of simplicity. Furthermore, the graphics in Figure 1b,c show solutions to this problem instance; in particular, Figure 1c demonstrates the optimal solution, D* := {D_1 = {1, 3, 5}, D_2 = {2, 4, 6}}. The lifetime of D_1 is 0.7, while the lifetime of D_2 is 0.1. Therefore, the objective function value f(D*) of D* is 0.8. Since this small sample graph contains a node with a degree of 1, any valid solution can contain at most two disjoint dominating sets.
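The objective function can be sketched in a few lines. The lifetime values below are illustrative stand-ins chosen so that min over D_1 is 0.7 and min over D_2 is 0.1; they are not the values printed in Figure 1.

```python
# Sketch of the MWDDS objective: f(D) = Σ_i min{lifetime(v) | v ∈ D_i}.

def objective(solution, lifetime):
    """Sum, over all dominating sets, of the minimum node lifetime."""
    return sum(min(lifetime[v] for v in D) for D in solution)

# Illustrative lifetimes (not those of Figure 1), consistent with
# min(D_1) = 0.7 and min(D_2) = 0.1.
lifetime = {1: 0.9, 2: 0.1, 3: 0.7, 4: 0.8, 5: 0.7, 6: 0.5, 7: 0.4}
D_star = [{1, 3, 5}, {2, 4, 6}]
print(objective(D_star, lifetime))  # 0.7 + 0.1 ≈ 0.8
```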

ILP Model for the MWDDS Problem
The following integer linear programming (ILP) model for the MWDDS problem was introduced in [28]. First, we describe the sets of variables and their domains utilized by this model:

1. A binary variable, x_ij, for each combination of a node, v_i (i = 1, ..., n), and a possible disjoint set, D_j (j = 1, ..., δ(G) + 1), indicates whether or not node v_i forms part of the dominating set D_j. That is, when x_ij = 1, node v_i is assigned to the dominating set D_j. In this context, remember that (1) δ(G) := min{deg(v) | v ∈ V}, and (2) the number of disjoint dominating sets in a graph is bounded from above by δ(G) + 1.

2. A binary variable, y_j, for each possible dominating set, D_j (j = 1, ..., δ(G) + 1), indicates whether or not the j-th dominating set is utilized.

3. Finally, a real-valued variable, z_j ∈ [0, M], is used to store the weight of the j-th dominating set. In our implementation of the model, we set M to the maximum node lifetime, that is, M := max{lifetime(v) | v ∈ V}.

The MWDDS problem can then be stated in terms of an ILP model in the following way. The objective function (Equation (2)) is the sum of the weight values of all dominating sets. Constraints (3) ensure that each node is assigned to one dominating set at most; in this way, the chosen dominating sets are disjoint. Next, constraints (4) are the usual dominating set constraints, that is, they make sure that the set of nodes assigned to the j-th set (if utilized) forms a dominating set of G. Furthermore, constraints (5) make sure that nodes can only be assigned to utilized dominating sets. Constraints (6) correctly determine the lifetimes of the utilized dominating sets. This is accomplished by setting the value of the variable z_j of the j-th dominating set (if utilized) to the minimum of the lifetime values of all nodes assigned to it. Next, note that the objective function (Equation (2)) is only correct if the weight value of the dominating sets not utilized is set to zero. This is ensured by constraints (7). The remaining two sets of constraints, (8) and (9), are not required for the correctness of the ILP. They were introduced for tie-breaking purposes and have the following effect: (1) if k dominating sets are utilized, they are assigned to sets 1, ..., k, and (2) the utilized dominating sets are ordered according to a non-increasing weight value.
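A model consistent with these constraint descriptions can be sketched as follows. This is our reconstruction, with y_j ∈ {0, 1} indicating whether the j-th set is utilized; the exact formulation is the one given in [28].

```latex
\begin{align}
\max\; & \textstyle\sum_{j=1}^{\delta(G)+1} z_j && \text{(2)}\\
\text{s.t.}\; & \textstyle\sum_{j=1}^{\delta(G)+1} x_{ij} \le 1 && \forall\, i = 1,\dots,n && \text{(3)}\\
& \textstyle\sum_{v_k \in N[v_i]} x_{kj} \ge y_j && \forall\, i,\, j && \text{(4)}\\
& x_{ij} \le y_j && \forall\, i,\, j && \text{(5)}\\
& z_j \le \mathit{lifetime}(v_i)\, x_{ij} + M\,(1 - x_{ij}) && \forall\, i,\, j && \text{(6)}\\
& z_j \le M\, y_j && \forall\, j && \text{(7)}\\
& y_j \ge y_{j+1} && \forall\, j < \delta(G)+1 && \text{(8)}\\
& z_j \ge z_{j+1} && \forall\, j < \delta(G)+1 && \text{(9)}
\end{align}
```

In constraints (6), the big-M term deactivates the bound for nodes with x_ij = 0, so that z_j is effectively bounded by the minimum lifetime among the nodes assigned to set j.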

Proposed Algorithm
Our population-based iterated greedy (PBIG) algorithm is a population-based extension of the well-known iterated greedy (IG) metaheuristic [31]; that is, it produces a sequence of solutions by iterating over a constructive greedy heuristic in the following way. At each iteration, first, some part of the current/incumbent solution is removed, and second, the greedy heuristic is applied to the resulting partial solution in order to again obtain a complete solution. The first of these phases is called the destruction phase, while the second one is known as the reconstruction phase. A high-level description of our PBIG algorithm for solving the MWDDS problem is given in Algorithm 1. Apart from a problem instance, PBIG requires seven input parameters: (1) the population size (p_size), (2) the lower bound of the degree of greediness during solution construction (det_min), (3) the upper bound of the degree of greediness during solution construction (det_max), (4) the lower bound of the degree of solution destruction (destr_min), (5) the upper bound of the degree of solution destruction (destr_max), (6) the maximum number of iterations without an improvement of the best-so-far solution D_bsf before applying a restart (max_noimpr), and (7) the degree of partial solution removal (r_del). Moreover, note that in our algorithm, each solution D has two solution-specific parameters: the individual destruction rate, destr_D, and the individual degree of greediness, det_D. The use of all parameters will be carefully described below.

Algorithm 1 PBIG for the MWDDS problem.
Input: A problem instance G = (V, E, lifetime) and values for parameters p_size, destr_min, destr_max, det_min, det_max, max_noimpr, and r_del
1: P ← GenerateInitialPopulation(p_size)
2: D_bsf ← argmax{f(D) | D ∈ P}
3: cnt ← 0
4: while CPU time limit not reached do
5:    P_new ← ∅
6:    for each D ∈ P do
7:       D^p ← DestroyPartially(D)
8:       D̂ ← GreedyMWDDS(D^p)
9:       P_new ← P_new ∪ {D̂}
10:      AdaptParameters(D, D̂)
11:   end for
12:   D_ib ← argmax{f(D) | D ∈ P_new}
13:   if f(D_ib) > f(D_bsf) then D_bsf ← D_ib; cnt ← 0 else cnt ← cnt + 1 end if
14:   P ← SelectNextPopulation(P, P_new, cnt)
15: end while
16: output D_bsf

The algorithm works as follows. A set of p_size solutions for the initial population is generated in the function GenerateInitialPopulation() (see line 1 of Algorithm 1). Moreover, the best-so-far solution, D_bsf, and the counter cnt are initialized. Afterwards, the main loop of the algorithm starts. Each iteration consists of the following steps. Each solution, D ∈ P, is partially destroyed using procedure DestroyPartially(D) (see line 7 of Algorithm 1), resulting in a partial solution, D^p. On the basis of D^p, a complete solution, D̂, is then constructed using the procedure GreedyMWDDS(D^p) (see line 8 of Algorithm 1). Each newly obtained complete solution is stored in an initially empty new population, P_new. Moreover, the individual destruction rates and the individual degrees of greediness of D and D̂ are adapted in the function AdaptParameters(D, D̂). Then, the iteration-best solution, D_ib, is determined, and in case this solution improves over D_bsf, the non-improvement counter cnt is set back to zero; otherwise, this counter is incremented. As a last step of each iteration, procedure SelectNextPopulation(P, P_new, cnt) chooses the solutions for the population of the next iteration, keeping the population size constant at p_size at all times. Finally, the algorithm terminates when a given CPU time limit has been reached, and the best found solution is returned. The five procedures mentioned above are described in more detail below.
GenerateInitialPopulation(p_size): Each of the p_size solutions of the initial population is constructed by applying the procedure GreedyMWDDS(·) with the empty solution D_0 = ∅ as input. Note that this procedure depends on the degree of greediness, which is the same for all initial solutions because each empty partial solution, D_0, is initialized with det_D_0 = det_max, that is, the upper bound for the greediness of solution construction. Moreover, it is also initialized with destr_D_0 = destr_min, that is, the lower bound of the destruction rate is set as the initial value.
GreedyMWDDS(D^p): The reconstruction procedure follows the general principle of a greedy algorithm, which builds a complete solution step-by-step, selecting one additional node at each construction step. In this work, we adopt the recent greedy heuristic presented in [28]. However, we extend this greedy heuristic (1) in order to allow for randomized steps and (2) to be able to take a partial (possibly non-empty) solution as input. In other words, our randomized greedy mechanism takes as input a partial solution, D^p, which might be empty. Note that such a partial solution is composed of independent, partially destroyed dominating sets. Now, the construction of a complete solution, D = {D_1, ..., D_m}, is performed by consecutively dealing with the generation of D_i, starting from D_i^p, for all i = 1, ..., m. In this process, whenever i > |D^p| or D^p = ∅, D_i is initialized with the empty set. In the following, V_rem denotes the set that includes all nodes that are not yet added to a dominating set of the current (partial) solution. That is, when receiving a partial solution D^p as input, V_rem := V \ ∪_{D_i^p ∈ D^p} D_i^p. Below, we describe the way to obtain a dominating set, D_i, starting from a partial dominating set, D_i^p (possibly being empty). At the start, all nodes in V can be divided into three disjoint subsets with respect to D_i:

• Black nodes: nodes from D_i;
• Gray nodes: nodes that are not in D_i but are dominated by black nodes, that is, all nodes from N(D_i) \ D_i;
• White nodes: all nodes from V that are neither black nor gray.
With this classification of the nodes, we can define the white degree of a node, v ∈ V_rem, with respect to D_i, as the number of white nodes from the closed neighborhood of v:

deg_white(v) := |{u ∈ N[v] | u is white}|

To be able to choose the next node to be added to set D_i at the current construction step, all nodes from V_rem are evaluated using a greedy function denoted by score(·), which is adopted from [28]; note that, by construction, score(v) = 0 if and only if the closed neighborhood of v contains no white node. Then, the randomization incorporated in our greedy heuristic is implemented using a quality-based restricted candidate list (RCL). The size of the RCL is controlled by the solution-specific parameter, det_D ∈ [0, 1], called the degree of greediness. Its value is adaptive and depends on the quality of the generated solution, as explained further below.
Let score_min := min{score(v) | v ∈ V_rem} and score_max := max{score(v) | v ∈ V_rem}. The RCL then contains all nodes, v ∈ V_rem, whose score is greater than or equal to score_min + det_D · (score_max − score_min). Note that when det_D = 1, our solution construction procedure behaves like a deterministic greedy heuristic. On the other hand, setting det_D = 0 leads to a purely random construction. Finally, a node is selected uniformly at random from the RCL to be incorporated into the partial dominating set, D_i.
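The RCL step just described can be sketched as follows; the `scores` mapping stands for the score(·) values of the nodes in V_rem, and the function name is our own.

```python
# Sketch of the quality-based restricted candidate list (RCL): keep all
# candidates reaching score_min + det * (score_max - score_min), then pick
# one uniformly at random.
import random

def rcl_choice(scores, det, rng=random):
    s_min, s_max = min(scores.values()), max(scores.values())
    threshold = s_min + det * (s_max - s_min)
    rcl = [v for v, s in scores.items() if s >= threshold]
    return rng.choice(rcl)

# det = 1 keeps only maximum-score nodes (deterministic greedy behavior);
# det = 0 keeps every candidate (purely random construction).
scores = {"a": 3.0, "b": 1.0, "c": 2.0}
print(rcl_choice(scores, det=1.0))  # always "a"
```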
Once D_i is a dominating set, it might contain redundant nodes which, after identification, can be safely removed. In this context, note that a node is redundant if and only if every node from its closed neighborhood is dominated by at least two nodes from D_i. If, by removing redundant nodes, the node with the lowest lifetime in D_i can be removed, the overall lifetime of D_i is improved. After removing redundant nodes, D_i is placed in the solution, D, under construction. Afterwards, the set V_rem is updated accordingly before moving to the construction of the next dominating set, D_{i+1}. This solution construction process ends once no further dominating set can be generated from the nodes in V_rem. This occurs when it is impossible to complete the partial dominating set under construction because either (1) V_rem is empty or (2) no node from V_rem has a white closed neighbor. The pseudo-code of the complete procedure is shown in Algorithm 2.
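The redundancy test described above can be sketched as follows; the adjacency-list representation and helper names are our own.

```python
# Sketch: a node v ∈ D_i is redundant iff every node in N[v] is dominated by
# at least two nodes of D_i, so that removing v leaves D_i a dominating set.

def closed_nb(adj, v):
    """Closed neighborhood N[v] = N(v) ∪ {v}."""
    return adj[v] | {v}

def is_redundant(adj, D_i, v):
    assert v in D_i
    for u in closed_nb(adj, v):
        # number of dominators of u within D_i
        if len(closed_nb(adj, u) & D_i) < 2:
            return False
    return True

# In a triangle {0, 1, 2} with D_i = {0, 1}, node 1 is redundant, because
# node 0 alone already dominates every node of the graph.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_redundant(adj, {0, 1}, 1))  # True
```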
DestroyPartially(D): Let D = {D_1, D_2, ..., D_m} be the valid solution given as input to the destruction procedure. In the following, we outline the three strategies for destruction that are conducted sequentially: partial solution removal, worst node removal, and random node removal. In this context, it is important to note that, on the one hand, the best solution does not necessarily correspond to the solution with the maximum number of disjoint dominating sets; on the other hand, the size of a reconstructed solution after performing the destruction and reconstruction phases may differ from the size of the solution that served as input to these two phases. With this in mind, some disjoint dominating sets are completely removed as a first step of partial solution destruction. For this purpose, max{1, ⌊r_del · |D|⌋} randomly chosen dominating sets are removed from D, resulting in a partial solution, D^p = {D_1^p, ..., D_r^p}. Then, since the quality of a subset, D_i ∈ D, is determined as the minimum lifetime of all its nodes, keeping the node with the smallest lifetime during the partial destruction makes its further improvement impossible. For this reason, the removal of the node, v_worst := argmin{lifetime(v) | v ∈ D_i^p}, from each subset, D_i^p ∈ D^p, i = 1, ..., r, becomes necessary.
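The first two destruction strategies can be sketched as follows (random node removal is omitted here); the lifetime handling and all names are our own illustration.

```python
# Sketch of partial solution removal and worst-node removal.
import random

def destroy_partially(solution, lifetime, r_del, rng=random):
    sets = list(solution)
    # Strategy 1: remove max{1, floor(r_del * |D|)} randomly chosen sets.
    n_remove = max(1, int(r_del * len(sets)))
    keep = rng.sample(sets, len(sets) - n_remove) if len(sets) > n_remove else []
    # Strategy 2: drop the minimum-lifetime node from each remaining set,
    # since keeping it would block any improvement of that set's lifetime.
    return [D - {min(D, key=lambda v: lifetime[v])} for D in keep]
```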
Afterwards, as the third destruction strategy, ⌊destr_D · |D_i^p|⌋ randomly chosen nodes are removed from each remaining set, D_i^p ∈ D^p. Moreover, the following excerpt of Algorithm 2 shows the central construction step of procedure GreedyMWDDS(D^p):

if V_rem = ∅ then
   stopping_condition := true
else
   score_max := max{score(v) | v ∈ V_rem}
   if score_max = 0 then
      stopping_condition := true   {no node from V_rem has a white closed neighbor}
   else
      score_min := min{score(v) | v ∈ V_rem}
      construct the RCL on the basis of score_min, score_max, and det_D
      choose v* uniformly at random from the RCL
      for each node u ∈ N(v*) do
         if color(u) = WHITE then
            color(u) := GRAY
         end if
      end for
      color(v*) := BLACK
   end if
end if

AdaptParameters(D, D̂): whenever the reconstruction of a solution does not lead to an improvement, its degree of greediness is decreased, det_D := det_D − 0.1, and its destruction rate is increased, destr_D := destr_D + (destr_max − destr_min)/9. Once the value of det_D falls below the lower bound, det_min, it is set back to the upper bound, det_max. In the same way, once the value of destr_D exceeds the upper bound, destr_max, it is set back to the lower bound, destr_min.
Note that the constants 0.1 and 9 were fixed after preliminary experiments. In contrast, the values of the seven important parameters will be determined by scientific tuning experiments (see Section 4.2). The denominator in the adaptation of destr_D was set to 9 in order to have 10 different values between destr_min and destr_max. The motivation behind this adaptive scheme for the degree of greediness and the destruction rate is to use a higher degree of greediness and a smaller degree of solution destruction as long as this leads to better solutions, and to move towards a lower degree of greediness and a higher degree of destruction once no more improving solutions are found.
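A minimal sketch of this adaptation scheme, assuming the update is triggered by a non-improving reconstruction; the function name and the example bounds are our own.

```python
# Sketch of the adaptive scheme: lower det_D by 0.1, raise destr_D by
# (destr_max - destr_min)/9, wrapping each value around at its bound.

def adapt(det, destr, det_min, det_max, destr_min, destr_max):
    det -= 0.1
    destr += (destr_max - destr_min) / 9
    if det < det_min:
        det = det_max        # wrap back to the most greedy setting
    if destr > destr_max:
        destr = destr_min    # wrap back to the smallest destruction
    return det, destr

det, destr = adapt(0.9, 0.1, det_min=0.5, det_max=1.0,
                   destr_min=0.1, destr_max=0.55)
# det = 0.8, destr ≈ 0.15
```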
SelectNextPopulation(P, P_new, cnt): This last function concerns the selection of the solutions for the population of the next iteration. If cnt < max_noimpr, the new population, P, is simply filled with the p_size best solutions from P ∪ P_new. In case cnt = max_noimpr, all solutions apart from the best one are deleted from P, and p_size − 1 new solutions are added via the use of the GreedyMWDDS(D) procedure (used with D = ∅ as input). Note that in this case, the RCL parameter, det_D, is each time randomly picked from {0.5, 0.6, 0.7, 0.8, 0.9, 1}. This set of values was chosen after preliminary experiments. The variation in this set potentially introduces some diversification, with the hope of covering unexplored areas of the search space. Moreover, cnt is set to zero. In summary, this function implements a restart procedure which is performed once max_noimpr iterations have passed without an improvement of the best-so-far solution, D_bsf.

Experimental Evaluation
We implemented the proposed PBIG algorithm in ANSI C++ using GCC 10.2.0 for the compilation of the software. Moreover, we decided to compare PBIG with the following algorithmic approaches: (1) the best of three available local search approaches from the literature, called VD (Variable Depth) [27]; (2) our own greedy heuristic, labeled GH-MWDDS + [28]; (3) application of the ILP solver ILOG CPLEX 20.01 in a single-threaded mode to all problem instances. Note that, surprisingly, GH-MWDDS + is currently the state-of-the-art method used to solve the MWDDS problem, outperforming both the local search method (VD) and CPLEX. In order to conduct a fair comparison to the VD algorithm, we used the original source code provided by the authors of [27].
The time limit for each application of CPLEX was set to 2 CPU hours. Moreover, the experiments were performed on a cluster of machines with two Intel ® Xeon ® Silver 4210 CPUs, with 10 cores of 2.20 GHz and 92 Gbytes of RAM.

Problem Instances
All considered techniques were applied to two sets of benchmark instances. The first set, consisting of 640 random graph instances, was already used for the evaluation of GH-MWDDS+ in [28]. In particular, this set, henceforth called SET1, contained graphs with n ∈ {50, 100, 150, 200, 250} nodes. For each value of n, there were graphs of different densities, expressed by the average degree, d. In the case of n = 50, for example, SET1 contained graphs with average degrees of d ∈ {15, 20, 25, 30, 35}. For each combination of n and d, SET1 contained 20 randomly generated graphs.
In order to test our algorithms on graphs that are also commonly used to model sensor networks, we generated an additional set of benchmark instances (SET2) consisting of random geometric graphs (RGGs). These graphs were generated by scattering n ∈ {100, 500, 1000} nodes randomly on the unit square, [0, 1]^2. This means that each node, i, had its location at (x_i, y_i) ∈ [0, 1]^2. Two nodes, i and j, were then connected by an edge if and only if the Euclidean distance between i and j was smaller than or equal to a predefined threshold value, r > 0. For each n ∈ {100, 500, 1000}, we considered five different threshold values. Moreover, for each combination of n and r, we randomly generated 20 graphs. Accordingly, SET2 consists of 300 problem instances.
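The RGG construction just described can be sketched as follows; the function name is our own, and `math.dist` requires Python 3.8 or later.

```python
# Sketch of random geometric graph (RGG) generation: scatter n nodes
# uniformly on [0, 1]^2 and connect two nodes iff their Euclidean distance
# is at most the threshold r.
import math
import random

def random_geometric_graph(n, r, rng=random):
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j)
             for i in range(n) for j in range(i + 1, n)
             if math.dist(pos[i], pos[j]) <= r]
    return pos, edges

pos, edges = random_geometric_graph(100, 0.2, random.Random(42))
```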
Finally, note that, both in the case of SET1 and SET2, each node (sensor) of the network was given a random real value between 0 and 1 as node weight (lifetime). Both benchmark sets can be obtained at https://www.iiia.csic.es/~christian.blum/research.html#Instances (accessed on 16 January 2022).

Algorithm Tuning
PBIG requires well-working values for the following seven parameters:
1. Population size (p_size);
2. Lower bound of the determinism rate (det_min);
3. Upper bound of the determinism rate (det_max);
4. Lower bound of the destruction rate (destr_min);
5. Upper bound of the destruction rate (destr_max);
6. Maximum number of non-improving iterations before a restart (max_noimpr);
7. Degree of partial solution removal (r_del).
For the purpose of parameter tuning, we used the scientific tuning software irace [32]. More specifically, we tuned PBIG separately for SET1 and SET2. For this purpose, we generated specific tuning instances as follows. For SET1, exactly one instance was randomly generated for each of 10 combinations of n (the number of nodes) and d (the average degree). In other words, 10 instances were specifically generated in order to tune PBIG for its application to instances from SET1. In this case, the irace software was run with a budget of 5000 algorithm applications. Regarding SET2, we randomly generated one tuning instance for each of the following combinations of n and r (threshold value for connecting two nodes): (n = 100, r = 0.2), (n = 100, r = 0.3), (n = 500, r = 0.1), (n = 500, r = 0.2), (n = 1000, r = 0.05), and (n = 1000, r = 0.15). In this case, as irace is applied with only six tuning instances, the budget was limited to 3000 algorithm applications. The obtained parameter value settings are shown in Table 1.

Results and Discussion
First of all, note that PBIG was applied exactly once to each problem instance, with a computation time limit of n/2 CPU seconds. Table 2 presents the results of all competing methods, that is, CPLEX, VD, GH-MWDDS+, and PBIG, for the instances of SET1. The first two columns indicate the problem instance type in terms of (1) the number of nodes (n) and (2) the average degree (d). Naturally, the density of the networks grows with an increasing average degree, d. Each table row provides the average result of each competing algorithm over the 20 problem instances generated for the corresponding combination of n and d. Table columns 3 and 4 show the results of CPLEX. The first of these columns (with the heading "Value") provides the average quality of the best solutions generated by CPLEX for the 20 instances, while the second column presents the average gap (in percent) between the objective function value of the solutions obtained by CPLEX and the best upper bounds identified by CPLEX. The results of the other three competitors are shown in the columns with the headings "Value" and "Time". The first of these columns provides, as in the case of CPLEX, the average quality of the generated solutions, while the second one provides the computation time. In the case of VD and GH-MWDDS+, the computation time corresponds to the time at which the algorithm terminated, while in the case of PBIG, it is the time at which the best solution of a run was found, on average. The corresponding standard deviation for PBIG is provided in an additional column with the heading "σ_Time". Finally, note that the best result in each row is indicated in bold font.
The results displayed in Table 2 allow for the following observations:
• As already mentioned in [28], solving the MWDDS problem by means of an ILP solver such as CPLEX is only useful in the context of the smallest problem instances. In fact, even though CPLEX obtains the best results in the case of (n = 50, d = 15), the gap information indicates that, even in this case, CPLEX is far from being able to prove optimality.
• For all instances apart from (n = 50, d = 15), PBIG outperforms the remaining approaches. In particular, the current state-of-the-art method, GH-MWDDS+, is consistently outperformed. This shows that our way of extending the solution construction mechanism of GH-MWDDS+ into a PBIG algorithm was successful.

Next, we study the results obtained by CPLEX, GH-MWDDS+, and PBIG for the new RGG instances of SET2. These results are shown in Table 3, which has the same structure as Table 2. The only exception is the second table column, which provides the threshold value, r, used to generate the RGGs, instead of the average degree, d. The following conclusions can be drawn from the obtained results:
• First of all, CPLEX seems to have fewer problems in solving RGG instances than random graphs. In fact, CPLEX is able to solve all 60 instances with n = 100 and r ∈ {0.2, 0.225, 0.25} to proven optimality. Furthermore, 19 out of 20 instances with n = 100 and r = 0.275 are solved to optimality, as well as 18 out of 20 instances with n = 100 and r = 0.3. This is in contrast to the case of random graphs, for which CPLEX was not even able to solve problem instances with 50 nodes to optimality. Nevertheless, for the larger instances (with n ∈ {500, 1000}), CPLEX was, with very few exceptions, only able to derive the trivial solutions that do not contain any dominating sets.
• PBIG obtains the same results as CPLEX in those cases in which CPLEX provides optimal solutions. Moreover, PBIG is able to do so in very short computation times of less than 10 s.
• As in the case of the instances of SET1, PBIG also consistently outperforms the other competitors on the RGG instances of SET2. While GH-MWDDS+ obtains an average solution quality of 2.911, PBIG achieves one of 3.134.
Finally, we decided to study the structure of the solutions provided by GH-MWDDS+ and PBIG in more detail. In particular, this was performed for two cases: the first (out of 20) RGG graphs with 100 nodes and a threshold value of r = 0.2, and the first (out of 20) RGG graphs with 100 nodes and a threshold value of r = 0.3. In both cases, we graphically present the solutions of GH-MWDDS+ and PBIG in Figure 2 (for the graph with r = 0.2) and Figure 3 (for the graph with r = 0.3). The lifetime of each node is provided as the node label, and the node color indicates the dominating set to which the node belongs; the purple color indicates that the corresponding node is not assigned to any of the disjoint dominating sets. The four solutions shown in these figures are additionally provided in textual form in Table 4. The four sub-tables provide all disjoint dominating sets, the color in which these sets are shown in the graphics of Figures 2 and 3, the number of nodes contained in the dominating sets, their lifetimes, and the lists of nodes belonging to the dominating sets.
The following interesting observations can be made. First, in both cases, the structure of the PBIG solution is quite different from that of the GH-MWDDS+ solution. This means that PBIG does not just locally improve the GH-MWDDS+ solutions; it often seems to perform a profound restructuring. Second, in the case of the graph with (n = 100, r = 0.2), PBIG produces a solution that contains one more dominating set than the GH-MWDDS+ solution. As a consequence, the PBIG solution leaves fewer nodes unused than the GH-MWDDS+ solution (44 unused nodes vs. 58 unused nodes). Third, the solutions of PBIG and GH-MWDDS+ for the graph with n = 100 and r = 0.3 (Figure 3) indicate that making use of more nodes does not always lead to better solutions. More specifically, both algorithms generate solutions with six disjoint dominating sets. While the GH-MWDDS+ solution makes use of 47 nodes, the PBIG solution makes use of only 46 nodes. Nevertheless, the PBIG solution is better, with an objective function value of 3.106, compared to a quality of 2.908 for the solution generated by GH-MWDDS+. This is because the dominating sets in the PBIG solution have a longer lifetime, on average, than those in the GH-MWDDS+ solution.

Finally, we decided to show the evolution of the quality of the solutions produced by PBIG over time. This was performed for two of the hardest cases. In particular, we chose the first random graph with n = 250 nodes and an average node degree of d = 50, and the first random geometric graph with n = 1000 nodes and a threshold value of r = 0.15. Remember that the computation time limit is n/2 CPU seconds in both cases. The graphics in Figure 4 show PBIG's evolution over 10 runs per problem instance. The shaded area around the average behavior line indicates the variance of the algorithm. Moreover, the horizontal dashed lines indicate the quality of the solutions produced by GH-MWDDS+.
Note that in both cases, PBIG outperforms GH-MWDDS+ very early in each run. In the case of Figure 4a, most of the runs even start off with a solution better than the one of GH-MWDDS+. Additionally, note that the given computation time is sufficient in both cases, as PBIG shows clear signs of convergence before reaching the computation time limit of 125 CPU seconds in the case of Figure 4a and 500 CPU seconds in the case of Figure 4b.
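For completeness, the way solution values such as 3.106 and 2.908 can be computed is sketched below. The code assumes the usual convention for this problem, namely that a dominating set's lifetime equals the minimum lifetime among its nodes and that a solution's value is the sum of the lifetimes of its disjoint dominating sets; the function names and the adjacency-dictionary representation are illustrative, not taken from our implementation:

```python
def is_dominating_set(adj, nodes):
    """Check that every vertex is in `nodes` or adjacent to a node in it.
    `adj` maps each vertex to the set of its neighbours."""
    covered = set(nodes)
    for v in nodes:
        covered |= adj[v]
    return covered == set(adj)

def solution_value(adj, lifetime, sets):
    """Value of a solution: the sum of set lifetimes, where a set's
    lifetime is assumed to be the minimum lifetime among its nodes
    (the time until the set stops dominating the network)."""
    assert all(is_dominating_set(adj, s) for s in sets)
    # Disjointness: no node may appear in more than one dominating set.
    all_nodes = [v for s in sets for v in s]
    assert len(all_nodes) == len(set(all_nodes))
    return sum(min(lifetime[v] for v in s) for s in sets)

# Example on a hypothetical 3-node triangle graph, where every
# singleton is a dominating set.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
lifetime = [0.5, 0.7, 0.9]
value = solution_value(adj, lifetime, [{0}, {1}, {2}])
```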

Conclusions
This paper dealt with lifetime maximization in wireless sensor networks by means of solving an optimization problem known as the maximum weighted disjoint dominating sets problem. As shown by the weak results of the ILP solver, CPLEX, this problem is a challenging combinatorial optimization problem. In this work, we extended an existing high-quality greedy algorithm from the literature towards a population-based iterated greedy algorithm. This algorithm worked on a population of solutions. At each iteration, it applied partial destruction to each solution in the population. Subsequently, the obtained partial solutions were subjected to re-construction, often resulting in different and improved solutions. We compared our approach to three competitors: the application of CPLEX, the best greedy algorithm from the literature, and the best available local search method. This comparison was based on two benchmark sets. The first set, consisting of 640 random graphs, was taken from the related literature, while the second set, consisting of 300 random geometric graphs, was newly generated for this work. In summary, we can say that our population-based iterated greedy algorithm consistently outperformed all competitors. This algorithm can therefore be called the new state-of-the-art approach for the tackled problem.
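The population-based destroy-and-rebuild loop summarized above can be outlined schematically as follows. This is a sketch under stated assumptions rather than our exact implementation: `greedy_construct` stands in for a randomized GH-MWDDS+-style construction procedure, the destruction rate is sampled anew from the tuned interval [destr_min, destr_max] at each step, and a candidate replaces its parent solution only when it improves on it:

```python
import random

def pbig(greedy_construct, destroy, evaluate, pop_size, destr_min, destr_max,
         iterations, rng=None):
    """Schematic population-based iterated greedy (PBIG) loop.
    greedy_construct(partial) completes a (possibly empty) partial solution;
    destroy(sol, rate) removes a fraction `rate` of a solution;
    evaluate(sol) returns the objective value (to be maximized)."""
    rng = rng or random.Random()
    population = [greedy_construct(None) for _ in range(pop_size)]
    for _ in range(iterations):
        for k in range(pop_size):
            rate = rng.uniform(destr_min, destr_max)  # sampled destruction rate
            partial = destroy(population[k], rate)    # partial destruction
            candidate = greedy_construct(partial)     # greedy re-construction
            if evaluate(candidate) > evaluate(population[k]):
                population[k] = candidate             # elitist replacement
    return max(population, key=evaluate)
```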
Given the weak performance of CPLEX for this problem, one promising line of future work is the development of specialized exact approaches. Another line of research might focus on hybrid techniques such as construct, merge, solve, and adapt (CMSA) [33]. In particular, CMSA allows users to profit from ILP solvers such as CPLEX even in the context of large problem instances for which a direct application of CPLEX is currently not beneficial.