1. Introduction
In the field of combinatorial optimization, many classical problems have been extensively studied due to their wide applicability, including the vertex cover problem [1], the facility location problem [2], the machine scheduling problem [3], the cloud–edge collaborative computation offloading problem [4], and so on. These problems often serve as foundational models for more complex and application-driven variants. The power cover problem arises as such a variant, motivated by practical needs in wireless sensor networks (WSNs), where the coverage area of each sensor depends on its transmission power.
The power cover problem involves assigning transmission power to sensors in order to serve a set of clients. When the coverage area of each sensor is modeled as a disk, the power cover problem can be viewed as a disk cover problem tailored to WSNs. The rapid advancement of wireless sensor technologies has enabled WSNs to play a critical role in domains such as military operations, healthcare, and environmental monitoring [5,6]. They are also widely used in applications such as industrial inspection [7], battlefield monitoring [8], and wildlife habitat observation [9].
A typical WSN [10,11] is composed of a large number of low-cost, multifunctional sensor nodes, which are typically deployed to monitor areas of interest. The coverage area (monitoring area) of the sensors is one of the most important performance indicators of a sensor network, and it is closely related to the service quality of WSNs [12]. Therefore, coverage is a fundamental requirement for WSNs. It has been widely studied from various perspectives, including connectivity [13], fault tolerance [14], aggregation delay [15], and lifespan extension via sleep–wake scheduling [16]. In many cases, we can interpret the concept of coverage as a non-negative mapping between the spatial points of the sensor field and the sensor nodes of the sensor network.
In a typical WSN, the coverage area of a sensor is a disk centered at the sensor, where the radius of the disk is determined by the power of the sensor. The relationship between the power p(s) of a sensor node s and the radius r(s) of its coverage area is p(s) = c · r(s)^α, where c and α are constants. We usually refer to α as the attenuation coefficient. The schematic diagram of the power cover problem in WSNs, shown in Figure 1, illustrates how sensors serve the clients within their coverage areas, indicated by the radii shown in blue. The diagram also depicts communication links between sensors and other network components, such as the cloud.
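To make the growth behavior of this relation concrete, here is a minimal Python sketch of p = c · r^α; the constant c and the values of α below are illustrative choices, not parameters taken from the survey.

```python
# Minimal sketch of the power-radius relation p = c * r**alpha.
# c and alpha are illustrative values, not constants from the survey.
def power(radius: float, c: float = 1.0, alpha: float = 2.0) -> float:
    """Power a sensor must spend to cover a disk of the given radius."""
    return c * radius ** alpha

if __name__ == "__main__":
    for alpha in (1.0, 2.0, 4.0):
        # Doubling the radius multiplies the power by 2**alpha.
        print(f"alpha={alpha}: p(1)={power(1.0, alpha=alpha)}, p(2)={power(2.0, alpha=alpha)}")
```

The sketch highlights why a larger attenuation coefficient makes large coverage disks disproportionately expensive.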
Sensor nodes are typically powered by small batteries, and when the power at the node is depleted, the sensor node fails. In sensor networks, if multiple sensor nodes monitor the same sensing area simultaneously, unnecessary coverage redundancy may occur, resulting in power waste. From the relationship between sensor power and coverage area, it can be inferred that a larger coverage area will result in greater power loss and waste for the sensor, which may lead to higher economic costs and reduce the lifespan of sensors. In other words, power consumption is in conflict with service quality and economy. Therefore, the power cover problem has received widespread attention from scholars.
The power cover problem is a geometrically constrained set cover problem. In the set cover problem, we are given a universe U of n elements and a collection of subsets of the universe; the goal is to pick the smallest number of sets from this collection so that their union is the whole universe. For a set cover problem with geometric constraints, it is very challenging to obtain the optimal solution, even for its simpler instances. A case in point is the unit disk cover (UDC) problem, which involves a collection of unit disks spread across a plane and a set of points in two-dimensional space. For the UDC problem, we need to find a minimum-cardinality subset of unit disks that covers all the points. Feder and Greene [17] proved that this problem is NP-hard. This demonstrates that finding an optimal solution for such geometric problems can be exceedingly difficult, particularly when the problem's constraints are inherently complex. Thus, finding polynomial-time approximation algorithms is the main objective for such problems. Consequently, in this paper, we focus on presenting approximation algorithms for the power cover problem and its related variants.
The two core criteria for measuring approximation algorithms are time complexity and approximation ratio. The time complexity reflects the efficiency of algorithm execution, that is, the speed at which the algorithm processes data and the computational resource requirements that increase with the input scale. The approximation ratio measures the degree of closeness between the solution obtained by the approximation algorithm and the optimal solution. The closer its value is to 1, the closer the solution obtained by the algorithm is to the optimal solution, thus reflecting the accuracy of the algorithm. These two standards together determine the performance and applicability of approximation algorithms in practical applications, helping developers balance the speed and quality of algorithms. Next, several important definitions will be introduced, such as PTAS, QPTAS, etc., which will be mentioned in the following sections.
Definition 1 (approximation ratio [18]). An algorithm A for an optimization problem Π maps every instance I to a feasible solution A(I). For a minimization problem Π and some β ≥ 1, an algorithm A is called a β-factor approximation algorithm if, for every instance I, the solution returned by A satisfies cost(A(I)) ≤ β · OPT(I), where OPT(I) denotes the cost of an optimal solution.

Definition 2 ([19]). An approximation scheme for a minimization problem Π is a family of (1 + ε)-approximation algorithms for problem Π over all ε > 0.

Definition 3 ([19,20]). Suppose Π is a minimization problem; we have the following definitions: An algorithm for a problem Π is said to run in polynomial time, or said to be a polynomial-time algorithm, with respect to a particular model of computer if the number of instructions executed by the algorithm can be bounded by a polynomial in the size of the input.

A polynomial-time approximation scheme (PTAS) for problem Π is an approximation scheme whose time complexity is polynomial in the input size.

If, for such a PTAS, the running time of the algorithm can be bounded by f(1/ε) · poly(|I|) for some computable function f, where |I| denotes the input length, then the PTAS is called an efficient polynomial-time approximation scheme (EPTAS).

Definition 4 ([21]). Suppose Π is a minimization problem; we have the following definitions: An algorithm is called quasi-polynomial-time if its worst-case running time is at most 2^{O(log^c n)} for some fixed constant c, where n denotes the input size.

A quasi-polynomial-time approximation scheme (QPTAS) for a problem Π is an approximation scheme whose time complexity is quasi-polynomial in the input size.

Definition 5 (nearly-linear-time algorithm [22]). An algorithm is called nearly-linear-time if its running time is linear in the input size up to polylogarithmic factors and a polynomial dependence on 1/ϵ, where ϵ is a precision parameter.

Definition 6 (fixed parameter tractable, FPT [23]). A parameterized problem instance is created by associating an input instance with an integer parameter q. A problem Π is fixed parameter tractable (FPT) if any instance of the problem Π can be solved in time f(q) · poly(|I|), where f is an arbitrary computable function of q. Throughout this paper, we adopt the convention that the parameter q in all FPT algorithms for the power cover problem and its variants denotes the number of sensors involved in the instance.
Given a metric distance function d(·, ·), for any two points s and c in the planar space, d(s, c) denotes the distance between the two points. It is worth noting that the power cover problem can be studied under two different settings, depending on whether the sensor locations are specified or unspecified.
When the problem is configured with specified sensors, we are dealing with a fixed landscape. Here, the sensor deployment spots are predetermined, forming a finite set S, so one is limited to choosing from a finite number of available sites. The entire problem boils down to a series of purely discrete binary decisions: which sensors to activate, which client is served by which sensor, and which client subset A remains unserved and incurs a penalty. The large number of combinations these discrete decisions create makes the problem difficult; indeed, it is NP-hard.
Moreover, in this “specified sensor” setup, because the objective charges each sensor a power cost that grows with a power of its radius, the selection of sensor coverage radii also becomes discretized. Specifically, for any activated sensor s, its optimal coverage radius will not be an arbitrary continuous value. To conserve power while still serving all assigned clients, the radius of sensor s must equal the distance from s to its farthest served client. This means that for each sensor s, its effective coverage radius can only be chosen from the set of distances between that sensor and the clients in C. Thus, each sensor's radius can be chosen from at most |C| candidate values, and the total number of possible radius values across all sensors is at most |S| · |C|. This inherent discreteness, while narrowing the continuous radius selection space, does not simplify the problem's core; instead, it reinforces its combinatorial complexity. This setup aligns more closely with many real-world constrained scenarios and typically requires approximation algorithms or heuristics to solve.
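As a concrete illustration of this discretization, the following Python sketch enumerates the candidate radii of each sensor and evaluates the power of a given assignment; the point coordinates, the unit cost c, and the attenuation coefficient alpha are made-up example values, and the helper names are ours.

```python
import math

# Sketch of the radius-discretization argument. Sensors and clients are 2D points;
# the unit cost c and attenuation coefficient alpha are illustrative values.
def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def candidate_radii(sensors, clients):
    """Each sensor's optimal radius equals its distance to some client,
    so there are at most |S| * |C| candidate radius values in total."""
    return {s: sorted(dist(s, c) for c in clients) for s in sensors}

def assignment_power(sensor, served_clients, c=1.0, alpha=2.0):
    """A sensor's radius equals the distance to its farthest served client."""
    if not served_clients:
        return 0.0
    radius = max(dist(sensor, cl) for cl in served_clients)
    return c * radius ** alpha

sensors = [(0.0, 0.0), (4.0, 0.0)]
clients = [(1.0, 0.0), (2.0, 1.0), (5.0, 0.0)]
print(candidate_radii(sensors, clients))
print(assignment_power(sensors[0], clients[:2]))
```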
When the problem is formulated with unspecified sensors, we are essentially operating on a blank canvas. In this setting, sensor placement is not limited to a finite, predefined set of candidate sites. Instead, any point in the underlying space (e.g., the Euclidean plane, or more generally, a metric space) can be chosen as a sensor location. This means the number of possible placement options is uncountably infinite, granting the model a high degree of spatial flexibility.
Such flexibility enables the sensor layout to be finely tuned to the spatial distribution of clients, potentially minimizing the total system cost—including power cost and penalties for unserved clients. To capture this freedom, the model must include continuous decision variables that specify the exact location of each sensor, in addition to its coverage radius and client assignments. Unlike with specified sensors, where sensor positions are fixed and only activation decisions are required, here the positions themselves are part of the optimization. This geometric flexibility, combined with the continuous nature of both location and radius variables, introduces a unique set of nontrivial computational challenges.
This setting is particularly relevant in greenfield deployment scenarios, where infrastructure does not yet exist and sensors can be freely positioned. Typical examples include large-scale IoT networks [24] or environmental monitoring in remote regions [25].
To better understand the differences between the two settings, we will list them in Table 1.
For the power cover problem with specified sensors, we are given a set of sensors S and a set of clients C in the metric space. The unit cost of power for each sensor s ∈ S is denoted w_s. Each sensor s can adjust its power p(s) to form a coverage area with radius r(s), such that a subset of clients within this area can be served, where p(s) = w_s · r(s)^α. The goal of the power cover problem with specified sensors is to minimize the total power cost of all sensors while serving all clients. For convenience, we abbreviate r(s) as r_s afterwards. The nonlinear integer program of the power cover problem with specified sensors is shown in the following:
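A plausible form of the PC-NLP, written in our assumed notation (binary variables x_s for sensor selection, y_{s,c} for assignments, radius variables r_s, and unit power costs w_s) and consistent with the constraint description below, is:

\begin{align*}
\text{(PC-NLP)} \quad \min\ & \sum_{s \in S} w_s \, r_s^{\alpha} \\
\text{s.t.}\quad & y_{s,c} \le x_s, && \forall s \in S,\ c \in C, \\
& \sum_{s \in S} y_{s,c} = 1, && \forall c \in C, \\
& d(s,c)\, y_{s,c} \le r_s, && \forall s \in S,\ c \in C, \\
& x_s,\ y_{s,c} \in \{0,1\},\quad r_s \ge 0, && \forall s \in S,\ c \in C.
\end{align*}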
In the PC-NLP, we have a variable x_s for each sensor s ∈ S that indicates whether s is in the solution (x_s = 1) or not (x_s = 0). For each sensor s ∈ S and each client c ∈ C, there is a variable y_{s,c} that indicates whether s serves c (y_{s,c} = 1) or not (y_{s,c} = 0). The first set of constraints of the PC-NLP ensures that if a client is served by a sensor, the sensor must be selected in the solution. The second set of constraints ensures that each client is served by exactly one sensor. The third set of constraints ensures that served clients must be within the coverage area supported by the sensor.
The power cover problem was proven NP-hard by Bilò et al. [
26] for the case of unspecified sensors, when the attenuation coefficient
. Subsequently, as the research deepened, Alt et al. [
27] proved that the problem with specified sensors is NP-hard when the attenuation coefficient is
in
.
When the attenuation coefficient α = 1, i.e., when minimizing the sum of radii, Charikar and Panigrahy [28] presented a primal-dual 3.504-approximation algorithm running in polynomial time for the case of unspecified sensors in a metric space, coupled with an application of Lagrangian relaxation. Gibson et al. [29] presented a QPTAS for the case with unspecified sensors, using a randomized algorithm whose running time is quasi-polynomial in N, where N is the number of points in the metric space.
For the case of
specified sensors, Lev-Tov and Peleg [
30] gave a PTAS for the power cover problem with any constant dimensional space, based on a dynamic programming method. It achieved optimality in 1-dimension with a running time of
, and utilizing a modified variant of the hierarchical grid-shifting technique of [
31], provided a
-approximation in
d dimensions for
.
Friggstad and Jamshidian [
32] improved the result of [
28] and gave a 3.389-approximation algorithm based on the bi-point rounding method [
28] for the case with
unspecified sensors in
. Chen et al. [
33] proposed an FPT
-approximation algorithm for the power cover problem with
, unspecified sensors, and parameter
q in a metric space, running in
.
Freund and Rawitz [
34] gave a 9-approximation algorithm for the case of specified sensors, when
and the unit cost of power for each sensor varies in
, based on weight manipulation in the spirit of the local ratio technique. Bandyapadhyay and Varadarajan [
35] utilized the partitioning scheme to obtain a quasi-polynomial-time
-approximation algorithm for the case of specified sensors and
in a metric space; under the assumption of standard complexity theory, they proved the polynomial (or quasi-polynomial) time algorithm of the power cover problem cannot achieve better results than
for
.
Friggstad et al. [
36] got a 11.078-approximation algorithm for the case with
and unspecified sensors. Bilò et al. [
26] obtained a PTAS for the case of unspecified sensors, by conducting deeper research on geometry for fixed
and any fixed-dimensional Euclidean space.
Cao [
37] introduced the two-stage stochastic power cover (TSSPC) problem for the case of specified sensors in
, and obtained an
-approximation algorithm by using the primal-dual method. Li et al. [
38] introduced a new problem, the power-constrained maximum power cover (PCMPC) problem. This is a maximization model that differs from the power cover problem. The PCMPC problem aims to compute a power assignment that maximizes the total benefit of the served clients, subject to a total power constraint; for the case with specified sensors and fixed α
, where the coverage area of each sensor is modeled as a disk in
, they gave a
-approximation algorithm, based on a greedy strategy.
To clearly distinguish between different settings of the power cover problem, we divide our summary into two separate tables. We tabulate the known results of the power cover problem with specified sensors in
Table 2.
We tabulate the known results of the power cover problem with unspecified sensors in
Table 3.
Despite significant progress, current approximation algorithms for the power cover problem still exhibit important limitations. Most known PTAS results apply only when the attenuation coefficient α is fixed, and their efficiency typically degrades as α increases. Moreover, many algorithms rely on geometric assumptions (e.g., Euclidean space) and are not easily generalizable to more complex or constrained variants. A key algorithmic challenge lies in the nonlinearity of the objective: the power cost of each sensor grows as the α-th power of its coverage radius. This structure complicates the design of approximation algorithms, especially when α is not fixed. Therefore, we propose the following open problem.
Open problem 1. Can we design a PTAS for the power cover problem when α is not fixed?
This paper systematically investigates the power cover problem and several of its important variants, including the prize-collecting power cover (PCPC) problem, the partial power cover (PPC) problem, and the capacitated power cover (CPC) problem, and also introduces the universal power cover (uni-PC) problem that unifies these variants. We focus exclusively on modeling the specified sensor setting, where sensor locations are drawn from a finite set, due to its discrete structure and practical relevance. In contrast, while the unspecified-sensor variant is more general and allows sensor placement anywhere in the metric space, it typically involves continuous geometric variables and nonconvex optimization, making it difficult to express under a unified integer programming framework. Therefore, for clarity and technical coherence, this survey summarizes approximation results for problems with unspecified sensors but does not present formal mathematical models for them. To better understand the power cover problem and its variants discussed in this survey,
Figure 2 illustrates a taxonomy of power cover problems, categorized by modeling assumptions, optimization objectives, and coverage constraints.
The rest of the survey is organized as follows. In
Section 2, we review foundational problems closely related to power cover, such as the set cover problem, the vertex cover problem, and the geometric cover problem.
Section 3 focuses on the uni-PC problem.
Section 4 examines the PCPC problem, highlighting relevant algorithms and theoretical results. In
Section 5, we describe the PPC problem and give the related results.
Section 6 addresses the CPC problem and its research developments.
Section 7 introduces several additional variants of the power cover problem and their relevant work. In
Section 8, we summarize our survey and propose a generalized power cover (GPC) problem.
2. Related Work
The power cover problem is a special case of the set cover problem. The set cover problem is a classic NP-hard problem in the field of combinatorial optimization and can be applied in various real-life scenarios [39]. The set cover problem is one of the first 21 problems proved to be NP-complete in Karp's [40] seminal paper. Feige [41] proved that, unless NP has slightly superpolynomial-time algorithms, there does not exist a polynomial-time algorithm with an approximation factor of (1 − ε) ln n for any ε > 0 for the set cover problem. Dinur and Steurer [42] proved that, unless P = NP, the set cover problem cannot be approximated within (1 − o(1)) ln n.
The partial set cover problem is a variation of the set cover problem, in which only partial elements are required to be covered. Kearns [
43] first proposed the partial set cover problem and used a greedy strategy to achieve a
approximation algorithm. Slavík [
44] improved the performance ratio to
using an improved greedy algorithm, where
l is the percentage that elements are required to be covered and
is the maximum cardinality of a set. Gandhi [
45] used the primal-dual method to achieve an
f-approximation algorithm, where
f denotes the maximum number of sets any element appears in. For the generalized version of the partial set cover problem, where each element in the set has a profit and the total profit of the covered elements needs to reach a threshold, Bar-Yehuda [
46] provided an
f-approximation algorithm using the local ratio method. Könemann et al. [
47] presented a Lagrangian relaxation framework and obtained an approximation ratio of
for the partial set cover problem. Inamdar and Varadarajan [
48] devised an LP-rounding method for the partial set cover problem when subset costs vary, having an approximation factor
(
being the integrality gap of the natural LP for the weight set cover problem). Chekuri et al. [
49] enhanced the result to
.
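For intuition about the greedy strategies referenced above, here is a minimal Python sketch of a cost-sensitive greedy rule for partial set cover; the function names, tie-breaking, and the toy instance are our own illustrative choices, not the exact algorithms of [43,44].

```python
def greedy_partial_set_cover(sets, costs, k):
    """Greedy rule: repeatedly pick the set with the best ratio of newly covered
    elements to cost until at least k elements are covered.
    sets: dict name -> set of elements; costs: dict name -> positive cost."""
    covered, picked = set(), []
    while len(covered) < k:
        best = max(
            (name for name in sets if sets[name] - covered),
            key=lambda name: len(sets[name] - covered) / costs[name],
            default=None,
        )
        if best is None:  # no remaining set covers anything new
            break
        covered |= sets[best]
        picked.append(best)
    return picked, covered

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
costs = {"A": 2.0, "B": 1.0, "C": 1.5}
print(greedy_partial_set_cover(sets, costs, k=4))
```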
The capacitated set cover problem generalizes the classical set cover problem by introducing capacity constraints on the selected sets, thereby limiting the number of elements each set can cover. This means that, unlike the standard version, each set can only cover a limited number of elements. Wolsey [
50] gave an
-approximation algorithm.
The power cover problem is closely related to the vertex cover problem. In the vertex cover problem, given a graph G = (V, E), we look for a minimum-size subset of vertices such that for every edge (u, v) ∈ E, at least one of u, v belongs to this subset. The vertex cover problem was also proven by Karp [40] to be NP-complete. Papadimitriou and Yannakakis [51] demonstrated that the vertex cover problem is APX-complete. Dinur et al. [52] showed that, unless P = NP, the vertex cover problem cannot be approximated within a factor better than 1.36. If the hypothesis of the power of unique two-prover-one-round games proposed by Khot [
53] holds, then Khot’s [
54] proof shows that it is hard for the vertex cover problem to have a constant approximation algorithm better than a factor of 2. Halperin [
55] relaxed the problem using semi-definite programming and obtained an approximation ratio of
, where Δ is the maximum degree of the vertices in the graph. Karakostas [
56] improved the result by using semi-definite programming relaxation to achieve an approximation ratio of
in polynomial time.
The partial vertex cover problem is a generalization of the vertex cover problem, where the goal is to select a subset of vertices of minimum total cost that covers at least
k edges of the graph. Caskurlu [
57] proved that even on bipartite graphs, the partial vertex cover problem is NP-hard. Bar-Yehuda [
58] proposed two 3-approximation algorithms for separable and non-separable requirements under soft capacity constraints, which rely on a combination of complex charging schemes and local ratio techniques. Mkrtchyan et al. [
59] proposed two pseudo-polynomial DP-based algorithms for the case where edges and points have weights in the tree.
The capacitated vertex cover problem is an extended version of the vertex cover problem, as it includes capacity constraints for all vertices. Guha et al. [
60] proposed a 2-approximation algorithm based on a primal-dual approach for the capacitated vertex cover problem with soft capacities. Chuzhoy and Naor [
61] demonstrated a 3-approximation based on randomized rounding with alterations for the capacitated vertex cover problem with hard capacities, which was improved to 2 by Gandhi et al. [
62]. Kao [
63] gave a rounding-based polynomial-time
f-approximation algorithm for the capacitated vertex cover problem with hard capacities on hypergraphs, where
f is the maximum edge size in the hypergraph.
In a metric space, the coverage performance of sensors can be quantified using concepts like distance and angle, enabling the comparison and evaluation of different deployment schemes. Building on this, there is a close relationship between the power cover problem and the geometric cover problem. In the geometric cover problem, given a set G of m geometric objects, such as disks, balls, and so on, and a set P of n points in the metric space, the goal is to find a minimum-sized subset of G and an assignment of the points in P to the geometric objects in that subset such that each point is covered by one of the chosen geometric objects. The geometric cover problem typically focuses on whether points or areas are completely covered, while the power cover problem introduces the concept of sensor signal strength decay (i.e., power attenuation). When sensor transmission power relates to the radius of the coverage area as p = c · r^α, the value of α is crucial. For instance, when α is large, power attenuates rapidly, drastically reducing the effective coverage range. In this scenario, the power cover problem approaches the geometric cover problem, which only considers the sensor's direct touching range, similar to how, under an infinite norm, only the closest sensor can provide effective coverage. Both problems aim to optimize resource allocation for the best coverage, but the power cover problem offers a more refined model that aligns better with actual physical limitations.
When the set of geometric objects is a set of disks, the geometric cover problem can be characterized as a disk cover problem. This means that under the specific condition where the geometric objects are in the form of disks, we can view the geometric cover problem from the perspective of a disk cover problem, and further analysis and solutions might be carried out based on the characteristics and rules of the disk cover problem. For the disk cover problem, Hochbaum and Maass [
64] developed a polynomial-time
-approximation algorithm, utilizing a shifting strategy approach. Liao et al. [
65] employed a local-neighborhood-based scheme to propose a PTAS algorithm. Agarwal and Pan [
66] designed
expected-time randomized
-approximation algorithm for the disk cover problem, where
in
. Mustafa and Ray [
67] gave the first PTAS for the geometric cover problem when the geometric objects are half-spaces on
and when they are
r-admissible set regions in the plane, based on local search.
The unit disk cover (UDC) problem is a specialized form of the disk cover problem. In the UDC problem, all disks have identical radii, which sets it apart from the general disk cover problem where disks can have varying radii. This uniformity in disk size simplifies the problem, making it easier to apply certain algorithms and geometric principles to find efficient solutions for covering a set of points or an area with the minimum number of unit disks. For the UDC problem, Călinescu [
68] presented a 102-approximation algorithm, based on rounding the optimal solution of a linear programming relaxation. Narayanappa and Vojtechovsky [
69] improved the result using the same methods and closer observation, obtaining an approximation factor of 72. Afterwards, Carmi et al. [
70] improved the approximation ratio of 72 to 38 by solving a subproblem where the nodes to be covered lie below a line and all disks lie above the line. Biniaz [
71] proposed a 4-approximation algorithm for the UDC problem on the plane, employing the plane sweep technique in the Euclidean norm, whose running time is
; they also demonstrated how to extend the algorithm to other metrics and
. Fu et al. [
72], based on the shifting strategy of [
64], gave a
deterministic time 2.8334-approximation on the plane. Gao et al. [
73] designed a 5-approximation algorithm whose running time is
. When the set of geometric objects is a set of fat triangles of the same size, the geometric cover problem can be characterized as a unit fat triangle cover (UFTC) problem. Har-Peled [
74] proved the UFTC problem is APX-hard.
In real-life situations we commonly experience, the cost of each disk may not be the same, so evidently, the weighted disk cover (WDC) problem is more practical in real-world scenarios. The WDC problem is an extended or more generalized version of the disk cover problem, and the difference lies in whether the cost of the disk is the same. The goal of the WDC problem is to minimize the total cost of a subset of disks that covers all given points. Huang [
75] devised a 7-approximation algorithm for the 2-dimensional metric space, and a 21-approximation algorithm for the 3-dimensional metric space. Gonzalez [
76] proposed an 8-approximation algorithm when the size of disks is fixed. The approximation ratio obtained by Varadarajan [
77] using quasi-uniform sampling was
. This result was further improved by Chan et al. [
78] to ratio
through improved quasi-uniform sampling, who optimized the quasi-uniform sampling technique and brought in the concept of shallow cell complexity, which extends the idea of union complexity to abstract set systems. Based on the dynamic programming technique, Zou et al. [79] presented an approximation algorithm for the WDC problem when the radius of each disk is 1. Chekuri et al. [
80] gave the first nearly-linear-time approximation schemes for the LP relaxations, using the LP-rounding method.
We consider the weighted unit disk cover (WUDC) problem, which is a special case or variant of the disk cover problem. Unlike the disk cover problem, in this case, all disks have the same radius, but their costs may differ. Ambühl et al. [
81] presented the first approximation algorithm for the WUDC problem. Applying enumeration and dynamic programming techniques, they devised a 72-approximation algorithm based on minimum spanning tree. Huang et al. [
82] obtained a
-approximation algorithm for the WUDC problem, using shifting strategy technique [
64]. Ding et al. [
83] further enhanced this result, yielding a
-approximation algorithm. Mustafa and Raman [
84] obtained a QPTAS for the WUDC problem, employing a separator framework [
85]. Li and Jin [
86] gave a PTAS for the WUDC problem, by the standard shifting strategy technique [
82], whose running time is
.
Unlike the geometric cover problem, which requires covering all points within a given set or region, the partial geometric cover (PGC) problem only needs to cover some of the points in P. This relaxed requirement can lead to different strategies and algorithms: whereas the geometric cover problem needs comprehensive and exhaustive methods to ensure that no point is left uncovered, the PGC problem can focus on optimizing the selection of points to cover based on given criteria or constraints, since not all points need to be included in the coverage solution. The PGC problem therefore also offers more flexibility in terms of resource allocation and optimization.
The partial disk cover (PDC) problem is a special version of the PGC problem, in which the geometric objects are disks. When the radii of all disks are the same, Gandhi et al. [45] introduced a dynamic programming method that utilizes the shifting strategy technique of [64], achieving a PTAS for the PDC problem. Under a certain assumption on the density of the given disks, Glaßer et al. [
87] presented a PTAS for the PDC problem. Chan and Hu [
88] demonstrated a PTAS in the plane for the case where the set of geometric objects is a set of unit squares. Inamdar [
89] proposed a local search framework and obtained PTAS for various PGC problems, including partial covering of points by halfspaces in
and by
r-admissible regions in
, etc.
When each geometric object can cover only a limited number of points, the classical geometric cover problem generalizes to the capacitated geometric cover (CGC) problem, which incorporates capacity constraints into the coverage model. When the collection of geometric objects consists of balls, we refer to the CGC problem as the capacitated ball cover (CBC) problem. The name makes clear that the focus is on covering a given space or set of points using balls, with capacity constraints associated with each ball, which enables targeted analysis and the design of algorithms tailored to ball-based geometric covering under capacity constraints.
When the radii of all balls are the same, Ghasemi and Razzazi [90] designed the first approximation algorithm for the CBC problem, based on the shifting strategy and dynamic programming. When the demand of a point is no longer 1, note that this does not mean the point needs to be covered multiple times by different balls; rather, a single ball can satisfy the whole demand of a point without violating its capacity constraint. Ghasemi and Razzazi [91] combined several methods, including the shifting method and the substitution method, to obtain a PTAS for the CBC problem with equal radii. When a ball can serve multiple points but no further points lie within its coverage area, the ball is left with unused capacity. In practical applications, one may appropriately expand the radius of the ball to enlarge its coverage range, serve more points, and better utilize its capacity. Motivated by this, Bandyapadhyay et al. [92] introduced the concept of bicriteria approximation.
Definition 7. An (α, β) bicriteria approximation for the CBC problem is a solution where the balls can be expanded by a factor of β and the size of the solution is at most α times the optimum solution size (where the optimum does not expand the balls).
Under the assumption that the larger the radius, the larger the capacity, Bandyapadhyay et al. obtained the following series of results and conclusions. Bandyapadhyay et al. [
92] designed a bicriteria
-approximation algorithm based on the LP-rounding technique for the CBC problem. Bandyapadhyay [
93] later improved the algorithm and obtained a bicriteria approximation ratio of
for the same case, employing an improved LP-rounding technique, which reduces the expansion of the ball; based on this algorithm, a situation where the capacity of the ball is the same can also be considered, and a bicriteria approximation ratio of
can be obtained.
When the coverage requirement of each point is no longer the simple value of 1 but a more flexible point-dependent value, meaning that each point must be covered by at least a prescribed number of geometric objects, the geometric cover problem becomes the geometric multi-cover (GMC) problem. Compared to the classical geometric cover problem, the GMC problem introduces a stronger requirement by demanding that each point be covered a specified number of times. This makes the GMC problem a natural generalization of the geometric cover problem, with broader applicability in scenarios where redundancy or fault tolerance is essential. We consider the disk multi-cover (DMC) problem, i.e., the GMC problem in which the set of geometric objects is a set of disks. Chekuri et al. [
94] put forward an
-approximation algorithm for the DMC problem in
. In [
95], Raman and Ray obtained a
approximation by combining LP-rounding and local search for any
, and proved that local search yields a PTAS only when the demands are bounded above by a constant. Dai [
96] gave a 4-approximation algorithm based on the LP-rounding technique for the DMC problem with soft capacities. The unit disk multi-cover (UDMC) problem is a special case of the disk multi-cover (DMC) problem, in which all disks are required to have the same radius, typically normalized to one. The key difference between the two lies in the flexibility of disk radii: while the DMC allows variable radii to optimize coverage, the UDMC restricts all disks to be unit-sized, thereby simplifying the problem structure. Gao et al. [
73] gave a 5-approximation algorithm whose running time is
, where
represents the maximum covering requirement; they also designed a 4-approximation algorithm whose running time is
. When the cost of all disks is not necessarily the same, we consider the weighted disk multi-cover (WDMC) problem. Based on shallow cell complexity and quasi-uniform sampling [
77], Bansal and Pruhs [
97] showed an
-approximation algorithm for the WDMC problem in
. Another related problem is the interval multi-cover (IMC) problem, where the set G of geometric objects consists of intervals, and all points in
P lie on a line. Xu et al. [
98] proposed the maximum interval multi-cover (Max-IMC) problem, which seeks to select a subset of intervals with bounded cardinality in order to maximize the number of points whose coverage demands are satisfied; they gave a
-approximation algorithm for the Max-IMC problem based on dynamic programming.
Unlike the GMC problem, which requires satisfying the coverage demands of all points, the partial geometric multi-cover (PGMC) problem allows the coverage demands to be met for only a subset of the points. The coverage requirement in this problem also differs notably from the PGC problem: it is no longer the previously assumed value of 1 but rather a point-dependent demand, which introduces new elements and considerations into the problem-solving process. When the set
G of geometric objects is a set of unit squares, Ran [
99] used a strategy of shifted grids combined with a dynamic programming technique to obtain a
-approximate algorithm, under the assumption that for all
,
, where
. Another important study is the partial interval multi-cover (PIMC) problem, where the set
G of geometric objects is a set of intervals, and all points of
P are in a line. Ran et al. [
100] used dynamic programming techniques to obtain a 2-approximation algorithm for this problem.
4. Prize-Collecting Power Cover Problem
In the complex and ever-changing real business operation environment, communication operators face huge cost expenditures if they want to achieve comprehensive coverage of all clients. Based on a comprehensive consideration of cost-effectiveness, operators often prioritize covering certain clients with high commercial value or densely populated areas. However, this strategy inevitably leads to the neglect of some clients, especially those in remote areas and signal blind spots. In the current highly informationized era, these unserved clients are unable to enjoy convenient communication services and express strong dissatisfaction with this. To resolve this contradiction and maintain market fairness and client satisfaction, researchers have proposed the prize-collecting power cover (PCPC) problem, in which operators pay a penalty for the clients they leave unserved. In this section, we will introduce approximation algorithms for the PCPC problem and related works, and propose an open problem based on existing research.
In the PCPC problem with specified sensors, we are given a set of sensors S and a set of clients C in the metric space. A penalty function π, defined over subsets of clients, specifies the penalty π(A) incurred for not serving a subset A of clients. The unit cost of power for each sensor s ∈ S is w_s. Each sensor s can adjust its power p(s) to form a coverage area with radius r(s), such that a subset of clients within this area can be served, where p(s) = w_s · r(s)^α. The goal of the PCPC problem with specified sensors is to minimize the sum of the total power cost of all sensors and the total penalty of the unserved clients. The PCPC problem with specified sensors can be viewed as a special case of the uni-PC problem with specified sensors in which a sensor can serve an unlimited number of clients, and all clients are allowed to remain unserved with an associated penalty.
We introduce the variable x_s for each sensor s ∈ S, which indicates whether sensor s is selected (x_s = 1) or not (x_s = 0); the variable y_{s,c} for each sensor s ∈ S and each client c ∈ C, which indicates whether client c is served by sensor s (y_{s,c} = 1) or not (y_{s,c} = 0); and the variable z_A for each client subset A ⊆ C, which indicates whether the entire client subset A is penalized (z_A = 1) or not (z_A = 0). The nonlinear integer program of the PCPC problem with specified sensors is shown in the following:
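A plausible form of the PCPC-NLP in this notation (with w_s again denoting the unit power cost of sensor s; the exact presentation may differ from the original program), consistent with the constraint description below, is:

\begin{align*}
\text{(PCPC-NLP)} \quad \min\ & \sum_{s \in S} w_s \, r_s^{\alpha} + \sum_{A \subseteq C} \pi(A)\, z_A \\
\text{s.t.}\quad & \sum_{s \in S} y_{s,c} + \sum_{A \subseteq C:\, c \in A} z_A \ge 1, && \forall c \in C, \\
& y_{s,c} \le x_s, && \forall s \in S,\ c \in C, \\
& d(s,c)\, y_{s,c} \le r_s, && \forall s \in S,\ c \in C, \\
& x_s,\ y_{s,c},\ z_A \in \{0,1\},\quad r_s \ge 0, && \forall s \in S,\ c \in C,\ A \subseteq C.
\end{align*}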
The first set of constraints of the PCPC-NLP ensures that every client is either punished or served. The second set of constraints ensures that if a client is served by a sensor, the sensor must be selected in the solution. The third set of constraints ensures that only clients within the coverage area can be served.
A penalty function π is called submodular [101] if it satisfies π(A) + π(B) ≥ π(A ∪ B) + π(A ∩ B) for all A, B ⊆ C. A penalty function π is called linear [101] if it satisfies π(A) = ∑_{c∈A} π({c}) for all A ⊆ C.
Under the assumption that the submodular penalty function π is non-decreasing (i.e., π(A) ≤ π(B) for all A ⊆ B ⊆ C) and normalized (i.e., π(∅) = 0), Liu et al. [101] obtained an approximation algorithm for the case of specified sensors in a d-dimensional space, based on the plane subdivision technique; they also gave a PTAS for the case with a linear penalty function and specified sensors. Dai [102] obtained an approximation guarantee for the PCPC problem with specified sensors, using an improved primal-dual algorithm for the case of a submodular penalty function.
For better comprehension, we tabulate the known results of the PCPC problem with specified sensors and fixed α in Table 4.
Despite notable progress, approximation algorithms for the PCPC problem still face substantial limitations. Existing results often assume that the attenuation coefficient α is fixed and that the penalty function is either linear or submodular. However, known approximation ratios deteriorate rapidly as α increases, and no EPTAS is known even when α is fixed. Moreover, most current algorithms rely on geometric properties of Euclidean space and do not extend naturally to higher-dimensional or general metric settings. A central challenge lies in the nonlinear and combinatorial nature of the objective, which combines the α-th power of coverage radii with penalties over exponentially many client subsets. This complexity makes it difficult to balance sensor activation, radius selection, and client coverage decisions in a unified approximation framework. Therefore, we propose the following open problem.
Open problem 2. Can we design an EPTAS for the PCPC problem when α is fixed?
5. Partial Power Cover Problem
In numerous practical scenarios, we often encounter various constraints. In particular, power supplies are inherently limited, which might stem from factors like battery capacity, power generation capabilities, or the high cost of maintaining a continuous and ample power source. Additionally, the specific geographical locations of clients and sensors play a crucial role: these locations may lie in remote areas, difficult terrain, or areas with complex infrastructure, making it technically or economically difficult to extend coverage to every single point. It is precisely because of such challenges and constraints that researchers and practitioners have been spurred into action. Inspired by this intricate and prevalent problem, the concept of the partial power cover (PPC) problem was proposed, aiming to strike a balance between achieving satisfactory coverage and minimizing power consumption under these less-than-ideal conditions. In this section, we will introduce approximation algorithms for the PPC problem and related works, and propose an open problem based on existing research.
In the PPC problem with specified sensors, we are given a set of sensors S, a set of clients C, and a positive integer k in the metric space. The unit cost of power for each sensor s ∈ S is w_s. Each sensor s can adjust its power p(s) to form a coverage area with radius r(s), such that a subset of clients within this area can be served, where p(s) = w_s · r(s)^α. The goal of the PPC problem with specified sensors is to minimize the total power cost of all sensors while serving at least k clients. When the penalty function for any unserved client subset tends to zero and each sensor is assumed to have unlimited capacity, the uni-PC problem with specified sensors transforms into the PPC problem. The problem focuses exclusively on identifying the minimum power necessary to satisfy the mandatory requirement of covering at least k clients, reflecting the core challenge of the PPC problem: achieving a specified level of partial coverage with the utmost power efficiency.
We provide the nonlinear integer programming for the PPC problem with specified sensors as follows:
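A plausible form of the PPC-NLP, in the same assumed notation as the PC-NLP sketch above and consistent with the constraint description below, is:

\begin{align*}
\text{(PPC-NLP)} \quad \min\ & \sum_{s \in S} w_s \, r_s^{\alpha} \\
\text{s.t.}\quad & \sum_{c \in C} \sum_{s \in S} y_{s,c} \ge k, \\
& y_{s,c} \le x_s, && \forall s \in S,\ c \in C, \\
& d(s,c)\, y_{s,c} \le r_s, && \forall s \in S,\ c \in C, \\
& \sum_{s \in S} y_{s,c} \le 1, && \forall c \in C, \\
& x_s,\ y_{s,c} \in \{0,1\},\quad r_s \ge 0, && \forall s \in S,\ c \in C.
\end{align*}

The last constraint, requiring that each client be served at most once, is our own addition to prevent double counting when tallying the k served clients.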
In the PPC-NLP, we have a variable x_s for each sensor s ∈ S that indicates whether s is in the solution (x_s = 1) or not (x_s = 0). For each sensor s ∈ S and each client c ∈ C, there is a variable y_{s,c} that indicates whether s serves c (y_{s,c} = 1) or not (y_{s,c} = 0). The first set of constraints ensures that at least k clients are served. The second set of constraints ensures that clients can only be served by selected sensors. The third set of constraints ensures that served clients must be within the coverage area supported by the sensor.
By combining dual-primal techniques, Ahmadian and Swamy [
103] gave a
-approximation algorithm for the PPC problem with specified sensors in a metric space, when the attenuation coefficient
.
Buchem et al. [
104] proposed the PPC problem for the case with
and unspecified sensors in metric space and used a novel primal duality method to provide a
-approximate algorithm based on LP relaxation, binary search, and so on.
When attenuation coefficient
, Freund and Rawitz [
34] introduced the PPC problem with specified sensors in
, and used the 9-LMP algorithm in Könemann [
47] to obtain a
-approximation algorithm.
Li et al. [
105] designed a
-approximation algorithm for the case of specified sensors in
, based on the primal-dual method for the attenuation coefficient of
. Dai et al. [
106] generalized the method in Li et al. [
105] and obtained an
algorithm for PPC problem with specified sensors in
, using the concept of relaxed independent sets.
While the PPC problem assumes a unit profit for serving each client, the prize-collecting partial power cover (PCPPC) problem extends this framework by introducing specific profits for served clients and penalties for unserved clients. For the non-unit profit case, where the profit for serving each client can vary, Dai et al. [
107] developed a
-approximation algorithm for the PCPPC problem with specified sensors and submodular penalty in
. For the unit profit case, where each client has a unit profit, Liu et al. [
108] proposed a two-phase primal-dual algorithm, achieving an approximation ratio of at most
for the PCPPC problem with specified sensors and submodular penalty in
. In the unit profit case with a submodular penalty function, Liu et al. [
109] designed a polynomial-time
-approximation algorithm for the PCPPC problem with specified sensors in
, based on the primal-dual framework. Additionally, for the unit profit case, Wang et al. [
110] utilized properties of relaxed independent sets and combined guessing techniques with the primal-dual framework to develop an
-approximation algorithm for the PCPPC problem with specified sensors and submodular penalty in
.
The partial power cover problem with fairness constraint (PPCFC) is a generalization of the PPC problem. For the PPCFC, the client set is divided into T disjoint subcollections C_1, …, C_T. Each client subcollection C_t is associated with an integer k_t. Here, k_t represents a fairness constraint, i.e., at least k_t clients from subcollection C_t must be served. The objective of the PPCFC is to minimize the total power cost such that at least k_t clients are served from each subcollection C_t. For the nonlinear mathematical programming of the PPCFC, we only need to replace the constraint of the PPC-NLP requiring that at least k clients are served with the fairness constraints in inequality (1).
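In the notation of the PPC-NLP sketch above (our assumed notation), a plausible form of the fairness constraint referred to as inequality (1) is:

\begin{equation*}
\sum_{s \in S} \sum_{c \in C_t} y_{s,c} \ \ge\ k_t, \qquad \forall\, t \in \{1, \dots, T\}. \tag{1}
\end{equation*}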
It is worth noting that when T = 1, the PPCFC is equivalent to the PPC problem [111]. Ma and Zhang [111] gave a polynomial-time approximation algorithm for the PPCFC problem with specified sensors, based on primal-dual and binary search techniques.
To clearly distinguish between different settings of the PPC problem, we divide our summary into two separate tables. We tabulate the known results of the PPC problem with unspecified sensors in a metric space in
Table 5.
We tabulate the known results of the PPC problem with specified sensors in
Table 6.
Despite progress on the PPC problem, the PPCFC remains largely unexplored in general settings. The PPCFC adds group-level service requirements on top of the already challenging PPC framework, which combines a nonlinear power cost objective with combinatorial client selection. So far, known approximation algorithms for the PPCFC are limited to specific cases—for example, when α is fixed and the setting is two-dimensional Euclidean space. Whether such results can be extended to higher dimensions or yield stronger approximations remains unclear. Therefore, we propose the following open problem.
Open problem 3. Does there exist a PTAS for the PPCFC in fixed-dimensional Euclidean space?
6. Capacitated Power Cover Problem
Due to the performance and cost of sensors, the number of clients that a sensor can serve is often constrained. In terms of performance, the signal transmission range and data processing capability are insufficient to meet the needs of a large number of clients. From a cost perspective, improving performance requires increasing hardware investment and raising costs. Together, these factors greatly constrain the number of clients that a sensor can serve. In this section, we will introduce approximation algorithms for the CPC problem and related works, and propose an open problem based on existing research.
In the capacitated power cover (CPC) problem with specified sensors, we are given a set of sensors S and a set of clients C in the metric space. There is a capacity constraint for each sensor, which bounds the number of clients it can serve. The unit cost of power for each sensor s ∈ S is w_s. Each sensor s can adjust its power p(s) to form a coverage area with radius r(s), such that a subset of clients within this area can be served, where p(s) = w_s · r(s)^α. The goal of the CPC problem with specified sensors is to minimize the total power cost of all sensors without violating the capacity constraints, such that all clients are served. When the penalty for any unserved client subset approaches infinity and at least n = |C| clients must be served, i.e., k = n, the uni-PC problem transforms into the CPC problem. Under these combined conditions, the penalty term in the objective function becomes irrelevant (as all clients must be served to avoid infinite cost), and the problem's primary goal becomes minimizing the total power cost. Crucially, the capacity constraints on each sensor remain active, ensuring that each activated sensor can serve only a limited number of clients.
The nonlinear integer program of the CPC problem with specified sensors is as follows:
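A plausible form of the CPC-NLP, in the assumed notation used above and writing L_s for the capacity of sensor s (symbol ours), consistent with the constraint description below, is:

\begin{align*}
\text{(CPC-NLP)} \quad \min\ & \sum_{s \in S} w_s \, r_s^{\alpha} \\
\text{s.t.}\quad & \sum_{c \in C} y_{s,c} \le L_s, && \forall s \in S, \\
& y_{s,c} \le x_s, && \forall s \in S,\ c \in C, \\
& \sum_{s \in S} y_{s,c} = 1, && \forall c \in C, \\
& d(s,c)\, y_{s,c} \le r_s, && \forall s \in S,\ c \in C, \\
& x_s,\ y_{s,c} \in \{0,1\},\quad r_s \ge 0, && \forall s \in S,\ c \in C.
\end{align*}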
In the CPC-NLP, we have a variable x_s for each sensor s ∈ S that indicates whether s is in the solution (x_s = 1) or not (x_s = 0). For each sensor s ∈ S and each client c ∈ C, there is a variable y_{s,c} that indicates whether s covers c (y_{s,c} = 1) or not (y_{s,c} = 0). The first set of constraints ensures that each sensor can serve only a limited number of clients. The second set of constraints ensures that clients can only be served by selected sensors. The third set of constraints ensures that each client is served by exactly one sensor. The fourth set of constraints ensures that served clients must be within the coverage area supported by the sensor.
When the capacity of every single sensor is uniform, in other words, each sensor has an identical capacity value, we turn our attention to the uniform capacitated power cover (UCPC) problem. This particular problem framework is defined by this consistent sensor capacity. It implies that each sensor has the same ability to handle or contribute to the power cover problem. Conversely, if the capacities of the sensors are not uniform, meaning that there is a variance in the capacity values between one sensor and another, we then consider the non-uniform capacitated power cover (non-UCPC) problem. Here, the non-uniformity of the sensor capacities adds an extra layer of complexity. Different sensors with different capacities will interact in a more intricate manner within the power cover system, potentially affecting aspects such as how power is allocated, how the coverage area is shaped, and what strategies are most effective for achieving an optimal power cover.
There exist two distinct versions of capacity constraints within this context. One is the soft capacities version. In the case of soft capacities, the number of copies of each covering object is not restricted in any way. It can be expanded or remain at a certain level without any imposed upper constraint. On the other hand, there is the hard capacities version. When it comes to hard capacities, the copy number of each covering object has a definite bound. This bound restricts the number of copies that each such object can have, setting a clear upper constraint that cannot be exceeded.
Guan et al. [
112] designed a primal-dual
f-approximation algorithm for the non-UCPC problem with soft capacities and specified sensors in a plane, where
and
denotes the set of all disks centered at sensors in
S with radii equal to their distances to clients in
C, i.e.,
. Zhang et al. [
113] proposed an
m-approximation algorithm for non-UCPC problem with hard capacities and specified sensors in a fixed-dimensional space
, using the primal-dual method.
Inamdar and Varadarajan [
114] studied the UCPC problem with unspecified sensors and hard capacities in a metric space, and designed an FPT 28-approximation algorithm for the case with
and parameter
q, whose running time is
, where
N is the number of points in a metric space. When
, Bandyapadhyay et al. [
115] gave two FPT constant-factor approximation algorithms for the case of unspecified sensors in a metric space, with parameter
q and hard capacities; for non-UCPC problem under constraints of hard capacities, they gave an FPT
-approximation algorithm, whose running time is
, where
N is the number of points in a metric space; they proposed an FPT
-approximation algorithm, using random sampling technique for UCPC problem, whose running time is
; they also proposed better result in Euclidean space, namely the FPT
-approximation algorithm, if the capacity is allowed to violate
factor based on random sampling technique. For the case of
, Jaiswal et al. [
116] improved the result and obtained two algorithms for the case of unspecified sensors in a metric space, namely the FPT 3-approximation for UCPC problem and the FPT
-approximation algorithm for non-UCPC problem, with parameter
q and hard capacities. Filtser and Gadekar [
117] proved that, in the case with
and unspecified sensors, the problem is APX-hard with parameter
q and hard capacities in a metric space; they also designed a 5.83-approximation randomized algorithm running in FPT time for the non-UCPC problem in a metric space and obtained a
-approximation randomized algorithm in FPT time for the UCPC problem, both with parameter
q and hard capacities in a metric space.
To better differentiate the various versions of the CPC problem, we organize our summary into two distinct tables. We tabulate the known results of the UCPC problem and non-UCPC problem with specified sensors and fixed α in Table 7.
We tabulate the known results of the UCPC problem and non-UCPC problem with unspecified sensors,
and hard capacities in a metric space in
Table 8.
The CPC problem presents substantial challenges due to the combination of full cover, nonlinear power costs, and sensor capacity constraints. Each sensor can only serve a limited number of clients, and increasing coverage radius comes at a rapidly increasing power cost, making it difficult to balance efficiency and feasibility. Additionally, the interaction between sensor–client assignments and capacity limits significantly complicates the problem structure. Although some progress has been made in specific settings, the general problem remains difficult to approximate, and effective solutions for the non-uniform, hard-capacity case are still lacking. These limitations underscore the need for continued research. Therefore, we propose the following open problem.
Open problem 4. Can we design a polynomial-time algorithm for the UCPC problem and the non-UCPC problem with a better approximation ratio?
7. Power Multi-Cover Problem and Its Variants
In real life, a client's requirements may not be singular; the client may need to be served by multiple sensors simultaneously. For example, in security applications, multiple sensors are often required to detect various potential security events, or, for the purpose of system fault tolerance, redundant sensors are needed to ensure the reliability of the service. Therefore, the concept of the multi-cover problem has gradually attracted widespread attention from scholars.
7.1. Power Multi-Cover Problem
In this section, we introduce approximation algorithms for the power multi-cover (PMC) problem and related problems, and propose an open problem based on existing research.
For the PMC problem with specified sensors, we are given a set of sensors, , and a set of clients in the metric space. The unit cost of power for each sensor is . Each sensor can adjust its power to form a coverage area with radius , such that a subset of clients within this area can be served, where . Given a mapping , a client in C is mapped to a positive integer , , where represents the coverage requirement of the client . The goal of the PMC problem with specified sensors is to minimize the total power cost of all sensors while satisfying all clients’ coverage requirements. The key difference between the PMC problem with specified sensors and the power cover problem with specified sensors lies in the clients’ coverage requirements: when the coverage requirement of every client is exactly 1, the PMC problem with specified sensors reduces to the power cover problem with specified sensors. We abbreviate as .
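To make the objective concrete, the following sketch evaluates a candidate solution of the PMC problem with specified sensors. It is only an illustration under stated assumptions: the names (pmc_cost, sensors, demand, assignment) are hypothetical, distances are taken to be Euclidean, each sensor’s power cost is assumed to have the form c_s·r_s^α, and each sensor’s radius is set to the distance of the farthest client it serves.

import math
from collections import defaultdict

def pmc_cost(sensors, clients, demand, assignment, alpha=2.0):
    # sensors: sensor id -> (x, y, unit_cost c_s)
    # clients: client id -> (x, y)
    # demand: client id -> coverage requirement b_j (positive integer)
    # assignment: set of (sensor id, client id) pairs, meaning the sensor serves the client
    # alpha: attenuation coefficient in the assumed power model c_s * r_s**alpha
    served_by = defaultdict(set)
    for s, j in assignment:
        served_by[j].add(s)
    # Feasibility: every client must be served by at least b_j distinct sensors.
    for j, b in demand.items():
        if len(served_by[j]) < b:
            raise ValueError("client %s is covered %d times but requires %d" % (j, len(served_by[j]), b))
    # Cost: each sensor pays c_s * r_s**alpha, where r_s is the distance to its farthest assigned client.
    total = 0.0
    for s, (sx, sy, unit_cost) in sensors.items():
        radii = [math.dist((sx, sy), clients[j]) for (t, j) in assignment if t == s]
        if radii:
            total += unit_cost * max(radii) ** alpha
    return total

For instance, a client with coverage requirement 2 must appear in the assignment with two distinct sensors before the cost is returned.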
The nonlinear integer program of the PMC problem with specified sensors is as follows:
In the PMC-NLP, we have a variable for each sensor that indicates whether is in the solution () or not (). For each sensor and each client , there is a variable that indicates whether serves () or not (). Compared with the PC-NLP, the second set of constraints is different: it requires that each client be served times.
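For concreteness, one plausible way to write the PMC-NLP consistent with this description is sketched below; the symbols $y_s$ (sensor opened), $x_{sj}$ (sensor $s$ serves client $j$), $r_s$ (radius of $s$), $d(s,j)$ (distance), $c_s$ (unit power cost), $\alpha$ (attenuation coefficient), and $b_j$ (coverage requirement) are chosen here only for illustration and need not match the original notation.

\begin{align*}
\min \quad & \sum_{s \in S} c_s\, r_s^{\alpha} \\
\text{s.t.} \quad & x_{sj} \le y_s, && \forall s \in S,\ \forall j \in C, \\
& \sum_{s \in S} x_{sj} \ge b_j, && \forall j \in C, \\
& d(s,j)\, x_{sj} \le r_s, && \forall s \in S,\ \forall j \in C, \\
& x_{sj},\, y_s \in \{0,1\},\ r_s \ge 0, && \forall s \in S,\ \forall j \in C.
\end{align*}

In this sketch, setting $b_j = 1$ for every client recovers the power cover setting.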
There are two versions of the PMC problem: the uniform power multi-cover (UPMC) problem, in which all clients have the same coverage requirement b, and the non-uniform power multi-cover (non-UPMC) problem, in which the coverage requirements of the clients are not necessarily the same.
When
, Abu-Affash et al. [
118] showed an
-approximation algorithm, using a geometric approach for the UPMC problem with specified sensors in a plane. Bar-Yehuda and Rawitz [
119] investigated the non-UPMC problem with specified sensors within a plane and presented a local-ratio algorithm, attaining an approximation ratio of at most
, where
represents the maximum covering requirement. Bhowmick et al. [
120] proposed a polynomial-time
-approximation algorithm for the non-UPMC problem with specified sensors in the plane, based on the primal-dual technique and an extended notion of the outer cover [
118]; this result can also be extended to higher-dimensional spaces, obtaining an approximation ratio of
in
, where
d is any constant. At this point, the approximation ratio no longer depends on
, but on the dimension
d. Bhowmick et al. [
121] later provided an
-approximation algorithm for the UPMC problem with specified sensors in a fixed-dimensional metric space, based on an analysis of the outer cover; when the coverage requirements of all clients are not necessarily the same, they gave a
-approximation algorithm for the non-UPMC problem with specified sensors in a fixed-dimensional metric space; if the number of open sensors is constrained, they provided a
-approximation algorithm for the case of
for the UPMC problem with specified sensors in a fixed-dimensional metric space. If all clients and sensors are on the same line, Liang et al. [
122] provided a polynomial-time algorithm for the non-UPMC problem with specified sensors, using LP methods, whose running time is
, where
is the maximum coverage requirement among all clients. Huang et al. [
123] demonstrated a
-approximation algorithm in
time for the non-UPMC problem with specified sensors and
in a fixed-dimensional space
, based on bubble charging and a balanced recursive realization procedure.
For better comprehension, we tabulate the known results of the PMC problem with specified sensors in
Table 9.
The PMC problem is fundamentally challenging due to the interplay between power minimization and multi-cover constraints. Compared to the standard power cover problem, the requirement that each client must be covered multiple times significantly enlarges the solution space and introduces more complex dependencies. These difficulties are further compounded when coverage radii and power costs are nonlinearly related. While several approximation algorithms exist under fixed parameters such as the dimension d or the power exponent , the general case remains computationally intractable. Therefore, we propose the following open problem.
Open problem 5. Can we design a PTAS for the PMC problem when and d are not fixed?
7.2. Partial Power Multi-Cover Problem
In this section, we introduce approximation algorithms for the partial power multi-cover (PPMC) problem and related problems, and propose an open problem based on existing research.
For the PPMC problem with specified sensors, we are given a set of sensors, , a set of clients , and a positive integer k in the metric space. The unit cost of power for each sensor is . Each sensor can adjust its power to form a coverage area with radius , such that a subset of clients within this area can be served, where . Given a mapping , represents the coverage requirement of the client . The goal of the PPMC problem is to minimize the total power cost of all sensors while satisfying the coverage requirements of at least k clients. Unlike the PMC problem with specified sensors, the PPMC problem with specified sensors only needs to meet the coverage requirements of some of the clients. We abbreviate as .
We introduce the variable
for each sensor
, which indicates whether sensor
is open (
) or not (
); the variable
for each sensor
and each client
, which indicates whether client
is served by sensor
(
) or not (
); and the variable
for each client
, which indicates whether client
’s coverage requirement is satisfied (
) or not (
). The nonlinear integer program of the PPMC problem with specified sensors is shown in the following:
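A plausible sketch of the PPMC-NLP, under the same illustrative notation as the PMC-NLP sketch above and with the additional binary variable $z_j$ indicating whether client $j$’s requirement is satisfied, is the following; the symbols are again chosen only for illustration.

\begin{align*}
\min \quad & \sum_{s \in S} c_s\, r_s^{\alpha} \\
\text{s.t.} \quad & x_{sj} \le y_s, && \forall s \in S,\ \forall j \in C, \\
& \sum_{s \in S} x_{sj} \ge b_j\, z_j, && \forall j \in C, \\
& \sum_{j \in C} z_j \ge k, \\
& d(s,j)\, x_{sj} \le r_s, && \forall s \in S,\ \forall j \in C, \\
& x_{sj},\, y_s,\, z_j \in \{0,1\},\ r_s \ge 0, && \forall s \in S,\ \forall j \in C.
\end{align*}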
If all clients and sensors are on the same line and each client needs to be covered multiple times, Liang et al. [
122] proposed the partial power multi-cover problem on a line (PPMC-Line) and found an optimal solution for the case of specified sensors, based on dynamic programming, when the maximum coverage requirement of the clients is upper bounded by a constant; its running time is
, where
denotes the maximum covering requirement among all clients. If the coverage area of each sensor is a disk, we simply refer to the problem as the PPMC problem. Under the assumption that the maximum covering requirement is upper bounded by a constant, Ran et al. [
124] proposed a PTAS for the PPMC problem with any constant
and specified sensors in Euclidean space
, whose running time is
, based on the shifted quadtree technique combined with dynamic programming.
For better comprehension, we tabulate the known results of the PPMC problem with specified sensors and fixed
in
Table 10.
The PPMC problem poses significant theoretical and computational challenges due to the interplay of power control, multi-coverage requirements, and partial satisfaction constraints. These features introduce intricate dependencies among clients and sensors, making it difficult to capture or decompose the problem effectively. At present, the structural complexity of the problem is not well understood, and existing results are limited to special cases. Exploring its fundamental properties and developing general approximation frameworks remain important directions for future research. Therefore, we propose the following open problem.
Open problem 6. Can the results for the PPC problem and the PMC problem be applied to the PPMC problem to obtain a better approximation ratio?
7.3. Capacitated Power Multi-Cover Problem
In this section, we introduce approximation algorithms for the capacitated power multi-cover (CPMC) problem and related problems, and propose an open problem based on existing research.
In the CPMC problem with specified sensors, we are given a set of sensors, , and a set of clients in the metric space. Given a mapping , a client in C is mapped to a positive integer , , where represents the coverage requirement of the client . There is a capacity constraint for each sensor . The unit cost of power for each sensor is . Each sensor can adjust its power to form a coverage area with radius , such that a subset of clients within this area can be served, where . The goal of the CPMC problem with specified sensors is to minimize the total power cost of all sensors such that the coverage requirements of all clients are satisfied without violating any capacity constraint. Unlike the PMC problem with specified sensors, the CPMC problem with specified sensors includes a capacity constraint for each sensor. We abbreviate as .
The nonlinear integer program of the CPMC problem with specified sensors is as follows:
In the CPMC-NLP, we have a variable for each sensor that indicates whether is in the solution () or not (). For each sensor and each client , there is a variable that indicates whether serves () or not ().
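Under the same illustrative notation as the sketches above, and writing $L_s$ for the (hypothetically named) capacity of sensor $s$, a plausible sketch of the CPMC-NLP is:

\begin{align*}
\min \quad & \sum_{s \in S} c_s\, r_s^{\alpha} \\
\text{s.t.} \quad & x_{sj} \le y_s, && \forall s \in S,\ \forall j \in C, \\
& \sum_{s \in S} x_{sj} \ge b_j, && \forall j \in C, \\
& \sum_{j \in C} x_{sj} \le L_s\, y_s, && \forall s \in S, \\
& d(s,j)\, x_{sj} \le r_s, && \forall s \in S,\ \forall j \in C, \\
& x_{sj},\, y_s \in \{0,1\},\ r_s \ge 0, && \forall s \in S,\ \forall j \in C.
\end{align*}

The capacity constraint is the only new ingredient relative to the PMC-NLP sketch: no opened sensor may serve more clients than its capacity allows.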
The CPMC problem introduces a complex trade-off between power efficiency and service feasibility: each sensor must not only decide how far to cover (which affects power cost nonlinearly) but also respect strict capacity limits, while collectively meeting the coverage requirements of all clients. These intertwined constraints make algorithm design particularly difficult. Currently, there is a lack of general algorithmic frameworks or constant-factor approximation algorithms that can handle this level of complexity without relying on strong structural assumptions. Therefore, we propose the following open problem.
Open problem 7. Can we design a polynomial-time constant approximation algorithm for the CPMC problem?
8. Conclusions
In this paper, we provide a comprehensive introduction to the power cover problem and its related variants, under both specified and unspecified sensor settings, such as the PPC problem, the CPC problem, and the PCPC problem. We formulate nonlinear integer programs for these problems with specified sensors and analyze their interrelationships. We conduct a comprehensive investigation and summary of their algorithms and related results, and raise an open problem for each problem. The main objective of this paper is to serve as a valuable reference for researchers and practitioners, facilitating the development of effective solutions for real-world WSN deployment scenarios.
While our study covers the power cover problem and several of its important variants, under either specified or unspecified sensor settings, there remain inherent limitations in existing models. In particular, most current formulations assume that power cost is purely a function of coverage radius and neglect the processing cost associated with the number of clients served. Although this abstraction simplifies analysis, it deviates from practical scenarios where power usage also scales with workload. To address this gap and further generalize the modeling of real-world wireless sensor networks, we introduce the generalized power cover (GPC) problem.
For the GPC problem with specified sensors, we are given a set of sensors, , and a set of clients in the metric space. The unit power cost associated with the coverage radius of each sensor is . Each sensor can adjust its power to form a coverage area with radius , such that a subset of clients within this area can be served, where . Let be the power cost of allocating clients at sensor , indicating the power cost required for sensor to process signals, where satisfies
Non-decreasing: for every .
Normalization: .
Left continuous: , for every .
The total power cost for sensor to serve clients and process signals is . The goal of the GPC problem is to minimize the sum of the power costs incurred by all sensors for serving all clients and processing the corresponding signals.
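As a small illustration of how the two cost components interact, the sketch below (hypothetical names; it assumes Euclidean distances, a radius cost of the form c_s·r^α, and a load-dependent cost given as a non-decreasing function g_s with g_s(0) = 0) evaluates the total GPC cost of an assignment in which each client is served by exactly one sensor.

import math

def gpc_cost(sensors, clients, assignment, load_cost, alpha=2.0):
    # sensors: sensor id -> (x, y, unit_cost c_s)
    # clients: client id -> (x, y)
    # assignment: client id -> sensor id (each client is served by exactly one sensor)
    # load_cost: sensor id -> non-decreasing function g_s with g_s(0) == 0
    total = 0.0
    for s, (sx, sy, unit_cost) in sensors.items():
        assigned = [j for j, t in assignment.items() if t == s]
        if assigned:
            # Radius-dependent part: the radius must reach the farthest assigned client.
            radius = max(math.dist((sx, sy), clients[j]) for j in assigned)
            total += unit_cost * radius ** alpha
        # Load-dependent part: processing cost for the number of clients served.
        total += load_cost[s](len(assigned))
    return total

# Example of an admissible load-dependent cost: non-decreasing and zero at zero load.
half_unit_per_client = lambda n: 0.5 * n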
We introduce the variable
for each sensor
, which indicates whether sensor
is open (
) or not (
); the variable
for each sensor
and each client
, which indicates whether client
is served by sensor
(
) or not (
); and the variable
for each sensor
, which indicates the number of signals processed by sensor
(
). The nonlinear integer program of the GPC problem with specified sensors is shown as follows:
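One plausible reading of the GPC-NLP, sketched with illustrative symbols ($y_s$ open, $x_{sj}$ assignment, $u_s$ the number of signals processed by $s$, and $g_s$ the load-dependent cost), is given below; the actual formulation may differ, for instance in how repeated openings of a sensor are modeled.

\begin{align*}
\min \quad & \sum_{s \in S} \big( c_s\, r_s^{\alpha} + g_s(u_s) \big) \\
\text{s.t.} \quad & x_{sj} \le y_s, && \forall s \in S,\ \forall j \in C, \\
& \sum_{s \in S} x_{sj} = 1, && \forall j \in C, \\
& u_s = \sum_{j \in C} x_{sj}, && \forall s \in S, \\
& d(s,j)\, x_{sj} \le r_s, && \forall s \in S,\ \forall j \in C, \\
& x_{sj},\, y_s \in \{0,1\},\ u_s \in \mathbb{Z}_{\ge 0},\ r_s \ge 0, && \forall s \in S,\ \forall j \in C.
\end{align*}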
The first set of constraints of the GPC-NLP ensures that a client can only be served if the sensor is opened. The second set of constraints ensures that each client is served by exactly one sensor. The third set of constraints ensures that sensors only process signals for the clients they serve. The fourth set of constraints guarantees that each assigned client lies within the coverage radius of the assigned sensor. Compared to the PC-NLP, the generalized model GPC-NLP allows each sensor to be opened multiple times and introduces a load-dependent cost function reflecting processing power.
We believe that the GPC problem represents a promising direction for future research. Potential extensions, such as incorporating partial coverage, client penalties, or capacity constraints, could further improve its applicability to heterogeneous and resource-constrained environments. Investigating tractable subclasses or developing approximation schemes for the GPC problem remains an open and meaningful research challenge.