1 Introduction

With the rapid development of wireless sensor networks (WSNs), intensive studies on WSNs have emerged, especially on the coverage problem. In a coverage problem, the most basic requirement is to keep all points under monitoring. In a typical WSN, the service area of a sensor is a disk centered at the sensor whose radius is determined by the power of the sensor. A typical relation between the power p(s) of sensor s and the radius r(s) of its service area is

$$\begin{aligned} p(s)=c\cdot r(s)^\alpha , \end{aligned}$$
(1)

where c and \(\alpha \ge 1\) are constants (\(\alpha \) is usually called the attenuation factor). So, the greater the power a sensor possesses, the larger the service area it can provide. In other words, energy consumption and quality of service are two conflicting factors. The question is how to balance these two conflicting factors by adjusting the power at the sensors so that the desired service can be accomplished using the minimum total power. This question is motivated by the intention to extend the lifetime of a WSN under a limited energy supply, and we call it the minimum power coverage problem (MinPowerCov).
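As a small illustration of relation (1), the following sketch computes the power required for a given service radius; the constants c and \(\alpha \) here are illustrative values, not fixed by the problem.

```python
def power(radius, c=1.0, alpha=2.0):
    """Power needed to support a service disk of the given radius,
    following p(s) = c * r(s)**alpha (Eq. (1)).  The defaults for
    c and alpha are illustrative, not part of the problem."""
    return c * radius ** alpha

# doubling the radius multiplies the required power by 2**alpha
print(power(1.0), power(2.0))  # 1.0 4.0
```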

In the real world, it is often too costly to satisfy the covering requirement of every point. So, it is beneficial to study the minimum power partial coverage problem (MinPowerPartCov), in which it is sufficient to cover at least k (instead of all) points. The problem is motivated by the purpose of further saving energy while keeping an acceptable quality of service.

The MinPowerPartCov problem can be viewed as a special case of the minimum weight partial set cover problem (MinWPSC). Given a set E of elements, a collection of sets \(\mathcal S\), a weight function \(w\) on \(\mathcal S\), and an integer \(k\le |E|\), the MinWPSC problem is to find a minimum weight sub-collection \(\mathcal F\subseteq \mathcal S\) such that at least k elements are covered by \(\mathcal F\), i.e., \(|\bigcup _{S\in \mathcal F}S|\ge k\) and \(w(\mathcal F)=\sum _{S\in \mathcal F}w(S)\) is minimum. Notice that in a MinPowerPartCov instance, the power at a sensor can be discretized by assuming that there is a point on the boundary of the disk supported by the assigned power. We call such a disk a canonical disk. So, if we associate with each sensor |X| canonical disks, where X is the set of points, each disk corresponds to the set of points contained in it, and the weight of the disk equals the power supporting the disk as determined by Eq. (1), then the MinPowerPartCov problem reduces to the MinWPSC problem.

It is known that the MinWPSC problem has an f-approximation [2], where f is the maximum frequency of an element, that is, the maximum number of sets containing a common element. For the MinWPSC problem obtained by the above reduction from a MinPowerParCov problem, f equals the number of sensors, which is too large to be a good approximation factor. So, the main purpose of this paper is to explore geometric properties of the MinPowerParCov problem to obtain a better approximation.

1.1 Related Works

The minimum weight set cover problem (MinWSC) is a classic combinatorial problem. It is well known that MinWSC admits approximation ratio \(H(\varDelta )\) [7, 14], where \(H(\varDelta )=1+\frac{1}{2} +\cdots +\frac{1}{\varDelta }\) is the Harmonic number and \(\varDelta \) denotes the size of the largest set. It is also known that a simple LP-rounding algorithm achieves an approximation ratio of f, where f is the maximum number of sets containing a common element (see, for example, Chapter 12 of the book [23]).

For the minimum weight partial set cover problem (MinWPSC), Slavík [21] obtained an \(H(\min \{\lceil k\rceil ,\varDelta \})\)-approximation using a greedy strategy; Bar-Yehuda [2] obtained an f-approximation using the local ratio method; Gandhi et al. [10] also obtained an f-approximation using a primal-dual method. Very recently, Inamdar et al. [13] designed an LP-rounding algorithm, obtaining approximation ratio \(2\beta +2\), where \(\beta \) is the integrality gap of the natural linear program for the minimum weight (full) set cover problem.

For the geometric minimum weight set cover problem, much better approximation factors can be achieved. Using a partition and shifting method, Hochbaum et al. [12] obtained a PTAS for the minimum unit disk cover problem in which the disks are uniform and there are no prefixed locations for the disks. For the minimum disk cover problem in which disks may have different sizes, Mustafa et al. [15] designed a PTAS using a local search method. This PTAS was generalized by Roy et al. [20] to non-piercing regions including pseudo-disks. These are results for the cardinality version of the geometric set cover problem. Considering weight, Varadarajan [22] presented a clever quasi-uniform sampling technique, which was improved by Chan et al. [8], yielding a constant approximation for the minimum weight disk cover problem. This constant approximation was generalized by Bansal et al. [4] for the minimum weight disk multi-cover problem in which every point has to be covered multiple times. Using a separator framework, Mustafa et al. [16] obtained a quasi-PTAS for the minimum weight disk cover problem.

To our knowledge, there are two papers studying the geometric minimum partial set cover problem. The first paper is [10], in which Gandhi et al. presented a PTAS for the minimum (cardinality) partial unit disk cover problem using a partition and shifting method. Notice that this result only works for the case when the centers of the disks are not prefixed. Another paper is due to Inamdar et al. [13], in which a \((2\beta +2)\)-approximation was obtained for the general minimum weight partial set cover problem, where \(\beta \) is the integrality gap of the natural linear program for the minimum weight (full) set cover problem. As a consequence, for those geometric set cover problems (including the disk cover problem) in which \(\beta \) is a constant, the approximation ratio for the partial version is also a constant (but the constant is large).

Recently, there have been many works studying the minimum power multi-cover problem (MinPowerMC), in which every point p is associated with a covering requirement \(cr_p\), and the goal is to find a power assignment with the minimum total power such that every point p is covered by at least \(cr_p\) disks. Let \(cr_{\max }\) be the maximum covering requirement over all points. Using a local ratio method, Bar-Yehuda et al. [3] presented a \(3^\alpha \cdot cr_{\max }\)-approximation algorithm. The dependence on \(cr_{\max }\) was removed by Bhowmick et al. [5], achieving an approximation ratio of \(4\cdot (27\sqrt{2})^\alpha \). This result was further generalized to any metric space in [6], where the approximation ratio is at most \(2\cdot (16\cdot 9)^\alpha \). For the minimum power (single) cover problem, the best known ratio is \(3^{\alpha }\) (as a consequence of [3] and the fact that \(cr_{\max }=1\) in this case).

There is only one paper [9] studying the minimum power partial (single) cover problem (MinPowerPartCov), and the study is on the special case when \(\alpha =2\). The approximation ratio obtained in [9] is \((12+\varepsilon )\), where \(\varepsilon \) is an arbitrary constant greater than zero, by a reduction to a prize-collecting coverage problem.

1.2 Contribution

In this paper, we show that the MinPowerPartCov problem can be approximated within factor \(3^{\alpha }\), which coincides with the best known ratio for the MinPowerCov problem (the full version of the minimum power coverage problem). For the case \(\alpha =2\), our ratio is 9, which improves on the \((12+\varepsilon )\) obtained in [9].

Our algorithm is inspired by the local ratio method used in [3] to study the MinPowerCov problem. However, new ideas are needed to deal with the difficulties introduced by the partial covering requirement.

2 The Problem and a Preprocessing

The problem studied in this paper is formally defined as follows.

Definition 1

(Minimum Power Partial Cover (MinPowerPartCov)). Suppose X is a set of n points and S is a set of m sensors on the plane, and k is an integer satisfying \(0\le k\le n\). A point \(x\in X\) is covered by a sensor \(s\in S\) with power p(s) if x belongs to the disk supported by p(s), that is, \(x\in Disk(s,r(s))\), where Disk(s, r(s)) is the disk centered at s whose radius r(s) is determined by p(s) through equation \(p(s)=c\cdot r(s)^\alpha \). A point is covered by a power assignment p if it is covered by some disk supported by p. The goal of MinPowerPartCov is to find a power assignment p covering at least k points such that the total power \(\sum _{s\in S}p(s)\) is as small as possible.

In an optimal solution, we may assume that for any sensor s, there is at least one point on the boundary of the disk Disk(s, r(s)), since otherwise we may reduce p(s) while covering the same set of points, resulting in a lower total power. Therefore, at most mn disks need to be considered. We denote the set of such disks by \(\mathcal {D}\). In the following, denote by \((X,\mathcal D,k)\) an instance of the MinPowerPartCov problem, and use \(opt(X,\mathcal D,k)\) to denote the optimal power for the instance \((X,\mathcal D,k)\). To simplify notation, we use D to represent both a disk in \(\mathcal D\) and the set of points covered by D, and use r(D) and p(D) to denote the radius and the power of disk D, where \(p(D)=c\cdot r(D)^{\alpha }\). For a set of disks \(\mathcal D\), we use \(\mathcal C(\mathcal D)=\bigcup _{D\in \mathcal D}D\) to denote the set of points covered by \(\mathcal D\).
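The construction of the at most mn canonical disks can be made concrete with a short sketch; the tuple representation of sensors, points, and disks is ours, and c and \(\alpha \) are illustrative defaults.

```python
import math

def canonical_disks(sensors, points, c=1.0, alpha=2.0):
    """For each sensor s and point x, build the canonical disk centered
    at s whose boundary passes through x.  Returns tuples
    (center, radius, power, covered), with power = c * r**alpha as in
    Eq. (1); at most m*n disks are produced."""
    disks = []
    for s in sensors:
        for x in points:
            r = math.dist(s, x)  # radius just reaching x
            covered = frozenset(y for y in points
                                if math.dist(s, y) <= r + 1e-9)
            disks.append((s, r, c * r ** alpha, covered))
    return disks

disks = canonical_disks([(0.0, 0.0)], [(1.0, 0.0), (0.0, 2.0)])
# two canonical disks: radius 1 (covers one point), radius 2 (covers both)
```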

In order to control the approximation factor of our algorithm, we need a preprocessing step: guessing the maximum power of a sensor (or equivalently, the radius of a maximum disk) in an optimal solution. Suppose \(D_{\max }\in \mathcal D\) is the guessed disk. Denote by \(\mathcal D_{\le r(D_{\max })}\) the subset of disks of \(\mathcal D\) whose radii are no greater than the radius of \(D_{\max }\) (excluding \(D_{\max }\)), and denote by \((X\setminus D_{\max },\mathcal D_{\le r(D_{\max })},k-|D_{\max }|)\) the residual instance after guessing \(D_{\max }\). The following lemma is obvious.

Lemma 1

Suppose \(D_{\max }\) is the correctly guessed disk with the maximum power in an optimal solution of instance \((X,\mathcal {D},k)\). Then

$$ opt(X,\mathcal {D},k)=opt(X\setminus D_{\max },\mathcal D_{\le r(D_{\max })},k-|D_{\max }|)+p(D_{\max }). $$

3 A Local Ratio Algorithm

In this section, we first present an algorithm for the MinPowerPartCov problem on the instance \((X\setminus D_{\max },\mathcal D_{\le r(D_{\max })},k-|D_{\max }|)\), and then show how to use it to find a power assignment for the original MinPowerPartCov problem.

3.1 Algorithm After the Preprocessing

For simplicity of notation in this section, we still use \((X,\mathcal D,k)\) to denote the residual instance, assuming that every disk in \(\mathcal D\) has radius at most \(r(D_{\max })\).

The algorithm consists of three steps.

(i) In the first step, a local ratio method is employed to find a minimal partial cover \(\bar{\mathcal D}\), that is, \(\bar{\mathcal D}\) covers at least k points, while for any disk \(D\in \bar{\mathcal D}\), the number of points covered by \(\bar{\mathcal D}-\{D\}\) is strictly less than k.

(ii) Before going into the second step, remove a disk \(D_{rmv}\) from \(\bar{\mathcal D}\) which is chosen in the last call of the local ratio method in the first step. Then, in the second step, a maximal independent set of disks \(\mathcal I\subseteq \bar{\mathcal D}\setminus \{D_{rmv}\}\) is computed in a greedy manner, that is, disks in \(\mathcal I\) are mutually disjoint, while any disk \(D\in \bar{\mathcal D}\setminus \{D_{rmv}\}\) which is not picked into \(\mathcal I\) intersects some disk in \(\mathcal I\).

(iii) In the third step, every disk in \(\mathcal I\) has its radius enlarged to three times its original value. This set of disks, together with \(\{D_{\max },D_{rmv}\}\), is the output of the algorithm.

The first step is accomplished by Algorithm 1, in which the MinPowerPartCov instance \((X,\mathcal D,k)\) is viewed as an instance of the minimum weight partial set cover problem, where X serves as the set of elements to be covered, \(\mathcal D\) serves as the collection of sets, and the weight of each \(D\in \mathcal D\) is p(D). The local ratio method was first proposed by Bar-Yehuda and Even in [1]. The idea is to recursively peel off a special weight from the original weight. If the problem with the special weight admits a \(\rho \)-approximation, then one can assemble a \(\rho \)-approximate solution for the problem with respect to the original weight. In this paper, the special weight peeled off in each iteration (denoted by \(\bar{p}\)) is proportional to the number of uncovered points in a disk, and the disks whose residual weight becomes zero are put into \(\bar{\mathcal D}\).

[Algorithm 1 pseudocode]

Algorithm 1 is in fact a function which is called recursively. In the algorithm, after peeling off a special weight \(\bar{p}\), we use \(\mathcal D_{=0}\) to denote the set of disks whose residual weight \(p-\bar{p}\) is zero. Since taking disks of zero cost seems to be a free lunch, we take all of them temporarily and consider the residual instance, whose goal is to satisfy the residual covering requirement using the residual disks. Line 6 of the algorithm constructs the residual instance. Having found a minimal solution \(\bar{\mathcal D}'\) to the residual instance, the algorithm adds a minimal subset of disks of \(\mathcal D_{=0}\), denoted as \(\bar{\mathcal D}_{=0}\), into \(\bar{\mathcal D}'\) to cover at least k points. This step guarantees that the resulting set of disks \(\bar{\mathcal D}\) is minimal, which is crucial for controlling the approximation factor.
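Since the pseudocode figure is not reproduced here, the recursive structure just described can be sketched as follows. This is only a sketch under our own naming conventions (the instance is assumed feasible): a weight proportional to the number of uncovered points is peeled off, zero-weight disks are taken temporarily, and a minimal subset of them is retained on the way back.

```python
def local_ratio(X, disks, p, k):
    """Recursive local-ratio sketch in the spirit of Algorithm 1.
    X: set of not-yet-covered points; disks: list of frozensets of
    points; p: dict disk index -> residual weight; k: residual demand.
    Returns a minimal set of disk indices covering >= k points of X.
    Assumes the instance is feasible."""
    if k <= 0:
        return set()
    live = [i for i in p if p[i] > 0 and X & disks[i]]
    # peel off a weight proportional to the number of uncovered points
    gamma = min(p[i] / len(X & disks[i]) for i in live)
    zero = []
    for i in live:
        p[i] -= gamma * len(X & disks[i])
        if p[i] <= 1e-9:          # residual weight dropped to zero
            p[i] = 0.0
            zero.append(i)
    covered0 = set().union(*(X & disks[i] for i in zero))
    # solve the residual instance recursively
    sol = local_ratio(X - covered0, disks, p, k - len(covered0))
    # keep only a minimal subset of the zero-weight disks
    chosen = set(zero)
    def covers(S):
        return len(set().union(set(), *(X & disks[i] for i in S | sol)))
    for i in sorted(chosen):
        if covers(chosen - {i}) >= k:
            chosen.discard(i)
    return sol | chosen
```

For instance, with two disks \(\{1,2\}\) (weight 4) and \(\{3\}\) (weight 1) and \(k=2\), the first peel zeroes the small disk, the recursion then zeroes the big one, and the back-substitution discards the small disk as redundant, returning a minimal cover.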

Suppose the function LR is called \(t+1\) times. Denote by \(\bar{\mathcal D}^{(i)}\), \(p^{(i)}\), \(\bar{p}^{(i)}\), etc. the corresponding objects at the end of the i-th call of function LR. Then we have the following relations.

(i) \(X^{(0)}=X\), \(\mathcal D^{(0)}=\mathcal D\), \(p^{(0)}=p\), and \(k^{(0)}=k\).

(ii) For \(i=1,\ldots ,t\),

$$\begin{aligned}&\gamma ^{(i)}=\min _{D\in \mathcal D^{(i-1)}}\{p^{(i-1)}(D)/|X^{(i-1)}\cap D|\}\nonumber \\&\bar{p}^{(i)}(D)=\gamma ^{(i)}\cdot |X^{(i-1)}\cap D| \ \text {for each}\ D\in \mathcal D^{(i-1)}\end{aligned}$$
(2)
$$\begin{aligned}&p^{(i)}(D)=p^{(i-1)}(D)-\bar{p}^{(i)}(D) \ \text {for each}\ D\in \mathcal D^{(i-1)}\end{aligned}$$
(3)
$$\begin{aligned}&\mathcal D_{=0}^{(i)}=\{D\in \mathcal D^{(i-1)}:p^{(i)}(D)=0\}\nonumber \\&X^{(i)}=X^{(i-1)}\setminus \mathcal C(\mathcal D^{(i)}_{=0})\nonumber \\&\mathcal D^{(i)}=\mathcal D^{(i-1)}\setminus \mathcal D_{=0}^{(i)}\nonumber \\&k^{(i)}=\max \{0,k^{(i-1)}-|\mathcal C^{(i-1)}(\mathcal D_{=0}^{(i)})|\} \end{aligned}$$
(4)

Here \(\mathcal C^{(i-1)}(\mathcal D_{=0}^{(i)})=\mathcal C(\mathcal D_{=0}^{(i)})\cap X^{(i-1)}\). As a consequence of the above relations,

$$\begin{aligned} k^{(i)}=\max \{0,k-|\mathcal C(\bigcup _{j=1}^i\mathcal D_{=0}^{(j)})|\}. \end{aligned}$$
(5)

It should be noticed that in expressions (4) and (5), except for \(i=t\), the value of \(k^{(i)}\) equals the second term.

(iii) \(k^{(t)}=0\), \(\bar{\mathcal D}^{(t+1)}=\emptyset \). And for \(i=t,t-1,\ldots ,1\),

$$ \bar{\mathcal D}^{(i)}=\bar{\mathcal D}^{(i+1)}\cup \bar{\mathcal D}_{=0}^{(i)}. $$

As a consequence

$$\begin{aligned} \bar{\mathcal D}^{(i)}=\bigcup _{j=i}^t\bar{\mathcal D}_{=0}^{(j)}\subseteq \bigcup _{j=i}^t\mathcal D_{=0}^{(j)}. \end{aligned}$$
(6)

The above relation can be illustrated by the following figure.

Fig. 1. Illustration for the structure of \(\bar{\mathcal D}^{(i)}\).

Remark 1

If a disk D has its weight reduced to zero in the i-th call of LR, that is, if \(p^{(i-1)}(D)>0\) and \(p^{(i)}(D)=0\), then D plays no role in the deeper calls of LR. In this case, we may view \(p^{(j)}(D)=\bar{p}^{(j)}(D)=0\) for any j with \(i+1\le j\le t\). With this convention, for any \(0\le i\le t\), we may extend the definitions of the functions \(p^{(i)}\) and \(\bar{p}^{(i)}\) to any disk \(D\in \mathcal D\).

Lemma 2

For any \(i=1,\ldots ,t+1\), the set \(\bar{\mathcal D}^{(i)}\) is a minimal set of disks covering at least \(k^{(i-1)}\) points of \(X^{(i-1)}\).

Proof

We prove the lemma by a backward induction on i. The base step when \(i=t+1\) is obvious, since \(k^{(t)}=0\) and \(\bar{\mathcal D}^{(t+1)}=\emptyset \).

For the induction step, suppose \(i\le t\) and \(\bar{\mathcal D}^{(i+1)}\) is a minimal set of disks covering \(k^{(i)}\) points of \(X^{(i)}\). By expression (4) and the remark below it, we have

$$\begin{aligned} k^{(i-1)}=k^{(i)}+|\mathcal C^{(i-1)}(\mathcal D_{=0}^{(i)})|. \end{aligned}$$
(7)

So \(\bar{\mathcal D}^{(i+1)}\cup \mathcal D_{=0}^{(i)}\) can cover \(k^{(i-1)}\) elements of \(X^{(i-1)}\), which implies that a minimal subset \(\bar{\mathcal D}_{=0}^{(i)}\subseteq \mathcal D_{=0}^{(i)}\) exists such that \(\bar{\mathcal D}^{(i+1)}\cup \bar{\mathcal D}_{=0}^{(i)}\) can cover \(k^{(i-1)}\) elements of \(X^{(i-1)}\) (Fig. 1).

What remains to show is that \(\bar{\mathcal D}^{(i+1)}\cup \bar{\mathcal D}_{=0}^{(i)}\) is minimal. By line 8 of Algorithm 1, no disk in \(\bar{\mathcal D}_{=0}^{(i)}\) can be removed without violating the covering requirement \(k^{(i-1)}\). For any disk \(D\in \bar{\mathcal D}^{(i+1)}\), by the minimality of \(\bar{\mathcal D}^{(i+1)}\), we have \(|\mathcal C^{(i)}(\bar{\mathcal D}^{(i+1)}\setminus \{D\})|<k^{(i)}\). Then by (7), we have \(|\mathcal C^{(i-1)}\big ((\bar{\mathcal D}^{(i+1)}\setminus \{D\})\cup \bar{\mathcal D}_{=0}^{(i)}\big )|<k^{(i-1)}\). The minimality of \(\bar{\mathcal D}^{(i)}\) is proved. \(\Box \)

The second step is realized by Algorithm 2. Given a set of disks \(\mathcal D\), Algorithm 2 finds a maximal independent set of disks by recursively choosing the disk with the maximum radius and deleting those disks intersecting it.

[Algorithm 2 pseudocode]
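The greedy rule can be sketched as follows; the disk representation and names are ours.

```python
import math

def max_independent_disks(disks):
    """Greedy maximal independent set in the spirit of Algorithm 2:
    repeatedly pick a disk of maximum radius, then delete every
    remaining disk that intersects it.  A disk is ((cx, cy), r)."""
    remaining = sorted(disks, key=lambda d: d[1], reverse=True)
    independent = []
    while remaining:
        center, radius = remaining.pop(0)
        independent.append((center, radius))
        # two disks intersect iff their centers are at distance
        # at most the sum of their radii
        remaining = [(c, r) for (c, r) in remaining
                     if math.dist(center, c) > radius + r]
    return independent
```

For instance, among three collinear disks of radii 2, 1, 1 centered at x = 0, 3, 10, the middle one touches the first and is discarded, leaving an independent pair.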

Algorithm 3 combines the above two algorithms to compute a feasible solution \(\mathcal M\) to the residual instance. We use c(D) and r(D) to denote the center and the radius of disk D, respectively. So, Disk(c(D), 3r(D)) represents the disk with center c(D) and radius 3r(D) (the disk obtained from D by tripling its radius). Notice that \(\mathcal M\) is not a subset of \(\mathcal D\). Before calling Algorithm 2, a disk \(D_{rmv}\) is deleted from \(\bar{\mathcal D}\), where \(D_{rmv}\) belongs to the set of disks added in the deepest call of LR. This is done to control the approximation ratio, as will become clear in the proofs below.

[Algorithm 3 pseudocode]
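The final assembly step of Algorithm 3 can be sketched as follows; by Eq. (1), tripling a radius multiplies the corresponding power by \(3^{\alpha }\). The disk representation is ours.

```python
def assemble_solution(independent, d_rmv):
    """Sketch of Algorithm 3's last step: triple the radius of every
    disk in the independent set I, then add back the removed disk
    d_rmv.  Disks are ((cx, cy), r); by Eq. (1), tripling r multiplies
    the power c * r**alpha by 3**alpha."""
    return [(c, 3.0 * r) for (c, r) in independent] + [d_rmv]
```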

The next theorem shows that Algorithm 3 computes a feasible solution to the residual instance.

Theorem 1

The set of disks \(\mathcal M\) computed by Algorithm 3 covers at least k points.

Proof

The set of disks \(\bar{\mathcal D}\) computed in line 1 of the algorithm covers at least k points. For any point x which is covered by \(\bar{\mathcal D}\), if x is covered by \(D_{rmv}\) or by any disk in \(\mathcal I\), then it is also covered by \(\mathcal M\). Otherwise, x is covered by a disk D which is removed in line 6 of Algorithm 2. This disk D is removed because it intersects a disk \(D'\in \mathcal I\). Because of the greedy choice of disk \(D'\) in line 3 of Algorithm 2, we have \(r(D)\le r(D')\). Hence \(d(x,c(D'))\le d(x,c(D))+d(c(D),c(D'))\le r(D)+(r(D)+r(D'))\le 3r(D')\), where \(d(\cdot ,\cdot )\) denotes the Euclidean distance. So, x is covered by \(Disk(c(D'),3r(D'))\in \mathcal M\). \(\Box \)

The following lemma is a key lemma towards the analysis of the approximation ratio.

Lemma 3

Suppose \(\mathcal {D^*}\) is an optimal solution for \((X,\mathcal D,k)\). Then the independent set of disks \(\mathcal I\) output by Algorithm 2 satisfies \(p(\mathcal {I})\le p(\mathcal {D^*})\).

Proof

We prove

$$\begin{aligned} p^{(i)}(\mathcal {I})\le p^{(i)}(\mathcal {D^*}) \end{aligned}$$
(8)

by a backward induction on \(i=t,t-1,\ldots ,0\). Since \(p^{(0)}=p\), what is required by the lemma is exactly \(p^{(0)}(\mathcal {I})\le p^{(0)}(\mathcal {D^*})\).

For the base step, we have \(p^{(t)}(\mathcal {I})=0\) because every disk \(D\in \mathcal I\subseteq \bar{\mathcal D}^{(1)}\) belongs to some \(\bar{\mathcal D}_{=0}^{(j)}\) (by (6)) and thus \(p^{(t)}(D)=0\). So (8) holds for \(i=t\).

For the induction step, suppose (8) is true for some \(i\le t\). We are going to prove

$$\begin{aligned} p^{(i-1)}(\mathcal {I})\le p^{(i-1)}(\mathcal {D^*}). \end{aligned}$$
(9)

By (3), inequality (9) is equivalent to

$$ p^{(i)}(\mathcal {I})+\bar{p}^{(i)}(\mathcal I)\le p^{(i)}(\mathcal {D^*})+\bar{p}^{(i)}(\mathcal D^*). $$

Combining this with the induction hypothesis, it suffices to prove

$$\begin{aligned} \bar{p}^{(i)}(\mathcal {I})\le \bar{p}^{(i)}(\mathcal {D^*}). \end{aligned}$$
(10)

By (6) and Remark 1,

$$\begin{aligned} \text {for any disk}\ D\in \bar{\mathcal D}^{(1)}\setminus \bar{\mathcal D}^{(i)},\ \text {we have}\ \bar{p}^{(i)}(D)=0. \end{aligned}$$
(11)

Combining this with (2) and the fact \(\mathcal I\subseteq \bar{\mathcal D}^{(1)}\), we have

$$\begin{aligned} \bar{p}^{(i)}(\mathcal {I})&=\sum _{D\in \mathcal {I}}\bar{p}^{(i)}(D)=\sum _{D\in \mathcal {I} \cap \bar{\mathcal D}^{(i)}}\gamma ^{(i)}\cdot |X^{(i-1)}\cap D|\\&=\sum _{x\in { X^{(i-1)}}}\gamma ^{(i)}\cdot |\{D\in \mathcal {I}\cap \bar{\mathcal D}^{(i)}:x\in D\}|. \end{aligned}$$

Since no two disks in \(\mathcal {I}\) intersect, we have

$$ \sum _{x\in X^{(i-1)}}|\{D\in \mathcal {I}\cap \bar{\mathcal D}^{(i)}:x\in D\}|=|X^{(i-1)}\cap \mathcal C(\mathcal I\cap \bar{\mathcal D}^{(i)})|. $$

Since \(D_{rmv}\not \in \mathcal I\), we have \(\mathcal I\cap \bar{\mathcal D}^{(i)}\subseteq \bar{\mathcal D}^{(i)}\setminus \{D_{rmv}\}\). Combining this with Lemma 2 and the observation that \(D_{rmv}\in \bar{\mathcal D}_{=0}^{(t)}\subseteq \bar{\mathcal D}^{(i)}\), we have

$$ |X^{(i-1)}\cap \mathcal C(\mathcal I\cap \bar{\mathcal D}^{(i)})|<k^{(i-1)}. $$

Hence,

$$\begin{aligned} \bar{p}^{(i)}(\mathcal {I})\le \gamma ^{(i)}k^{(i-1)} \end{aligned}$$
(12)

On the other hand, because of (11),

$$ \bar{p}^{(i)}(\mathcal {D^*})=\sum _{D\in \mathcal {D^*}} \bar{p}^{(i)}(D)=\sum _{D\in \mathcal {D^*}\setminus \big (\bar{\mathcal D}^{(1)}\setminus \bar{\mathcal D}^{(i)}\big )}\gamma ^{(i)}\cdot |X^{(i-1)}\cap D|. $$

Combining the facts

$$\begin{aligned}&|\mathcal C(\mathcal D^*)|\ge k\\&\bar{\mathcal D}^{(1)}\setminus \bar{\mathcal D}^{(i)}\subseteq \bigcup _{j=1}^{i-1}\mathcal D_{=0}^{(j)}\ \text {by (6), and}\\&k^{(i-1)}=\max \{0,k-|\mathcal C(\bigcup _{j=1}^{i-1}\mathcal D_{=0}^{(j)})|\}\ \text {by (5),} \end{aligned}$$

we have

$$\begin{aligned} \sum _{D\in \mathcal {D^*}\setminus \big (\bar{\mathcal D}^{(1)}\setminus \bar{\mathcal D}^{(i)}\big )}&|X^{(i-1)}\cap D|\ge \ |X^{(i-1)}\cap \mathcal C(\mathcal {D^*}\setminus \big (\bar{\mathcal D}^{(1)}\setminus \bar{\mathcal D}^{(i)}\big ))|\\&\ge \left| X^{(i-1)}\cap \mathcal C\left( \mathcal {D^*}\setminus \bigcup _{j=1}^{i-1}\mathcal D_{=0}^{(j)}\right) \right| \ge \ k^{(i-1)}. \end{aligned}$$

Hence,

$$\begin{aligned} \bar{p}^{(i)}(\mathcal {D^*})\ge \gamma ^{(i)}k^{(i-1)}. \end{aligned}$$
(13)

Then inequality (10) follows from (12) and (13), and the lemma is proved. \(\Box \)

The next theorem estimates the approximation effect of Algorithm 3.

Theorem 2

Suppose \(\mathcal C^*\) is an optimal solution on instance \((X,\mathcal D,p,k)\), and \(\mathcal M\) is the output of Algorithm 3. Then

$$ p(\mathcal M)\le 3^{\alpha }p(\mathcal C^*)+p(D_{rmv}). $$

Proof

Each disk \(D\in \mathcal M\setminus \{D_{rmv}\}\) comes from a disk \(D'\in \mathcal I\) by tripling the radius. Hence by (1), \(p(D)=3^{\alpha }p(D')\). So \(p(\mathcal M)\le 3^{\alpha }p(\mathcal I)+p(D_{rmv})\), and the theorem follows from Lemma 3. \(\Box \)

By Theorem 2, the approximation guarantee depends on \(p(D_{rmv})\). The reason why we guess a disk \(D_{\max }\) with the largest radius in an optimal solution is now clear: it ensures that the term \(p(D_{rmv})\) is not too large. The algorithm incorporating the guessing technique is presented as follows.

3.2 The Whole Algorithm

Algorithm 4 is the whole algorithm for the MinPowerPartCov problem. It first guesses a disk \(D_{\max }\) with the maximum radius in an optimal solution, takes it, and then runs Algorithm 3 on the residual instance. For a guessed disk D, the residual instance consists of all those disks \(\mathcal D_{\le r(D)}\) whose radii are no larger than r(D) (excluding D itself), and the goal is to cover at least \(\max \{0,k-|D|\}\) of the remaining points \(X\setminus D\). The weight function, denoted by \(p_{D}\), is determined by (1). If for a guessed disk D, Algorithm 3 does not return a feasible solution, then we regard the solution as having cost \(\infty \). Algorithm 4 returns the best solution among all guesses.

[Algorithm 4 pseudocode]
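The guessing loop can be sketched as follows. Here solve_residual stands in for Algorithm 3 and is passed in as a callback; the disk representation and all names are hypothetical, and for brevity the sketch does not strip the guessed disk's points from the residual disks.

```python
def min_power_partial_cover(disks, k, solve_residual):
    """Sketch of Algorithm 4.  Each disk is (power, covered), with
    covered a frozenset of points; by Eq. (1), ordering by power
    coincides with ordering by radius.  solve_residual(residual, k_res)
    returns a list of disks covering >= k_res remaining points, or None
    if it fails (cost regarded as infinity)."""
    best, best_cost = None, float("inf")
    for i, (p_d, cov_d) in enumerate(disks):
        residual = [d for j, d in enumerate(disks)
                    if j != i and d[0] <= p_d]   # radius <= r(D)
        k_res = max(0, k - len(cov_d))           # residual demand
        sol = solve_residual(residual, k_res)
        if sol is None:
            continue
        cost = p_d + sum(p for p, _ in sol)
        if cost < best_cost:
            best, best_cost = [(p_d, cov_d)] + sol, cost
    return best
```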

Theorem 3

Algorithm 4 is a \(3^{\alpha }\)-approximation algorithm for the MinPowerPartCov problem.

Proof

Suppose \(D_{\max }\) is the disk with the maximum radius in an optimal solution. By Theorem 2 and the fact that \(p(D_{rmv})\le p(D_{\max })\) (every disk in the residual instance has radius at most \(r(D_{\max })\)), we have

$$\begin{aligned} p(\mathcal F_{D_{\max }})&=p(\mathcal M_{D_{\max }})+p(D_{\max })\le 3^{\alpha }p(\mathcal C^*_{D_{\max }})+2p(D_{\max })\\&\le 3^{\alpha }\left( p(\mathcal C^*_{D_{\max }})+p(D_{\max })\right) =3^{\alpha }opt, \end{aligned}$$

where opt is the optimal power. Since the set \(\mathcal F_{\widetilde{D}}\) computed by Algorithm 4 satisfies \(p(\mathcal F_{\widetilde{D}})\le p(\mathcal F_{D_{\max }})\), the theorem is proved. \(\Box \)

4 Conclusion

In this paper, we presented an approximation algorithm for the minimum power partial cover problem achieving approximation ratio \(3^{\alpha }\), using a local ratio method. This ratio improves the ratio of \((12+\varepsilon )\) in [9], and matches the best known ratio for the minimum power (full) cover problem in [3].

Recently, there have been many studies on the minimum power multi-cover problem [5, 6]. A problem that deserves to be explored is the minimum power partial multi-cover problem (adding a partial covering requirement). According to current studies on the minimum partial set multi-cover problem [17,18,19], it seems that studying the combination of multi-cover and partial cover in a general setting is very difficult. An interesting question is whether geometry can make the situation better.