
1 Introduction

The minimum power cover (MPC) problem is a classical combinatorial optimization problem that can be defined as follows. We are given a set U of n users and a set S of m sensors on the plane. Each sensor \(s \in S\) can adjust the power it produces by changing its radius, and the relationship between the radius and the power satisfies the following power equation

$$\begin{aligned} p\left( s \right) =c\cdot r \left( s \right) ^{\alpha }, \qquad (*) \end{aligned}$$

where the coefficient \(c>0\) and the attenuation coefficient \(\alpha \ge 1\) are constants. We say a user \(u \in U\) is covered by a sensor \(s \in S\) if the distance between u and s is no more than r(s), where r(s) is the radius of Disk(s, r(s)), the disk centered at s with radius r(s). A user is covered by a power assignment \(p: S\mapsto \mathbb {R}^{+} \) if it belongs to some disk supported by p. The minimum power cover problem is to find a power assignment p covering all users such that the total power \( {\textstyle \sum _{s\in S}^{}p(s)} \) is as small as possible. Here, we assume that there is no limit on the power at a sensor. For the MPC problem with \(\alpha >1\), Alt et al. [20] proved that the problem is NP-hard. Charikar and Panigrahy [1] presented a primal-dual algorithm that obtains a constant-factor approximation. Biló et al. [2] presented a polynomial-time approximation scheme (PTAS) based on a plane subdivision and the shifted quad-tree technique.
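To make the power equation and the coverage condition concrete, here is a minimal sketch in Python; all coordinates, radii, and parameters below are hypothetical:

```python
import math

# Toy MPC instance (hypothetical data). Power equation: p(s) = c * r(s)**alpha.
c, alpha = 1.0, 2.0

sensors = {"s1": (0.0, 0.0), "s2": (4.0, 0.0)}
users = [(1.0, 0.0), (3.0, 0.0), (4.0, 1.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def power(radius):
    return c * radius ** alpha

# A power assignment, given as a radius per sensor.
radii = {"s1": 1.0, "s2": 1.0}

def covered(u, radii):
    # u is covered if it lies in some disk Disk(s, r(s)) supported by the assignment
    return any(dist(u, sensors[s]) <= r for s, r in radii.items())

assert all(covered(u, radii) for u in users)
total_power = sum(power(r) for r in radii.values())
print(total_power)  # 1*1**2 + 1*1**2 = 2.0
```

The objective of the MPC problem is to choose the radii so that this total power is minimized subject to full coverage.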

By adding constraints to the MPC problem, we obtain several variants, such as the prize-collecting cover problem [7, 21], in which a penalty must be paid for each uncovered user; the cover problem with submodular/linear penalties [9,10,11, 13, 14]; the capacitated cover problem [12, 22, 23], in which each sensor has a capacity; the partial cover problem [8, 16], which requires covering only a specified number of elements; and the stochastic cover problem. Among these, the stochastic power cover problem is an important problem in stochastic optimization and deserves careful study.

In recent years, an increasing number of researchers have focused on stochastic optimization [17], a basic method of dealing with uncertainty in combinatorial optimization problems by modeling the uncertainty through probability distributions over the input instances. The two-stage stochastic model is a popular stochastic model that applies to many combinatorial optimization problems, such as stochastic matching [15], stochastic facility location [17], and the stochastic set cover problem [18]. Additionally, many useful techniques exist for designing approximation algorithms for stochastic combinatorial optimization problems, including linear programming relaxation, boosted sampling [24, 25], contention resolution schemes [26], and Poisson approximation [4, 19].

Within stochastic optimization, the stochastic set cover problem has been studied extensively. In this problem, we do not know in advance which points must be covered, but the uncertain scenarios follow known probability distributions. In the first stage, we may anticipate possible scenarios and purchase some subsets in advance. In the second stage, the coverage requirements are revealed according to the probability distribution over the scenarios. The goal is to choose the first-stage decision variables so as to minimize the expected cost over both stages. Ravi and Sinha [3] proposed the stochastic set cover problem and showed an O(\(\log {mn}\))-approximation algorithm by analyzing the relationship between the minimum set cover and stochastic set cover problems. Furthermore, Li et al. [4] designed an approximation algorithm with a ratio of \(2(\ln {n}+1)\), where n is the cardinality of the universe. Parthasarathy [5] designed an adaptive greedy algorithm with ratio H(n) for the stochastic set cover problem. For the stochastic set cover problem with submodular penalties, Sun et al. [6] proposed a \(2\eta \)-approximation algorithm using the primal-dual technique, where \(\eta \) is the maximum frequency over elements of the ground set.

Inspired by the above problems, we consider the two-stage, finite-scenario stochastic version of the minimum power cover problem, which generalizes both the minimum power cover problem and the stochastic minimum set cover problem. For this problem, we are given a set U of n users, a set S of m sensors on the plane, and k possible scenarios, where k is polynomial in the input size and each scenario occurs with a known probability. Each sensor \(s \in S\) can adjust the power it produces by changing its radius, and the relationship between them satisfies the power equation \(p\left( s \right) =c\cdot r \left( s \right) ^{\alpha }\), where \(c>0\) and the attenuation coefficient \(\alpha \ge 1\) are constants. The objective is to determine the radius of each sensor in the first stage and then augment the first-stage solution so as to cover all users while minimizing the expected total power over both stages. The remainder of this paper is organized as follows. We introduce the stochastic power cover problem in Sect. 2. In Sect. 3, we design a polynomial-time algorithm with an approximation ratio of \(O(\alpha )\) using the primal-dual technique and present its proof. In Sect. 4, we give a brief conclusion.

2 Stochastic Power Cover Problem

Based on the definition of the minimum power cover problem, the two-stage, finite-scenario stochastic power cover (SPC) problem can be defined as follows. The input consists of a set U of n users, a first-stage set \(U_{0}\subseteq U \), a set S of m sensors on the plane, and k possible scenarios, where k is polynomial in the input size. As in the MPC problem introduced above, the relationship between the radius of a sensor and the power it consumes satisfies the power equation (\(*\)), where c and \(\alpha \) are constants. However, \(c>0\) changes as the scenario changes, and we call \(\alpha \ge 1\) the attenuation coefficient. For a scenario \(j \in \left\{ 1,2,\dots , k \right\} \), we use \(p_{j}\) to denote its probability, \(U_{j}\subseteq U\) is the set of users that must be covered in scenario j (which need not be a subset of the first-stage set \(U_{0} \)), and the coefficient of the power equation in scenario j is denoted by \(c_{j}\). In the first stage, the coefficient of the power equation is \(c_{0}\). We must anticipate the possible scenarios and determine the radii of the sensors in advance in the first stage. In the second stage, once the coverage requirements of a scenario are realized according to the probability distribution, we may enlarge the radii of the disks or pick additional disks to complement the first-stage decision. The objective is to find a power assignment that covers all the required users and minimizes the total power of the first stage plus the expected power consumption of the second stage.

For convenience, we use a set \(\mathcal {F} \) of disks whose centers are sensors to represent a power assignment for the sensor set S. If \(\mathcal {F}^{*} \) is an optimal assignment for this problem, then for any disk \(Disk(s,r(s)) \in \mathcal {F}^{*}\), there is at least one user \(u \in U\) lying on the boundary of Disk(s, r(s)); otherwise, we could reduce the radius of the disk, cover the same set of users, and obtain a feasible assignment of lower value. Since every scenario contains at most n users, each sensor generates at most n disks with distinct radii, and all sensors together yield at most mn disks to consider per scenario. Over all scenarios, at most kmn disks need to be considered, so in the following we use \(\mathcal {D}\) to denote this set of all candidate disks, and \((U,\mathcal {D},k)\) denotes an instance of the SPC problem. For any \(D \in \mathcal {D}\), let c(D) denote the center sensor of D, let r(D, j) be the radius of disk D in scenario \(j \in \{ 1,2,\dots , k \}\), let U(D) denote the users covered by D, and let \(U_{j}(D)\) denote the users of \(U_{j}\) covered by D, for all \(j=0,1,\dots , k\).
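The candidate-disk construction described above can be sketched as follows; the coordinates are hypothetical, and only radii equal to sensor-user distances are generated:

```python
import math

# Sketch: build the candidate disk set D. In an optimal solution every disk
# has a user on its boundary, so only radii equal to sensor-user distances
# need to be considered: at most m*n disks per scenario.
sensors = [(0.0, 0.0), (4.0, 0.0)]   # hypothetical coordinates
users = [(1.0, 0.0), (3.0, 0.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

candidate_disks = []
for s in sensors:
    for u in users:
        r = dist(s, u)                              # u lies on the boundary
        cover = [v for v in users if dist(s, v) <= r]
        candidate_disks.append((s, r, cover))

print(len(candidate_disks))  # m*n = 2*2 = 4 candidate disks
```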

Based on an analysis similar to [7], in order to control the approximation ratio, we need to guess the disk with the maximum radius, denoted by \(D_{j,max}\), in an optimal solution \(\mathcal {F}^{*}=\bigcup \limits _{j=0}^{k} \mathcal {F} _{j}^{*}\) for each \(j \in \left\{ 0,1,\dots , k \right\} \); that is, \(r(D_{j,max},j)=\max \limits _{D:D\in \mathcal {F}_{j}^{*}}r(D,j).\) Let \(\mathcal {D}_{max}=\bigcup \limits _{j=0}^{k} \{D_{j,max}\} \), let \(OPT^{'} \) be the optimal value of the residual instance with respect to \(\mathcal {D}_{max}\), and let OPT be the optimal value of the original instance \((U,\mathcal {D},k)\). Similar to the analysis in [7], we have the following lemma:

Lemma 1

\(OPT=OPT^{'}+\sum _{j=0}^{k} p_{j} c_{j} r(D_{j,max} ,j)^{\alpha } \).

In the guessing technique, each disk \(D_{j,max}\) is guessed as the disk with the maximum radius in \(\mathcal {F} _{j}^{*}\) for all \(j=0,1,\dots , k\) in the optimal solution \(\mathcal {F}^{*}\); therefore, by looping over all \((k+1)mn\) possibilities, we may assume that \(\mathcal {D}_{max}\) is known. Later, we will present a three-phase primal-dual approximation algorithm for the residual instance. For simplicity of notation, we still use \((U,\mathcal {D},k)\) to denote the residual instance.

$$\begin{aligned} \min \quad &\sum _{D\in \mathcal {D}}^{} c_{0} r(D,0)^{\alpha } x_{D,0}+\sum _{j=1}^{k} p_{j} \sum _{D\in \mathcal {D}}^{} c_{j} r(D,j)^{\alpha } x_{D,j} \end{aligned}$$
$$\begin{aligned} \text{ s.t. }&\sum _{\begin{array}{c} D\in \mathcal {D}\\ u\in U_{0}(D) \end{array}}^{} x_{D,0}+\sum _{\begin{array}{c} D\in \mathcal {D}\\ u\in U_{j}(D) \end{array} }^{} x_{D,j} \ge 1,\forall j \in \left\{ 1,2,\dots , k \right\} , \forall u \in U_{0}\cap U_{j}, \end{aligned}$$
(1)
$$\begin{aligned} &\sum _{\begin{array}{c} D\in \mathcal {D}\\ u\in U_{j}(D) \end{array}}^{} x_{D,j} \ge 1 ,\forall j \in \left\{ 1,2,\dots , k \right\} , \forall u \in U_{j} \setminus U_{0}, \end{aligned}$$
(2)
$$\begin{aligned} & x_{D,0},x_{D,j} \in \left\{ 0,1 \right\} ,\forall j \in \left\{ 1,2,\dots , k \right\} , \forall D\in \mathcal {D}. \end{aligned}$$
(3)

In this formulation, the variable \(x_{D,j}\) indicates whether we select the disk D in scenario j. That is:

$$\begin{aligned} x_{D,j}={\left\{ \begin{array}{ll} 1,&{}\text {if disk } D \text { is selected in scenario } j \text { to cover some users,}\\ 0,&{}\text {otherwise.} \end{array}\right. } \end{aligned}$$

The first set of constraints (1) guarantees that each user \(u \in U_{0}\cap U_{j}\) is covered in either the first or the second stage, and constraint (2) forces the users in \(U_{j} \setminus U_{0}\) to be covered in the second stage. The LP relaxation is obtained by replacing constraint (3) with \(x_{D,0},x_{D,j} \ge 0\). The LP relaxation and the corresponding dual program are as shown below:
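As a sanity check of the integer program, the following sketch solves a hypothetical toy instance with k = 1 by brute force over all 0/1 assignments; for simplicity, each disk here has a single fixed radius in both stages:

```python
import itertools

# Brute-force illustration of the IP on a hypothetical toy instance, k = 1.
# Each disk is given abstractly as (set of users covered, radius).
alpha, c0, c1, p1 = 1.0, 1.0, 2.0, 0.5
disks = {"D1": ({"u1", "u2"}, 2.0), "D2": ({"u2", "u3"}, 1.0)}
U0 = {"u1", "u2"}          # first-stage users
U1 = {"u2", "u3"}          # users to cover in scenario 1

best = None
names = list(disks)
for x0 in itertools.product([0, 1], repeat=len(names)):
    for x1 in itertools.product([0, 1], repeat=len(names)):
        cov0 = set().union(*(disks[n][0] for n, v in zip(names, x0) if v))
        cov1 = set().union(*(disks[n][0] for n, v in zip(names, x1) if v))
        # constraint (1): users in U0 ∩ U1 covered in stage 1 or stage 2;
        # constraint (2): users in U1 \ U0 covered in stage 2.
        if not (U0 & U1 <= cov0 | cov1 and U1 - U0 <= cov1):
            continue
        cost = (sum(c0 * disks[n][1] ** alpha for n, v in zip(names, x0) if v)
                + p1 * sum(c1 * disks[n][1] ** alpha
                           for n, v in zip(names, x1) if v))
        if best is None or cost < best:
            best = cost
print(best)  # 1.0: selecting only D2 in the second stage is optimal
```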

The LP relaxation is

$$\begin{aligned} \min \quad &\sum _{D\in \mathcal {D}}^{} c_{0} r(D,0)^{\alpha } x_{D,0}+\sum _{j=1}^{k} p_{j} \sum _{D\in \mathcal {D}}^{} c_{j} r(D,j)^{\alpha } x_{D,j}\\ \text{ s.t. } \quad &\text {constraints } (1) \text { and } (2),\\ & x_{D,0},x_{D,j} \ge 0,\quad \forall j \in \left\{ 1,2,\dots , k \right\} , \forall D\in \mathcal {D}, \end{aligned}$$

and the corresponding dual program is

$$\begin{aligned} \max \quad &\sum _{j=1}^{k} \sum _{u\in U_{j}}^{} y_{u,j}\\ \text{ s.t. } \quad &\sum _{j=1 }^{k}\sum _{u\in U_{0}(D)\cap U_{j}(D) }^{} y_{u,j} \le c_{0}r(D,0)^{\alpha },\quad \forall D\in \mathcal {D},\\ &\sum _{u\in U_{j}(D) }^{} y_{u,j} \le p_{j}c_{j}r(D,j)^{\alpha },\quad \forall j \in \left\{ 1,2,\dots , k \right\} , \forall D\in \mathcal {D},\\ & y_{u,j} \ge 0,\quad \forall j \in \left\{ 1,2,\dots , k \right\} , \forall u \in U_{j}. \end{aligned}$$

Next, we recall the definition and some geometric properties of the \(\rho \)-relaxed independent set introduced in [8], where \(\rho \in [0,2]\) is a given constant. Given a set U of users and a set \(\mathcal {D}\) of disks on the plane, we call \(\mathcal {D}\) a \(\rho \)-relaxed independent set if, for any two disks \(D_{1},D_{2}\in \mathcal {D}\), either \(U(D_{1})\cap U(D_{2})=\emptyset \) or \(d(c(D_{1}),c(D_{2}))>\rho \max \{r(D_{1}),r(D_{2})\}\), where d(a, b) is the Euclidean distance between points a and b.

According to the above definition, we obtain the following lemma, whose proof is the same as in [8]. This lemma will be used later in the proof of the approximation ratio of the primal-dual algorithm.

Lemma 2

For any \(t\in \{2,3,\dots \}\), we have \(\max \limits _{u\in U}|\{D|u\in U(D),D\in \mathcal {D}\}|\le t-1\), where \(\mathcal {D}\) is a \(\rho \)-relaxed independent set with \(\rho =2\sin \frac{\pi }{t}\).
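A maximal \(\rho \)-relaxed independent set can be computed greedily, scanning the disks in non-increasing order of radius. A minimal sketch with hypothetical disks (here t = 6, so \(\rho = 2\sin (\pi /6) = 1\)):

```python
import math

# Greedy construction of a maximal rho-relaxed independent set (sketch,
# hypothetical disks). A disk is kept if adding it preserves the condition:
# disjoint covered users, or centers farther apart than rho * max radius.
t = 6
rho = 2 * math.sin(math.pi / t)   # rho = 1 for t = 6

# disk = (center, radius, frozenset of covered users)
disks = [((0.0, 0.0), 2.0, frozenset({1, 2})),
         ((1.0, 0.0), 1.5, frozenset({2, 3})),
         ((6.0, 0.0), 1.0, frozenset({4}))]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def compatible(d1, d2):
    (p1, r1, u1), (p2, r2, u2) = d1, d2
    return not (u1 & u2) or dist(p1, p2) > rho * max(r1, r2)

independent = []
for d in sorted(disks, key=lambda d: -d[1]):   # non-increasing radius
    if all(compatible(d, e) for e in independent):
        independent.append(d)

print(len(independent))  # 2: the second disk conflicts with the first
```

Scanning by non-increasing radius guarantees the property used in Lemma 3: every rejected disk conflicts with a kept disk of at least its radius.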

3 Algorithm for the SPC Problem

The algorithm is a three-phase primal-dual approximation algorithm consisting of four steps. For ease of description, we introduce some further notation: for a disk D, U(D) denotes the users covered by D, and P(D) denotes the expected power it consumes, that is,

$$ P(D)=c_{0} r(D,0)^{\alpha }+\sum _{j=1 }^{k} p_{j} c_{j} r(D,j)^{\alpha }=\sum _{j=0 }^{k} p_{j} c_{j} r(D,j)^{\alpha },$$

where \(p_{0}=1\). For a set of disks \(\mathcal {D}\), we also use \(U(\mathcal {D})\) to denote the set of users covered by disks in \(\mathcal {D}\); for a solution \(\mathcal {F}\), let \(P(\mathcal {F})\) denote the expected power it consumes, that is,

$$P(\mathcal {F})=\sum _{D:D\in \mathcal {F}}^{}P(D)=\sum _{j=0}^{k} \sum _{D:D\in \mathcal {F}}^{}p_{j}c_{j}r(D,j) ^{\alpha }.$$

We say a disk \(D\in \mathcal {D}\) is tight if it satisfies either

$$\begin{aligned} \sum _{j=1 }^{k}\sum _{u\in U_{0}(D)\cap U_{j}(D) }^{} y_{u,j} = c_{0}r(D,0)^{\alpha }, \end{aligned}$$
(5)

or

$$\begin{aligned} \sum _{u\in U_{j}(D) }^{} y_{u,j} = p_{j}c_{j}r(D,j)^{\alpha } \quad \text {for some } j \in \left\{ 1,2,\dots , k \right\} . \end{aligned}$$
(6)

The basic framework of the algorithm is shown as follows:

  • Step 1: In the first step, we raise the dual variables \(y_{u,j}\) uniformly for all users in \(U_{j} \setminus U_{0}\), separately for each j. Every disk that becomes tight (satisfies Eq. (6)) has \(x_{D,j}\) set to 1. In this way, we find a disk set \(\mathcal {D} _{j}^{tight}\) that covers all users in \(U_{j} \setminus U_{0}\).

  • Step 2: In the second step, we perform a greedy dual ascent on all users of \(U_{j}\) that remain uncovered; these users are contained in \(U_{0}\cap U_{j}\). We again raise the dual variables \(y_{u,j}\) for these uncovered users. If a disk becomes tight for the first stage (satisfies Eq. (5)), we select it in the first-stage solution by setting \(x_{D,0}=1\); if it is not tight for \(x_{D,0}\) but is tight for \( x_{D,j}\) (satisfies Eq. (6)), we select it in the recourse solution and set \(x_{D,j}=1\). In this way, we find a disk set \(\mathcal {D} _{0}^{tight}\) and extend the disk sets \(\mathcal {D} _{j}^{tight}\), so that \(\mathcal {D} _{0}^{tight}\cup \mathcal {D} _{j}^{tight}\) covers all users in \(U_{j}\).

  • Step 3: Before going into the fourth step, remove the disk \(D_{j,last}\), the last disk added to \(\mathcal {D} _{j}^{tight}\) \((j=0,1,\dots , k)\). Then a maximal \(\rho \)-relaxed independent set of disks \(\mathcal {I} _{j}\) is computed greedily.

  • Step 4: Finally, the radius of every disk in \(\mathcal {I} _{j}\) is enlarged by a factor of \(1+\rho \). These enlarged disks, together with \(D_{j,last}\) \((j=0,1,\dots , k)\), form the output of the algorithm.
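The dual-ascent idea behind Steps 1 and 2 can be sketched for a single scenario as follows; this is a simplified illustration with hypothetical disks and abstract powers, not the full three-phase Algorithm 1:

```python
# Minimal single-scenario sketch of the dual-ascent idea behind Steps 1-2
# (hypothetical instance). Raise y_u uniformly for all uncovered users
# until some disk becomes tight, i.e. the sum of its users' dual values
# reaches its power; select that disk and remove the users it covers.
disks = {"D1": (frozenset({"u1", "u2"}), 4.0),   # (users, power c*r**alpha)
         "D2": (frozenset({"u2", "u3"}), 3.0)}

y = {"u1": 0.0, "u2": 0.0, "u3": 0.0}
uncovered = set(y)
tight = []

while uncovered:
    # smallest uniform increase that makes some unselected disk tight
    best_disk, best_inc = None, None
    for name, (users, power) in disks.items():
        active = users & uncovered
        if not active or name in tight:
            continue
        slack = power - sum(y[u] for u in users)
        inc = slack / len(active)
        if best_inc is None or inc < best_inc:
            best_disk, best_inc = name, inc
    for u in uncovered:
        y[u] += best_inc
    tight.append(best_disk)
    uncovered -= disks[best_disk][0]

print(tight, y)  # D2 becomes tight first, then D1 covers the rest
```

Here D2 becomes tight after a uniform raise of 1.5, and D1 becomes tight once u1's dual reaches 2.5; the resulting y is dual feasible, which is exactly what the analysis in Lemma 4 exploits.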

We propose the detailed three-phase primal-dual algorithm in Algorithm 1 below.

[Algorithm 1. The detailed three-phase primal-dual algorithm, implementing Steps 1–4 above.]

Lemma 3

\(\mathcal {F}\) is a feasible solution.

Proof

Consider a user in scenario \(j=1,\dots , k\). By the definition of the algorithm, it is covered by disks in \(\mathcal {D} _{j}^{tight}\) or by disks in \(\mathcal {D} _{0}^{tight}\) (or both), so \(\bigcup _{j=0}^{k}\mathcal {D} _{j}^{tight}\) is a feasible solution for \((U,\mathcal {D},k)\). Next, we prove that \(\mathcal {F}\) is also a feasible solution. For any user \(u \in U(\mathcal {D} _{j}^{tight}),j=0,\dots , k\), if u is not covered by \(\mathcal {F}_{j}\), then it must be covered by a disk \(D_{l_{j}^{'} } \in \mathcal {D} _{j}^{tight}\setminus \mathcal {F}_{j}\). By the definition of the \(\rho \)-relaxed independent set, there is a disk \(D_{l_{j}^{''} }\in \mathcal {I}_{j}\) satisfying \(r(D_{l_{j}^{''}},j )\ge r(D_{l_{j}^{'}},j)\) and \(d(c(D_{l_{j}^{''}}), c(D_{l_{j}^{'}}))\le \rho r(D_{l_{j}^{''}},j)\). Therefore, we have

$$\begin{aligned} \begin{aligned} d(u,c(D_{l_{j}^{''}}))&\le d(u, c(D_{l_{j}^{'}}))+d(c(D_{l_{j}^{''}}),c(D_{l_{j}^{'}}))\\ &\le r(D_{l_{j}^{'}},j )+\rho r(D_{l_{j}^{''}},j)\\ &\le (1+\rho )r(D_{l_{j}^{''}},j). \end{aligned} \end{aligned}$$

This implies that u is covered by the disk \(D(c(D_{l_{j}^{''}}),(1+\rho )r(D_{l_{j}^{''}},j))\in \mathcal {F}_{j}\), contradicting the previous assumption. Therefore, \(\mathcal {F}\) is a feasible solution.
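The triangle-inequality argument above can be verified numerically on a hypothetical configuration:

```python
import math

# Hypothetical configuration verifying the (1+rho) enlargement argument:
# u lies in disk D' of radius r1; D'' has r2 >= r1 and d(c'', c') <= rho*r2,
# so u lies in the disk around c'' of radius (1+rho)*r2.
rho = 1.0
c1, r1 = (0.0, 0.0), 1.0     # disk D' covering u
c2, r2 = (1.2, 0.0), 1.5     # disk D'' from the independent set
u = (0.9, 0.3)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# premises of the lemma's argument
assert dist(u, c1) <= r1 and r2 >= r1 and dist(c1, c2) <= rho * r2
print(dist(u, c2) <= (1 + rho) * r2)  # True, by the triangle inequality
```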

Lemma 4

For any integer \(t\in \{2,3,4,\dots \}\), the objective value of \(\mathcal {F}\) is no more than \(2(t-1)(1+2\sin \frac{\pi }{t} )^{\alpha }OPT^{'}+P(\mathcal {D}_{max} )\).

Proof

$$\begin{aligned} P(\mathcal {I})&=\sum _{j=0}^{k} p_{j}\sum _{D:D\in \mathcal {I}_{j} }^{}c_{j}r(D,j) ^{\alpha } \\ &= \sum _{D:D\in \mathcal {I}_{0}}^{} c_{0}r(D,0)^{\alpha } + \sum _{j=1}^{k} p_{j}\sum _{D:D\in \mathcal {I}_{j} }^{}c_{j}r(D,j) ^{\alpha }\\ &= \sum _{D:D\in \mathcal {I}_{0}}^{} \sum _{j=1}^{k} \sum _{u:u\in U_{0}(D)\cap U_{j}(D)}^{}y_{u,j} + \sum _{j=1}^{k} \sum _{D:D\in \mathcal {I}_{j} }^{} \sum _{u:u\in U_{j}(D)}^{}y_{u,j} \\ &=\sum _{j=1}^{k} \sum _{D:D\in \mathcal {I}_{0}}^{} \sum _{u:u\in U_{0}(D)\cap U_{j}(D)}^{}y_{u,j} + \sum _{j=1}^{k} \sum _{D:D\in \mathcal {I}_{j} }^{} \sum _{u:u\in U_{j}(D)}^{}y_{u,j} \\ &\le \sum _{j=1}^{k} \sum _{u:u\in U(\mathcal {I}_{0})}^{}y_{u,j} \cdot | \{ D_{0}| D_{0}\in \mathcal {I}_{0}, u\in U(D_{0})\} | \\ {} &+\sum _{j=1}^{k} \sum _{u:u\in U(\mathcal {I}_{j})}^{}y_{u,j}\cdot | \{ D_{j}| D_{j}\in \mathcal {I}_{j }, u\in U(D_{j})\} | \\ &\le (t-1)\sum _{j=1}^{k} \sum _{u:u\in U(\mathcal {I}_{0})}^{}y_{u,j}+(t-1)\sum _{j=1}^{k} \sum _{u:u\in U(\mathcal {I}_{j})}^{}y_{u,j}\\ &\le (t-1)\sum _{j=1}^{k} \sum _{u:u\in U(\mathcal {D} _{0}^{tight}\setminus \{ D_{0,last}\})}^{}y_{u,j}+(t-1)\sum _{j=1}^{k} \sum _{u:u\in U(\mathcal {D} _{j}^{tight}\setminus \{ D_{j,last}\})}^{}y_{u,j}\\ &\le 2(t-1)\sum _{j=1}^{k} \sum _{u:u\in U_{j}}y_{u,j}\\ &\le 2(t-1)OPT^{''}\\ &\le 2(t-1)OPT^{'}, \end{aligned}$$

where \(OPT^{''}\) is the optimal value of the dual program. The third equality follows from Eqs. (5) and (6); the second inequality follows from Lemma 2 and the fact that \(\mathcal {I}_{j}\) is a \(\rho \)-relaxed independent set; the third inequality follows from \(\mathcal {I}_{j}\subseteq \mathcal {D} _{j}^{tight}\setminus \{ D_{j,last}\}, j=0,1,\dots , k\); and the last inequality follows from LP duality, since the optimal dual value equals the optimal value of the LP relaxation, which is at most \(OPT^{'}\). From the inequalities above, we have

$$\begin{aligned} \begin{aligned} P(\mathcal {F})&=(1+\rho )^{\alpha }P(\mathcal {I})+\sum _{j=0}^{k}p_{j}c_{j}r(D_{j,last},j)^{\alpha }\\ &\le 2(t-1)(1+\rho )^{\alpha }OPT^{'}+\sum _{j=0}^{k}p_{j}c_{j}r(D_{j,last},j)^{\alpha }\\ &\le 2(t-1)(1+2\sin \frac{\pi }{t} )^{\alpha }OPT^{'}+P(\mathcal {D}_{max} ). \end{aligned} \end{aligned}$$

The first equality follows from \(\mathcal {F}_{j}=\{D(c(D),(1+\rho )r(D,j))|D\in \mathcal {I}_{j}\} \cup \{D_{j,last}\}\), and the second inequality follows from \(r(D_{j,last},j)\le r(D_{j,max},j)\) and \(\mathcal {D}_{max}=\bigcup _{j=0}^{k} \{D_{j,max}\}\). Therefore, the lemma holds.

Theorem 1

There is an O(\(\alpha \))-approximation algorithm for the SPC problem.

Proof

$$\begin{aligned} \begin{aligned} P(\mathcal {F}\cup \mathcal {D}_{max})&=P(\mathcal {F})+P( \mathcal {D}_{max})\\ &\le 2(t-1)(1+2\sin \frac{\pi }{t} )^{\alpha }OPT^{'}+2P(\mathcal {D}_{max} )\\ &\le 2(t-1)(1+2\sin \frac{\pi }{t} )^{\alpha }(OPT^{'}+P(\mathcal {D}_{max}))\\ &=2(t-1)(1+2\sin \frac{\pi }{t} )^{\alpha }OPT, \end{aligned} \end{aligned}$$

where the first inequality follows from Lemma 4, the second inequality follows from \(2(t-1)(1+2\sin \frac{\pi }{t})^{\alpha }\ge 2\) for \(\alpha \ge 1\) and \(t\in \{2,3,4,\dots \}\), and the last equality follows from Lemma 1. Furthermore, as in the analysis in [8], choosing t proportional to \(\alpha \) shows that the approximation ratio of Algorithm 1 is O(\(\alpha \)).
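The bound of Theorem 1 can be checked numerically. The following sketch evaluates \(2(t-1)(1+2\sin (\pi /t))^{\alpha }\) and minimizes over t, illustrating that the best t grows with \(\alpha \) while \((1+2\sin (\pi /t))^{\alpha }\) stays bounded, so the minimized ratio is O(\(\alpha \)):

```python
import math

# Numeric check of the approximation bound 2(t-1)(1 + 2 sin(pi/t))**alpha.
def ratio(t, alpha):
    return 2 * (t - 1) * (1 + 2 * math.sin(math.pi / t)) ** alpha

for alpha in (1, 2, 4, 8):
    best_t = min(range(2, 200), key=lambda t: ratio(t, alpha))
    print(alpha, best_t, round(ratio(best_t, alpha), 2))
```

For \(\alpha = 1\) the minimum is attained at t = 2 with ratio 6; as \(\alpha \) grows, the minimizing t grows roughly linearly in \(\alpha \).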

4 Conclusions

In this paper, we introduce the stochastic minimum power cover problem, which generalizes the minimum power cover problem and the stochastic minimum set cover problem. We present an O(\(\alpha \))-approximation algorithm for this problem that runs in polynomial time.

For the stochastic power cover problem, we have considered only the two-stage, finite-scenario version. In the future, designing algorithms for versions with multiple stages, exponentially many scenarios, and additional constraints remains a challenging direction.