
6.1 Motivation and Overview

A classic type of resource management problem is as follows: given a certain amount of a resource and a set of users, find an assignment of the resource that maximizes the number of satisfied users. Maximum lifetime coverage is such a classic problem in wireless sensor networks.

When a very large number of sensors are randomly deployed over a region, possibly from an aircraft, to monitor a certain set of targets, there are usually many redundant sensors. A better use of this redundancy is to schedule the active/sleep times of the sensors so as to increase the lifetime of the system.

A simple scheduling approach is to divide the sensors into disjoint subsets, each of which fully covers all targets; such a subset is called a sensor cover [18, 80].

Sensor-Cover-Partition: Given n targets r 1 , …, r n and m sensors s 1 , …, s m , each covering a subset of targets, find the maximum number of disjoint sensor covers.
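To make the packing flavor of the problem concrete, the following exhaustive-search sketch computes the maximum number of disjoint sensor covers on a tiny instance. It is illustrative only (the problem is NP-hard, so this runs in exponential time), and the instance and names are hypothetical.

```python
from itertools import combinations

def is_cover(cover, targets, coverage):
    """A set of sensors is a sensor cover if their covered targets include all targets."""
    covered = set()
    for s in cover:
        covered |= coverage[s]
    return targets <= covered

def max_disjoint_covers(sensors, targets, coverage):
    """Exhaustive search for the largest family of pairwise disjoint
    sensor covers; only feasible for tiny instances."""
    best = 0
    def extend(remaining, count):
        nonlocal best
        best = max(best, count)
        rem = sorted(remaining)
        # try every subset of the remaining sensors that forms a cover
        for r in range(1, len(rem) + 1):
            for cand in combinations(rem, r):
                if is_cover(cand, targets, coverage):
                    extend(remaining - set(cand), count + 1)
    extend(set(sensors), 0)
    return best
```

For instance, with sensors s1 and s4 each covering both of two targets and s2, s3 covering one target each, the three disjoint covers {s1}, {s4}, {s2, s3} are found.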

This problem is NP-hard. Various heuristics and approximation algorithms have been given in [11, 13, 96]. In general, there is no polynomial-time (1 − ε)ln n-approximation for any ε > 0 unless NP ⊆ DTIME(n O(loglogn)) [48], while a polynomial-time O(log n)-approximation exists [6, 80]. However, a special case remains open.

Open Problem 6.1.1.

Suppose all sensors are uniform, that is, they have the same sensing radius. It is unknown whether a polynomial-time constant-approximation exists or not.

When the sensor set and the target set are identical, Sensor-Cover-Partition becomes the following domatic partition problem.

Max#DS: Given a graph G = (V, E), partition the vertex set V into the maximum number of disjoint dominating sets.

In general graphs, there is no polynomial-time (1 − ε)ln n-approximation for Max#DS unless NP ⊆ DTIME(n O(loglogn)), while a polynomial-time O(log n)-approximation exists [48]. However, for unit disk graphs, there is a polynomial-time constant-approximation [86].

In this type of scheduling, each sensor is activated only once, that is, once the sensor is activated, it stays active until it dies.

Cardei et al. [15] found that it is possible to increase the lifetime if each sensor is allowed to alternate between active and sleeping states. An example can be found in Chap. 1. The model is also supported by an interesting fact discovered in [64]: alternating a sensor between active and sleeping states in a proper way may double its lifetime, since the battery can recover to a certain level while the sensor sleeps. This model is formulated as follows.

Max-Lifetime Coverage: Given n targets t 1 , …, t n and m sensors s 1 , …, s m , each covering a subset of targets, find a family of sensor covers S 1 , …, S p with time lengths t 1 , …, t p in [0, 1], respectively, to maximize \({t}_{1} + \cdots + {t}_{p}\) subject to the constraint that the total active time of every sensor is at most 1.
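The gain of fractional schedules over disjoint partitions can be checked on a small instance: with three sensors each covering two of three targets, every sensor cover needs at least two sensors, so a disjoint partition yields lifetime 1, while time-sharing three covers for half a unit each yields 1.5. The checker below is a sketch; the instance and names are my own.

```python
# Toy instance: three targets, three sensors, each covering two targets.
coverage = {"s1": {"a", "b"}, "s2": {"b", "c"}, "s3": {"a", "c"}}
targets = {"a", "b", "c"}

# A schedule is a list of (sensor cover, time length) pairs.
schedule = [({"s1", "s2"}, 0.5), ({"s2", "s3"}, 0.5), ({"s1", "s3"}, 0.5)]

def lifetime(schedule, coverage, targets, battery=1.0):
    """Return the total lifetime of a feasible schedule; fail otherwise."""
    active = {s: 0.0 for s in coverage}
    total = 0.0
    for cover, t in schedule:
        covered = set().union(*(coverage[s] for s in cover))
        assert targets <= covered, "not a sensor cover"
        for s in cover:
            active[s] += t
        total += t
    # every sensor's total active time must stay within its battery
    assert all(a <= battery + 1e-9 for a in active.values())
    return total
```

Here each sensor appears in two of the three covers, so its active time is exactly 1, and the schedule achieves lifetime 1.5, strictly more than any family of disjoint sensor covers on this instance.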

This is still an NP-hard problem. Cardei et al. [15] formulated it as a 0-1 integer program and designed a heuristic without a guaranteed theoretical bound. Berman et al. [6, 7] first designed an approximation algorithm for Max-Lifetime Coverage with a theoretical bound: they showed that there exists a polynomial-time approximation for Max-Lifetime Coverage with performance ratio O(log n) where n is the number of sensors. By employing the Garg–Könemann theorem [55], Berman et al. reduced Max-Lifetime Coverage to the following:

MinW-Sensor-Cover: Consider n targets t 1 , …, t n and m sensors s 1 , …, s m , each covering a subset of targets. Given a weight function on sensors c : { s 1 , …, s m } → R  + , find the minimum total weight sensor cover.

They showed that if MinW-Sensor-Cover has a polynomial-time ρ-approximation, then Max-Lifetime Coverage has a polynomial-time (1 + ε)ρ-approximation for any ε > 0. Note that MinW-Sensor-Cover is equivalent to the minimum weight set cover problem. Therefore, it has a polynomial-time (1 + ln n)-approximation, and hence Max-Lifetime Coverage has a polynomial-time O(log n)-approximation. Actually, the first to apply the Garg–Könemann theorem to the study of lifetime maximization problems were Calinescu et al. [12].

Ding et al. [34] noted that all results in Chap. 5 about MinW-DS can be extended to MinW-Sensor-Cover in the case that all sensors and targets lie in the Euclidean plane and all sensors have the same covering radius. Therefore, they proved that in this case, Max-Lifetime Coverage has a polynomial-time 3.63-approximation.

Du et al. [37] extended this approach to study the coverage problem with a connectivity requirement. They constructed a polynomial-time constant-approximation in the geometric case and an O(log n)-approximation in the general case. However, many maximum lifetime coverage problems with connectivity requirements are still open. The following is an example.

Open Problem 6.1.2.

Does Max#CDS have a polynomial-time constant-approximation in unit disk graphs?

6.2 Max-Lifetime Connected Coverage

As described in the previous section, the method of Garg and Könemann [55] plays an important role in the design of constant-approximations for various problems on maximum lifetime coverage. In this section, we introduce it through the work of Du et al. [37].

Du et al. [37] studied a quite general model of wireless sensor networks which was previously studied by Zhang and Li [126]. In this model, each sensor has two modes, active mode and sleep mode, and the active mode has two phases, the full-active phase and the semi-active phase. A full-active sensor can sense, transmit, receive, and relay the data packets. A semi-active sensor cannot sense data packets, but it can transmit, receive, and relay data packets. Usually, a sensor in the full-active phase consumes more energy than in the semi-active phase.

Sensors are often randomly deployed into hostile environments, such as battlefields or inaccessible areas with chemical or nuclear pollution, so that recharging sensor batteries is practically impossible. Assume the battery of each sensor contains a certain amount of energy, say a unit amount. Then the lifetime of each sensor depends on its energy consumption.

Du et al. [37] studied the following problem:

Max-Lifetime Connected-Coverage with two active phases: Given a set of targets and a set of sensors with two active phases, find an active/sleeping schedule for sensors to maximize the system lifetime where the network system is said to be alive if the following conditions are satisfied:

  1. (A1)

    Every target is monitored by a full-active sensor.

  2. (A2)

    All (full-/semi-) active sensors induce a connected subgraph.

They studied this problem with the primal-dual method of Garg and Könemann [55].

Let S be the set of all sensors. Assume all sensors are uniform, that is, they have the same communication radius R c , the same sensing radius R s , the same full-active energy consumption u per unit time, and the same semi-active energy consumption v per unit time. Also, assume u ≥ v. A pair p of sets is called an active sensor set pair if p = (p 1, p 2) where p 1 is a set of full-active sensors and p 2 is a set of semi-active sensors with p 1 ∩ p 2 = ∅. For any active sensor set pair p, define

$${a}_{s,p} = \left \{\begin{array}{ll} u&\quad \mbox{ if }s \in {p}_{1}, \\ v &\quad \mbox{ if }s \in {p}_{2}, \\ 0 &\quad \mbox{ otherwise}. \end{array} \right.$$

Suppose \(\mathcal{C}\) is the collection of all active sensor set pairs satisfying conditions (A1) and (A2). Then Max-Lifetime Connected Coverage with two active phases can be formulated as the following linear programming:

$$\begin{array}{rcl} \max & & \sum\limits_{p\in \mathcal{C}}{x}_{p} \\ \mbox{ subject to}& & \sum\limits_{p\in \mathcal{C}}{a}_{s,p}{x}_{p} \leq 1\ \ \ \mbox{ for }s \in S \\ & & {x}_{p} \geq 0\ \ \ \mbox{ for }p \in \mathcal{C}.\end{array}$$

Its dual is as follows.

$$\begin{array}{rcl} \min & & \sum\limits_{s\in S}{y}_{s} \\ \mbox{ subject to}& & \sum\limits_{s\in S}{a}_{s,p}{y}_{s} \geq 1\ \ \ \mbox{ for }p \in \mathcal{C}, \\ & & {y}_{s} \geq 0\ \ \ \mbox{ for }s \in S.\end{array}$$

Motivated from the work of Garg and Könemann [55], Du et al. [37] designed the following primal-dual algorithm.

6.2.1 Primal-Dual Algorithm DPWW

Initially, choose x p  = 0 for all \(p \in \mathcal{C}\) and y s  = δ for all s ∈ S where δ is a positive constant which will be determined later.

In each iteration, carry out the following steps until (y s , s ∈ S) becomes dual feasible, that is, until all constraints in the dual linear programming are satisfied:

Step 1.:

Compute a ρ-approximation solution p  ∗  for

MinW-CSC with two active phases:

$$\min\limits_{p\in \mathcal{C}}\sum\limits_{s\in S}{a}_{s,p}{y}_{s}.$$
Step 2.:

Compute a solution s  ∗  for

$$\max\limits_{s\in S}{a}_{s,{p}^{{_\ast}}}.$$
Step 3.:

Update x p and y s as follows:

  1. (B1)

    x p does not change for p ≠ p  ∗ , and

    $${x}_{{p}^{{_\ast}}} \leftarrow {x}_{{p}^{{_\ast}}} + \frac{1} {{a}_{{s}^{{_\ast}},{p}^{{_\ast}}}}.$$
  2. (B2)

    y s does not change for s ∉ p 1  ∗  ∪ p 2  ∗ , and

    $${y}_{s} \leftarrow {y}_{s}\left (1 + \theta \frac{{a}_{s,{p}^{{_\ast}}}} {{a}_{{s}^{{_\ast}},{p}^{{_\ast}}}}\right )$$

for s ∈ p 1  ∗  ∪ p 2  ∗  where θ is a constant to be chosen later.
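To make the loop concrete, here is a sketch of the DPWW iteration in Python, deliberately simplified (my assumption, for illustration only) to the coverage-only case: u = v = 1, so every active sensor is full-active and the pairs degenerate to plain sensor covers, and the Step-1 oracle is exact brute force (ρ = 1). This is not the full two-active-phase algorithm of Du et al.

```python
from itertools import combinations
from math import log

def dpww_coverage_only(coverage, targets, theta=0.1):
    """Primal-dual sketch of DPWW for plain Max-Lifetime Coverage
    (u = v = 1, exact brute-force oracle, so rho = 1)."""
    sensors = list(coverage)
    n = len(sensors)
    # The collection C: all sensor sets covering every target (tiny instances only).
    C = [set(c) for r in range(1, n + 1)
         for c in combinations(sensors, r)
         if targets <= set().union(*(coverage[s] for s in c))]
    delta = (1 + theta) * ((1 + theta) * n) ** (-1 / theta)
    y = {s: delta for s in sensors}           # dual variables, initialized to delta
    x = {}                                    # primal variables x_p
    while True:
        # Step 1: minimum-weight sensor cover under the current weights y.
        p_star = min(C, key=lambda p: sum(y[s] for s in p))
        if sum(y[s] for s in p_star) >= 1:    # y has become dual feasible: stop
            break
        key = frozenset(p_star)
        x[key] = x.get(key, 0.0) + 1.0        # Step 3 (B1): here a_{s*,p*} = u = 1
        for s in p_star:                      # Step 3 (B2)
            y[s] *= 1 + theta
    # By the analog of Lemma 6.2.1, x/tau is primal feasible.
    tau = log((1 + theta) / delta) / log(1 + theta)
    return {p: t / tau for p, t in x.items()}
```

On the three-sensor triangle instance (each sensor covering two of three targets), the scaled output is a feasible fractional schedule whose total lifetime is close to the optimum 1.5.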

The following lemmas give two important properties holding at the end of the above algorithm.

Lemma 6.2.1.

At the end of Primal-Dual Algorithm DPWW, \(({x}_{p},p \in \mathcal{C})\) may not be a primal-feasible solution. However, \(({x}_{p}/\tau, p \in \mathcal{C})\) is a primal-feasible solution where \(\tau = \frac{(v/u)\ln \frac{1+\theta } {v\delta } } {\ln (1+\theta v/u)}\).

Proof.

Note that when y s gets updated, the following facts must hold:

  1. (a)

    (y s , s ∈ S) is not dual feasible.

  2. (b)

    s ∈ p 1  ∗  ∪ p 2  ∗ .

It follows immediately from (a) that \(\sum\nolimits _{s\in S}{a}_{s,{p}^{{_\ast}}}{y}_{s} < 1\), which together with (b) yields that y s  < 1 ∕ v before y s receives any value change. After y s is updated, we have

$${y}_{s} < \left (1 + \theta \frac{{a}_{s,{p}^{{_\ast}}}} {{a}_{{s}^{{_\ast}},{p}^{{_\ast}}}}\right )/v \leq (1 + \theta )/v.$$

Therefore, at the end of Primal-Dual Algorithm DPWW, \({y}_{s} < (1 + \theta )/v\).

Now, consider a constraint in the primal linear programming,

$$\sum\limits_{p\in \mathcal{C}}{a}_{s,p}{x}_{p} \leq 1,$$

which may not be satisfied after x p is updated. If updating x p increases the value of \(\sum\nolimits _{p\in \mathcal{C}}{a}_{s,p}{x}_{p}\) by \(\frac{{a}_{s,{p}^{{_\ast}}}} {{a}_{{s}^{{_\ast}},{p}^{{_\ast}}}}\), then the value of y s is increased by multiplying a factor \(1 + \theta \frac{{a}_{s,{p}^{{_\ast}}}} {{a}_{{s}^{{_\ast}},{p}^{{_\ast}}}}\). Note that the value of \(\frac{{a}_{s,{p}^{{_\ast}}}} {{a}_{{s}^{{_\ast}},{p}^{{_\ast}}}}\) has only two possibilities, v ∕ u and 1. Suppose \(\frac{{a}_{s,{p}^{{_\ast}}}} {{a}_{{s}^{{_\ast}},{p}^{{_\ast}}}}\) takes value v ∕ u for k times and 1 for \(\mathcal{l}\) times. Then the value of \(\sum\nolimits _{p\in \mathcal{C}}{a}_{s,p}{x}_{p}\) receives a total increase of \(k(v/u) + \mathcal{l}\) and

$${(1 + \theta v/u)}^{k}{(1 + \theta )}^{\mathcal{l}} \leq \frac{1 + \theta } {v\delta }$$

since initially y s  = δ. Moreover, initially, \(\sum\nolimits _{p\in \mathcal{C}}{a}_{s,p}{x}_{p} = 0\). Thus, at the end of Primal-Dual Algorithm DPWW, the value of \(\sum\nolimits _{p\in \mathcal{C}}{a}_{s,p}{x}_{p}\) is \(k(v/u) + \mathcal{l}\). The maximum value of \(k(v/u) + \mathcal{l}\) can be obtained from the following linear programming with respect to k and \(\mathcal{l}\):

$$\begin{array}{rcl} \max & & k(v/u) + \mathcal{l} \\ \mbox{ subject to}& & k\ln (1 + \theta v/u) + \mathcal{l}\ln (1 + \theta ) \leq \ln \frac{1 + \theta } {v\delta } \\ & & k \geq 0,\ \mathcal{l} \geq 0.\end{array}$$

By the theory of linear programming, the maximum value of the objective function can always be achieved at some extreme point. For the above program, the feasible domain has three extreme points

$$(0,0),\ \ \left (0, \frac{\ln \frac{1+\theta } {v\delta } } {\ln (1 + \theta )}\right ),\quad \left ( \frac{\ln \frac{1+\theta } {v\delta } } {\ln (1 + \theta v/u)},0\right ).$$

Their objective function values are

$$0,\ \ \frac{\ln \frac{1+\theta } {v\delta } } {\ln (1 + \theta )},\quad \frac{v} {u} \cdot \frac{\ln \frac{1+\theta } {v\delta } } {\ln (1 + \theta v/u)},$$

respectively. Note that \(\frac{z} {\ln (1+\theta z)}\) is strictly monotone decreasing for z ≤ 1. Thus,

$$0 < \frac{\ln \frac{1+\theta } {v\delta } } {\ln (1 + \theta )} < \frac{v} {u} \cdot \frac{\ln \frac{1+\theta } {v\delta } } {\ln (1 + \theta v/u)}.$$

Hence, at the end of Primal-Dual Algorithm DPWW,

$$\sum\limits_{p\in \mathcal{C}}{a}_{s,p}{x}_{p} \leq \tau = \frac{v} {u} \cdot \frac{\ln \frac{1+\theta } {v\delta } } {\ln (1 + \theta v/u)}.$$

Therefore,

$$\sum\limits_{p\in \mathcal{C}}{a}_{s,p}{x}_{p}/\tau \leq 1.$$

Lemma 6.2.2.

At the end of Primal-Dual Algorithm DPWW,

$$\sum\limits_{p\in \mathcal{C}}{x}_{p}/\tau \geq \frac{\ln {(v\vert S\vert \delta )}^{-1}} {\tau \theta \rho } \cdot {\mathrm{opt}}_{\mathrm{lcc}}$$

where optlcc is the objective function value of optimal solution for Max-Lifetime Connected Coverage with two active phases and \(\tau = {(v/u)\log }_{1+\theta v/u}\frac{1+\theta } {\delta v}\).

Proof.

Denote by x p (0) the initial value of x p and by y s (0) the initial value of y s . Denote by x p (i) and y s (i), respectively, the values of x p and y s after the ith iteration. Denote by s  ∗ (i) and p  ∗ (i), respectively, the values of s  ∗  and p  ∗  in the ith iteration. Furthermore, denote \(X(i) =\sum\nolimits _{p\in \mathcal{C}}{x}_{p}(i)\) and Y (i) =  ∑ s ∈ S y s (i). Then, for i ≥ 1, one has

$$\begin{array}{rcl} Y (i)& =& \sum\limits_{s\in S}{y}_{s}(i - 1) + \theta \frac{1} {{a}_{{s}^{{_\ast}}(i),{p}^{{_\ast}}(i)}}\sum\limits_{s\in S}{a}_{s,{p}^{{_\ast}}(i)}{y}_{s}(i - 1) \\ & \leq & Y (i - 1) + \theta (X(i) - X(i - 1))\rho \min\limits_{p\in \mathcal{C}}\sum\limits_{s\in S}{a}_{s,p}{y}_{s}(i - 1).\end{array}$$

Thus,

$$Y (i) \leq Y (0) + \theta \rho \sum\limits_{k=1}^{i}(X(k) - X(k - 1))\min\limits_{ p\in \mathcal{C}}\sum\limits_{s\in S}{a}_{s,p}{y}_{s}(k - 1).$$

By the duality theory of linear programming, optlcc is also the objective function value of optimal solution for the dual linear programming. Therefore,

$${\mathrm{opt}}_{\mathrm{lcc}} =\min\limits_{{y}_{s}}{ \frac{\sum\nolimits_{s\in S}{y}_{s}} {\min_{p\in \mathcal{C}}\sum\nolimits_{s\in S}{a}_{s,p}{y}_{s}}},$$

where the minimization is subject to y s  ≥ 0 for s ∈ S. Hence,

$$\min\limits_{p\in \mathcal{C}}\sum\limits_{s\in S}{a}_{s,p}{y}_{s}(k - 1) \leq \frac{Y (k - 1)} {{\mathrm{opt}}_{\mathrm{lcc}}}.$$

Therefore,

$$Y (i) \leq \vert S\vert \delta + \frac{\theta \rho } {{\mathrm{opt}}_{\mathrm{lcc}}}\sum\limits_{k=1}^{i}(X(k) - X(k - 1))Y (k - 1).$$

Define

$$w(0) = \vert S\vert \delta $$

and

$$w(i) = \vert S\vert \delta + \frac{\theta \rho } {{\mathrm{opt}}_{\mathrm{lcc}}}\sum\limits_{k=1}^{i}(X(k) - X(k - 1))w(k - 1).$$

It is easy to prove by induction on i that Y (i) ≤ w(i). Moreover,

$$\begin{array}{rcl} w(i)& =& \left (1 + \frac{\theta \rho } {{\mathrm{opt}}_{\mathrm{lcc}}}(X(i) - X(i - 1))\right )w(i - 1) \\ & \leq &{ \mathrm{e}}^{ \frac{\theta \rho } {{\mathrm{opt}}_{\mathrm{lcc}}} (X(i)-X(i-1))}w(i - 1) \\ & \leq &{ \mathrm{e}}^{ \frac{\theta \rho } {{\mathrm{opt}}_{\mathrm{lcc}}} X(i)}w(0) \\ & =&{ \mathrm{e}}^{ \frac{\theta \rho } {{\mathrm{opt}}_{\mathrm{lcc}}} X(i)}\vert S\vert \delta.\end{array}$$

Suppose Primal-Dual Algorithm DPWW stops at the mth iteration. Then Y (m) ≥ 1 ∕ v. Hence

$$1/v \leq Y (m) \leq w(m) \leq \vert S\vert \delta {\mathrm{e}}^{ \frac{\theta \rho } {{\mathrm{opt}}_{\mathrm{lcc}}} X(m)}.$$

Therefore,

$$\frac{{\mathrm{opt}}_{\mathrm{lcc}}} {X(m)/\tau } \leq \frac{\tau \theta \rho } {\ln {(v\vert S\vert \delta )}^{-1}}.$$

Theorem 6.2.3 (Du et al. [37]). 

If MinW-CSC with two active phases has a polynomial-time ρ-approximation, then Max-Lifetime Connected Coverage with two active phases has a polynomial-time ρ(1 + ε)-approximation for any ε > 0.

Proof.

Choose \(\delta = (1 + \theta ){((1 + \theta )\vert S\vert )}^{-1/\theta }/v\). Note that

$$\frac{\ln \frac{1+\theta } {\delta v} } {\ln {(\delta v\vert S\vert )}^{-1}} = \frac{1} {1 - \theta },$$

and \({(1 + \theta v/u)}^{u/(v\theta )+1} > e\) implies \(\ln (1 + \theta v/u) > \frac{v\theta } {u+v\theta }\). Thus,

$$\frac{\tau \theta \rho } {\ln {(v\vert S\vert \delta )}^{-1}} = \frac{(v/u)\theta \rho } {(1 - \theta )\ln (1 + \theta v/u)} \leq \rho \cdot \frac{1 + \theta v/u} {1 - \theta }.$$

Choose θ such that

$$\frac{1 + \theta v/u} {1 - \theta } < 1 + \epsilon.$$

Then

$$\frac{{\mathrm{opt}}_{\mathrm{lcc}}} {\sum\limits_{p\in \mathcal{C}}{x}_{p}/\tau } \leq (1 + \epsilon )\rho.$$

To estimate the running time of Primal-Dual Algorithm DPWW, note that every iteration, including the computation of a polynomial-time ρ-approximation solution p  ∗  for MinW-CSC with two active phases, can be carried out in polynomial time. Therefore, it suffices to estimate the number of iterations. In each iteration, at least one y s has its value increased, and each increase multiplies y s by a factor of at least 1 + θv∕u. In the proof of Lemma 6.2.1, it was already shown that at the end of the algorithm, each y s has grown by a total factor of at most \(\frac{1+\theta } {\delta v}\). Therefore, the number of iterations is at most

$$\vert S{\vert \log }_{1+\theta v/u}\frac{1 + \theta } {\delta v} = \frac{\vert S\vert \ln ((1 + \theta )\vert S\vert )} {\theta \ln (1 + \theta v/u)} = O(\vert S\vert \log \vert S\vert ),$$

where \(\delta v = (1 + \theta ){((1 + \theta )\vert S\vert )}^{-1/\theta }\) and θ is fixed once ε is fixed.

In Chap. 5, it has been shown that there exists a polynomial-time 3.63-approximation for MinW-DS. This result can be extended to the following problem.

MinW-Sensor-Cover: Consider a set of targets and a set of sensors lying in the Euclidean plane. Suppose all sensors have the same sensing radius R s , but may have different weights. The problem is to find the minimum weight subset of sensors for covering all targets.

Therefore, the following holds.

Theorem 6.2.4 (Du et al. [37]). 

Max-Lifetime Connected Coverage with Two Active Phases has a polynomial-time (7.105 + ε)-approximation for any ε > 0 when all targets and all sensors lie in the Euclidean plane and all sensors are uniform with R c ≥ 2R s .

Proof.

Let OptCSC be an optimal solution for MinW-CSC with two active phases. Compute a polynomial-time 3.63-approximation solution A for MinW-Sensor-Cover with weight y s u for each sensor s. Then

$$\sum\limits_{s\in A}{y}_{s}u \leq 3.63 \cdot {\mathrm{opt}}_{\mathrm{CSC}},$$

where optCSC is the objective function value of OptCSC. Since R c  ≥ 2R s , every sensor in A is adjacent to some sensor in OptCSC. This means that OptCSC ∪ A induces a connected subgraph and hence OptCSC contains the set of Steiner nodes of a feasible solution for Node-Weighted Steiner Tree on the terminal set A. Now, find a polynomial-time 3.475-approximation solution B for Node-Weighted Steiner Tree with weight y s v for each sensor s. Then

$$\sum\limits_{s\in B}{y}_{s}v \leq 3.475 \cdot \sum\limits_{s\in Op{t}_{\mathrm{CSC}}}{y}_{s}v \leq 3.475 \cdot {\mathrm{opt}}_{\mathrm{CSC}}.$$

Therefore,

$$\sum\limits_{s\in A}{y}_{s}u +\sum\limits_{s\in B}{y}_{s}v \leq 7.105 \cdot {\mathrm{opt}}_{\mathrm{CSC}}.$$

6.3 Domatic Partition

So far, the best known constant-approximation for Max#DS in unit disk graphs is also designed using a grid partition, however with a new technique. Let us start by introducing a problem on sensor-cover-partition with a separating line.

Sensor-Cover-Partition with Separating Line: Let L be a horizontal line. Given a set T of targets above L and a set S of sensors with sensing radius one below L, assume that every target is covered by at least one sensor. The problem is to find the maximum number of disjoint sensor covers. (A sensor cover is a subset of sensors covering all targets.)

Let \(\delta (S,T) =\min\limits_{t\in T}\vert \{s \in S\mid t \in {\mathrm{disk}}_{1}(s)\}\vert \) where disk1(s) denotes the disk with radius one centered at s. Call the part, above line L, of the envelope of the disks disk1(s) for all s ∈ S the skyline. Let S′ be the set of those sensors s such that circle1(s) has a piece appearing in the skyline, where circle1(s) denotes the circle with radius one centered at s. By Lemma 5.7.1, S′ lines up from right to left by following their pieces on the skyline. For any t ∈ T, denote C S′ (t) = disk1(t) ∩ S′. The following properties are important.

Lemma 6.3.1.

Let s 1 ,s 2 ,s 3 be three sensors in S with s 1 .x ≤ s 2 .x ≤ s 3 .x where s i .x denotes the x-coordinate of point s i . Suppose there exists a target t such that t ∈disk1 (s 1 ) ∩disk1 (s 3 ) but t∉disk1 (s 2 ). Then \(\mathrm{up}(L) \cap {\mathrm{disk}}_{1}({s}_{2}) \subseteq \mathrm{up}(L) \cap ({\mathrm{disk}}_{1}({s}_{1}) \cup {\mathrm{disk}}_{1}({s}_{3}))\), where up (L) denotes the half plane above the horizontal line L, and circle1 (s 2 ) cannot appear in the skyline.

Proof.

It is trivial in the case that s 1 .x = s 2 .x or s 2 .x = s 3 .x. Thus, we next assume s 1 .x < s 2 .x < s 3 .x. For contradiction, suppose there exists a point p ∈ up(L) ∩ disk1(s 2) with \(p\not\in \mathrm{up}(L) \cap ({\mathrm{disk}}_{1}({s}_{1}) \cup {\mathrm{disk}}_{1}({s}_{3}))\). Note that t ∈ disk1(s 1) ∩ disk1(s 3) implies that for any point q ∈ up(L) with q.x = t.x and q.y ≤ t.y, we have q ∈ disk1(s 1) ∩ disk1(s 3). Moreover, t ∉ disk1(s 2) implies that any q ∈ up(L) ∩ disk1(s 2) with q.x = t.x satisfies q.y < t.y and hence q ∈ disk1(s 1) ∩ disk1(s 3). It follows that p.x ≠ t.x. Hence p.x < t.x or p.x > t.x. First, consider the case that p.x < t.x. In this case, the two segments ps 2 and ts 1 must intersect at a point o. Note that | ps 2 |  <  | ps 1 | and | ts 2 |  >  | ts 1 | . Hence, \(\vert p{s}_{2}\vert + \vert t{s}_{1}\vert < \vert p{s}_{1}\vert + \vert t{s}_{2}\vert \). However, by the triangle inequality,

$$\vert po\vert + \vert o{s}_{1}\vert \geq \vert p{s}_{1}\vert $$

and

$$\vert to\vert + \vert o{s}_{2}\vert \geq \vert t{s}_{2}\vert.$$

Therefore

$$\vert p{s}_{2}\vert + \vert t{s}_{1}\vert = \vert po\vert + \vert o{s}_{2}\vert + \vert to\vert + \vert o{s}_{1}\vert \geq \vert p{s}_{1}\vert + \vert t{s}_{2}\vert,$$

a contradiction. Similarly, a contradiction can result from the case that p. x > t. x.

Note that circle1(s 2) ∩ up(L) cannot intersect \(\mathrm{up}(L) \cap ({\mathrm{circle}}_{1}({s}_{1}) \cup {\mathrm{circle}}_{1}({s}_{3}))\). In fact, if they had an intersection point p, then a contradiction would still result from the above argument, noting that the argument still works when | ps 2 |  =  | ps 1 | . So, up(L) ∩ disk1(s 2) is contained strictly inside \(\mathrm{up}(L) \cap ({\mathrm{disk}}_{1}({s}_{1}) \cup {\mathrm{disk}}_{1}({s}_{3}))\). Hence, circle1(s 2) cannot appear in the skyline.
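A quick numerical sanity check of the containment in Lemma 6.3.1, with L taken as the x-axis and coordinates of my own choosing: sampled points above L inside disk_1(s 2) should all fall inside disk_1(s 1) ∪ disk_1(s 3).

```python
import random

def in_disk(p, c, r=1.0):
    """True if point p lies in the closed disk of radius r centered at c."""
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= r * r + 1e-12

# An illustrative configuration satisfying the hypothesis of Lemma 6.3.1:
# s1.x <= s2.x <= s3.x, and t is in disk_1(s1) and disk_1(s3) but not disk_1(s2).
s1, s2, s3 = (-0.5, -0.1), (0.0, -0.8), (0.5, -0.1)
t = (0.0, 0.4)
assert in_disk(t, s1) and in_disk(t, s3) and not in_disk(t, s2)

# Sample points above L and check the claimed containment
# up(L) ∩ disk_1(s2) ⊆ up(L) ∩ (disk_1(s1) ∪ disk_1(s3)).
random.seed(0)
ok = all(
    in_disk(p, s1) or in_disk(p, s3)
    for p in ((random.uniform(-1, 1), random.uniform(0.0, 0.25))
              for _ in range(20000))
    if in_disk(p, s2) and p[1] > 0
)
```

Sampling of course proves nothing, but it is a cheap way to catch a misread hypothesis when working through the proof.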

Lemma 6.3.2.

For any t ∈ T, C S′ (t) is a nonempty contiguous subset of the ordered set S′.

Proof.

Suppose s 1, s 2, s 3 ∈ S′ with s 1 .x ≤ s 2 .x ≤ s 3 .x. If s 1 , s 3 ∈ C S′ (t) and s 2 ∉ C S′ (t), then by Lemma 6.3.1, s 2 ∉ S′, a contradiction.

Lemma 6.3.3.

Suppose T′ is a subset of targets, satisfying a property that for any two distinct targets t,t′∈ T′, C S (t)⊈C S (t′). Then for any two distinct t,t′∈ T′, C S′ (t) ∩ C S′ (t′)≠∅ implies that C S′ (t) contains an endpoint of C S′ (t′).

Proof.

The lemma holds trivially in the case that C S′ (t) is not contained in C S′ (t′). So, we next assume C S′ (t) ⊆ C S′ (t′). By the assumption on T′, there exists s ∈ C S (t) ∖ C S (t′). Let s r and s l be the right endpoint and the left endpoint of C S′ (t′), respectively. Let s′ ∈ C S′ (t) ∩ C S′ (t′). Next, consider two cases.

Case 1. s l .x ≤ s.x ≤ s r .x. Note that t′ is contained in disk1(s r ) and disk1(s l ) but not in disk1(s). By Lemma 6.3.1, \(t \in \mathrm{up}(L) \cap {\mathrm{disk}}_{1}(s) \subseteq \mathrm{up}(L) \cap ({\mathrm{disk}}_{1}({s}_{l}) \cup {\mathrm{disk}}_{1}({s}_{r}))\). Therefore, s l  ∈ C S′ (t) or s r  ∈ C S′ (t).

Case 2. s l .x > s.x or s r .x < s.x. Note that s l .x ≤ s′.x ≤ s r .x. For contradiction, suppose t is contained in neither disk1(s l ) nor disk1(s r ). In the case that s.x < s l .x, t is contained in disk1(s) and disk1(s′), but not in disk1(s l ). By Lemma 6.3.1, s l  ∉ S′, a contradiction. Similarly, a contradiction results in the case that s r .x < s.x.

Now we are ready to show the following.

Theorem 6.3.4.

There is a polynomial-time algorithm which can find at least δ(S,T)∕4 disjoint sensor covers.

Proof.

Consider the following algorithm.

The DomPart Algorithm.

input: a sensor set S and a target set T.

j ← 0;

E ← S;

while E is a sensor cover do begin

1.    j ← j + 1;

2.    T′ ← T;

     while there exist t, t′ ∈ T′ such that C E (t) ⊆ C E (t′)

     do T′ ← T′ ∖ { t′};

3.    Let E′ ⊆ E be the set of sensors contributing to the skyline of the disks of E;

4.    Find a maximal subset T′′ of T′ such that the sets C E′ (t) for t ∈ T′′ are disjoint;

5.    \({A}_{j} =\{ \mbox{ two endpoints of}\ {C}_{E^{\prime}}(t)\mid t \in T^{\prime\prime}\}\);

6.    E ← E ∖ A j ;

end-while

output: A 1 , A 2 , …, A j .

First, we show that each A i for i = 1, …, j is a sensor cover. In fact, for each t′′ ∈ T′′, A i contains the two endpoints of C E′ (t′′) and hence t′′ is covered by A i . For t′ ∈ T′ ∖ T′′, by the maximality of T′′ there exists t′′ ∈ T′′ such that C E′ (t′) ∩ C E′ (t′′)≠∅. By Lemma 6.3.3, C E′ (t′) contains an endpoint of C E′ (t′′) and hence t′ is covered by A i . For t ∈ T ∖ T′, there exists t′ ∈ T′ such that C E (t′) ⊆ C E (t). So, C E (t) contains the element of A i covering t′, and hence t is covered by A i .

Next, we show that at the end of the jth iteration, | C E (t) | ≥ δ(S, T) − 4j for every t ∈ T. To do so, let E j denote the E at the end of the jth iteration. Suppose this inequality holds at the end of the (j − 1)th iteration, that is, \(\vert {C}_{{E}_{j-1}}(t)\vert \geq \delta (S,T) - 4(j - 1)\) for all t ∈ T. We show that \(\vert {C}_{{E}_{j}}(t)\vert \geq \delta (S,T) - 4j\) for all t ∈ T.

In the jth iteration, for t′′ ∈ T′′, two endpoints of C E′ (t′′) are deleted from E j − 1 and hence

$$\vert {C}_{{E}_{j}}(t^{\prime\prime})\vert \geq \vert {C}_{{E}_{j-1}}(t^{\prime\prime})\vert - 2 > \delta (S,T) - 4j.$$

For t′ ∈ T′ ∖ T′′, if C E′ (t′) contains an endpoint of C E′ (t′′) for some t′′ ∈ T′′, then by Lemma 6.3.3, C E′ (t′′) must contain an endpoint of C E′ (t′). Thus, there are at most two such t′′’s because all the sets C E′ (t′′) for t′′ ∈ T′′ are disjoint. This means that

$$\vert {C}_{{E}_{j}}(t^{\prime})\vert \geq \vert {C}_{{E}_{j-1}}(t^{\prime})\vert - 4 \geq \delta (S,T) - 4j.$$

For t ∈ T ∖ T′, there exists t′ ∈ T′ such that \({C}_{{E}_{j-1}}(t^{\prime}) \subseteq {C}_{{E}_{j-1}}(t)\). This relationship is preserved by the algorithm, that is, \({C}_{{E}_{j}}(t^{\prime}) \subseteq {C}_{{E}_{j}}(t)\). Therefore,

$$\vert {C}_{{E}_{j}}(t)\vert \geq \vert {C}_{{E}_{j}}(t^{\prime})\vert \geq \delta (S,T) - 4j.$$

It follows immediately from this inequality that at the end of The DomPart Algorithm, j ≥ δ(S, T) ∕ 4.
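The DomPart Algorithm can be sketched in Python as follows. To keep the sketch short, it assumes (my simplification, not from the text) that all sensors lie at the same depth below L, so every sensor of E appears on the skyline in x-order and E′ = E; the skyline computation of Step 3 is therefore trivial here.

```python
def dompart(sensors, targets, r=1.0):
    """Sketch of the DomPart Algorithm for Sensor-Cover-Partition with a
    separating line.  sensors/targets map names to (x, y) points, with
    all sensors at one common depth below L (the x-axis)."""
    def C(E, t):
        # C_E(t): sensors of E within distance r of target t, in skyline order
        tx, ty = targets[t]
        return [s for s in E
                if (sensors[s][0] - tx) ** 2 + (sensors[s][1] - ty) ** 2 <= r * r + 1e-12]

    E = sorted(sensors, key=lambda s: sensors[s][0])   # skyline order = x-order
    covers = []
    while all(C(E, t) for t in targets):               # E is still a sensor cover
        # Step 2: repeatedly drop a target whose C-set contains another target's.
        T1 = list(targets)
        changed = True
        while changed:
            changed = False
            for t in T1:
                for t2 in T1:
                    if t != t2 and set(C(E, t)) <= set(C(E, t2)):
                        T1.remove(t2)
                        changed = True
                        break
                if changed:
                    break
        # Step 4: maximal subfamily T2 with pairwise disjoint C-sets (greedy).
        T2, used = [], set()
        for t in T1:
            ct = set(C(E, t))
            if not (used & ct):
                T2.append(t)
                used |= ct
        # Step 5: take the two endpoints (in skyline order) of each interval.
        A = set()
        for t in T2:
            interval = C(E, t)
            A.add(interval[0])
            A.add(interval[-1])
        covers.append(A)
        E = [s for s in E if s not in A]               # Step 6
    return covers
```

On a toy instance of ten equally spaced sensors and two targets with δ(S,T) = 6, this sketch returns pairwise disjoint sensor covers, as Theorem 6.3.4 predicts.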

With Theorem 6.3.4, Pandit et al. [86] constructed an algorithm for Max#DS in unit disk graphs as follows.

Put the input unit disk graph G = (V, E) into a square and partition the square with a grid of cells with diameter one (or say, diagonal length one). A cell is called a heavy cell if it contains at least δmin ∕ 14 nodes, where δmin is the minimum node degree of G. A cell is light if it is not heavy. For each node v in a light cell, disk1(v) intersects at most 14 cells, at least one of which contains at least δmin ∕ 14 nodes adjacent to v and is hence heavy. Choose such a heavy cell σ and put v into \({T}^{\sigma }\); say that v belongs to σ. Let \({S}^{\sigma } = \sigma \cap V\). Consider \({S}^{\sigma }\) as a sensor set and \({T}^{\sigma }\) as a target set. Then the following lemma gives an important fact.

Lemma 6.3.5.

If for every heavy cell σ, \({S}^{\sigma }\) can be partitioned into k sensor covers for \({T}^{\sigma }\), then G has k disjoint dominating sets.

Proof.

Choose a sensor cover \({A}^{\sigma }\) for each heavy cell σ. Let A be the union of \({A}^{\sigma }\) for σ over all heavy cells. Then A is a dominating set because each \({A}^{\sigma }\) dominates not only all nodes in \({T}^{\sigma }\), but also dominates all nodes in \({S}^{\sigma }\).

For each heavy cell σ, partition \({T}^{\sigma }\) into four parts \(({T}_{\mathrm{north}}^{\sigma },{T}_{\mathrm{south}}^{\sigma },{T}_{\mathrm{east}}^{\sigma },{T}_{\mathrm{west}}^{\sigma })\) where \({T}_{\mathrm{north}}^{\sigma }\) consists of nodes lying above the line through the upper boundary of σ, \({T}_{\mathrm{south}}^{\sigma }\) consists of nodes lying below the line through the lower boundary of σ, \({T}_{\mathrm{east}}^{\sigma }\) consists of nodes lying to the right of the line through the right boundary of σ, and \({T}_{\mathrm{west}}^{\sigma }\) consists of nodes lying to the left of the line through the left boundary of σ. When two parts are available for a node v in \({T}^{\sigma }\), v can arbitrarily choose one of them as its home. Corresponding to these four parts, partition \({S}^{\sigma }\) also into four parts \(({S}_{\mathrm{north}}^{\sigma },{S}_{\mathrm{south}}^{\sigma },{S}_{\mathrm{east}}^{\sigma },{S}_{\mathrm{west}}^{\sigma })\) by independently and uniformly at random distributing each node into one of the four parts.

Now, solve Sensor-Cover-Partition with separation line on four inputs \(({S}_{\mathrm{north}}^{\sigma },{T}_{\mathrm{north}}^{\sigma })\), \(({S}_{\mathrm{south}}^{\sigma },{T}_{\mathrm{south}}^{\sigma })\), \(({S}_{\mathrm{east}}^{\sigma },{T}_{\mathrm{east}}^{\sigma })\), and \(({S}_{\mathrm{west}}^{\sigma },{T}_{\mathrm{west}}^{\sigma })\). Combine those solutions into k disjoint dominating sets of G where

$$k =\min \{ \delta ({S}_{\mathrm{north}}^{\sigma },{T}_{\mathrm{north}}^{\sigma }),\delta ({S}_{\mathrm{south}}^{\sigma },{T}_{\mathrm{ south}}^{\sigma }),\delta ({S}_{\mathrm{ east}}^{\sigma },{T}_{\mathrm{ east}}^{\sigma }),\delta ({S}_{\mathrm{ west}}^{\sigma },{T}_{\mathrm{ west}}^{\sigma })\mid \sigma \ \mbox{ ranges over all heavy cells}\}.$$

Next, we show that k ≥ δmin ∕ 112 with a quite high probability.

Note that for each \(t \in {T}^{\sigma }\), \(\vert \sigma \cap {\mathrm{disk}}_{1}(t)\vert \geq {\delta }^{\mathrm{min}}/14\), and each node in \(\sigma \cap {\mathrm{disk}}_{1}(t)\) is distributed into the part containing t with probability 1 ∕ 4. By the Chernoff bound, the probability that at least δmin ∕ 56 nodes in σ ∩ disk1(t) are distributed into the part containing t is at least \(1 -{\mathrm{e}}^{-{\delta }^{\mathrm{min}}/112 }\).

Note that for each heavy cell σ, there are at most 20 cells within distance one of σ. So, there are at most 20 light cells which contain a node belonging to σ. Hence, \(\vert {T}^{\sigma }\vert \leq (20/14){\delta }^{\mathrm{min}}\). Thus, the probability that the following holds is at least \(1 - (20/14){\delta }^{\mathrm{min}}{\mathrm{e}}^{-{\delta }^{\mathrm{min}}/112 }\):

$$\min (\delta ({S}_{\mathrm{north}}^{\sigma },{T}_{\mathrm{north}}^{\sigma }),\delta ({S}_{\mathrm{south}}^{\sigma },{T}_{\mathrm{ south}}^{\sigma }),\delta ({S}_{\mathrm{ east}}^{\sigma },{T}_{\mathrm{ east}}^{\sigma }),\delta ({S}_{\mathrm{ west}}^{\sigma },{T}_{\mathrm{ west}}^{\sigma })) \geq {\delta }^{\mathrm{min}}/56.$$

Since the number of heavy cells cannot be bounded by O(δmin), it is hard to estimate the probability that k ≥ δmin ∕ 56 holds simultaneously for all heavy cells. Thus, more effort on the distribution of the elements of S σ is required in order to resolve the following problem.

Open Problem 6.3.6.

Is there a polynomial-time algorithm which produces \(\Omega ({\delta }^{\mathrm{min}})\) disjoint dominating sets for G with high probability?

6.4 Min-Weight Dominating Set

Pandit et al. [86] gave an interesting idea to construct approximation algorithms for MinW-DS using an algorithm for Max#DS.

Consider the following LP-relaxation of MinW-DS.

$$\begin{array}{rcl} \min & \quad & \sum\limits_{i\in V }{c}_{i}{x}_{i} \\ \mbox{ subject to}& \quad & \sum\limits_{i\in {\mathrm{disk}}_{1}(j)}{x}_{i} \geq 1\mbox{ for all }j \in V \\ & \quad & {x}_{i} \geq 0\mbox{ for all }i \in V.\end{array}$$

Let (x i  ∗ , i ∈ V ) be an optimal solution of this LP. Denote n =  | V | . Let

$$\bar{{x}}_{i} = \left \{\begin{array}{l@{\quad }l} 0 \quad &\mbox{ if }{x}_{i}^{{_\ast}}\leq 1/(2n) \\ \frac{k} {2n}\quad &\mbox{ if }\frac{k-1} {2n} < {x}_{i}^{{_\ast}}\leq \frac{k} {2n}\mbox{ for some integer }k \geq 2. \end{array} \right.$$
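The rounding rule can be sketched as follows (a minimal illustration; `round_lp` is a hypothetical name): LP values at most 1∕(2n) drop to 0, and anything larger rounds up to the next multiple of 1∕(2n).

```python
import math

def round_lp(x_star):
    """Round an LP solution (a list of values in [0, 1]) as in the text:
    values at most 1/(2n) become 0; larger values round up to the
    next multiple of 1/(2n)."""
    n = len(x_star)
    out = []
    for x in x_star:
        if x <= 1 / (2 * n):
            out.append(0.0)
        else:
            out.append(math.ceil(x * 2 * n) / (2 * n))
    return out
```

Note that every nonzero rounded value is at most twice the original value (since x > 1∕(2n) implies the round-up adds less than a factor of two), which is exactly what the proof of Lemma 6.4.1(2) uses.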

Lemma 6.4.1.

The following holds:

  1. (1)

    For j ∈ V, \(\sum\nolimits _{i\in {\mathrm{disk}}_{1}(j)}\bar{{x}}_{i} \geq 1/2\).

  2. (2)

    \(\sum\nolimits _{i\in V }{c}_{i}\bar{{x}}_{i} \leq 2 \cdot {\mathrm{opt}}_{\mathrm{WDS}}\) where optWDS is the objective function value of an optimal solution for MinW-DS.

Proof.

Since | V ∩ disk1(j) | ≤ n, at most n of the x i  ∗  are rounded down to 0, each of value at most 1∕(2n). Therefore,

$$\sum\limits_{i\in {\mathrm{disk}}_{1}(j)}\bar{{x}}_{i} \geq 1 - n \cdot \frac{1} {2n} = 1/2.$$

This means that (1) holds. For (2), note that

$$\sum\limits_{i\in V }{c}_{i}\bar{{x}}_{i} \leq 2\sum\limits_{i\in V }{c}_{i}{x}_{i}^{{_\ast}}\leq 2 \cdot {\mathrm{opt}}_{\mathrm{ WDS}}.$$

Construct a set P by making \(2n \cdot \bar{ {x}}_{j}\) copies of node j for each j ∈ V. Suppose each copy of j has the same weight as that of j.

Lemma 6.4.2.

c(P) ≤ 4n ⋅optWDS.

Proof.

By Lemma 6.4.1, \(c(P) = 2n \cdot \sum\nolimits _{i\in V }{c}_{i} \cdot \bar{ {x}}_{i} \leq 4n{\mathrm{opt}}_{\mathrm{WDS}}\).

Lemma 6.4.3.

δ(P,V ) ≥ n.

Proof.

By Lemma 6.4.1, \(\sum\nolimits _{i\in {\mathrm{disk}}_{1}(j)}\bar{{x}}_{i} \geq 1/2\). Thus, for each j ∈ V, \(\vert P \cap {\mathrm{disk}}_{1}(j)\vert = 2n\sum\nolimits _{i\in {\mathrm{disk}}_{1}(j)}\bar{{x}}_{i} \geq n\).

Suppose there is an algorithm which can produce a packing of at least δ(P,V ) ∕ C sensor covers A 1 , …, A t (t ≥ n ∕ C) for sensor set P and target set V. Then there exists A i such that

$$c({A}_{i}) \leq \frac{c(P)} {t} \leq \frac{C \cdot c(P)} {n} \leq \frac{4Cn \cdot {\mathrm{opt}}_{\mathrm{WDS}}} {n} = 4C \cdot {\mathrm{opt}}_{\mathrm{WDS}}.$$

This means that the following holds.

Theorem 6.4.4.

If there is a polynomial-time algorithm for Sensor-Cover-Partition which can produce δ(P,V )∕C sensor covers for sensor set P and target set V, then there is a polynomial-time 4C-approximation for MinW-DS.