Abstract
We consider approximation algorithms for packing integer programs (PIPs) of the form \(\max \{\langle c, x\rangle : Ax \le b, x \in \{0,1\}^n\}\) where c, A, and b are nonnegative. We let \(W = \min _{i,j} b_i / A_{i,j}\) denote the width of A which is at least 1. Previous work by Bansal et al. [1] obtained an \(\varOmega (\frac{1}{\varDelta _0^{1/\lfloor W \rfloor }})\)-approximation ratio where \(\varDelta _0\) is the maximum number of nonzeroes in any column of A (in other words the \(\ell _0\)-column sparsity of A). They raised the question of obtaining approximation ratios based on the \(\ell _1\)-column sparsity of A (denoted by \(\varDelta _1\)) which can be much smaller than \(\varDelta _0\). Motivated by recent work on covering integer programs (CIPs) [4, 7] we show that simple algorithms based on randomized rounding followed by alteration, similar to those of Bansal et al. [1] (but with a twist), yield approximation ratios for PIPs based on \(\varDelta _1\). First, following an integrality gap example from [1], we observe that the case of \(W=1\) is as hard as maximum independent set even when \(\varDelta _1 \le 2\). In sharp contrast to this negative result, as soon as width is strictly larger than one, we obtain positive results via the natural LP relaxation. For PIPs with width \(W = 1 + \epsilon \) where \(\epsilon \in (0,1]\), we obtain an \(\varOmega (\epsilon ^2/\varDelta _1)\)-approximation. In the large width regime, when \(W \ge 2\), we obtain an \(\varOmega ((\frac{1}{1 + \varDelta _1/W})^{1/(W-1)})\)-approximation. We also obtain a \((1-\epsilon )\)-approximation when \(W = \varOmega (\frac{\log (\varDelta _1/\epsilon )}{\epsilon ^2})\).
C. Chekuri and K. Quanrud supported in part by NSF grant CCF-1526799. M. Torres supported in part by fellowships from NSF and the Sloan Foundation.
1 Introduction
Packing integer programs (abbr. PIPs) are an expressive class of integer programs of the form:

$$\max \{\langle c, x\rangle : Ax \le b,\ x \in \{0,1\}^n\},$$
where \(A \in \mathbb {R}_{\ge 0}^{m \times n}\), \(b \in \mathbb {R}_{\ge 0}^m\) and \(c \in \mathbb {R}_{\ge 0}^n\) all have nonnegative entries (see footnote 1). Many important problems in discrete and combinatorial optimization can be cast as special cases of PIPs. These include the maximum independent set in graphs and hypergraphs, set packing, matchings and b-matchings, knapsack (when \(m=1\)), and the multi-dimensional knapsack. The maximum independent set problem (MIS), a special case of PIPs, is NP-hard and unless \(P=NP\) there is no \(n^{1-\epsilon }\)-approximation where n is the number of nodes in the graph [10, 18]. For this reason it is meaningful to consider special cases and other parameters that control the difficulty of PIPs. Motivated by the fact that MIS admits a simple \(\frac{1}{\varDelta (G)}\)-approximation where \(\varDelta (G)\) is the maximum degree of G, previous work considered approximating PIPs based on the maximum number of nonzeroes in any column of A (denoted by \(\varDelta _0\)); note that when MIS is written as a PIP, \(\varDelta _0\) coincides with \(\varDelta (G)\). As another example, when maximum weight matching is written as a PIP, \(\varDelta _0 = 2\). Bansal et al. [1] obtained a simple and clever algorithm that achieved an \(\varOmega (1/\varDelta _0)\)-approximation for PIPs via the natural LP relaxation; this improved previous work of Pritchard [13, 14] who was the first to obtain an approximation for PIPs only as a function of \(\varDelta _0\). Moreover, the rounding algorithm in [1] can be viewed as a contention resolution scheme which allows one to get similar approximation ratios even when the objective is submodular [1, 6]. It is well-understood that PIPs become easier when the entries in A are small compared to the packing constraints b. To make this quantitative we consider the well-studied notion called the width defined as \(W := \min _{i,j: A_{i,j} > 0} b_i/A_{i,j}\). Bansal et al.
obtain an \(\varOmega ( (\frac{1}{\varDelta _0})^{1/\lfloor {W}\rfloor })\)-approximation which improves as W becomes larger. Although they do not state it explicitly, their approach also yields a \((1-\epsilon )\)-approximation when \(W = \varOmega (\frac{1}{\epsilon ^2}\log (\varDelta _0/\epsilon ))\).
\(\varDelta _0\) is a natural measure for combinatorial applications such as MIS and matchings where the underlying matrix A has entries from \(\{0,1\}\). However, in some applications of PIPs such as knapsack and its multi-dimensional generalization which are more common in resource-allocation problems, the entries of A are arbitrary rational numbers (which can be assumed to be from the interval [0, 1] after scaling). In such applications it is natural to consider another measure of column-sparsity which is based on the \(\ell _1\) norm. Specifically we consider \(\varDelta _1\), the maximum column sum of A. Unlike \(\varDelta _0\), \(\varDelta _1\) is not scale invariant so one needs to be careful in understanding the parameter and its relationship to the width W. For this purpose we normalize the constraints \(Ax \le b\) as follows. Let \(W = \min _{i,j: A_{i,j} > 0} b_i/A_{i,j}\) denote the width as before (we can assume without loss of generality that \(W \ge 1\) since we are interested in integer solutions). We can then scale each row \(A_i\) of A separately such that, after scaling, the i’th constraint reads as \(A_i x \le W\). After scaling all rows in this fashion, entries of A are in the interval [0, 1], and the maximum entry of A is equal to 1. Note that this scaling process does not alter the original width. We let \(\varDelta _1\) denote the maximum column sum of A after this normalization and observe that \(1 \le \varDelta _1 \le \varDelta _0\). In many settings of interest \(\varDelta _1 \ll \varDelta _0\). We also observe that \(\varDelta _1\) is a more robust measure than \(\varDelta _0\); small perturbations of the entries of A can dramatically change \(\varDelta _0\) while \(\varDelta _1\) changes minimally.
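To make the normalization concrete, the following sketch (ours; the function name and plain list-of-lists representation are illustrative, not from the paper) row-scales an instance and computes W and \(\varDelta _1\):

```python
def normalize(A, b):
    """Row-scale the packing system Ax <= b so every right-hand side equals
    the width W = min over nonzero entries of b_i / A_ij.  After scaling,
    all entries of A lie in [0, 1] and the maximum entry is exactly 1."""
    m, n = len(A), len(A[0])
    W = min(b[i] / A[i][j] for i in range(m) for j in range(n) if A[i][j] > 0)
    # Multiplying row i by W / b_i turns the constraint A_i x <= b_i into
    # (W / b_i) A_i x <= W without changing the feasible set.
    scaled = [[A[i][j] * W / b[i] for j in range(n)] for i in range(m)]
    # Delta_1 is the maximum column sum of the scaled matrix.
    delta1 = max(sum(scaled[i][j] for i in range(m)) for j in range(n))
    return scaled, W, delta1
```

For instance, normalizing \(A = \begin{pmatrix} 2 &{} 1 \\ 1 &{} 3\end{pmatrix}\), \(b = (4,6)\) gives \(W = 2\) and \(\varDelta _1 = 1.5\).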
Bansal et al. raised the question of obtaining an approximation ratio for PIPs as a function of only \(\varDelta _1\). They observed that this is not feasible via the natural LP relaxation by describing a simple example where the integrality gap of the LP is \(\varOmega (n)\) while \(\varDelta _1\) is a constant. In fact their example essentially shows the existence of a simple approximation preserving reduction from MIS to PIPs such that the resulting instances have \(\varDelta _1 \le 2\); thus no approximation ratio that depends only on \(\varDelta _1\) is feasible for PIPs unless \(P=NP\). These negative results seem to suggest that pursuing bounds based on \(\varDelta _1\) is futile, at least in the worst case. However, the starting point of this paper is the observation that both the integrality gap example and the hardness result are based on instances where the width W of the instance is arbitrarily close to 1. We demonstrate that these examples are rather brittle and obtain several positive results when we consider \(W \ge (1+\epsilon )\) for any fixed \(\epsilon > 0\).
1.1 Our Results
Our first result is the hardness of approximation for PIPs that we already referred to; it suggests that one should consider instances with \(W > 1\). Recall that after normalization we have \(\varDelta _1 \ge 1\) and \(W \ge 1\) and the maximum entry of A is 1. We consider three regimes of W and obtain the following results, all via the natural LP relaxation, which also establish corresponding upper bounds on the integrality gap.
- (i) \(1 < W \le 2\). For \(W = 1+\epsilon \) where \(\epsilon \in (0,1]\) we obtain an \(\varOmega (\frac{\epsilon ^2}{\varDelta _1})\)-approximation.
- (ii) \(W \ge 2\). We obtain an \(\varOmega ( (\frac{1}{1+ \frac{\varDelta _1}{W}})^{1/(W-1)})\)-approximation which can be simplified to \(\varOmega ((\frac{1}{1+ \varDelta _1})^{1/(W-1)})\) since \(W \ge 1\).
- (iii) A \((1-\epsilon )\)-approximation when \(W = \varOmega (\frac{1}{\epsilon ^2}\log (\varDelta _1/\epsilon ))\).
Our results establish approximation bounds based on \(\varDelta _1\) that are essentially the same as those based on \(\varDelta _0\) as long as the width is not too close to 1. We describe randomized algorithms which can be derandomized via standard techniques. The algorithms can be viewed as contention resolution schemes, and via known techniques [1, 6], the results yield corresponding approximations for submodular objectives; we omit these extensions in this version.
All our algorithms are based on a simple randomized rounding plus alteration framework that has been successful for both packing and covering problems. Our scheme is similar to that of Bansal et al. at a high level but we make a simple but important change in the algorithm and its analysis. This is inspired by recent work on covering integer programs [4] where \(\ell _1\)-sparsity based approximation bounds from [7] were simplified.
1.2 Other Related Work
We note that PIPs are equivalent to the multi-dimensional knapsack problem. When \(m=1\) we have the classical knapsack problem which admits a very efficient FPTAS (see [2]). There is a PTAS for any fixed m [8] but unless \(P=NP\) an FPTAS does not exist even for \(m=2\).
Approximation algorithms for PIPs in their general form were considered initially by Raghavan and Thompson [15] and refined substantially by Srinivasan [16]. Srinivasan obtained approximation ratios of the form \(\varOmega (1/n^{1/W})\) when A had entries from \(\{0,1\}\), and a ratio of the form \(\varOmega (1/n^{1/\lfloor {W}\rfloor })\) when A had entries from [0, 1]. Pritchard [13] was the first to obtain a bound for PIPs based solely on the column sparsity parameter \(\varDelta _0\). He used iterated rounding and his initial bound was improved in [14] to \(\varOmega (1/\varDelta _0^2)\). The current state of the art is due to Bansal et al. [1]. Earlier we ignored constant factors when describing the ratio; in fact [1] obtains a ratio of \((1 - o(1))\frac{e-1}{e^2\varDelta _0}\) by strengthening the basic LP relaxation.
In terms of hardness of approximation, PIPs generalize MIS and hence one cannot obtain a ratio better than \(n^{1-\epsilon }\) unless \(P=NP\) [10, 18]. Building on MIS, [3] shows that PIPs are hard to approximate within an \(n^{\varOmega (1/W)}\) factor for any constant width W. Hardness of MIS in bounded degree graphs [17] and hardness for k-set-packing [11] imply that PIPs are hard to approximate to within \(\varOmega (1/\varDelta _0^{1-\epsilon })\) and to within \(\varOmega ((\log \varDelta _0)/\varDelta _0)\) when \(\varDelta _0\) is a sufficiently large constant. These hardness results are based on \(\{0,1\}\) matrices for which \(\varDelta _0\) and \(\varDelta _1\) coincide.
There is a large literature on deterministic and randomized rounding algorithms for packing and covering integer programs and connections to several topics and applications including discrepancy theory. \(\ell _1\)-sparsity guarantees for covering integer programs were first obtained by Chen, Harris and Srinivasan [7] partly inspired by [9].
2 Hardness of Approximating PIPs as a Function of \(\varDelta _1\)
Bansal et al. [1] showed that the integrality gap of the natural LP relaxation for PIPs is \(\varOmega (n)\) even when \(\varDelta _1\) is a constant. One can use essentially the same construction to show the following theorem whose proof can be found in the appendix.
Theorem 1
There is an approximation preserving reduction from MIS to instances of PIPs with \(\varDelta _1 \le 2\).
Unless \(P=NP\), MIS does not admit an \(n^{1-\epsilon }\)-approximation for any fixed \(\epsilon > 0\) [10, 18]. Hence the preceding theorem implies that unless \(P=NP\) one cannot obtain an approximation ratio for PIPs solely as a function of \(\varDelta _1\).
3 Round and Alter Framework
The algorithms in this paper have the same high-level structure. The algorithms first scale down the fractional solution x by some factor \(\alpha \), and then randomly round each coordinate independently. The rounded solution \(x'\) may not be feasible for the constraints. The algorithm alters \(x'\) to a feasible \(x''\) by considering each constraint separately in an arbitrary order; if \(x'\) is not feasible for constraint i, some subset S of the variables is chosen to be set to 0. Each constraint corresponds to a knapsack problem and the framework (which is adapted from [1]) views the problem as the intersection of several knapsack constraints. A formal template is given in Fig. 1. To make the framework into a formal algorithm, one must define \(\alpha \) and how to choose S in the for loop. These parts will depend on the regime of interest.
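Figure 1 is not reproduced in this version; the template it describes can be sketched as follows (our rendering, with a `choose_reject` callback standing in for the unspecified choice of S — both names are ours):

```python
import random

def round_and_alter(x_frac, A, b, alpha, choose_reject):
    """Round-and-alter template: scale the fractional solution by alpha,
    round each coordinate independently, then let each constraint i reject
    the subset of rounded-up indices returned by choose_reject(i, chosen).
    Instantiations differ only in alpha and the rejection rule."""
    n = len(x_frac)
    # x1 plays the role of x': independent rounding of the scaled solution.
    x1 = [1 if random.random() < alpha * x_frac[j] else 0 for j in range(n)]
    x2 = list(x1)  # x2 plays the role of x''
    for i in range(len(A)):
        if sum(A[i][j] * x1[j] for j in range(n)) > b[i]:
            chosen = [j for j in range(n) if x1[j] == 1]
            for j in choose_reject(i, chosen):
                x2[j] = 0
    return x2
```

As a degenerate usage example, with \(\alpha = 1\), an all-ones fractional solution, and a rule that keeps only the first chosen index, every constraint of \(Ax \le \mathbf {1}\) with \(A\) all-ones ends up satisfied.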
For an algorithm that follows the round-and-alter framework, the expected value of the output is \(\mathbb {E}\left[ \langle c, x''\rangle \right] = \sum _{j =1}^n c_j\cdot \Pr [x_j'' = 1]\). Independent of how \(\alpha \) is defined or how S is chosen, \(\Pr [x_j'' = 1] = \Pr [x_j'' = 1 | x_j' = 1]\cdot \Pr [x_j' = 1]\) since \(x_j'' \le x_j'\). Then, as \(\Pr [x_j' = 1] = \alpha x_j\), we have

$$\mathbb {E}\left[ \langle c, x''\rangle \right] = \sum _{j=1}^n c_j\, \alpha x_j \cdot \Pr [x_j'' = 1 \mid x_j' = 1].$$
Let \(E_{ij}\) be the event that \(x_j''\) is set to 0 when ensuring constraint i is satisfied in the for loop. As \(x_j''\) is only set to 0 if at least one constraint sets \(x_j''\) to 0, the union bound gives

$$\Pr [x_j'' = 0 \mid x_j' = 1] \le \sum _{i=1}^m \Pr [E_{ij} \mid x_j' = 1].$$
Combining these two observations, we have the following lemma, which applies to all of our subsequent algorithms.
Lemma 1
Let \(\mathcal {A}\) be a randomized rounding algorithm that follows the round-and-alter framework given in Fig. 1. Let \(x'\) be the rounded solution obtained with scaling factor \(\alpha \). Let \(E_{ij}\) be the event that \(x_j''\) is set to 0 by constraint i. If for all \(j \in [n]\) we have \(\sum _{i =1}^m \Pr [E_{ij} | x_j' = 1] \le \gamma ,\) then \(\mathcal {A}\) is an \(\alpha (1 - \gamma )\)-approximation for PIPs.
We will refer to the quantity \(\Pr [E_{ij}|x_j' = 1]\) as the rejection probability of item j in constraint i. We will also say that constraint i rejects item j if \(x_j''\) is set to 0 in constraint i.
4 The Large Width Regime: \(W \ge 2\)
In this section, we consider PIPs with width \(W \ge 2\). Recall that we assume \(A \in [0,1]^{m \times n}\) and \(b_i = W\) for all \(i \in [m]\). Therefore we have \(A_{i,j} \le W/2\) for all i, j and from a knapsack point of view all items are “small”. We apply the round-and-alter framework in a simple fashion where in each constraint i the coordinates are sorted by the coefficients in that row and the algorithm chooses the largest prefix of coordinates that fit in the capacity W and the rest are discarded. We emphasize that this sorting step is crucial for the analysis and differs from the scheme in [1]. Figure 2 describes the formal algorithm.
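Figure 2 is not reproduced in this version; its alteration step for a single constraint can be sketched as follows (the function name and list representation are our own):

```python
def prefix_pack(row, W, chosen):
    """Alteration for one constraint in the large-width regime: sort the
    rounded-up indices by increasing coefficient and keep the largest
    prefix that fits within capacity W; the whole remaining suffix is
    rejected as soon as one index fails to fit."""
    kept, load = [], 0.0
    order = sorted(chosen, key=lambda j: row[j])
    for pos, j in enumerate(order):
        if load + row[j] <= W:
            kept.append(j)
            load += row[j]
        else:
            return kept, order[pos:]  # reject everything from here on
    return kept, []
```

For example, with coefficients `[0.5, 0.9, 0.2]`, capacity 1, and all three indices chosen, the sorted order is 2, 0, 1; indices 2 and 0 fit (total 0.7) and index 1 is rejected.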
The Key Property for the Analysis: The analysis relies on obtaining a bound on the rejection probability of coordinate j by constraint i. Let \(X_j\) be the indicator variable for j being chosen in the first step. We show that \(\Pr [E_{ij} \mid X_j = 1] \le c A_{ij}\) for some c that depends on the scaling factor \(\alpha \). Thus coordinates with smaller coefficients are less likely to be rejected. The total rejection probability of j, \(\sum _{i=1}^m \Pr [E_{ij}\mid X_j=1]\), is proportional to the column sum of coordinate j which is at most \(\varDelta _1\).
The analysis relies on the Chernoff bound, and depending on the parameters, one needs to adjust the analysis. In order to highlight the main ideas we provide a detailed proof for the simplest case and include the proofs of some of the other cases in the appendix. The rest of the proofs can be found in the full version [5].
4.1 An \(\varOmega (1/\varDelta _1)\)-approximation Algorithm
We show that the algorithm in Fig. 2 yields an \(\varOmega (1/\varDelta _1)\)-approximation if we set the scaling factor \(\alpha _1 = \frac{1}{c_1\varDelta _1}\) where \(c_1 = 4e^{1+1/e}\).
The rejection probability is captured by the following main lemma.
Lemma 2
Let \(\alpha _1= \frac{1}{c_1\varDelta _1}\) for \(c_1 = 4e^{1+1/e}\). Let \(i \in [m]\) and \(j \in [n]\). Then the algorithm in Fig. 2 satisfies \(\Pr [E_{ij} | X_j = 1] \le \frac{A_{i,j}}{2\varDelta _1}\).
Proof
At iteration i of the alteration loop, after the set \(\{A_{i,1}, \ldots , A_{i,n}\}\) is sorted, the indices are renumbered so that \(A_{i,1} \le \cdots \le A_{i,n}\). Note that j may now be a different index \(j'\), but for simplicity of notation we will refer to \(j'\) as j. Let \(\xi _\ell = 1\) if \(x_\ell ' = 1\) and 0 otherwise. Let \(Y_{ij} = \sum _{\ell = 1}^{j-1} A_{i,\ell }\xi _\ell \).
If \(E_{ij}\) occurs, then \(Y_{ij} > W - A_{i,j}\), since otherwise \(x_j''\) would not have been set to zero by constraint i. That is,

$$\Pr [E_{ij} \mid X_j = 1] \le \Pr [Y_{ij} > W - A_{i,j} \mid X_j = 1].$$
The event \(Y_{ij} >W - A_{i,j}\) does not depend on \(x_j'\). Therefore,

$$\Pr [E_{ij} \mid X_j = 1] \le \Pr [Y_{ij} > W - A_{i,j}].$$
To upper bound \(\mathbb {E}[Y_{ij}]\), we have

$$\mathbb {E}[Y_{ij}] = \sum _{\ell = 1}^{j-1} A_{i,\ell }\, \alpha _1 x_\ell \le \alpha _1 W,$$

where the inequality uses the LP constraint \(\sum _{\ell } A_{i,\ell } x_\ell \le W\).
As \(A_{i,j} \le 1\), \(W \ge 2\), and \(\alpha _1 < 1/2\), we have \(\frac{(1-\alpha _1)W}{A_{i,j}} > 1\). Using the fact that \(A_{i,j}\) is at least as large as all entries \(A_{i,j'}\) for \(j' < j\), we satisfy the conditions to apply the Chernoff bound in Theorem 7. This implies

$$\Pr [Y_{ij} > W - A_{i,j}] \le \left( \frac{e^{1-\alpha _1}\alpha _1 W}{W - A_{i,j}}\right) ^{(W-A_{i,j})/A_{i,j}}.$$
Note that \(\frac{W}{W-A_{i,j}} \le 2\) as \(W \ge 2\). Because \(e^{1-\alpha _1}\le e\) and by the choice of \(\alpha _1\), we have

$$\Pr [Y_{ij} > W - A_{i,j}] \le \left( 2e\alpha _1\right) ^{(W-A_{i,j})/A_{i,j}} = \left( \frac{1}{2e^{1/e}\varDelta _1}\right) ^{(W-A_{i,j})/A_{i,j}}.$$
Then we prove the final inequality in two parts. First, we see that \(W \ge 2\) and \(A_{i,j} \le 1\) imply that \(\frac{W - A_{i,j}}{A_{i,j}}\ge 1\). This implies

$$\left( \frac{1}{2\varDelta _1}\right) ^{(W-A_{i,j})/A_{i,j}} \le \frac{1}{2\varDelta _1}.$$
Second, we see that

$$\left( \frac{1}{e^{1/e}}\right) ^{(W-A_{i,j})/A_{i,j}} \le \left( \frac{1}{e^{1/e}}\right) ^{1/A_{i,j}} \le A_{i,j}$$

for \(A_{i,j} \le 1\), where the first inequality holds because \(W - A_{i,j} \ge 1\) and the second inequality holds by Lemma 7. Multiplying the two parts bounds \(\Pr [E_{ij} \mid X_j = 1]\) by \(\frac{A_{i,j}}{2\varDelta _1}\). This concludes the proof.
Theorem 2
When setting \(\alpha _1= \frac{1}{c_1\varDelta _1}\) where \(c_1 = 4e^{1+1/e}\), the algorithm in Fig. 2 is a randomized \((\alpha _1/2)\)-approximation algorithm for PIPs with width \(W \ge 2\).
Proof
Fix \(j \in [n]\). By Lemma 2 and the definition of \(\varDelta _1\), we have

$$\sum _{i=1}^m \Pr [E_{ij} \mid X_j = 1] \le \frac{1}{2\varDelta _1}\sum _{i=1}^m A_{i,j} \le \frac{1}{2}.$$
By Lemma 1, which shows that upper bounding the sum of the rejection probabilities by \(\gamma \) for every item leads to an \(\alpha _1(1-\gamma )\)-approximation, taking \(\gamma = 1/2\) gives the desired \((\alpha _1/2)\)-approximation.
4.2 An \(\varOmega (\frac{1}{(1 +\varDelta _1/W)^{1/(W-1)}})\)-approximation
We improve the bound from the previous section by setting \(\alpha _1 = \frac{1}{c_2(1 +\varDelta _1/W)^{1/(W-1)}}\) where \(c_2 = 4e^{1+2/e}\). Note that the scaling factor becomes larger as W increases. The proof of the following lemma can be found in the appendix.
Lemma 3
Let \(\alpha _1= \frac{1}{c_2(1 +\varDelta _1/W)^{1/(W-1)}}\) for \(c_2 = 4e^{1 + 2/e}\). Let \(i \in [m]\) and \(j \in [n]\). Then the algorithm in Fig. 2 satisfies \(\Pr [E_{ij} | X_j = 1] \le \frac{A_{i,j}}{2\varDelta _1}\).
If we replace Lemma 2 with Lemma 3 in the proof of Theorem 2, we obtain the following stronger guarantee.
Theorem 3
When setting \(\alpha _1= \frac{1}{c_2(1 +\varDelta _1/W)^{1/(W-1)}}\) where \(c_2 = 4e^{1 + 2/e}\), the algorithm in Fig. 2 is a randomized \((\alpha _1 /2)\)-approximation for PIPs with width \(W \ge 2\).
4.3 A \((1-O(\epsilon ))\)-approximation When \(W \ge \varOmega (\frac{1}{\epsilon ^2}\ln (\frac{\varDelta _1}{\epsilon }))\)
In this section, we give a randomized \((1-O(\epsilon ))\)-approximation for the case when \(W \ge \varOmega (\frac{1}{\epsilon ^2}\ln (\frac{\varDelta _1}{\epsilon }))\). We use the algorithm in Fig. 2 with the scaling factor \(\alpha _1 = 1 - \epsilon \).
Lemma 4
Let \(0< \epsilon < \frac{1}{e}\), \(\alpha _1 = 1 - \epsilon \), and \(W = \frac{2}{\epsilon ^2}\ln (\frac{\varDelta _1}{\epsilon }) + 1\). Let \(i \in [m]\) and \(j \in [n]\). Then the algorithm in Fig. 2 satisfies \(\Pr [E_{ij} | X_j = 1] \le e\cdot \frac{\epsilon A_{i,j}}{\varDelta _1}\).
Lemma 4 implies that we can upper bound the sum of the rejection probabilities for any item j by \(e\epsilon \), leading to the following theorem.
Theorem 4
Let \(0< \epsilon < \frac{1}{e}\) and \(W = \frac{2}{\epsilon ^2}\ln (\frac{\varDelta _1}{\epsilon }) + 1\). When setting \(\alpha _1 = 1 - \epsilon \) and \(c = e+1\), the algorithm in Fig. 2 is a randomized \((1 - c\epsilon )\)-approximation algorithm.
5 The Small Width Regime: \(W = (1+\epsilon )\)
We now consider the regime when the width is small. Let \(W = 1+ \epsilon \) for some \(\epsilon \in (0,1]\). We cannot apply the simple sorting-based scheme that we used for the large width regime. We borrow the idea from [1] of splitting the coordinates into big and small in each constraint; here the definition is more refined and depends on \(\epsilon \). Moreover, the small coordinates and the big coordinates have their own reserved capacity in the constraint. This is crucial for the analysis. We provide more formal details below.
We set \(\alpha _2\) to be \(\frac{\epsilon ^2}{c_3\varDelta _1}\) where \(c_3 = 8e^{1+2/e}\). The alteration step differentiates between “small” and “big” coordinates as follows. For each \(i \in [m]\), let \(S_i = \{j : A_{i,j} \le \epsilon /2\}\) and \(B_i = \{j : A_{i,j} > \epsilon /2\}\). We say that an index j is small for constraint i if \(j \in S_i\). Otherwise we say it is big for constraint i when \(j \in B_i\). For each constraint, the algorithm is allowed to pack a total of \(1+\epsilon \) into that constraint. The algorithm separately packs small indices and big indices. In an \(\epsilon \) amount of space, small indices that were chosen in the rounding step are sorted in increasing order of size and greedily packed until the constraint is no longer satisfied. The big indices are packed by arbitrarily choosing one and packing it into the remaining space of 1. The rest of the indices are removed to ensure feasibility. Figure 3 gives pseudocode for the randomized algorithm which yields an \(\varOmega (\epsilon ^2 / \varDelta _1)\)-approximation.
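A sketch of the Fig. 3 alteration for a single constraint (our rendering; the prefix rule for small items is one reading of "greedily packed", and the names are illustrative):

```python
def small_big_pack(row, eps, chosen):
    """Alteration for one constraint when W = 1 + eps: small items
    (coefficient <= eps/2) are sorted increasingly and packed into the
    reserved capacity eps; one arbitrary big item (coefficient > eps/2,
    hence at most 1) is kept in the remaining capacity 1; every other
    chosen index is rejected."""
    small = sorted((j for j in chosen if row[j] <= eps / 2), key=lambda j: row[j])
    big = [j for j in chosen if row[j] > eps / 2]
    kept, load = [], 0.0
    for j in small:
        if load + row[j] > eps:
            break  # this small item and all later ones are rejected
        kept.append(j)
        load += row[j]
    if big:
        kept.append(big[0])  # one big item fits in the reserved capacity 1
    rejected = [j for j in chosen if j not in kept]
    return kept, rejected
```

The total kept load is at most \(\epsilon + 1 = W\), so the constraint is satisfied.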
It remains to bound the rejection probabilities. Recall that for \(j \in [n]\), \(X_j\) is the indicator random variable for j being chosen in the rounding step and \(E_{ij}\) is the event that j is rejected by constraint i.
We first consider the case when index j is big for constraint i. Note that it is possible that there may not exist any big indices for a given constraint. The same holds true for small indices.
Lemma 5
Let \(\epsilon \in (0,1]\) and \(\alpha _2 = \frac{\epsilon ^2}{c_3\varDelta _1}\) where \(c_3 = 8e^{1+2/e}\). Let \(i \in [m]\) and \(j \in B_i\). Then the algorithm in Fig. 3 satisfies \(\Pr [E_{ij} | X_j = 1] \le \frac{A_{i,j}}{2\varDelta _1}\).
Proof
Let \(\mathcal {E}\) be the event that there exists \(j' \in B_i\) such that \(j' \ne j\) and \(X_{j'} = 1\). Observe that if \(E_{ij}\) occurs and \(X_j = 1\), then it must be the case that at least one other element of \(B_i\) was chosen in the rounding step. Thus,

$$\Pr [E_{ij} \mid X_j = 1] \le \Pr [\mathcal {E}] \le \sum _{\ell \in B_i, \ell \ne j} \alpha _2 x_\ell \le \alpha _2 \sum _{\ell \in B_i} x_\ell ,$$

where the second inequality follows by the union bound. Observe that for all \(\ell \in B_i\), we have \(A_{i,\ell } > \epsilon /2\). By the LP constraints, we have \(1 + \epsilon \ge \sum _{\ell \in B_i} A_{i,\ell } x_{\ell } > \frac{\epsilon }{2}\cdot \sum _{\ell \in B_i} x_{\ell }\). Thus, \(\sum _{\ell \in B_i} x_\ell \le \frac{1+\epsilon }{\epsilon /2} = 2/\epsilon + 2\).
Using this upper bound for \(\sum _{\ell \in B_i} x_{\ell }\), we have

$$\Pr [E_{ij} \mid X_j = 1] \le \alpha _2\left( \frac{2}{\epsilon } + 2\right) = \frac{\epsilon ^2}{c_3\varDelta _1}\left( \frac{2}{\epsilon } + 2\right) \le \frac{4\epsilon }{c_3\varDelta _1} \le \frac{A_{i,j}}{2\varDelta _1},$$

where the second inequality utilizes the fact that \(\epsilon \le 1\) and the third inequality holds because \(c_3 \ge 16\) and \(A_{i,j} > \epsilon /2\).
Next we consider the case when index j is small for constraint i. The analysis here is similar to that in the preceding section with width at least 2 and thus the proof is deferred to the full version [5].
Lemma 6
Let \(\epsilon \in (0,1]\) and \(\alpha _2 = \frac{\epsilon ^2}{c_3\varDelta _1}\) where \(c_3 = 8e^{1+2/e}\). Let \(i \in [m]\) and \(j \in S_i\). Then the algorithm in Fig. 3 satisfies \(\Pr [E_{ij} | X_j = 1] \le \frac{A_{i,j}}{2\varDelta _1}\).
Theorem 5
Let \(\epsilon \in (0,1]\). When setting \(\alpha _2 = \frac{\epsilon ^2}{c_3\varDelta _1}\) for \(c_3 = 8e^{1+2/e}\), the algorithm in Fig. 3 is a randomized \((\alpha _2/2)\)-approximation algorithm for PIPs with width \(W = 1 + \epsilon \).
Proof
Fix \(j \in [n]\). Then by Lemmas 5 and 6 and the definition of \(\varDelta _1\), we have

$$\sum _{i=1}^m \Pr [E_{ij} \mid X_j = 1] \le \frac{1}{2\varDelta _1}\sum _{i=1}^m A_{i,j} \le \frac{1}{2}.$$
Recall that Lemma 1 gives an \(\alpha _2(1-\gamma )\)-approximation where \(\gamma \) is an upper bound on the sum of the rejection probabilities for any item. Taking \(\gamma = 1/2\) concludes the proof.
Notes
- 1.
We can allow the variables to have general integer upper bounds instead of restricting them to be boolean. As observed in [1], one can reduce this more general case to the \(\{0,1\}\) case without too much loss in the approximation.
References
Bansal, N., Korula, N., Nagarajan, V., Srinivasan, A.: Solving packing integer programs via randomized rounding with alterations. Theory Comput. 8(24), 533–565 (2012). https://doi.org/10.4086/toc.2012.v008a024
Chan, T.M.: Approximation schemes for 0-1 knapsack. In: 1st Symposium on Simplicity in Algorithms (2018)
Chekuri, C., Khanna, S.: On multidimensional packing problems. SIAM J. Comput. 33(4), 837–851 (2004)
Chekuri, C., Quanrud, K.: On approximating (sparse) covering integer programs. In: Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1596–1615. SIAM (2019)
Chekuri, C., Quanrud, K., Torres, M.R.: \(\ell _1\)-sparsity approximation bounds for packing integer programs (2019). arXiv preprint: arXiv:1902.08698
Chekuri, C., Vondrák, J., Zenklusen, R.: Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM J. Comput. 43(6), 1831–1879 (2014)
Chen, A., Harris, D.G., Srinivasan, A.: Partial resampling to approximate covering integer programs. In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1984–2003. Society for Industrial and Applied Mathematics (2016)
Frieze, A., Clarke, M.: Approximation algorithms for the m-dimensional 0-1 knapsack problem: worst-case and probabilistic analyses. Eur. J. Oper. Res. 15(1), 100–109 (1984)
Harvey, N.J.: A note on the discrepancy of matrices with bounded row and column sums. Discrete Math. 338(4), 517–521 (2015)
Håstad, J.: Clique is hard to approximate within \(n^{1- \epsilon }\). Acta Math. 182(1), 105–142 (1999)
Hazan, E., Safra, S., Schwartz, O.: On the complexity of approximating k-set packing. Comput. Complex. 15(1), 20–39 (2006)
Mitzenmacher, M., Upfal, E.: Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, Cambridge (2005)
Pritchard, D.: Approximability of sparse integer programs. In: Fiat, A., Sanders, P. (eds.) ESA 2009. LNCS, vol. 5757, pp. 83–94. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04128-0_8
Pritchard, D., Chakrabarty, D.: Approximability of sparse integer programs. Algorithmica 61(1), 75–93 (2011)
Raghavan, P., Thompson, C.D.: Randomized rounding: a technique for provably good algorithms and algorithmic proofs. Combinatorica 7(4), 365–374 (1987)
Srinivasan, A.: Improved approximation guarantees for packing and covering integer programs. SIAM J. Comput. 29(2), 648–670 (1999)
Trevisan, L.: Non-approximability results for optimization problems on bounded degree instances. In: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pp. 453–461. ACM (2001)
Zuckerman, D.: Linear degree extractors and the inapproximability of max clique and chromatic number. In: Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, pp. 681–690. ACM (2006)
Appendices
Appendix
A Chernoff Bounds and Useful Inequalities
The following standard Chernoff bound is used to obtain a more convenient Chernoff bound in Theorem 7. The proof of Theorem 7 follows directly from choosing \(\delta \) such that \((1 + \delta )\mu = W - \beta \) and applying Theorem 6.
Theorem 6
([12]). Let \(X_1,\ldots , X_n\) be independent random variables where \(X_i\) is defined on \(\{0, \beta _i\}\), where \(0 < \beta _i \le \beta \le 1\) for some \(\beta \). Let \(X = \sum _i X_i\) and denote \(\mathbb {E}[X]\) as \(\mu \). Then for any \(\delta > 0\),

$$\Pr [X \ge (1+\delta )\mu ] \le \left( \frac{e^{\delta }}{(1+\delta )^{1+\delta }}\right) ^{\mu /\beta }.$$
Theorem 7
Let \(X_1,\ldots , X_n \in [0,\beta ]\) be independent random variables for some \(0 < \beta \le 1\). Suppose \(\mu = \mathbb {E}[\sum _i X_i] \le \alpha W\) for some \(0< \alpha < 1\) and \(W \ge 1\) where \((1 - \alpha )W > \beta \). Then

$$\Pr \Big [\sum _i X_i > W - \beta \Big ] \le \left( \frac{e^{1-\alpha }\alpha W}{W - \beta }\right) ^{(W-\beta )/\beta }.$$
Lemma 7
Let \(x \in (0,1]\). Then \((1/e^{1/e})^{1/x} \le x\).
Lemma 8
Let \(y \ge 2\) and \(x \in (0, 1]\). Then \(x/y \ge (1/e^{2/e})^{y/2x}\).
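Both inequalities can be checked numerically on a grid; a small sanity-check sketch (ours):

```python
import math

def check_lemmas(steps=1000):
    """Numerically verify Lemma 7, (1/e^(1/e))^(1/x) <= x on (0, 1], and
    Lemma 8, x/y >= (1/e^(2/e))^(y/(2x)) for y >= 2, on a grid.  Both are
    tight (Lemma 7 at x = 1/e, Lemma 8 at y/x = e), so a tiny tolerance
    absorbs floating-point error near the equality points."""
    base7 = 1 / math.e ** (1 / math.e)
    base8 = 1 / math.e ** (2 / math.e)
    for k in range(1, steps + 1):
        x = k / steps
        if base7 ** (1 / x) > x + 1e-9:
            return False
        for y in (2.0, 5.0, 50.0):
            if x / y < base8 ** (y / (2 * x)) - 1e-9:
                return False
    return True
```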
B Skipped Proofs
1.1 B.1 Proof of Theorem 1
Proof
Let \(G = (V,E)\) be an undirected graph without self-loops and let \(n = \left| V\right| \). Let \(A \in [0,1]^{n \times n}\) be indexed by V. For all \(v \in V\), let \(A_{v,v} = 1\). For all \(uv \in E\), let \(A_{u,v} = A_{v,u} = 1/n\). For all the remaining entries in A that have not yet been defined, set these entries to 0. Consider the following PIP:

$$\max \Big \{ \sum _{v \in V} x_v : Ax \le \mathbf {1},\ x \in \{0,1\}^V \Big \} \qquad (1)$$
Let S be the set of all feasible integral solutions of (1) and \(\mathcal {I}\) be the set of independent sets of G. Define \(g : S \rightarrow \mathcal {I}\) where \(g(x) = \{v : x_v = 1\}\). To show g is surjective, consider a set \(I \in \mathcal {I}\). Let y be the characteristic vector of I. That is, \(y_v\) is 1 if \(v \in I\) and 0 otherwise. Consider the row in A corresponding to an arbitrary vertex u where \(y_u = 1\). For all \(v \in V\) such that v is a neighbor to u, \(y_v = 0\) as I is an independent set. Thus, as the nonzero off-diagonal entries in the row of A corresponding to u are, by construction, exactly the neighbors of u, it follows that the constraint corresponding to u is satisfied in (1). As u is an arbitrary vertex, it follows that y is a feasible integral solution to (1) and as \(I = \{v : y_v = 1\}\), \(g(y) = I\).
Define \(h : S \rightarrow \mathbb {N}_0\) such that \(h(x) = \left| g(x)\right| \). It is clear that \(\max _{x \in S} h(x)\) is equal to the optimal value of (1). Let \(I_{max}\) be a maximum independent set of G. As g is surjective, there exists \(z \in S\) such that \(g(z) = I_{max}\). Thus, \(\max _{x\in S} h(x) \ge \left| I_{max}\right| \). As \(\max _{x \in S} h(x)\) is equal to the optimum value of (1), it follows that a \(\beta \)-approximation for PIPs implies a \(\beta \)-approximation for maximum independent set.
Furthermore, we note that for this PIP, \(\varDelta _1 \le 2\), thus concluding the proof.
1.2 B.2 Proof of Lemma 3
Proof
The proof proceeds similarly to the proof of Lemma 2. Since \(\alpha _1 < 1/2\), everything up to and including the application of the Chernoff bound there applies. This gives that for each \(i\in [m]\) and \(j \in [n]\),

$$\Pr [E_{ij} \mid X_j = 1] \le \left( 2e\alpha _1\right) ^{(W-A_{i,j})/A_{i,j}}.$$
By choice of \(\alpha _1\), we have

$$\Pr [E_{ij} \mid X_j = 1] \le \left( \frac{1}{2e^{2/e}(1 + \varDelta _1/W)^{1/(W-1)}}\right) ^{(W-A_{i,j})/A_{i,j}}.$$
We prove the final inequality in two parts. First, note that \(\frac{W - A_{i,j}}{A_{i,j}} \ge W - 1\) since \(A_{i,j} \le 1\). Thus,

$$\left( \frac{1}{(1 + \varDelta _1/W)^{1/(W-1)}}\right) ^{(W-A_{i,j})/A_{i,j}} \le \frac{1}{1 + \varDelta _1/W} \le \frac{W}{\varDelta _1}.$$
Second, we see that

$$\left( \frac{1}{2e^{2/e}}\right) ^{(W-A_{i,j})/A_{i,j}} \le \frac{1}{2}\left( \frac{1}{e^{2/e}}\right) ^{W/(2A_{i,j})} \le \frac{A_{i,j}}{2W}$$

for \(A_{i,j} \le 1\), where the first inequality holds because \(W \ge 2\) and the second inequality holds by Lemma 8. Multiplying the two parts bounds \(\Pr [E_{ij} \mid X_j = 1]\) by \(\frac{W}{\varDelta _1}\cdot \frac{A_{i,j}}{2W} = \frac{A_{i,j}}{2\varDelta _1}\), as desired.
© 2019 Springer Nature Switzerland AG
Chekuri, C., Quanrud, K., Torres, M.R. (2019). \(\ell _1\)-sparsity Approximation Bounds for Packing Integer Programs. In: Lodi, A., Nagarajan, V. (eds) Integer Programming and Combinatorial Optimization. IPCO 2019. Lecture Notes in Computer Science(), vol 11480. Springer, Cham. https://doi.org/10.1007/978-3-030-17953-3_10