1 Introduction

Consider the following task scheduling problem: an event center receives requests/tasks from its clients. Each task j specifies a start and end time (denoted \((a_j, b_j)\)), and the amount \(x_j\) of some shared resource (e.g., staff support) that this task requires throughout its duration. The goal is to accept some target number t of tasks so that the maximum resource utilization over time is as small as possible. Concretely, we want to choose a set S of tasks with \(|S|=t\) to minimize

$$\begin{aligned} \max _{\text {times } \tau } \underbrace{\sum _{j \in S: \tau \in [a_j, b_j]} x_j}_{\text {usage at time }\tau }. \end{aligned}$$

This can be modeled as an interval packing problem: if the sizes are identical, the natural LP is totally unimodular and we get an optimal algorithm. For general sizes, there is a constant-factor approximation algorithm [3].
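To make the deterministic objective concrete, here is a small sketch (hypothetical helper names, not from [3]) that computes the peak usage of a chosen task set and brute-forces the best size-t subset of a toy instance. It uses the fact that the peak is always attained at some task's start time.

```python
from itertools import combinations

def max_usage(tasks):
    """Peak resource usage of a set of interval tasks.

    Each task is (a, b, x): start time, end time, and demand x held
    throughout [a, b].  The peak is attained at some start time a_j,
    so it suffices to check those points.
    """
    return max(sum(x for a, b, x in tasks if a <= t <= b)
               for t, _, _ in tasks) if tasks else 0

def best_subset(tasks, t):
    """Brute-force the optimal size-t subset (only viable for tiny n)."""
    return min(combinations(tasks, t), key=max_usage)

tasks = [(0, 2, 3), (1, 4, 2), (3, 5, 2), (0, 5, 1)]
S = best_subset(tasks, 3)
print(max_usage(list(S)))  # -> 4
```

Brute force is of course exponential in n; the point of the LP-based algorithms discussed here is to avoid it.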

However, in many settings we may not know the resource consumption precisely up-front, at the time we need to make a decision; instead, we may only be given estimates. What if the requirement \(X_j\) is a random variable whose distribution is given to us? Again we want to choose S of size t, but this time we want to minimize the expected maximum usage:

$$\begin{aligned} \mathbb {E}\bigg [ \max _{\text {times } \tau } \sum _{j \in S: \tau \in [a_j, b_j]} X_j \bigg ]. \end{aligned}$$

Note that our decision to pick task j affects all times in \([a_j, b_j]\), and hence the loads at various times are no longer independent: how can we effectively reason about such a problem?
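One concrete (if naive) way to reason about the objective is by simulation. The sketch below (illustrative function names and toy instance, not from the paper) estimates the expected maximum usage of a fixed set of tasks by Monte Carlo sampling:

```python
import random

def expected_max_usage(tasks, samplers, trials=20000, seed=0):
    """Monte Carlo estimate of E[max_tau sum_{j: tau in [a_j,b_j]} X_j].

    tasks    -- list of (a_j, b_j) intervals
    samplers -- samplers[j](rng) draws one realization of X_j
    The inner maximum is again attained at some start time a_j.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [draw(rng) for draw in samplers]
        total += max(sum(x[j] for j, (a, b) in enumerate(tasks) if a <= t <= b)
                     for t, _ in tasks)
    return total / trials

# Two overlapping intervals with independent Bernoulli(1/2) sizes: the
# makespan is X_1 + X_2, so the true expectation is exactly 1.
tasks = [(0, 2), (1, 3)]
samplers = [lambda rng: rng.random() < 0.5] * 2
print(expected_max_usage(tasks, samplers))
```

Such sampling only evaluates a fixed set S; the algorithmic difficulty lies in choosing S.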

In this paper we consider general resource allocation problems of the following form. There are several tasks and resources, where each task j has some size \(X_j\) and uses some subset \(U_j\) of resources. That is, if task j is selected then it induces a load of \(X_j\) on every resource in \(U_j\). Given a target t, we want to select a subset S of t tasks to minimize the expected maximum load over all resources. For the non-stochastic versions of these problems (when \(X_j\) is a single value and not a random variable), we can use the natural linear programming (LP) relaxation and randomized rounding to get an \(O(\frac{\log m}{\log \log m})\)-approximation algorithm; here m is the number of resources. However, much better results are known when the task-resource incidence matrix has some geometric structure. One such example appeared above: when the resources have some linear structure, and the tasks are intervals. Other examples include selecting rectangles in a plane (where tasks are rectangles and resources are points in the plane), and selecting paths in a tree (tasks are paths and resources are edges/vertices in the tree). This class of problems has received a lot of attention and has strong approximation guarantees, see e.g. [1,2,3,4,5,6,7,8].

However, the stochastic counterparts of these resource allocation problems remain wide open. Can we achieve good approximation algorithms when the task sizes \(X_j\) are random variables? We refer to this class of problems as stochastic makespan minimization (GenMakespan). In the rest of this work, we assume that the distributions of all the random variables are known, and that the r.v.s \(X_j\) are independent.

1.1 Results and Techniques

We show that good approximation algorithms are indeed possible for GenMakespan problems that have certain geometric structure. We consider the following two assumptions:

  • Deterministic problem assumption: There is a linear-programming-based \(\alpha \)-approximation algorithm for a suitable deterministic variant of GenMakespan.

  • Well-covered assumption: for any subset \(D\subseteq [m]\) of resources and the set L(D) of tasks incident to D, the tasks in L(D) incident to any resource \(i\in [m]\) are “covered” by at most \(\lambda \) resources in D.

These assumptions are formalized in Sect. 2. To give some intuition for these assumptions, consider intervals on the line. The first assumption holds by the results of [3]. The second assumption holds because each resource is some time \(\tau \), and the tasks using time \(\tau \) can be covered by two resources in D, namely times \(\tau _1, \tau _2 \in D\) such that \(\tau _1 \le \tau \le \tau _2\).

Theorem 1 (Main (Informal))

There is an \(O(\alpha \lambda \log \log m)\)-approximation algorithm for stochastic makespan minimization (GenMakespan), with \(\alpha \) and \(\lambda \) as in the above assumptions.

We also show that both \(\alpha \) and \(\lambda \) are constant in a number of geometric settings: for intervals on a line, for paths in a tree, for rectangles in a plane and for β€œfat” objects (such as disks) in a plane. Therefore, we obtain an \(O(\log \log m)\)-approximation algorithm in all these cases.

A first naive approach for GenMakespan is (i) to write an LP relaxation with expected sizes \(\mathbb {E}[X_j]\) as deterministic sizes and then (ii) to use any LP-based \(\alpha \)-approximation algorithm for the deterministic problem. However, this approach only yields an \(O(\alpha \frac{\log m}{\log \log m})\) approximation ratio, due to the use of union bounds in calculating the expected maximum. Our idea is to use the structure of the problem to improve the approximation ratio.

Our approach is as follows. First, we use the (scaled) logarithmic moment generating function (log-mgf) of the random variables \(X_j\) to define deterministic surrogates to the random sizes. Second, we formulate a strong LP relaxation with an exponential number of “volume” constraints that use the log-mgf values. These two ideas were used earlier for stochastic makespan minimization in settings where each task loads a single resource [10, 14]; in the example above, this corresponds to each task using only a single time instant. However, we need a more sophisticated LP for GenMakespan to be able to handle the combinatorial structure when tasks use many resources. Despite the large number of constraints, this LP can be solved approximately in polynomial time, using the ellipsoid method with a maximum-coverage algorithm as the separation oracle. Third (and most importantly), we provide an iterative-rounding algorithm that partitions the tasks/resources into \(O(\log \log m)\) many nearly-disjoint instances of the deterministic problem. The analysis of our rounding algorithm relies on both the assumptions above, as well as on the volume constraints in our LP and on properties of the log-mgf.

We also show some limitations of our approach. For GenMakespan involving intervals in a line (which is our simplest application), we prove that the integrality gap of our LP is \(\varOmega (\log ^* m)\). This rules out a constant-factor approximation via this LP. For GenMakespan on more general set-systems (without any structure), we prove that the integrality gap can be \(\varOmega (\frac{\log m}{(\log \log m)^2})\) even if all deterministic instances solved in our algorithm have an \(\alpha =O(1)\) integrality gap. This suggests that we do need to exploit additional structure (such as the well-covered assumption above) in order to obtain significantly better approximation ratios via our LP.

1.2 Related Work

The deterministic counterparts of the problems studied here are well-understood. In particular, there are LP-based O(1)-approximation algorithms for intervals in a line [3], paths in a tree (with edge loads) [8] and rectangles in a plane (under some restrictions) [1].

Our techniques draw on prior work on stochastic makespan minimization for identical [14] and unrelated [10] resources, but there are also important new ideas. In particular, the use of log-mgf values as the deterministic proxy for random variables comes from [14], and the use of log-mgf values at multiple scales comes from [10]. The “volume” constraints in our LP also have some similarity to those in [10]; however, a key difference here is that the random variables loading different resources are correlated (whereas they were independent in [10]). Indeed, this is why our LP can only be solved approximately, whereas the LP relaxation in [10] was optimally solvable. We emphasize that our main contribution is the rounding algorithm, which uses a new set of ideas; these lead to the \(O(\log \log m)\) approximation bound, whereas the rounding in [10] obtained a constant-factor approximation. Note that we also prove a super-constant integrality gap in our setting, even for the case of intervals in a line.

The stochastic load balancing problem on unrelated resources has also been studied for general \(\ell _p\)-norms (note that the makespan corresponds to the \(\ell _\infty \)-norm) and a constant-factor approximation is known [15]. We do not consider \(\ell _p\)-norms in this paper.

2 Problem Definition and Preliminaries

We are given n tasks and m resources. Each task \(j\in [n]\) uses some subset \(U_j\subseteq [m]\) of resources. For each resource \(i\in [m]\), define \(L_i\subseteq [n]\) to be the tasks that utilize i. Each task \(j\in [n]\) has a random size \(X_j\). If a task j is selected into our set S, it adds a load of \(X_j\) to each resource in \(U_j\): the load on resource \(i\in [m]\) is \(Z_i:=\sum _{j\in S\cap L_i}X_j\). The makespan is the maximum load, i.e. \(\max _{i=1}^m Z_i\). The goal is to select a subset \(S\subseteq [n]\) with t tasks to minimize the expected makespan:

$$\begin{aligned} \min _{S\subseteq [n]: |S|=t}\quad \mathbb {E}\bigg [ \max _{i=1}^m \, \sum _{j\in S\cap L_i}X_j \bigg ]. \end{aligned}$$
(1)

The distribution of each r.v. \(X_j\) is known (we use this knowledge only to compute some “effective” sizes below), and the r.v.s are mutually independent.

For any subset \(K\subseteq [m]\) of resources, let \(L(K) :=\cup _{i\in K} L_i\) be the set of tasks that utilize at least one resource in K.

2.1 Structure of Set Systems: The Two Assumptions

Our results hold when the following two properties are satisfied by the set system \(([n], {\mathcal{L}})\), where \({\mathcal{L}}\) is the collection of sets \(L_i\) for each \(i \in [m]\). Note that the set system has n elements (corresponding to tasks) and m sets (corresponding to resources).

A1:

(\(\alpha \)-packable): A set system \(([n], \mathcal{L})\) is said to be \(\alpha \)-packable if for any assignment of size \(s_j \ge 0\) and reward \(r_j \ge 0\) to each element \(j \in [n]\), and any threshold parameter \(\theta \ge \max _j s_j\), there is a polynomial-time algorithm that rounds a fractional solution y to the following LP relaxation into an integral solution \(\widehat{y}\), losing a factor of at most \(\alpha \ge 1\). (I.e., \(\sum _j r_j \widehat{y}_j \ge \frac{1}{\alpha }\sum _j r_j y_j\).)

$$\begin{aligned} \max \bigg \{ \sum _{j \in [n]} r_j\cdot y_j \,:\, \sum _{j\in L} s_j\cdot y_j \le \theta , \, \, \forall L \in \mathcal{L}, \text{ and } 0\le y_j\le 1, \, \, \forall j \in [n] \bigg \}. \end{aligned}$$
(2)

We also assume that the support of \(\widehat{y}\) is contained in the support of y.

A2:

(\(\lambda \)-safe): Let [m] be the indices of the sets in \(\mathcal{L}\); recall that these are the resources. The set system \(([n], \mathcal{L})\) is \(\lambda \)-safe if for every subset \(D \subseteq [m]\) of (“dangerous”) resources, there exists a subset \(M \supseteq D\) of (“safe”) resources, such that (a) |M| is polynomially bounded by |D| and moreover, (b) for every \(i \in [m]\), there is a subset \(R_i \subseteq M\), \(|R_i| \le \lambda \), such that \(L_i \cap L(D) \subseteq L(R_i)\). Recall that \(L(D)=\cup _{h\in D}L_h\). We denote the set M as \(\mathtt{Extend}(D)\).

Let us give an example. Suppose P consists of m points on the real line, and consider n intervals \(I_1, \ldots , I_n\) on the line. The set system is defined on n elements (one for each interval), with m sets with the set \(L_i\) for point \(i \in P\) containing the indices of intervals that contain i. The \(\lambda \)-safe condition says that for any subset D of points in P, we can find a superset M which is not much larger such that for any point \(i\in P\), there are \(\lambda \) points in M containing all the intervals that pass through both i and D. In other words, if these intervals contribute any load to i and D, they also contribute to one of these \(\lambda \) points. And indeed, choosing \(M = D\) ensures that \(\lambda =2\): for any i we choose the nearest points in M on either side of i.
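The \(\lambda =2\) claim for intervals is easy to sanity-check in code. The sketch below (hypothetical helper names, not from the paper) verifies \(L_i \cap L(D) \subseteq L(R_i)\) with \(R_i\) taken as the nearest points of D on either side of i:

```python
def safe_pair(i, D):
    """Nearest points of D on either side of point i (the set R_i, |R_i| <= 2)."""
    left = [d for d in D if d <= i]
    right = [d for d in D if d >= i]
    return ([max(left)] if left else []) + ([min(right)] if right else [])

def check_lambda_safety(intervals, points, D):
    """Verify L_i ∩ L(D) ⊆ L(R_i) for every point i, with M = D."""
    def L(p):  # indices of intervals containing point p
        return {j for j, (a, b) in enumerate(intervals) if a <= p <= b}
    LD = set().union(*(L(d) for d in D)) if D else set()
    for i in points:
        Ri = safe_pair(i, D)
        LRi = set().union(*(L(r) for r in Ri)) if Ri else set()
        # any interval through both i and some d in D contains all of [d, i],
        # hence also the nearest D-point on that side of i
        assert L(i) & LD <= LRi
    return True

intervals = [(0, 3), (2, 6), (5, 9), (1, 8)]
print(check_lambda_safety(intervals, points=range(10), D=[2, 7]))  # -> True
```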

Other families that are \(\alpha \)-packable and \(\lambda \)-safe include:

  • Each element in [n] corresponds to a path in a tree, with the set \(L_i\) being the subset of paths through node i.

  • Elements in [n] correspond to rectangles or disks in a plane, and each \(L_i\) consists of rectangles/disks containing a particular point i in the plane.

For a subset \(X \subseteq [n]\), the projection of \(([n], \mathcal{L})\) to X is the smaller set system \(([n], \mathcal{L}|_X)\), where \(\mathcal{L}|_X = \{L \cap X \mid L \in \mathcal{L}\}\). The following lemma formalizes that the packability and safety properties also hold for sub-families and disjoint unions.

Lemma 1

Consider a set system \(([n], \mathcal{L})\) that is \(\alpha \)-packable and \(\lambda \)-safe. Then,

  (i) for all \(X \subseteq [n]\), the set system \((X, \mathcal{L}|_X)\) is \(\alpha \)-packable and \(\lambda \)-safe, and

  (ii) given a partition \(X_1, \ldots , X_s\) of [n], and set systems \((X_1, \mathcal{L}_1 ), \ldots , (X_s, \mathcal{L}_s )\), where \(\mathcal{L}_i = \mathcal{L}|_{X_i}\), the disjoint union of these systems is also \(\alpha \)-packable.

We consider the GenMakespan problem for settings where the set system \(([n], \{L_i\}_{i \in [m]})\) is \(\alpha \)-packable and \(\lambda \)-safe for small parameters \(\alpha \) and \(\lambda \).

Theorem 2 (Main Result)

For any instance of GenMakespan where the corresponding set system \(([n], \{L_i\}_{i \in [m]})\) is \(\alpha \)-packable and \(\lambda \)-safe, there is an \(O(\alpha \lambda \cdot \log \log m)\)-approximation algorithm.

2.2 Effective Size and Random Variables

In all the arguments that follow, imagine that we have scaled the instance so that the optimal expected makespan is between \(\frac{1}{2}\) and 1. It is useful to split each random variable \(X_j\) into two parts:

  • the truncated random variable \(X'_{j} := X_{j} \cdot \mathbf {I}_{(X_{j}\le 1)}\), and

  • the exceptional random variable \(X''_{j} := X_{j} \cdot \mathbf {I}_{(X_{j}> 1)}\).

These two kinds of random variables behave very differently with respect to the expected makespan. Indeed, the expectation is a good measure of the load due to exceptional r.v.s, whereas one needs a more nuanced notion for truncated r.v.s (as we discuss below). The following result was shown in [14]:

Lemma 2 (Exceptional Items Lower Bound)

Let \(\{X_j''\} \) be non-negative discrete random variables each taking value zero or at least L. If \(\sum _j \mathbb {E}[X_j'']\ge L\) then \(\mathbb {E}[\max _j X_j'']\ge L/2\).
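As a quick sanity check (a toy instance, not a proof), consider the special case where each exceptional r.v. independently takes value exactly L with probability \(p_j\) and 0 otherwise; then \(\mathbb {E}[\max _j X_j''] = L(1-\prod _j(1-p_j))\) can be computed exactly:

```python
from math import prod

def exact_max_expectation(L, probs):
    """E[max_j X_j''] when X_j'' = L w.p. p_j and 0 otherwise (independent)."""
    return L * (1 - prod(1 - p for p in probs))

L, probs = 10.0, [0.25, 0.25, 0.25, 0.25]
assert sum(L * p for p in probs) >= L            # hypothesis: sum of means >= L
print(exact_max_expectation(L, probs) >= L / 2)  # conclusion of the lemma -> True
```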

We now consider the trickier case of truncated random variables \(X_j'\). We want to find a deterministic quantity that is a good surrogate for each random variable, and then use this deterministic surrogate instead of the actual random variable. For stochastic load balancing, a useful surrogate is the effective size, which is based on the logarithm of the (exponential) moment generating function (also known as the cumulant generating function) [9, 10, 12, 13].

Definition 1 (Effective Size)

For any r.v. X and integer \(k \ge 2\), define

$$\begin{aligned} \beta _k(X) \,\, :=\,\, \frac{1}{\log k} \cdot \log \mathbb {E}\Big [ e^{(\log k) \cdot X}\Big ]. \end{aligned}$$
(3)

Also define \(\beta _1(X) := \mathbb {E}[X]\).
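For a Bernoulli-type r.v. taking value v with probability p (and 0 otherwise), the definition gives the closed form \(\beta _k = \log ((1-p)+p\,k^v)/\log k\). The snippet below (illustrative, not from the paper) shows how \(\beta _k\) interpolates between the mean (small k) and the maximum value (large k):

```python
from math import log

def beta_bernoulli(k, p, v=1.0):
    """Effective size beta_k of a r.v. taking value v w.p. p, else 0:
    beta_k = log((1-p) + p * k**v) / log(k), with beta_1 = p*v (the mean)."""
    if k == 1:
        return p * v
    return log((1 - p) + p * k ** v) / log(k)

# beta_k grows from the mean toward the maximum value v as k increases:
for k in (2, 16, 256, 65536):
    print(k, beta_bernoulli(k, p=0.01))
```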

To see the intuition for the effective size, consider a set of independent r.v.s \(Y_1, \ldots , Y_k\) all assigned to the same resource. The following lemma, whose proof is very reminiscent of the standard Chernoff bound (see [12]), says that the load is not much higher than the expectation.

Lemma 3 (Effective Size: Upper Bound)

For indep. r.v.s \(Y_1, \ldots , Y_n\), if \(\sum _i \beta _k(Y_i) \le b\) then \(\Pr [ \sum _i Y_i \ge c ] \le \frac{1}{k^{c-b}}\).
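As a numeric illustration of the lemma (one toy instance checked exactly, not a proof), one can compare the exact binomial tail of a sum of Bernoulli r.v.s against the bound \(k^{-(c-b)}\):

```python
from math import comb, log

def beta_k(k, p):
    """Effective size of a Bernoulli(p) r.v. (value 1 w.p. p, else 0)."""
    return log((1 - p) + p * k) / log(k)

def tail(n, p, c):
    """Pr[Binomial(n, p) >= c], computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c, n + 1))

n, p, k, c = 40, 0.05, 8, 10
b = n * beta_k(k, p)            # sum of the effective sizes
print(tail(n, p, c) <= k ** -(c - b))  # -> True
```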

The usefulness of the effective size comes from a partial converseΒ [14]:

Lemma 4 (Effective Size: Lower Bound)

Let \(X_1, X_2, \ldots , X_n\) be independent [0, 1] r.v.s, and \(\{\widetilde{L_i}\}_{i=1}^m\) be a partition of [n]. If \(\sum _{j=1}^n \beta _{m}(X_j)\ge 17m\) then

$$\mathbb {E}\bigg [\max _{i=1}^m \sum _{j\in \widetilde{L_i}} X_j\bigg ]=\varOmega (1).$$

3 The General Framework

In this section we provide our main algorithm, which is used to prove Theorem 2. The idea is to write a suitable LP relaxation for the problem (using the effective sizes as deterministic surrogates for the stochastic jobs), to solve this exponentially-sized LP, and then to round the solution. The novelty of the solution is both in the LP itself and in the rounding, which is based on a delicate decomposition of the instance into \(O(\log \log m)\) many sub-instances and on showing that, loosely speaking, the load due to each sub-instance is at most \(O(\alpha \lambda )\). By binary-searching on the value of the optimal makespan, and rescaling, we can assume that the optimal makespan is between \(\frac{1}{2}\) and 1.

The LP Relaxation. Consider an instance \(\mathcal{I}\) of GenMakespan given by a set of n tasks and m resources, with sets \(U_j\) and \(L_i\) as described in Sect. 2. We give an LP relaxation that is feasible if the optimal makespan is at most one.

Lemma 5

Consider any feasible solution to \(\mathcal{I}\) that selects a subset \(S\subseteq [n]\) of tasks. If the expected maximum load \( \mathbb {E}\left[ \max _{i = 1}^m \sum _{j \in L_i\cap S} X_{j} \right] \le 1\), then

$$\begin{aligned} \sum _{j\in S} \mathbb {E}[X''_{j}]&\le 2, \qquad \text{ and } \end{aligned}$$
(4)
$$\begin{aligned} \sum _{j\in L(K)\cap S} \, \beta _{k}(X'_{j}) \,\,&\le \,\, b\cdot k, \text{ for } \text{ all } K\subseteq [m], \quad \text{ where } k=|K|, \end{aligned}$$
(5)

where b is a sufficiently large fixed constant.

Lemma 5 allows us to write the following feasibility linear programming relaxation for GenMakespan (assuming the optimal value is 1). For every task j, we have a binary variable \(y_j\), which is meant to be 1 if j is selected in the solution. Moreover, we can drop all tasks j with \(c_j=\mathbb {E}[X''_j]>2\), as such a task can never be part of an optimal solution, by (4). So in the rest of this paper we will assume that \(\max _{j\in [n]} c_j \le 2\). Further, note that we only use effective sizes \(\beta _k\) of truncated r.v.s, so we have \(0\le \beta _k(X'_j)\le 1\) for all \(k\in [m]\) and \(j\in [n]\).

$$\begin{aligned} \sum _{j=1}^n y_{j}&\ge t&\end{aligned}$$
(6)
$$\begin{aligned} \sum _{j=1}^n \mathbb {E}[X_j''] \cdot y_{j}&\le 2&\end{aligned}$$
(7)
$$\begin{aligned} \sum _{j\in L(K)} \beta _{k}(X'_{j}) \cdot y_{j}&\le b\cdot k&\qquad&\forall K\subseteq [m] \text{ with } |K|=k, \,\, \forall k=1,2,\ldots ,m,\end{aligned}$$
(8)
$$\begin{aligned} 0\le y_{j}&\le 1&\forall j\in [n]. \end{aligned}$$
(9)

In the above LP, \(b \ge 1\) denotes the universal constant multiplying k in the right-hand side of (5). This LP can be solved approximately in polynomial time.

Theorem 3 (Solving the LP)

There is a polynomial time algorithm which given an instance \(\mathcal{I}\) of GenMakespan outputs one of the following:

  • a solution \(y\in \mathbb {R}^n\) to LP (6)–(9), except that the RHS of (8) is replaced by \(\frac{e}{e-1}bk\), or

  • a certificate that LP (6)–(9) is infeasible.

The \(\frac{e}{e-1}\) factor comes from checking feasibility via an approximation algorithm for the maximum coverage problem. In the rest of this section, we assume we have a feasible solution y to (6)–(9), since the approximate feasibility of (8) only affects the approximation ratio by a constant.
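To illustrate the separation step, here is a hedged sketch (our own data layout and names, not the paper's implementation) of a greedy weighted max-coverage oracle for the constraints (8). If the greedy coverage value already exceeds \(b\cdot k\), the constraint is certainly violated, since greedy lower-bounds the true maximum; otherwise the \((1-1/e)\) greedy guarantee certifies the relaxed constraint with right-hand side \(\frac{e}{e-1}bk\).

```python
def separation_oracle(L_sets, weights, b):
    """Greedy check of the volume constraints (8).

    L_sets[i]  -- set of tasks using resource i
    weights[k] -- dict j -> beta_k(X'_j) * y[j] for scale k
    Returns a violated (K, k) pair if the greedy coverage value exceeds
    b * k for some k (a certificate of infeasibility), else None.
    """
    m = len(L_sets)
    for k in range(1, m + 1):
        w = weights[k]
        covered, K = set(), []
        for _ in range(k):  # greedy weighted max coverage with k sets
            i = max(range(m),
                    key=lambda i: sum(w.get(j, 0.0)
                                      for j in L_sets[i] - covered))
            K.append(i)
            covered |= L_sets[i]
        if sum(w.get(j, 0.0) for j in covered) > b * k:
            return K, k
    return None

L_sets = [{0, 1}, {1, 2}, {2, 3}]
w = {k: {j: 1.0 for j in range(4)} for k in (1, 2, 3)}
print(separation_oracle(L_sets, w, b=1.0))  # -> ([0], 1)
```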

Rounding. We first give some intuition about the rounding algorithm. It involves formulating \(O(\log \log m)\) many almost-disjoint instances of the deterministic reward-maximization problem (2) used in the definition of \(\alpha \)-packability. The key aspect of each such deterministic instance is the definition of the sizes \(s_j\): for the \(\ell ^{th}\) instance we use effective sizes \(\beta _k(X_j')\) with parameter \(k=\smash {2^{2^\ell }}\). We use the \(\lambda \)-safety property to construct these deterministic instances and the \(\alpha \)-packable property to solve them. Finally, we show that the expected makespan induced by the tasks selected in each such deterministic instance is \(O(\alpha \lambda )\), which leads to an overall \(O(\alpha \lambda \log \log m)\) ratio.
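The doubling scales \(k = 2^{2^\ell }\) are what bound the number of sub-instances: squaring k repeatedly reaches m within \(\log \log m + 1\) steps. A short illustration:

```python
def scales(m):
    """Scales k = 2^(2^l) used by the rounding: repeated squaring from 2."""
    ks, k = [], 2
    while k <= m:
        ks.append(k)
        k *= k  # squaring: 2^(2^l) -> 2^(2^(l+1))
    return ks

print(scales(16))             # -> [2, 4, 16]
print(len(scales(2 ** 64)))   # log2(log2(m)) + 1 = 7 scales for m = 2^64
```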

Before delving into the details, let us formulate a generalization of the reward-maximization problem mentioned in (2), which we call the DetCost problem. An instance \(\mathcal{I}\) of the DetCost problem consists of a set system \(([n], \mathcal{S})\), with a size \(s_j\) and cost \(c_j\) for each element \(j \in [n]\). It also has parameters \(\theta \ge \max _j s_j\) and \(\psi \ge \max _j c_j\). The goal is to find a maximum-cardinality subset V of [n] such that each set in \(\mathcal{S}\) is “loaded” to at most \(\theta \), and the total cost of V is at most \(\psi \). The DetCost problem has the following LP relaxation:

$$\begin{aligned} \max \bigg \{ \sum _{j \in [n]} y_j \,:\, \sum _{j\in S} s_j\cdot y_j \le \theta , \, \, \forall S \in \mathcal{S}; \, \sum _{j \in [n]} c_j\cdot y_j\le \psi ; \, 0\le y_j \le 1 \, \forall j \in [n] \bigg \}. \end{aligned}$$
(10)

Theorem 4

If a set system satisfies the \(\alpha \)-packable property, there is an \(O(\alpha )\)-approximation algorithm for DetCost relative to the LP relaxation (10).

We now give the rounding algorithm for the GenMakespan problem. The procedure is described formally in Algorithm 1. The algorithm proceeds in \(\log \log m\) iterations of the for loop in lines 3–7, since the parameter k is squared in line 3 in each iteration. In line 5, we identify resources i that are fractionally loaded to more than 2b, where the load is measured in terms of the \(\beta _{k^2}(X_j')\) values. These resources are grouped into the set \(D_\ell \), and we define \(J_\ell \) to be the tasks that can load them. Ideally, we would like to remove these resources and tasks, and iterate on the remaining tasks and resources. However, the problem is that tasks in \(J_\ell \) also load resources other than \(D_\ell \), and so \((D_\ell , J_\ell )\) is not independent of the rest of the instance. This is where we use the \(\lambda \)-safe property: we expand \(D_\ell \) to a larger set of resources \(M_\ell \), which will be used to show that the effect of \(J_\ell \) on resources outside \(D_\ell \) is not significant.

[Algorithm 1 (figure): the rounding algorithm for GenMakespan]

Consider any iteration \(\ell \) of the for-loop. We apply the \(\lambda \)-safety property to the set-system \((J_\ell , \{L_i \cap J_\ell \}_{i \in [m]})\) and set \(D_\ell \) to get \(M_\ell := \mathtt{Extend}(D_\ell )\). We remove \(J_\ell \) from the current set J of tasks, and continue to the next iteration. We abuse notation by referring to \((J_\ell , M_\ell )\) as the following set system: each set is of the form \(L_i \cap J_\ell \) for some \(i \in M_\ell \). Having partitioned the tasks into classes \(J_1, \ldots , J_{\rho }\), we consider the disjoint union \(\mathcal{D}\) of the set systems \((J_\ell , M_\ell ),\) for \(\ell = 1, \ldots , \rho \). While the sets \(D_\ell \) are disjoint, the sets \(M_\ell \) may not be disjoint. For each resource appearing in the sets \(M_\ell \) of multiple classes, we make distinct copies in the combined set-system \(\mathcal{D}\).

Finally, we set up an instance \({\mathcal {C}} \) of DetCost (in line 11): the set system is the disjoint union of \((J_\ell , M_\ell ),\) for \(\ell = 1, \dots , \rho \). Every task \(j \in J_\ell \) has size \(\beta _{2^{2^\ell }}(X_j')\) and cost \(\mathbb {E}[X_j''].\) The parameters \(\theta \) and \(\psi \) are as mentioned in line 11. Our proofs show that the solution \({\bar{y}}\) defined in line 14 is a feasible solution to the LP relaxation (10) for \({\mathcal {C}} \). This allows us to use Theorem 4 to round \(\bar{y}\) to an integral solution \(N_L\). Finally, we output \(N_H \cup N_L\), where \(N_H\) is defined in line 12.
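The decomposition loop can be sketched as follows (a loose Python rendering under our own naming conventions and assumptions, not the paper's Algorithm 1 verbatim):

```python
def partition_tasks(y, beta, L_sets, m, b, extend):
    """Sketch of the decomposition loop of the rounding algorithm.

    y       -- fractional LP solution, y[j] in [0, 1]
    beta    -- beta(k, j) returns the effective size beta_k(X'_j)
    L_sets  -- L_sets[i] = set of tasks using resource i
    extend  -- the Extend(.) map from the lambda-safe property
    Returns the classes (J_l, M_l) and any tasks left unassigned.
    """
    J = set().union(*L_sets)     # tasks still unassigned
    classes, k = [], 2
    while k <= m:                # O(log log m) iterations: k keeps squaring
        k = k * k
        # resources fractionally overloaded at the next scale k
        D = [i for i in range(m)
             if sum(beta(k, j) * y[j] for j in L_sets[i] & J) > 2 * b]
        # tasks that can load the dangerous resources, with safe set M_l
        J_l = J & set().union(*(L_sets[i] for i in D)) if D else set()
        classes.append((J_l, extend(D)))
        J -= J_l
    return classes, J

classes, leftover = partition_tasks(
    y=[1.0, 1.0], beta=lambda k, j: 0.6, L_sets=[{0}, {0, 1}],
    m=2, b=0.1, extend=lambda D: set(D))
print(classes, leftover)
```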

The analysis is outlined in the appendix.

4 Applications

As discussed in Sect. 2, the problem of selecting intervals in a line satisfies the \(\lambda \)-safe property with \(\lambda =2\). Moreover, the \(\alpha \)-packable property holds with \(\alpha = O(1)\), which follows from [3]: indeed, the LP relaxation (2) corresponds to the unsplittable flow problem where all vertices have uniform capacity \(\theta \). So, Theorem 2 now implies the following.

Corollary 1

There is an \(O(\log \log m)\)-approximation for GenMakespan where resources are vertices on a line and tasks are intervals in this line.

The full version [11] has a number of other applications:

Corollary 2

There is an \(O(\log \log m)\)-approximation for GenMakespan when

  • resources are vertices in a tree and tasks are paths in this tree.

  • resources are all points in the plane and tasks are rectangles, where the rectangles in a solution can be shrunk by a \((1-\delta )\)-factor in either dimension; \(\delta >0\) is some constant.

  • resources are all points in the plane and tasks are disks.

5 Integrality Gap Lower Bounds

We consider two natural questions: (i) does one require any assumption on the underlying set system to obtain an O(1)-approximation for GenMakespan, and (ii) what is the integrality gap of the LP relaxation given by the constraints (6)–(9) for settings where \(\alpha \) and \(\lambda \) are constants? For the first question, we show that applying our LP-based approach to general set systems only gives an \(\varOmega \left( \frac{\log m}{(\log \log m)^2} \right) \) approximation ratio, and so we do require some conditions on the underlying set system. For the second question, we show that even for set systems given by intervals on a line (as in Sect. 4), the integrality gap of our LP relaxation is \(\varOmega (\log ^*m)\). This rules out getting a constant-factor approximation using our approach, even when \(\alpha \) and \(\lambda \) are constants.