1 Introduction

In the Unsplittable Flow Problem on Paths (UFPP) an instance consists of a path \(P = (V,E)\) with m edges and a set J of n tasks. Each edge \(e \in E\) has a capacity \(c_e\). Each task \(j \in J\) has a starting vertex \(s_j \in V\), an ending vertex \(t_j \in V\), a demand \(d_j\), and a weight \(w_j\). We denote the path from \(s_j\) to \(t_j\) by \(I_j\), and we say that \(j \in J\) uses an edge \(e \in E\) if \(e \in I_j\). Given a set S of tasks and an edge \(e \in E\), define \(S(e) = \{j \in S: e \in I_j\}\) to be the set of tasks in S that use e. A feasible UFPP solution is a set of tasks \(S \subseteq J\) such that \(\sum _{j \in S(e)} d_j \le c_e\), for every \(e \in E\). The goal in UFPP is to find a feasible solution of maximum weight.

We study a variant of UFPP called the Storage Allocation Problem (SAP). In SAP we have an additional constraint: it is also required that every task in the solution is given the same contiguous portion of the resource in every edge along its path. More formally, a feasible SAP solution is a subset \(S \subseteq J\) and a height function \(h:S \rightarrow \mathbb {R}^+\) such that

  1.

    \(h(j) + d_j \le c_e\), for every \(j \in S\) and \(e \in I_j\), and

  2.

    if \(j,i \in S\) such that \(I_j \cap I_i \ne \emptyset \) and \(h(j) \ge h(i)\), then \(h(j) \ge h(i) + d_i\).

It follows that SAP is a rectangle packing problem in which each rectangle of height \(d_j\) can be moved vertically, but not horizontally. We note that while any SAP solution induces a UFPP solution, the converse is not always true, as shown in Fig. 1.
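To make the two feasibility constraints concrete, the following minimal Python sketch (an illustration of ours, not part of the paper's algorithms) checks whether a candidate pair (S, h) is a feasible SAP solution; a task j is represented by the pair I[j] of the first and last edge on its path, its demand d[j], and its height h[j], and cap[e] is the capacity of edge e.

```python
# Illustration only: checking the two SAP feasibility constraints.
# I[j] = (first_edge, last_edge) of task j's path, d[j] its demand,
# h[j] its height, cap[e] the capacity of edge e.

def share_edge(I, i, j):
    return not (I[i][1] < I[j][0] or I[j][1] < I[i][0])

def is_feasible_sap(S, h, I, d, cap):
    # Constraint 1: h(j) + d_j <= c_e for every edge e used by j.
    for j in S:
        for e in range(I[j][0], I[j][1] + 1):
            if h[j] + d[j] > cap[e]:
                return False
    # Constraint 2: tasks sharing an edge occupy disjoint height ranges.
    for j in S:
        for i in S:
            if i != j and share_edge(I, i, j) and h[i] <= h[j] < h[i] + d[i]:
                return False
    return True
```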

Fig. 1

The dotted line represents the capacity of the edges, and the strips correspond to tasks. Thick strips have demand \(\frac{1}{2}\), while thin strips have demand \(\frac{1}{4}\). The task sets in both instances form UFPP solutions. However, in both instances there is no SAP solution that contains all tasks (the instance on the right was given in [18]). a \(c_{e_1} = c_{e_3} = 0.5\) and \(c_{e_2}=1\). b \(c_e=1\) for every e

SAP naturally arises in scenarios where tasks require contiguous static portions of a resource. An object may require a contiguous range of storage space (e.g., memory allocation) for a specific time interval (\([s_j,t_j)\) for task j). A task may require bandwidth, but will only accept a contiguous set of frequencies or wavelengths. The resource may be a banner, where each task is an advertisement that requires a contiguous portion of the banner.

Given a SAP or a UFPP instance, an edge \(e \in E\) is called a bottleneck edge of a task j, if \(c_e = \min _{f \in I_j} c_f\). Define \(b(j) \triangleq \min _{f \in I_j} c_f\), namely b(j) is the capacity of a bottleneck edge of j. Given \(\delta >0\), a task j is called \(\delta \)-small if \(d_j \le \delta b(j)\), otherwise it is called \(\delta \)-large. A SAP or a UFPP instance is called \(\delta \)-small (\(\delta \)-large) if \(d_j \le \delta b(j)\) (\(d_j > \delta b(j)\)), for every \(j \in J\) (see example in Fig. 2). In the special case of SAP with uniform capacities (SAP-U), all edges in \(I_j\) are bottleneck edges, for every task j. The same goes for UFPP with uniform capacities (UFPP-U). An instance in which the maximum demand is bounded by the minimum edge capacity, i.e., \(\max _j d_j \le \min _e c_e\), is said to satisfy the no-bottleneck assumption (NBA).
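Continuing the illustration above (ours, in the same task representation), the bottleneck capacity b(j), the \(\delta \)-small test, and the NBA check follow directly from the definitions:

```python
# Illustration only, same representation as in the previous sketch.
def bottleneck(I, j, cap):
    return min(cap[e] for e in range(I[j][0], I[j][1] + 1))   # b(j)

def is_delta_small(I, j, d, cap, delta):
    return d[j] <= delta * bottleneck(I, j, cap)               # otherwise delta-large

def satisfies_nba(d, cap):
    return max(d) <= min(cap)   # no-bottleneck assumption: max_j d_j <= min_e c_e
```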

Fig. 2

Example of \(\delta \)-small tasks. a Uniform capacities. b Non-uniform capacities

SAP-U is strongly related to the Dynamic Storage Allocation problem (DSA), where one is given a path and a set of tasks, and the goal is to find the minimum uniform capacity for all edges together with a SAP solution that contains all tasks.

1.1 Related Work

The special case of SAP-U (or UFPP-U) with unit capacities and demands is the Maximum Independent Set problem in interval graphs, which is solvable in polynomial time (see, e.g., [25]). Both SAP-U and UFPP-U are NP-hard, since they contain Knapsack as the special case in which the paths of all requests share an edge. When the number of edges in P is constant, UFPP is a special case of Multi-Dimensional Knapsack and hence admits a PTAS [21].

Bar-Noy et al. [5] designed local ratio algorithms for UFPP-U and SAP-U with ratio 3 and 7, respectively. The latter was obtained using a reduction from SAP-U to UFPP-U that was based on an algorithm for the Dynamic Storage Allocation Problem (DSA) by Gergov [24]. An extension of SAP-U in which each task j has a time window was studied in [5, 26]. Calinescu et al. [13] developed a randomized approximation algorithm for UFPP-U with expected performance ratio of \(2+\varepsilon \), for every \(\varepsilon >0\). They obtained this result by dividing the given instance into an instance with large tasks and an instance with small tasks. They use dynamic programming to compute an optimal solution for the large instance, and a randomized LP-based algorithm to obtain a \((1+\varepsilon )\)-approximate solution for the small instance. They also present a 3-approximation algorithm for UFPP-U that is different from the one given in [5].

Chen et al. [18] studied the special case of SAP-U where the capacity is K, where K is an integer, and all demands are integers in the range \(\left\{ 1,\ldots ,K \right\} \). They developed an \(O(n(nK)^K)\) time dynamic programming algorithm to solve this special case of SAP-U, and also gave an approximation algorithm with ratio \(\frac{e}{e-1} + \varepsilon \), for any \(\varepsilon >0\), assuming that \(d_j=O(1)\), for every j. Bar-Yehuda et al. [6] presented approximation algorithms for SAP-U that are based on a reduction from SAP-U to UFPP-U that works on very small instances, namely on instances in which \(d_j \le \delta \), for a constant \(\delta > 0\). (Here we assume that the uniform capacity is 1.) The reduction is based on an algorithm for DSA by Buchsbaum et al. [12]. Bar-Yehuda et al. also presented a dynamic programming algorithm for large instances of SAP-U, and this led to two approximation algorithms for SAP-U: a randomized algorithm with ratio \(2+\varepsilon \) and a deterministic algorithm with ratio \(\frac{2e-1}{e-1}+\varepsilon < 2.582\).

Bansal et al. [3] gave a deterministic quasi-polynomial time approximation scheme for instances of UFPP assuming that all capacities and demands are quasi-polynomial, thereby ruling out an APX-hardness result for such instances of UFPP, unless \(\text {NP} \subseteq \text {DTIME}(2^{\text {polylog}(n)})\). Batra et al. [9] improved the above result by removing the assumption. They also presented two PTASs: one for the case where the weight to demand ratios lie in a constant range, and another for the case where one is allowed to shorten paths by an \(\varepsilon \)-fraction.

Chakrabarti et al. [14] presented a constant factor approximation algorithm for UFPP under the NBA and an \(O(\log (d_{\max }/d_{\min }))\)-approximation algorithm for UFPP by extending the approach of [13]. Chekuri et al. [17] used an LP-based deterministic algorithm to obtain a \((2+\varepsilon )\)-approximation algorithm for UFPP under the NBA. Bansal et al. [4] developed an \(O(\log n)\)-approximation algorithm for UFPP, beating the integrality gap of the natural LP-relaxation, which was shown to be \(\Omega (n)\) [14]. Chekuri et al. [16] considered Unsplittable Flow on trees. They generalized the above result by providing an \(O(\log m)\)-approximation algorithm for uniform weights and an \(O(\log m \cdot \min \left\{ \log m,\log n \right\} )\)-approximation algorithm for the weighted case, where m is the number of edges in the tree. They also developed an LP formulation for UFPP that has an \(O(\log n \cdot \min \left\{ \log m,\log n \right\} )\) integrality gap and admits a polynomial time O(1)-approximation algorithm. (In a more recent version of their paper [15] they provide a polynomial-size LP with an integrality gap of \(O(\log m)\).) A linear program for UFPP whose gap is \(7+\varepsilon \) was given by Anagnostopoulos et al. [1]. Friggstad and Gao [22] present a polynomial-size linear program for Unsplittable Flow on trees whose integrality gap is \(O(\log n \cdot \min \left\{ \log m,\log n \right\} )\).

Bonsma et al. [10] developed a \((7+\varepsilon )\)-approximation algorithm for UFPP and showed that UFPP is strongly NP-hard even for instances with demands in \(\{1,2,3\}\). Chrobak et al. [19] showed that UFPP-U is strongly NP-hard even for the case of uniform weights. Recently, Anagnostopoulos et al. [2] presented a \((2+\varepsilon )\)-approximation algorithm for UFPP.

Gergov [24] presented an \(O(n\log n)\) time algorithm for DSA that computes a solution of cost at most \(3\textsc {load}(J)\), where \(\textsc {load}(J)\) is the maximum sum of demands on an edge. Buchsbaum et al. [12] presented a polynomial time algorithm that computes a solution of cost at most \((1+O((D/\textsc {load}(J))^{1/7})) \cdot \textsc {load}(J)\), where \(D\) is the maximum demand of a task. Garey and Johnson [23, Problem SR2] stated that Larry Stockmeyer showed that DSA is strongly NP-hard using a reduction from 3-Partition. The reduction is given in [11]. In fact, in the above reduction only two demand types are used. This hardness result implies that SAP-U is also strongly NP-hard.

Finally, we note that it can be shown that SAP-U is strongly NP-hard using a different reduction from 3-Partition, namely the one that was used to show that Call Scheduling with unit bandwidth and arbitrary duration is strongly NP-hard [20]. (The time dimension in call scheduling corresponds to the storage dimension in the storage allocation problem.) In the conference version of this paper [7] we used a relatively simple reduction from Bin Packing to show that SAP is strongly NP-hard, even with uniform weights and even under the NBA.

1.2 Our Contribution

We present a polynomial time \((9+\varepsilon )\)-approximation algorithm for SAP, for every constant \(\varepsilon >0\). Our algorithm is based on the recent constant factor approximation algorithm for UFPP by Bonsma et al. [10]. As done in [10], we partition the task set into three sets: small tasks, medium tasks, and large tasks. Small tasks are \(\delta \)-small for some \(\delta >0\), large tasks are \(\delta '\)-large for some \(\delta '>\delta \), and medium tasks are \(\delta \)-large and \(\delta '\)-small.

The algorithms for small and medium tasks by Bonsma et al. [10] are based on an approximation framework that provides \((1+\varepsilon )\alpha \)-approximate solutions given a certain type of \(\alpha \)-approximation algorithm for UFPP with “almost uniform” capacities (\(c_e \in [2^k,2^{k+\ell })\), for some k and a constant \(\ell \)). Our algorithm for medium tasks uses a variation of this framework for SAP. The main difference is that in SAP we also need to take care of the height assignments. Additionally, we provide a 2-approximation algorithm for “almost uniform” instances. We do this by extending the dynamic programming algorithm for SAP with uniform capacities from [6] to “almost uniform” capacities. A factor of 2 is lost due to the requirement that the framework places on the \(\alpha \)-approximation algorithm for “almost uniform” instances. Hence, combined with the above framework, we obtain a \((2+\varepsilon )\)-approximation algorithm for medium tasks.

Our \((4+\varepsilon )\)-approximation algorithm for small tasks is based on partitioning the instance into instances in which bottlenecks are within a factor of 2 of each other. We show how to compute an approximate solution for each instance and then explain how to adjust the heights in order to combine them. This can be seen as a variant of the framework for medium tasks, in which the \(\alpha \)-approximation algorithm should satisfy an additional requirement: the tasks must be packed in a strip. We use an LP-rounding \((4+\varepsilon )\)-approximation algorithm for the UFPP version of each such instance that computes approximate solutions in which tasks are packed in strips. (We also provide an alternative local ratio \((5+\varepsilon )\)-approximation algorithm.) A \((1+\varepsilon )\) factor is incurred by a reduction from SAP to UFPP in strips [6]. Finally, a SAP solution is obtained by stacking the strips.

As for large tasks, Bonsma et al. [10] presented an approximation algorithm for large instances of UFPP that is based on (i) a reduction from UFPP to a special case of the Rectangle Packing problem, and (ii) an algorithm that solves the special case of Rectangle Packing corresponding to instances obtained by the reduction. Their algorithm provides a schedule that is induced by a subset of pairwise non-intersecting rectangles, and therefore it is also a SAP schedule. It follows that this algorithm is also an approximation algorithm for large instances of SAP. In this paper, we give a tighter analysis and provide a better upper bound on the approximation ratio for large instances of SAP.

Finally, using a reduction from rings to paths we provide a \((10+\varepsilon )\)-approximation algorithm for SAP on ring networks, where each task has two possible paths.

1.3 Paper Organization

The remainder of the paper is organized as follows. Section 2 contains definitions and a few preliminary observations. A formal description of our results is given in Sect. 3. Our algorithms for medium, small, and large SAP instances are given in Sects. 5, 4, and 6, respectively. We consider ring networks in Sect. 7. We conclude in Sect. 8.

2 Preliminaries

Given a task set \(S \subseteq J\), the demand of S is denoted by d(S), namely \(d(S) \triangleq \sum _{j \in S} d_j\). The load of S on an edge e is defined as \(d(S(e)) = \sum _{j \in S(e)} d_j\). A feasible UFPP solution is a set of tasks \(S \subseteq J\) such that \(d(S(e)) \le c_e\), for every \(e \in E\). A UFPP solution S is called B-packable if \(d(S(e)) \le B\), for every \(e \in E\). Given a SAP solution \((S,h)\), the makespan of an edge e is defined as \(\mu _h(S(e)) \triangleq \max _{j \in S(e)} (h(j)+d_j)\). Observe that \(d(S(e)) \le \mu _h(S(e))\), for every \(e \in E\). A SAP solution \((S,h)\) is called B-packable if \(\mu _h(S(e)) \le B\), for every \(e \in E\).
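As a small illustration (ours, in the task representation of the sketches in Sect. 1), the load, the makespan, and the B-packability test read as follows:

```python
# Illustration only: load d(S(e)), makespan mu_h(S(e)), and B-packability.
def load(S, e, I, d):
    return sum(d[j] for j in S if I[j][0] <= e <= I[j][1])

def makespan(S, h, e, I, d):
    tops = [h[j] + d[j] for j in S if I[j][0] <= e <= I[j][1]]
    return max(tops, default=0)          # 0 if no task of S uses e

def is_b_packable(S, h, I, d, num_edges, B):
    return all(makespan(S, h, e, I, d) <= B for e in range(num_edges))
```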

Given two disjoint subsets \(S_1, S_2 \subseteq J\) and two height functions \(h_1: S_1 \rightarrow \mathbb {R}^+\) and \(h_2: S_2 \rightarrow \mathbb {R}^+\), let \(h_1 \cup h_2: S_1 \cup S_2 \rightarrow \mathbb {R}^+\) be the following function:

$$\begin{aligned} (h_1 \cup h_2)(j)={\left\{ \begin{array}{ll} h_1(j) &{} \quad j \in S_1, \\ h_2(j) &{} \quad j \in S_2. \end{array}\right. } \end{aligned}$$

The following observation bounds the load of a UFPP solution on the edges in terms of the maximum bottleneck. A similar observation was made in [10].

Observation 1

Let S be a feasible UFPP solution. Then \(d(S(e)) \le 2\max _{j \in S} b(j)\), for every \(e \in E\).

Proof

Let e be an edge, and let \(B = \max _{j \in S} b(j)\). Any task \(j \in S(e)\) must use an edge with capacity at most B, namely its bottleneck edge. Let \(e_L\) and \(e_R\) be the closest such edges to the left and to the right of e, respectively. (It may be that \(e_L=e_R=e\).) Since the path of each task in S(e) is contiguous and contains e, every task in S(e) uses \(e_L\) or \(e_R\), and therefore \(S(e) \subseteq S(e_L) \cup S(e_R)\). Hence, \(d(S(e)) \le d(S(e_L)) + d(S(e_R)) \le c_{e_L} + c_{e_R} \le 2B\). \(\square \)

The next observation is the analogous observation for SAP.

Observation 2

Let \((S,h)\) be a feasible SAP solution. Then \(\mu _h(S(e)) \le \max _{j \in S} b(j)\), for every \(e \in E\).

Proof

Let \(e \in E\) be an edge such that \(S(e) \ne \emptyset \), and order \(S(e) = \{j_{1},\dots ,j_{p}\}\) by increasing height. By the second SAP feasibility constraint, \(h(j_i) + d_{j_i} \le h(j_{i+1})\), for every \(i < p\). The observation follows, since \(\mu _h(S(e)) = h(j_p) + d_{j_p} \le b(j_p) \le \max _{j \in S} b(j)\), where the first inequality holds because \(h(j_p) + d_{j_p} \le c_f\) for a bottleneck edge f of \(j_p\). \(\square \)

Finally, we need the following standard result that is used when one partitions the input into small and large instances. Given a SAP instance, let \(J_S\) and \(J_L\) be the subset of \(\delta \)-small tasks and the subset of \(\delta \)-large tasks, respectively. (The proof is given for completeness.)

Lemma 3

Let \(S_1\) and \(S_2\) be an \(r_1\)-approximate solution with respect to \(J_S\) and an \(r_2\)-approximate solution with respect to \(J_L\), respectively. Then, the solution of greater weight is an \((r_1+r_2)\)-approximation for the original instance.

Proof

Let \(S^*\) be an optimal solution for the original instance. Either \( w(S^* \cap J_S) \ge \frac{r_1}{r_1+r_2} w(S^*)\) or \( w(S^* \cap J_L) \ge \frac{r_2}{r_1+r_2} \cdot w(S^*)\). Hence, either \(w(S_1) \ge \frac{1}{r_1} \cdot \frac{r_1}{r_1+r_2} \cdot w(S^*) = \frac{1}{r_1+r_2} \cdot w(S^*) \) or \( w(S_2) \ge \frac{1}{r_2} \cdot \frac{r_2}{r_1+r_2} \cdot w(S^*) = \frac{1}{r_1+r_2} \cdot w(S^*) \). The lemma follows. \(\square \)

3 Statement of Results

In this section we provide a formal statement of our results. We start with our results regarding small, medium, and large instances.

Theorem 1

There exists a polynomial time algorithm such that for every constant \(\varepsilon >0\), there exists a constant \(\delta >0\), such that the algorithm computes \((4+\varepsilon )\)-approximate solutions for \(\delta \)-small SAP instances.

Theorem 2

There exists a polynomial time \((2+\varepsilon )\)-approximation algorithm for \(\delta \)-large and \((1-2\beta )\)-small SAP instances, for all constants \(\varepsilon >0\), \(\beta \in (0,\frac{1}{2})\), and \(\delta \in (0,1-2\beta )\).

Theorem 3

There exists a polynomial time \((2k-1)\)-approximation algorithm for \(\frac{1}{k}\)-large SAP instances for every integer \(k \ge 1\).

The proofs of Theorems 1, 2, and 3 are given in Sects. 4, 5, and 6, respectively. Our result for general SAP instances follows.

Theorem 4

There exists a polynomial time \((9+\varepsilon )\)-approximation algorithm for SAP, for any constant \(\varepsilon >0\).

Proof

Set \(k = 2\), \(\beta = \frac{1}{4}\), and \(\varepsilon ' = \varepsilon /2\). By Theorem 1 there exists a constant \(\delta >0\) for which there is a polynomial time \((4+\varepsilon ')\)-approximation algorithm for \(\delta \)-small SAP instances. By Theorem 2 there is a \((2+\varepsilon ')\)-approximation algorithm for \(\delta \)-large and \(\frac{1}{2}\)-small SAP instances. Also, there is a polynomial time 3-approximation algorithm for \(\frac{1}{2}\)-large SAP instances by Theorem 3. The theorem follows from Lemma 3 (applied to the three instances), since \((4+\varepsilon ') + (2+\varepsilon ') + 3 = 9+\varepsilon \). \(\square \)

In Sect. 7 we consider SAP on ring networks.

Theorem 5

There exists a polynomial time \((10+\varepsilon )\)-approximation algorithm for SAP on ring networks, for any constant \(\varepsilon >0\).

4 Small Tasks

In this section we prove Theorem 1, namely we present a polynomial time algorithm that, for every \(\varepsilon >0\), computes \((4+\varepsilon )\)-approximate solutions for \(\delta \)-small instances of SAP, for some constant \(\delta > 0\) (depending on \(\varepsilon \)).

We first present an LP-rounding algorithm for UFPP instances in which bottlenecks are within a factor of 2 of each other; an additional \((1+\varepsilon )\) factor is incurred by a reduction from SAP to UFPP in strips [6]. Then, we show how to use this algorithm to design an algorithm for small instances: we partition the instance into instances in which tasks have similar bottlenecks, use the above algorithm to compute, for each of them, an approximate solution that resides in a strip, and obtain a SAP solution by stacking the strips.

4.1 Packing Small Tasks in Strips

As a first step we consider the following special case of SAP. Let \(B > 0\), and assume we are given a \(\delta \)-small SAP instance in which \(b(j) \in [B,2B)\), for every \(j \in J\). Note that due to Observation 2, without loss of generality we may assume that all edge capacities are between B and 2B (see example in Fig. 3). We present an LP-rounding algorithm that computes a \(\frac{1}{2}B\)-packable \((4+\varepsilon )\)-approximate SAP solution. An alternative local ratio \((5+\varepsilon )\)-approximation algorithm is also provided in an appendix.

Fig. 3

Example of a \(\frac{1}{2}B\)-packable solution for a \(\delta \)-small SAP instance in which \(b(j) \in [B,2B)\), for every \(j \in J\)

The first step is an LP-rounding algorithm that computes \(\frac{1}{2}B\)-packable UFPP solutions. The algorithm is based on the following integer linear formulation of UFPP:

$$\begin{aligned} \begin{array}{lll} \max &{} \displaystyle \sum _{j \in J} w_j \cdot x_j \\ \text {s.t.} &{} \displaystyle \sum _{j \in J(e)} d_j x_j \le c_e &{} \quad \forall e \in E\\ &{} x_j \in \left\{ 0,1 \right\} &{} \quad \forall j\in J \end{array} \end{aligned}$$
(1)

where \(x_j=1\) represents that j is in the solution. An LP-relaxation is obtained by replacing the integrality constraints with \(x_j \in [0,1]\), for every \(j \in J\).
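For concreteness, the following sketch (ours; it assumes the array-based task representation of the earlier sketches and uses scipy as an off-the-shelf LP solver) constructs and solves the LP-relaxation of (1); the scaling step \(x' = \frac{1}{4}x^*\) used below is noted in a comment.

```python
# Illustration only: the LP-relaxation of (1), solved with scipy.
import numpy as np
from scipy.optimize import linprog

def ufpp_lp_relaxation(I, d, w, cap):
    n, m = len(d), len(cap)
    A = np.zeros((m, n))                     # one capacity constraint per edge
    for j in range(n):
        s, t = I[j]
        A[s:t + 1, j] = d[j]                 # task j contributes d_j on its edges
    res = linprog(c=-np.asarray(w, dtype=float),          # maximize sum w_j x_j
                  A_ub=A, b_ub=np.asarray(cap, dtype=float),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x                             # optimal fractional solution x*

# Scaling x' = x*/4 (used below) keeps every edge load at most (2B)/4 = B/2.
```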

Let \(x^*\) be an optimal fractional solution of (1) and define \(x' = \frac{1}{4} x^{*}\). Since \(c_e \le 2B\), for every \(e \in E\), the solution \(x'\) satisfies \(\sum _{j \in J(e)} d_j x'_j \le \frac{1}{2}B\), for every \(e \in E\), and therefore it is feasible with respect to (1) with \(c_e = \frac{1}{2}B\), for every e. Since this is a uniform capacity instance, we may use the following result of Chekuri et al. [17] to obtain an integral solution.

Theorem 6

([17]). For every constant \(\varepsilon >0\), there exists a constant \(\delta > 0\), such that given a \(\delta \)-small instance of UFPP-U, an integral solution x such that \(\sum _j w_j x_j \ge \frac{1}{1+\varepsilon } \sum _j w_j x^*_j\) can be found in polynomial time, where \(x^*\) is an optimal fractional solution of (1) .

We now transform our UFPP-U solution into a SAP solution using the following result:

Lemma 4

([6]). There exists a constant \(\delta _0 > 0\), such that if S is a B-packable UFPP solution to some \(\delta \)-small instance, where \(\delta \in (0,\delta _0)\), then S can be transformed into a B-packable SAP solution \((S',h')\) such that \(w(S') \ge (1-4\delta ) w(S)\) in polynomial time.

Using Lemma 4 we obtain an approximate SAP solution.

Lemma 5

For every constant \(\varepsilon >0\), there exists a constant \(\delta >0\) and a polynomial time algorithm that computes \(\frac{1}{2}B\)-packable \((4+\varepsilon )\)-approximate solutions for \(\delta \)-small SAP instances in which \(b(j) \in [B,2B)\), for every \(j \in J\).

Proof

Given \(\varepsilon >0\), set \(\delta \) such that (i) \(\delta \le \delta _1\), where \(\delta _1\) is the constant that is required by Theorem 6 for \(\varepsilon /5\), (ii) \(\delta < \delta _0\), where \(\delta _0\) is the constant from Lemma 4, and (iii) \(1-4\delta > (4+ \frac{4}{5}\varepsilon ) / (4 + \varepsilon )\) (e.g., \(\delta \le \varepsilon /100\) would suffice). Apply the LP-rounding procedure described above (solve the relaxation of (1), scale by \(\frac{1}{4}\), and round using Theorem 6 with \(\varepsilon /5\)) to compute a \(\frac{1}{2}B\)-packable \(4\cdot (1+\frac{\varepsilon }{5})\)-approximate UFPP solution S. By Lemma 4, S can be transformed into a \(\frac{1}{2}B\)-packable SAP solution \((S',h')\) such that \(w(S') \ge (1-4\delta ) w(S)\) in polynomial time. It follows that

$$\begin{aligned} w(S') \ge \frac{1-4\delta }{4\cdot (1 +\varepsilon /5)} \cdot \textsc {opt} _{\textsc {UFPP}}(J) \ge \frac{1}{4+\varepsilon } \cdot \textsc {opt} _{\textsc {SAP}}(J)~, \end{aligned}$$

as required. \(\square \)

4.2 Stacking Strips

The next step is to partition the instance. Let \(J_t = \left\{ j \in J : 2^t \le b(j) < 2^{t+1} \right\} \), for every t. Algorithm Strip-Pack computes an approximate solution for \(J_t\), for each t, and then combines the solutions. An example of a solution produced by Algorithm Strip-Pack is shown in Fig. 4.

Algorithm Strip-Pack (pseudocode figure)
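Since the pseudocode of Algorithm Strip-Pack appears only as a figure, the following rough Python sketch (ours) reflects its structure as described in the surrounding text; the per-class subroutine strip_solver is a stand-in for the algorithm of Lemma 5 and is passed in as a parameter.

```python
# Rough sketch (ours) of Algorithm Strip-Pack. strip_solver(J_t, B) stands in
# for the algorithm of Lemma 5: it returns a (B/2)-packable solution (S_t, h_t)
# for a class whose bottlenecks lie in [B, 2B).
import math

def strip_pack(J, I, d, cap, strip_solver):
    def b(j):                                     # bottleneck capacity of task j
        return min(cap[e] for e in range(I[j][0], I[j][1] + 1))

    classes = {}                                  # J_t = { j : 2^t <= b(j) < 2^{t+1} }
    for j in J:
        classes.setdefault(int(math.floor(math.log2(b(j)))), []).append(j)

    S, h = [], {}
    for t, J_t in classes.items():
        S_t, h_t = strip_solver(J_t, 2 ** t)      # 2^{t-1}-packable solution for J_t
        for j in S_t:
            S.append(j)
            h[j] = h_t[j] + 2 ** (t - 1)          # lift: class t occupies [2^{t-1}, 2^t)
    return S, h
```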

We conclude the section by showing that Algorithm Strip-Pack computes \((4+\varepsilon )\)-approximate solutions, provided that it uses the algorithm whose existence is shown in Lemma 5 to compute the \(2^{t-1}\)-packable solutions (Line 2).

Fig. 4

An example of a solution produced by Algorithm Strip-Pack. a Computing solution for \(J_t\). b Lifting solution for \(J_t\). c Computing solution for \(J_{t-1}\). d Lifting solution for \(J_{t-1}\)

Proof of Theorem 1

First, the running time of Algorithm Strip-Pack is polynomial, since there are at most O(n) nonempty subsets \(J_t\), and for each such subset we call the algorithm from Lemma 5 which runs in polynomial time.

By Lemma 5 we have that Algorithm Strip-Pack computes a \(2^{t-1}\)-packable \((4+\varepsilon )\)-approximate solution \((S_t,h_t)\) for \(J_t\), for every t. By lifting the solution \((S_t,h_t)\) by \(2^{t-1}\), Algorithm Strip-Pack ensures that a feasible SAP solution is obtained: the lifted solution for \(J_t\) occupies heights in \([2^{t-1},2^t)\), these ranges are disjoint for distinct values of t, and every edge used by a task in \(J_t\) has capacity at least \(2^t\). Now, let \((S^*,h^*)\) be an optimal solution for J. Then,

$$\begin{aligned} w(S)=\sum _t w(S_t) \ge \frac{1}{4+\varepsilon } \sum _t \textsc {opt} _\textsc {SAP} (J_t) \ge \frac{1}{4+\varepsilon } \sum _t w(S^* \cap J_t) =\frac{1}{4+\varepsilon } \cdot w(S^*) ~, \end{aligned}$$

as required. \(\square \)

5 Medium Tasks

In this section we prove Theorem 2. We present a polynomial time algorithm that computes \((2+\varepsilon )\)-approximate solutions for instances of SAP that are both \(\delta \)-large and \((1-2\beta )\)-small, for any constants \(\varepsilon >0\), \(\beta \in (0,\frac{1}{2})\), and \(\delta \in (0,1-2\beta )\).

5.1 Approximation Framework for SAP

Following [10], we present a framework that acts as a reduction from a SAP instance to multiple “almost uniform” SAP instances. Given an \(\alpha \)-approximation algorithm for almost uniform instances, the framework provides a \((1+\varepsilon )\alpha \)-approximation algorithm. As opposed to the framework from [10], which was designed for UFPP, our framework must cope with an additional difficulty: taking care of the height assignments.

Let \(k \in \mathbb {Z}\) and \(\ell \in \mathbb {N}\). Given a SAP instance, let \(J^{k,\ell } = \left\{ j \in J : 2^k \le b(j) < 2^{k+\ell } \right\} \) and let \(E^{k,\ell } = \cup _{j \in J^{k,\ell }} I_j\). We observe that without loss of generality, we may assume that for each \(J^{k,\ell }\), edge capacities are between \(2^k\) and \(2^{k+\ell }\).

Observation 6

Given a SAP instance, \(k \in \mathbb {Z}\), and \(\ell \in \mathbb {N}\), we have that \(c_e \ge 2^k\), for every \(e \in E^{k,\ell }\).

Proof

If \(j \in J^{k,\ell }\), then \(c_e \ge b(j) \ge 2^k\), for every \(e \in I_j\). \(\square \)

Observation 7

Let \((S,h)\) be a feasible SAP solution such that \(S \subseteq J^{k,\ell }\). Then \(\mu _h(S(e)) \le \min (c_e,2^{k+\ell })\), for every edge \(e \in E\).

Proof

Observation 2 implies that any feasible SAP solution \(S \subseteq J^{k,\ell }\) is \(2^{k+\ell }\)-packable. \(\square \)

Thus, from the viewpoint of tasks in \(J^{k,\ell }\), the capacity of \(e \in E^{k,\ell }\) is \(\min (c_e,2^{k+\ell })\).

Define \(q = \left\lceil {\log _2 (1/\beta )} \right\rceil \), and let \(\ell \in \mathbb {N}\) be a constant that will be determined later. Algorithm AlmostUniform is our framework for computing SAP solutions, and it is based on the framework for UFPP that was given in [10]. The main difference is that with SAP one cannot simply combine sub-solutions. A height function for the tasks should also be computed. This motivates the following definition.

Definition 1

Fix k and \(\ell \) and let \(\beta >0\). A feasible SAP solution \((S,h)\) where \(S \subseteq J^{k,\ell }\) is called \(\beta \)-elevated (with respect to k) if \(h(j) \ge \beta 2^k\), for every \(j \in S\).

Note that it may be the case that \(S \subseteq J^{k_1,\ell _1}\) and \(S \subseteq J^{k_2,\ell _2}\), where \(k_1 < k_2\). Moreover, it may be that a feasible solution \((S,h)\) is \(\beta \)-elevated with respect to \(k_1\), but not with respect to \(k_2\).

Algorithm AlmostUniform uses an algorithm called Elevator that computes an \(\alpha \)-approximate SAP solution for \(J^{k,\ell }\) which is also \(\beta \)-elevated (with respect to k). Notice that a necessary condition for the existence of such a nonempty SAP solution is that there are \((1-\beta )\)-small tasks in \(J^{k,\ell }\).

Algorithm AlmostUniform (pseudocode figure)
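The pseudocode of Algorithm AlmostUniform also appears only as a figure; the following rough sketch (ours) reflects the structure suggested by Lemmas 8–10. In particular, the residue classes \(\mathcal {K}(r) = \{k : k \equiv r \pmod {\ell +q}\}\) are our reading of the construction, consistent with the spacing argument in the proof of Lemma 8, and elevator is a stand-in for Algorithm Elevator.

```python
# Rough sketch (ours) of Algorithm AlmostUniform. elevator(J_kl, k) stands in
# for Algorithm Elevator and is assumed to return a beta-elevated
# alpha-approximate solution (S, h) for J^{k,l}.
import math

def almost_uniform(J, I, d, w, cap, beta, l, elevator):
    q = math.ceil(math.log2(1.0 / beta))

    def b(j):                                     # bottleneck capacity of task j
        return min(cap[e] for e in range(I[j][0], I[j][1] + 1))

    k_hi = max(int(math.floor(math.log2(b(j)))) for j in J)
    k_lo = min(int(math.floor(math.log2(b(j)))) for j in J) - l + 1
    best_S, best_h, best_w = [], {}, -1.0
    for r in range(l + q):
        S_r, h_r = [], {}
        for k in range(k_lo, k_hi + 1):
            if (k - r) % (l + q) != 0:            # keep only k in K(r) (our reading)
                continue
            J_kl = [j for j in J if 2 ** k <= b(j) < 2 ** (k + l)]
            if not J_kl:
                continue
            S_kl, h_kl = elevator(J_kl, k)        # beta-elevated w.r.t. k
            S_r += S_kl                           # classes in K(r) are l+q apart, so
            h_r.update(h_kl)                      # their solutions stack (Lemma 8)
        weight = sum(w[j] for j in S_r)
        if weight > best_w:
            best_S, best_h, best_w = S_r, h_r, weight
    return best_S, best_h
```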

Since \(\ell \) is a constant, the number of subsets \(J^{k,\ell }\) is linear. Hence, if the running time of Algorithm Elevator is polynomial, then the running time of Algorithm AlmostUniform is also polynomial. It remains to show that the computed solution is indeed \((1+\varepsilon )\alpha \)-approximate, for an appropriate choice of \(\ell \).

Lemma 8

Let J be a set of \(\delta \)-large and \((1-2\beta )\)-small tasks, where \(\varepsilon >0\), \(\beta \in (0,\frac{1}{2})\), and \(\delta \in (0,1-2\beta )\) are constants. The solution \((S_r,h_r)\) computed by Algorithm AlmostUniform is a feasible SAP solution, for every \(r \in \left\{ 0,\dots ,\ell +q-1 \right\} \), where \(q = \left\lceil {\log _2 (1/\beta )} \right\rceil \).

Proof

Given r, let \(k_0 = \min \mathcal {K}(r)\). Also given \(i \in \mathcal {K}(r)\), let \(i^+ = \min \{k \in \mathcal {K}(r) : k > i\}\). For \(i \in \mathcal {K}(r)\), let \(S_i = \bigcup _{k \in \mathcal {K}(r), k \le i} S^{k,\ell }\) and let \(h_i = \bigcup _{k \in \mathcal {K}(r), k \le i} h^{k,\ell }\). We prove that \((S_i,h_i)\) is feasible by induction on i. In the base case we have \(i = k_0\), and we have that \((S_i,h_i) = (S^{k_0,\ell }, h^{k_0,\ell })\) is feasible due to our assumption on Algorithm Elevator. For the inductive step, we assume that the claim holds for i and prove that it holds for \(i^+\). We know that \((S_i,h_i)\) is feasible due to the inductive hypothesis, and by Observation 7 we know that \(\mu _{h_i}(S_i(e)) \le \min (c_e,2^{i+\ell })\), for every edge \(e \in E\). Since Elevator computes a \(\beta \)-elevated SAP solution for \(J^{i^+,\ell }\), it follows that

$$\begin{aligned} h_{i^+}(j) \ge \beta \cdot 2^{i^+} \ge 2^{-q} \cdot 2^{i^+} \ge 2^{-q} \cdot 2^{i + \ell + q} \ge 2^{i+\ell } ~, \end{aligned}$$

for every \(j \in S^{i^+,\ell }\). Hence the tasks of \(S^{i^+,\ell }\) lie entirely above the tasks of \(S_i\), and \((S_{i^+},h_{i^+})\) is a feasible SAP schedule. \(\square \)

The UFPP version of the following lemma appeared in [10] and applies here as well. We provide a proof for completeness.

Lemma 9

Let J be a set of \(\delta \)-large and \((1-2\beta )\)-small tasks, where \(\varepsilon >0\), \(\beta \in (0,\frac{1}{2})\), and \(\delta \in (0,1-2\beta )\) are constants. If Algorithm Elevator computes \(\alpha \)-approximate solutions, then \(w(S_{r^*}) \ge \frac{\ell }{\ell +q}\cdot \frac{1}{\alpha } \textsc {opt} _{\textsc {SAP}}(J)\), where \(q = \left\lceil {\log _2 (1/\beta )} \right\rceil \).

Proof

Let (Sh) be an optimal SAP solution for J. Since each \(S^{k,\ell }\) is a \(\beta \)-elevated \(\alpha \)-approximation for \(J^{k,\ell }\) and every task \(j \in J\) belongs to exactly \(\ell \) sets \(J^{k,\ell }\), it follows that

$$\begin{aligned} \sum _{r=0}^{\ell +q-1} w(S_r)&=\sum _{r=0}^{\ell +q-1} \sum _{k \in \mathcal {K}(r)} w(S^{k,\ell })\\&\ge \sum _{r=0}^{\ell +q-1} \sum _{k \in \mathcal {K}(r)} \frac{1}{\alpha } \cdot \textsc {opt} _{\textsc {SAP}}(J^{k,\ell }) \\&=\frac{1}{\alpha } \cdot \sum _{k \in \mathcal {K}} \textsc {opt} _{\textsc {SAP}}(J^{k,\ell }) \\&\ge \frac{1}{\alpha } \cdot \sum _{k \in \mathcal {K}} w(S \cap J^{k,\ell }) \\&=\frac{\ell }{\alpha } \cdot \textsc {opt} _{\textsc {SAP}}(J)~. \end{aligned}$$

Therefore, \(w(S_{r^*})\ge \frac{1}{\alpha } \cdot \frac{\ell }{\ell +q} \cdot \textsc {opt} _{\textsc {SAP}}(J)\). \(\square \)

By choosing the right value of \(\ell \) we obtain a \((1+\varepsilon )\alpha \)-approximation algorithm.

Lemma 10

Let J be a set of \(\delta \)-large and \((1-2\beta )\)-small tasks, where \(\varepsilon >0\), \(\beta \in (0,\frac{1}{2})\), and \(\delta \in (0,1-2\beta )\) are constants. Suppose we are given a polynomial time algorithm that computes a \(\beta \)-elevated \(\alpha \)-approximate SAP solution for \(J^{k,\ell }\), for every k and \(\ell \). Then, if \(\ell = \frac{1}{\varepsilon } \left\lceil {\log _2 (1/\beta )} \right\rceil \), Algorithm AlmostUniform computes a \((1+\varepsilon )\alpha \)-approximate solution in polynomial time.

Proof

We know that the computed solution is feasible due to Lemma 8, and by Lemma 9 we have that

$$\begin{aligned} w(S_{r^*}) \ge \frac{1}{\alpha } \cdot \frac{\ell }{\ell +\left\lceil {\log _2 (1/\beta )} \right\rceil } \cdot \textsc {opt} _{\textsc {SAP}}(J) =\frac{1}{\alpha } \cdot \frac{1}{1+\varepsilon } \cdot \textsc {opt} _{\textsc {SAP}}(J) ~, \end{aligned}$$

as required. \(\square \)

5.2 Computing 2-Approximations that are \(\beta \)-Elevated

In this section we present an algorithm that computes a 2-approximation solution for \(J^{k,\ell }\) which is also \(\beta \)-elevated, for any k and \(\ell \). Throughout the section we consider medium tasks, namely we assume that every task \(j \in J^{k,\ell }\) is \(\delta \)-large and \((1-2\beta )\)-small, for constants \(\varepsilon >0\), \(\beta \in (0,\frac{1}{2})\), and \(\delta \in (0,1-2\beta )\).

Our algorithm is based on the following simple observation that was given in [6] for SAP-U.

Observation 11

Given a SAP instance, there exists an optimal solution \((S,h)\) such that, for every task j, either \(h(j)=0\) or there exists a task \(j' \ne j\) such that \(I_j \cap I_{j'} \ne \emptyset \) and \(h(j) = h(j') + d_{j'}\).

The proof of the observation uses a “gravity” argument, namely as long as there is a task whose height can be decreased, pick one such task and decrease its height as much as possible. (Alternatively, one could consider a height function \(h'\) that minimizes \(\sum _{j \in S} h'(j)\).) See example in Fig. 5.
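A minimal sketch (ours) of the gravity argument: processing the tasks of a feasible solution bottom-up and dropping each onto the highest overlapping task below it (or onto height 0) yields, in a single pass, a solution with the property stated in Observation 11.

```python
# Illustration only: one bottom-up pass of the gravity argument.
def apply_gravity(S, h, I, d):
    def share_edge(i, j):
        return not (I[i][1] < I[j][0] or I[j][1] < I[i][0])
    for j in sorted(S, key=lambda t: h[t]):       # settle tasks bottom-up
        below = [h[i] + d[i] for i in S
                 if i != j and share_edge(i, j) and h[i] < h[j]]
        h[j] = max(below, default=0.0)            # rest on the highest task below, or 0
    return h
```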

Fig. 5

Solution (b) is obtained by applying gravity on Solution (a). a Original solution. b Solution after application of gravity

Using Observation 11 we are able to consider a specific type of optimal solutions.

Lemma 12

Suppose we are given a \(\delta \)-large SAP instance, where \(c_e \in [B,B2^\ell )\), for every \(e \in E\), for some B. Then there exists an optimal solution \((S^*,h^*)\) such that:

  1. (i)

    \(|S^*(e)| < 2^\ell /\delta \), for every e, and

  2. (ii)

    there exists a subset \(H_j \subseteq S^* \setminus \{j\}\) of size at most \(2^\ell /\delta \) such that \(h^*(j)=d(H_j)\), for every task \(j \in S^{*}\).

Proof

Let \((S^*,h^*)\) be an optimal SAP solution whose existence is implied in Observation 11. To prove (i) observe that \(d_j \ge \delta b(j) \ge \delta B\), for every \(j \in S^*\) and that \(c_e < B 2^\ell \), for every \(e \in E\). Thus from the feasibility of \(S^*\), it follows that each edge \(e \in E\) is used by less than \(B 2^{\ell }/(\delta B) = 2^\ell /\delta \) tasks. (ii) follows from Observation 11 and (i), by induction over the height. \(\square \)

Lemma 12 implies an upper bound on the number of possibilities for the height of a task \(j \in J\), given a \(\delta \)-large SAP instance, where \(c_e \in [B,B2^\ell )\), for every \(e \in E\), for some B. Since the maximal number of tasks assigned to an edge is at most \(L = 2^\ell /\delta \), the number of possible heights is bounded by \(\sum _{i=0}^L \left( {\begin{array}{c}n\\ i\end{array}}\right) = O(n^L)\). It follows that there are at most \(O(n^{O(L^2)})\) possibilities for assigning a task set and its corresponding heights to a given edge \(e \in E\). Therefore, an optimal SAP solution for J can be computed using a dynamic programming algorithm similar to the one described in [6].

Lemma 13

There is a polynomial time algorithm that computes an optimal solution for a \(\delta \)-large SAP instance, where \(c_e \in [B,B2^\ell )\), for every \(e \in E\), for some B.

Proof

Let \(V = \left\{ v_0,\ldots ,v_m \right\} \) and \(E = \left\{ e_1,\ldots ,e_m \right\} \). Given a vertex \(v_i \in V\), let \(P_i\) be the path that is induced by \(V_i = \left\{ v_i,\ldots ,v_m \right\} \). Let \(J_i\) be the tasks that are fully contained in \(P_i\). A feasible solution \((S_i,h_i)\) is called proper with respect to \(e_i\) if \(e_i \in I_j\), for every \(j \in S_i\). Recall that there are \(O(n^L)\) possibilities for choosing \(S_i\), and that given \(S_i\) there are \(O(n^{L^2})\) possibilities for choosing \(h_i\). A solution \((S_{i+1},h_{i+1})\) is compatible with the proper pair \((S_i,h_i)\) if (i) it is proper with respect to \(e_{i+1}\), (ii) every j such that \(e_i,e_{i+1} \in I_j\) satisfies either \(j \in S_i \cap S_{i+1}\) or \(j \notin S_i \cup S_{i+1}\), and (iii) \(h_i(j) = h_{i+1}(j)\) for every \(j \in S_i \cap S_{i+1}\).

We define a dynamic programming table of size \(O(n^{L+L^2})\) as follows. For an edge \(e_i\) and a pair \((S_i,h_i)\) that is proper with respect to \(e_i\) the state \(\Pi (e_i,S_i,h_i)\) stands for the maximum weight of a pair \((S',h')\) such that \(S' \subseteq J_{i+1}\) and \((S_i \cup S',h_i \cup h')\) is feasible. We initialize the table \(\Pi \) by setting \(\Pi (e_m,S_m,h_m) = 0\) for every proper pair \((S_m,h_m)\) with respect to \(e_m\). We compute the rest of the entries by using:

$$\begin{aligned} \Pi (e_i,S_i,h_i) = \max _{\begin{array}{c} (S_{i+1},h_{i+1}) \\ \text {is compatible with} \\ (S_i,h_i) \end{array} } \left\{ w(S_{i+1} \setminus S_i) + \Pi (e_{i+1}, S_{i+1}, h_{i+1}) \right\} \end{aligned}$$

The weight of an optimal solution is \(\Pi (e_0,\emptyset ,h_\emptyset )\), where \(e_0\) is a dummy edge and \(h_\emptyset \) is a function whose domain is the empty set.

To compute each entry \(\Pi (e_i,S_i,h_i)\) we need to go through all the possibilities for a solution \((S_{i+1},h_{i+1})\) that is compatible with \((S_i,h_i)\). There are no more than \(O(n^{L+L^2})\) such possibilities. Hence, the total running time is \(O(m \cdot n^{L+L^2} \cdot n^{(L+L^2)}) = O(m \cdot n^{O(L^2)})\). In order to compute a corresponding solution, one needs to keep track of which option was taken in the recursive computation. An optimal solution can be reconstructed in a top down manner. \(\square \)

Lemma 13 implies that solving SAP on \(J^{k,\ell }\) can be done in polynomial time. It remains to obtain a \(\beta \)-elevated solution.

Lemma 14

Suppose we are given a \((1-2\beta )\)-small SAP instance. A SAP solution \((S,h)\) for \(J^{k,\ell }\) can be partitioned into two \(\beta \)-elevated SAP solutions \((S_1,h_1)\) and \((S_2,h_2)\) in linear time.

Proof

Consider a task \(j \in S\) such that \(h(j) < \beta 2^k\). Since j is \((1-2\beta )\)-small and \(c_e \ge 2^k\), for every \(e \in E^{k,\ell }\), due to Observation 6, we have that

$$\begin{aligned} h(j)+d(j)< & {} \beta 2^k+(1-2\beta )b(j)\nonumber \\\le & {} \beta 2^k+ (1-2\beta )c_e\nonumber \\= & {} c_e+\beta 2^k-2\beta c_e\nonumber \\\le & {} c_e-\beta 2^k~, \end{aligned}$$
(2)

for every \(e \in I_j\). Define \(S_1 = \left\{ j \in S : h(j) < \beta 2^k \right\} \) and \(S_2 = S \setminus S_1\). Also, define \(h_1(j) = h(j) + \beta 2^k\), for all \(j \in S_1\), and \(h_2(j) = h(j)\), for all \(j \in S_2\) (see example in Fig. 6). \((S_1,h_1)\) and \((S_2,h_2)\) are both \(\beta \)-elevated by definition, and \((S_1,h_1)\) is feasible due to (2). Finally, it is not hard to verify that the described partition can be done in linear time. \(\square \)
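The partition in the proof of Lemma 14 is straightforward to implement; a minimal sketch (ours):

```python
# Illustration only: the partition of Lemma 14.
def split_elevated(S, h, beta, k):
    lift = beta * 2 ** k
    S1 = [j for j in S if h[j] < lift]            # low tasks: lift them by beta*2^k
    S2 = [j for j in S if h[j] >= lift]           # already beta-elevated
    h1 = {j: h[j] + lift for j in S1}             # feasible by inequality (2)
    h2 = {j: h[j] for j in S2}
    return (S1, h1), (S2, h2)
```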

Fig. 6

An example of partition of optimal solution into two \(\frac{1}{4}\)-elevated solutions. The light tasks belong to \(S_1\), while the dark tasks belong to \(S_2\)

The 2-approximation algorithm follows due to Lemmas 13 and 14.

Lemma 15

There is a polynomial time algorithm that computes \(\beta \)-elevated 2-approximations for \(J^{k,\ell }\), given a \(\delta \)-large and \((1-2\beta )\)-small SAP instance.

Proof

An optimal solution \((S^*,h^*)\) for \(J^{k,\ell }\) can be computed in polynomial time due to Lemma 13. \((S^*,h^*)\) can be partitioned into two \(\beta \)-elevated solutions \((S_1,h_1)\) and \((S_2,h_2)\) due to Lemma 14. Since \(w(S^*) = w(S_1) + w(S_2)\), one of the two solutions is 2-approximate. \(\square \)

We note that it is also possible to use dynamic programming to find the optimal \(\beta \)-elevated solution for \(J^{k,\ell }\) directly. Such a solution would be 2-approximate due to Lemma 14.

We conclude this section with the proof of Theorem 2.

Proof of Theorem 2

By Lemma 15 there exists a polynomial time algorithm that computes \(\beta \)-elevated 2-approximate solutions for \(J^{k,\ell }\), for every k and \(\ell \). Therefore, by Lemma 10, Algorithm AlmostUniform is a \((2+\varepsilon )\)-approximation algorithm for \(\delta \)-large and \((1-2\beta )\)-small SAP instances. \(\square \)

6 Large Tasks

In this section we consider \(\frac{1}{k}\)-large instances of SAP, for an integer \(k \ge 1\). Recall that in such instances \(d_j > \frac{1}{k} b(j)\), for every j. We present a \((2k-1)\)-approximation algorithm for \(\frac{1}{k}\)-large instances of SAP.

Bonsma et al. [10] presented a 2k-approximation algorithm for \(\frac{1}{k}\)-large UFPP instances, for any \(k \ge 2\), that is based on a reduction from UFPP to a special case of Rectangle Packing (or Maximum Independent Set in rectangle intersection graphs). The reduction is as follows. Let \(j \in J\) be a task. The residual capacity of j is defined as \(\ell (j) \triangleq b(j)-d_j\). Task j is associated with the rectangle \(R(j) = [s_j,t_j) \times [\ell (j),b(j))\). In SAP terms, it is the rectangle that is induced by assigning height \(\ell (j)\) to j. See example in Fig. 7.

Fig. 7

An example of four tasks that are placed at height \(\ell (j) = b(j)-d_j\), for every j

Let \(\mathcal {R}(S) = \left\{ R(j) : j \in S \right\} \) be the family of rectangles that is obtained from a subset \(S \subseteq J\). Bonsma et al. [10] showed that the set of rectangles \(\mathcal {R}(S)\) that corresponds to a feasible UFPP solution S can be colored using 2k colors such that each color induces a pairwise non-intersecting subset of rectangles. Hence there exists at least one subset for which the total weight of the tasks is at least \(\frac{1}{2k} w(S)\). Bonsma et al. also presented a polynomial time algorithm that solves the special case of Rectangle Packing that corresponds to instances obtained by the above reduction.

Theorem 7

([10]). There is an \(O(n^{4})\) algorithm that computes an optimal rectangle packing of \(\mathcal {R}(J)\), for every UFPP instance J.

We note that the algorithm from [10] provides a UFPP solution which is induced by a subset of pairwise non-intersecting rectangles, and therefore it is also a SAP solution. It follows that this algorithm is also a 2k-approximation algorithm for \(\frac{1}{k}\)-large instances of SAP. In what follows we use the geometric properties of SAP to show that \(\mathcal {R}(S)\) can be colored using only \(2k-1\) colors for any \(\frac{1}{k}\)-large SAP solution \((S,h)\). This implies a \((2k-1)\)-approximation algorithm for \(\frac{1}{k}\)-large instances of SAP, for any integer \(k \ge 1\).

Given a feasible SAP solution \((S,h)\) and a task \(j \in S\), let

$$\begin{aligned} N_S(j) = \left\{ j' \in S : R(j') \cap R(j) \ne \emptyset \right\} ~. \end{aligned}$$

Notice that \(j \in N_S(j)\). Let \(\deg _S(R(j))\) be the number of rectangles in \(\mathcal {R}(S)\) that intersect R(j) (not including R(j)), namely \(\deg _S(R(j)) = |N_S(j)|-1\). We show that there exists a rectangle R(j) whose degree is at most \(2k-2\). This implies that a \((2k-1)\)-coloring can be obtained in a greedy manner.

In the next lemma we show that at most k \(\frac{1}{k}\)-large tasks may share an edge (under the conditions stated below). Notice that it relies on Observation 2, which does not apply to UFPP.

Lemma 16

Let \((Q,h)\) be a \(\frac{1}{k}\)-large SAP solution that contains a task \(j'\) such that \(\ell (j') < b(j) \le b(j')\), for every \(j \in Q\). If there exists an edge e such that \(e \in I_j\), for every \(j \in Q\), then \(|Q| \le k\).

Proof

Suppose that \(|Q| > k\). Since \(\ell (j') < b(j) \le b(j')\), for every \(j \in Q\) we have that

$$\begin{aligned} \sum _{j \in Q \setminus \{j'\}} d_j>\frac{1}{k} \sum _{j \in Q \setminus \{j'\}} b(j)>\frac{1}{k} \sum _{j \in Q \setminus \{j'\}} \ell (j') =\frac{1}{k} (|Q|-1) \cdot \ell (j') \ge \ell (j') ~. \end{aligned}$$

Therefore,

$$\begin{aligned} \mu _h(Q(e)) \ge \sum _{j \in Q} d_j =\sum _{j \in Q \setminus \{j'\}} d_j + d_{j'}>\ell (j') + d_{j'} = b(j')~. \end{aligned}$$

Since \(b(j') \ge b(j)\), for every \(j \in Q\), we get a contradiction to Observation 2. \(\square \)

We are now ready to show that there exists a task whose rectangle has at most \(2k-2\) neighbors.

Lemma 17

Let (Sh) be a \(\frac{1}{k}\)-large SAP solution. Then there exists a task \(j \in S\) such that \(\deg _S(R(j)) \le 2k-2\).

Proof

Let \(j_0\) be the task with minimal right endpoint, and let \(e_0\) be the right most edge in \(I_{j_0}\). Define

$$\begin{aligned} Q^-&= \left\{ j \in S : b(j) \le b(j_0) \right\} \cap N_S(j_0) , \\ Q^+&= \left\{ j \in S : b(j) \ge b(j_0) \right\} \cap N_S(j_0)~. \end{aligned}$$

Observe that \(j_0 \in Q^- \cap Q^+\). Consider \(j \in Q^-\). Since \(R(j) \cap R(j_0) \ne \emptyset \), it follows that \(b(j) > \ell (j_0)\). Hence \(Q^-\) satisfies the conditions of Lemma 16 with \(j' = j_0\) and \(e=e_0\), and we have that \(|Q^-| \le k\). Furthermore, observe that \(\ell (j) < b(j_0)\), for every \(j \in Q^+\), and thus \(\cap _{j \in Q^+} R(j) \ne \emptyset \). It follows that \(\ell (j') < b(j_0) \le b(j) \le b(j')\), for every \(j \in Q^+\), for a task \(j'\) such that \(b(j') = \max _{i \in Q^+} b(i)\). Hence \(Q^+\) satisfies the conditions of Lemma 16 with \(e=e_0\), and \(|Q^+| \le k\). The lemma follows since \(\deg _S(R(j_0)) \le |Q^-| + |Q^+| - 2 \le 2k-2\). \(\square \)

We are now ready to prove Theorem 3.

Proof of Theorem 3

Given a SAP solution S, let \(G(S) = (S,E)\) be the intersection graph of \(\mathcal {R}(S)\), where \(E = \left\{ (j,j') : R(j) \cap R(j') \ne \emptyset \right\} \). Lemma 17 shows that G(S) is \((2k-2)\)-degenerate, for any \(\frac{1}{k}\)-large SAP solution S (since every subset of a feasible solution is itself a feasible solution). Therefore G(S) can be colored using \(2k-1\) colors by removing a vertex with minimum degree, recursively coloring the remaining graph, and then coloring the vertex with an available color [27]. The theorem follows due to Theorem 7. \(\square \)
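For illustration, a minimal sketch (ours) of the coloring step: build the rectangle intersection graph of \(\mathcal {R}(S)\) and color it greedily along the degeneracy order described in the proof.

```python
# Illustration only: rectangles R(j), their intersection graph, and a greedy
# coloring along the degeneracy order (at most 2k-1 colors for 1/k-large S).
def rectangle(j, I, d, cap):
    s, t = I[j]
    b_j = min(cap[e] for e in range(s, t + 1))
    return (s, t + 1, b_j - d[j], b_j)            # [x1, x2) x [y1, y2), edge e = [e, e+1)

def intersects(r1, r2):
    return r1[0] < r2[1] and r2[0] < r1[1] and r1[2] < r2[3] and r2[2] < r1[3]

def degeneracy_coloring(S, I, d, cap):
    R = {j: rectangle(j, I, d, cap) for j in S}
    adj = {j: {i for i in S if i != j and intersects(R[i], R[j])} for j in S}
    order, remaining = [], set(S)
    while remaining:                              # repeatedly remove a min-degree vertex
        v = min(remaining, key=lambda u: len(adj[u] & remaining))
        order.append(v)
        remaining.remove(v)
    color = {}
    for v in reversed(order):                     # color in reverse removal order
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(i for i in range(len(S) + 1) if i not in used)
    return color
```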

We note that Lemma 17 is tight for the case of \(k=2\). Figure 8 shows a \(\frac{1}{2}\)-large SAP solution and the resulting Rectangle Packing instance. Since the instance is a 5-cycle, it is not 2-colorable.

Fig. 8

An example of a SAP solution with five tasks whose corresponding rectangles form a cycle. a \(\frac{1}{2}\)-large SAP solution. b Corresponding Rectangle Packing instance

7 SAP on Ring Networks

In this section we present a constant factor approximation algorithm for SAP on ring networks. Our algorithm is based on a simple reduction to the line network that was used by Chakrabarti et al. [14] (and later in [10]) for the Unsplittable Flow Problem on rings.

SAP on rings is defined similarly to SAP (on paths). The main difference is that in the former we are given a cycle \(C = (V,E)\) and not a path. In this case there are two possible paths from \(s_j\) to \(t_j\) for each task j, a clockwise path and a counter-clockwise path. A feasible solution for SAP on rings is a triple \((S,h,I)\), where S and h are defined as in SAP, and I(j) is the path chosen for j, for any \(j \in S\).

In the next lemma we show that an approximation algorithm for SAP may be used to approximate SAP on rings.

Lemma 18

A polynomial time \(\alpha \)-approximation algorithm for SAP implies a polynomial time \((1+\alpha +\varepsilon )\)-approximation algorithm for SAP on rings, for any \(\varepsilon >0\).

Proof

The algorithm first chooses a minimum capacity edge e, namely an edge with \(c_e = \min _f c_f\). Then, it removes e from the ring and finds a SAP solution \((S_1,h_1)\) on the resulting line network. Observe that the tasks in \(S_1\) are not routed through e. \(I_1\) is defined accordingly. The next step is to compute a solution \(S_2\) by calling an FPTAS for Knapsack. Observe that every task may be routed through e (either its clockwise path or its counter-clockwise path contains e), so all tasks should be considered by the FPTAS. Hence, the input to the FPTAS is the pair \((d,w)\) with knapsack capacity \(c_e\), namely the jth item has size \(d_j\) and weight \(w_j\). A height function \(h_2\) is defined as follows: \(h_2(j) = \sum _{\ell \in S_2 : \ell < j} d_\ell \), for every \(j \in S_2\). We assume that all tasks in \(S_2\) are routed through e, and \(I_2\) is defined accordingly. Finally, the algorithm returns the solution with maximum weight.

Clearly, \((S_1,h_1,I_1)\) is feasible. Since e is a minimum capacity edge, \((S_2,h_2,I_2)\) does not violate the capacity of any edge, and therefore it is feasible. It remains to prove that the computed solution is \((1+\alpha +\varepsilon )\)-approximate. We use an argument similar to the one used to prove Lemma 3. Let \(S^*\) be the set of tasks in an optimal solution for the original instance. Also, let \(S^*_2\) be the tasks in \(S^*\) that are routed through e, and let \(S^*_1 = S^* \setminus S^*_2\). Either \( w(S^*_1)\ge \frac{\alpha }{\alpha +1+\varepsilon } \cdot w(S^*) \) and \( w(S_1)\ge \frac{1}{\alpha } \cdot \frac{\alpha }{\alpha +1+\varepsilon } \cdot w(S^*)=\frac{1}{\alpha +1+\varepsilon } \cdot w(S^*) \) or \( w(S^*_2) \ge \frac{1+\varepsilon }{\alpha +1+\varepsilon } \cdot w(S^*) \) and \( w(S_2) \ge \frac{1}{1+\varepsilon } \cdot \frac{1+\varepsilon }{\alpha +1+\varepsilon } \cdot w(S^*) = \frac{1}{\alpha +1+\varepsilon } \cdot w(S^*) \). The lemma follows. \(\square \)

Theorem 5 follows from Theorem 4 and Lemma 18.

8 Conclusion

We presented a \((9+\varepsilon )\)-approximation algorithm for SAP. Our approximation ratios for medium and large instances match the ratios for UFPP from [10]. In fact, our ratio for large tasks is even better (3 instead of 4). However, our approximation ratio for small instances is larger (\(4+\varepsilon \) vs. \(1+\varepsilon \)). This larger ratio stems from our need to pack small tasks in strips in order to use the transformation from a UFPP solution to a SAP solution. The ratio for small instances could be smaller if we had such a transformation that works on non-uniform instances. It would be interesting to come up with an algorithm for an extended version of DSA in which one is given a path \(P = (V,E)\) with a non-uniform capacity vector \(c \in \mathbb {R}_+^{|E|}\) and a set of (small) tasks, and the goal is to find the minimum coefficient \(\rho \) such that all tasks can be packed within the capacity vector \(\rho \cdot c\).

Finally, we note that recently Mömke and Wiese [28] improved our main result by presenting a \((2+\varepsilon )\)-approximation algorithm for SAP.