1 Introduction

Over the past decades, supply chain management has been one of the major concerns in operations research. Supply chain management is the series of activities required to plan, control, and execute a product’s flow, from raw materials to the finished product and finally its distribution to the final customer, in the most cost-effective way possible. This has attracted the attention of researchers to designing methods for manufacturing products efficiently according to the standards of the factory.

In recent years, scheduling problems with transportation have received much attention. This is due to working environments in which factories have to produce many goods and then deliver them to customers. Such a problem can be seen as a two-stage scheduling problem in which the first stage is job production and the second stage is job delivery. Many approaches exist for each single stage, and each stage can be solved efficiently on its own. However, applying these techniques independently may yield a poor overall solution. Therefore, coordination between the two stages is needed in order to improve the solution.

1.1 Problem definition

In this paper, we study the following scheduling problem. We are given a set of n jobs \(\{J_1, J_2, \ldots , J_n\}\) that need to be processed on two parallel machines \(M_1, M_2\). Each job \(J_i\) has a processing time \(p_i\) and a size \(s_i \in [0,1]\). Once a job is completed, it can be transported to the destination. However, there is only one transporter in the system; it can carry jobs with a total size of at most 1 and needs \(T_1\) (resp. \(T_2\)) time units to reach the destination (resp. return from the destination). To simplify notation, let \(T=T_1+T_2\) denote the round-trip duration of the transporter. Without loss of generality, we assume that the transporter is at the location of the machines at the beginning.

The goal is to schedule all jobs and deliver all of them to the destination with the minimum makespan, i.e., the shortest time such that all jobs are delivered to the destination and the transporter goes back to the location of the machines.

Scheduling problems are commonly represented with the three-field notation scheme introduced by Graham et al. (1979). The problem studied in this paper is denoted \(P2\rightarrow D|v=1,c=1|C_{\max }\), which means that there are two identical parallel machines and one transporter. The objective function is \(C_{\max }\): we aim to minimize the makespan.

1.2 Related works

There are also other scheduling problems with transportation, such as \(TF2|v = 1,c |{C_{\max }}\), where there is one transporter between two machines: every job is first processed on \(M_1\), then delivered to \(M_2\), and finally processed on \(M_2\), and the transporter can carry at most c jobs at a time. Kise (1991) proved that \(TF2|v = 1, c =1|{C_{\max }}\) is NP-hard in the ordinary sense, and Hurink and Knust (2001) proved that it is in fact strongly NP-hard. Lee and Chen (2001) further proved that \(TF2|v = 1,c \ge 3|{C_{\max }}\) is also strongly NP-hard and mentioned that the complexity of \(TF2|v = 1,c = 2|{C_{\max }}\) is open. Recently, Lan et al. (2016) proved that \(TF2|v = 1,c =2|{C_{\max }}\) is strongly NP-hard as well. Chang and Lee (2004) were the first to consider the case in which jobs have different sizes for delivery, while Li et al. (2005) focused on the problem where customers may be at different locations. Woeginger (1994), Potts (1980), and Hall and Shmoys (1992) studied the problem on a single machine and on parallel machines, where each job has its own arrival time and transport time. They assumed that the number of vehicles is unbounded, so that a job can be delivered to the destination as soon as it is processed. For these models, they provided heuristics together with worst-case analyses. More related work can be found in Potts and Kovalyov (2000), Potts and Van Wassenhove (1992), and Webster and Baker (1995).

For the problem \(P2\rightarrow D|v=1,c=1|C_{\max }\), Chang and Lee (2004) first proposed this model and gave a polynomial-time algorithm with an approximation ratio of 2. Moreover, they proved that there is no approximation algorithm for the problem with a ratio better than 3/2 unless P = NP. Then, Zhong et al. (2007) presented an improved algorithm with an approximation ratio of 5/3, and Su et al. (2009) reduced the worst-case ratio further to 8/5. Later, Wang and Liu (2013) and Zhang et al. (2015) proposed \((14/9+\varepsilon )\)-approximation algorithms.

1.3 Our contributions

In this paper, we provide an algorithm with an approximation ratio of \(3/2+\varepsilon \), where \(\varepsilon \) is a positive real that can be arbitrarily close to 0. This matches the lower bound of the problem asymptotically. Observe that the previous approaches already work well when there are at least eight batches in an optimal solution. For the remaining case, we give a new batching strategy: we guess the number of batches in an optimal solution, enumerate all possible assignments of the large jobs, append the small jobs to the batches by solving a linear program which guarantees that the total processing time in each batch is almost the same as in the optimal solution, and then transform the fractional solution into an integral one using at most one extra batch. Embedding this new batching strategy, we prove that the approximation ratio is close to 1.5 for the case where there are at most seven batches in an optimal solution.

2 Preliminary

In this section, we first introduce the bin packing problem and then give some notation used later.

2.1 Bin packing problem

In the bin packing problem, there are n items \(x_1,x_2,\ldots ,x_n\), where each item \(x_i\) has a size \(s_i \in (0, 1)\). There is an unlimited supply of empty bins, each with a capacity of 1. The goal is to pack all the items into a minimum number of bins, where each bin can hold any subset of items with total size at most 1. This problem is known to be NP-hard Johnson (2008).

An approximation algorithm for the classical one-dimensional bin packing problem, called FFD (First Fit Decreasing), is given in Johnson et al. (1974). The FFD algorithm first sorts the items in non-increasing order of size and then places each item, in this order, into the first bin in which it fits. It requires \(\Theta (n \log n)\) time, where n is the number of items to be packed. The following lemma is from Dósa et al. (2013).
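As a concrete illustration, FFD can be sketched in a few lines of Python (the function name and list-of-bins representation are ours; this naive version scans all open bins and therefore runs in \(O(n^2)\) time rather than \(\Theta(n \log n)\)):

```python
def ffd(sizes, capacity=1.0):
    """First Fit Decreasing: sort items by non-increasing size, then
    place each item into the first bin where it still fits."""
    bins = []  # each bin is a list of item sizes
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:  # no open bin has room: open a new one
            bins.append([s])
    return bins
```

For example, on the sizes 0.5, 0.7, 0.5, 0.3 the procedure opens two bins, \(\{0.7, 0.3\}\) and \(\{0.5, 0.5\}\).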

Lemma 1

For any bin packing input L, the number of bins used by FFD is at most \( 11OPT(L)/9 + 6/9\), where OPT(L) is the optimal value.

2.2 Notations

Given a schedule, let C be its makespan, and let \(C^*\) be the makespan of an optimal solution. Let \(C(B_i)\) be the completion time of batch \(B_i\) in a given schedule, and let \(C^*(B_i^*)\) be the completion time of batch \(B_i^*\) in an optimal schedule. Let \(p(B_i)\) denote the total processing time of the jobs in batch \(B_i\), and let P be the total processing time of all jobs. Let \(C_M^*\) be the time at which all jobs are completed on the machines in an optimal schedule. Then we have

$$\begin{aligned} C_M^* \ge \frac{P}{2}. \end{aligned}$$

Let \(b^*\) be the number of batches in the optimal algorithm. Then we have

$$\begin{aligned} C^* \ge C_M^* +T, \quad C^* \ge C^*(B_1^*) +b^* \cdot T. \end{aligned}$$
(1)

3 Framework of old algorithms

We first introduce the framework of the previous algorithms Wang and Liu (2013) and prove that it performs very well for the case where there are at least 8 batches in an optimal solution, i.e., its worst-case performance ratio is then 1.5. In the framework, we first partition the input into batches, then schedule the batches onto the two machines and deliver them to the destination.

figure a
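Our reading of this framework (FFD batching by size, list scheduling of the batches on the two machines in non-decreasing order of total processing time, then one delivery trip per batch) can be sketched as follows; all names are ours, and the details of the original pseudocode may differ:

```python
def schedule_and_deliver(jobs, T):
    """Old framework sketch.  jobs: list of (processing_time, size) pairs;
    T: round-trip duration of the transporter.  Returns the makespan."""
    # 1. Batching: FFD on the job sizes (capacity 1 per batch).
    batches = []
    for p, s in sorted(jobs, key=lambda j: -j[1]):
        for b in batches:
            if sum(sj for _, sj in b) + s <= 1.0:
                b.append((p, s))
                break
        else:
            batches.append([(p, s)])
    # 2. Index batches in non-decreasing order of total processing time.
    batches.sort(key=lambda b: sum(p for p, _ in b))
    # 3. List scheduling: each batch goes to the currently less loaded machine.
    loads = [0.0, 0.0]
    completion = []
    for b in batches:
        m = 0 if loads[0] <= loads[1] else 1
        loads[m] += sum(p for p, _ in b)
        completion.append(loads[m])
    # 4. Deliver batches in order; each trip starts once its batch is done
    #    and the transporter is back, and takes T time units.
    done = 0.0
    for c in completion:
        done = max(done, c) + T
    return done
```

For instance, two jobs with processing times 2 and 3, each of size 0.6, form two batches; the machines finish them at times 2 and 3, and with \(T = 1\) the last delivery completes at time 4.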

Given an algorithm A on input L, the makespan of A is denoted as \(C_A(L)\). Then it is not difficult to get the following result:

$$\begin{aligned} C_A(L)=\max \{ C(B_i )+(b+1-i)T | 1 \le i \le b \}, \end{aligned}$$

where \(C(B_i)\) is the completion time of batch \(B_i\) and b is the number of batches. However, by the following lemma from Chang and Lee (2004) and Wang and Liu (2013), we only need to consider five cases, i.e., \(i=1,2,b-2, b -1,b \).
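The closed-form expression for \(C_A(L)\) agrees with the step-by-step definition of the delivery process, in which each trip starts as soon as its batch is finished and the transporter has returned. A quick randomized cross-check (all names ours):

```python
import random

def delivery_makespan(completions, T):
    """Step-by-step definition: trip i starts when batch i is done
    and the transporter is back; each trip takes T time units."""
    done = 0.0
    for c in completions:
        done = max(done, c) + T
    return done

def closed_form(completions, T):
    """C_A(L) = max_i { C(B_i) + (b + 1 - i) * T }, 1 <= i <= b."""
    b = len(completions)
    return max(c + (b - j) * T for j, c in enumerate(completions))

# The two definitions coincide on random nonnegative completion times.
random.seed(0)
for _ in range(1000):
    comp = sorted(random.uniform(0, 10) for _ in range(random.randint(1, 8)))
    T = random.uniform(0, 3)
    assert abs(delivery_makespan(comp, T) - closed_form(comp, T)) < 1e-9
```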

Lemma 2

For input L, \(C_{A}(L) = \max \{ C(B_i )+(b+1-i)T \mid i=1,2,b-2, b -1 , b \}\), where T is the time to travel from the machines to the destination and back.

Lemma 3

Chang and Lee (2004) For \(i=1,2\) and \( b \ge 3 +i\), if \(C_{A}(L) = C(B_i )+ (b+1-i) \cdot T \), we have \(p(B_i) \le 2T\).

Lemma 4

\(C(B_b) \le \frac{3}{2}C_M^{*}\) and \(C(B_{b -1} )\le C_M^{*}\) Wang and Liu (2013).

Proof

For the sake of completeness, we give a detailed proof here. In the above algorithm, we actually run the LS (list scheduling) algorithm on the two machines, so by Graham et al. (1979), we have

$$\begin{aligned} C(B_b) \le \frac{3}{2}C_M^{*}. \end{aligned}$$

It is not difficult to see that, except for the last batch, batches \(B_1, B_3, B_5,\ldots \) are assigned to one machine and \(B_2, B_4, B_6,\ldots \) to the other machine. Since \(p(B_1) \le p(B_2) \le \cdots \le p(B_b)\), we have

$$\begin{aligned} C(B_{b-1})= & {} p(B_{b-1}) + p(B_{b-3}) + \cdots + p(B_{b-2k+1}) \\&\le p(B_{b}) + p(B_{b-2}) +\cdots + p(B_{b-2k+2}), \end{aligned}$$

where \(1 \le k \le b/2 \). Hence \(C(B_{b -1} )\le C_M^{*}\). \(\square \)

Lemma 5

If \(b=1\), then \(C_A(L)\le \frac{3}{2}C^*\).

Proof

When \(b =1\), we have \(C_A(L)= C(B_1) + T\). By Lemma 4, we have \(C(B_1) \le \frac{3}{2}C_M^{*}\). Then by (1), we have

$$\begin{aligned} \frac{C_A(L)}{C^*} \le \frac{C(B_1) + T}{C_M^{*} + T} \le \frac{3}{2}. \end{aligned}$$

Hence we have this lemma. \(\square \)

Using the techniques in Chang and Lee (2004) and Wang and Liu (2013), it is not difficult to get the following result.

Lemma 6

In the above algorithm, if we run FFD algorithm for batching, then \(C \le \frac{3}{2}C^*\) if \(b^* \ge 8\).

Proof

By Lemma 2, at least one of the following equations holds.

  • \(C =C(B_b)+T \)

  • \(C = C(B_{b-1}) +2T \)

  • \(C= C(B_{b-2})+3T \)

  • \(C = C(B_{2})+(b -1) T \)

  • \( C= C(B_{1})+ b \cdot T \)

Case 1: \(C= C(B_b)+T\). By Lemma 4, since \(C(B_b) \le \frac{3}{2}C_M^*\), we have

$$\begin{aligned} \dfrac{C(B_b)+T}{C^*} \le \dfrac{\frac{3}{2}C_M^* + T}{C_M^* + T } \le \dfrac{3}{2} \end{aligned}$$

Case 2: \( C = C(B_{b-1}) +2T \).

By Lemma 4, we have \(C(B_{b-1}) \le C_M^*\).

Thus we can obtain:

$$\begin{aligned} \dfrac{C(B_{b-1}) +2T}{C^*} \le \dfrac{C(B_{b-1}) +2T}{\max \{b^* T, C_M^* + T\}} \le \dfrac{T}{b^* T}+\dfrac{C_M^* + T}{C_M^* + T } < \dfrac{3}{2} \end{aligned}$$

Case 3: \( C = C(B_{b-2})+3T \).

By Lemma 4 and the above algorithm, \(C(B_{b-2}) \le C(B_{b-1}) \le C_M^* \). Hence we have:

$$\begin{aligned} \dfrac{C(B_{b-2}) +3T}{C^{*}} \le \dfrac{C_M^* + T}{C_M^*+T} + \dfrac{2T}{b^{*} T} \le 1 + \dfrac{2T}{b^{*} T} < \frac{3}{2}, \end{aligned}$$

since \(b^{*} \ge 4\).

Case 4: \(C = C(B_{2})+(b -1) \cdot T \).

In this case we have \(b \ge 5\), otherwise at least one of the first three cases holds. By Lemma 3, we have \(p(B_2)\le 2T\).

By Lemma 1, we can get:

$$\begin{aligned} \dfrac{C(B_{2})+(b-1)T}{C^{*}} \le \dfrac{(b+1)T}{b^{*}T} \le \dfrac{\frac{11}{9}b^* + \dfrac{6}{9}+1}{b^*} \le \dfrac{11}{9}+\dfrac{15}{9b^{*}} \le \frac{3}{2}, \end{aligned}$$

since \(b^* \ge 6\).

Case 5: \(C = C(B_{1})+ b \cdot T \).

In this case we have \(b \ge 4\), otherwise at least one of Case 1, Case 2 and Case 3 holds. By Lemma 3, we have \(C(B_{1}) =p(B_1) \le 2T\).

Then we can get:

$$\begin{aligned} \dfrac{ C(B_{1})+b \cdot T}{C^{*}} \le \dfrac{(b+2)T}{b^{*}T}. \end{aligned}$$
  • When \(b^* =8\), by Lemma 1, \(b \le 10\), we have: \(\dfrac{(b+2)T}{b^{*}T} \le \frac{12T}{8T} = \frac{3}{2}\).

  • When \(b^* =9\), by Lemma 1, \(b \le 11\), thus we have: \(\dfrac{(b+2)T}{b^{*}T} \le \frac{13T}{9T} < \frac{3}{2}\).

  • When \(b^* \ge 10 \), we have \(\dfrac{C(B_{1})+b \cdot T}{C^{*}} \le \dfrac{11b^*/9 + 6/9 +2}{b^*} \le \dfrac{11}{9}+\dfrac{24}{9b^{*}} \le \frac{3}{2}\).

From the previous analysis, we can see that the lemma holds. \(\square \)
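The arithmetic in Case 5 can also be verified mechanically: under the FFD guarantee of Lemma 1 we have \(b \le \lfloor 11b^*/9 + 6/9 \rfloor\), and the ratio \((b+2)/b^*\) never exceeds 3/2 once \(b^* \ge 8\). A sketch of this check over a finite range, in exact rational arithmetic to avoid rounding issues (function name ours):

```python
import math
from fractions import Fraction

def worst_case5_ratio(limit=1000):
    """Largest value of (b + 2) / b_star over 8 <= b_star < limit, where
    b is the FFD bound floor(11*b_star/9 + 6/9) from Lemma 1."""
    worst = Fraction(0)
    for b_star in range(8, limit):
        b = math.floor(Fraction(11 * b_star + 6, 9))  # FFD batch-count bound
        worst = max(worst, Fraction(b + 2, b_star))
    return worst
```

The maximum is attained at \(b^* = 8\), where the ratio is exactly 3/2.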

4 \((\frac{3}{2}+\varepsilon )\)-Algorithm for problem \(P2\rightarrow D \)

We first give a new batching strategy to handle the case where there are at most seven batches in an optimal solution, then embed this strategy into the old framework and prove that the resulting approximation ratio is close to 1.5. Finally, combining the old algorithm with our new one, we prove that there is a \((1.5 + \epsilon )\)-approximation algorithm for the problem.

4.1 New batching strategy for the case \(b^* \le 7\)

In this subsection, we give a new batching strategy for the case where there are at most seven batches in an optimal solution. In general, we cannot know the exact assignment of jobs in an optimal solution. However, if a configuration of the batches in a feasible assignment is known, then we can construct an assignment of jobs into batches such that the total processing time of each batch, and the number of batches, are close to those of the known assignment.

Assume that some partial information of a feasible assignment is known:

  • the makespan value \(\bar{C}\),

  • the number of batches in the assignment \(\bar{b}\),

  • and a vector \((\bar{p}(B_1^{'}), \bar{p}(B_2^{'}),\ldots , \bar{p}(B_{\bar{b}}^{'}) )\), where \(\bar{p}(B_i^{'})\) is the total processing time in batch \(B_i^{'}\).

We will assign all jobs into batches \(B_1, \ldots , B_b\), where \( b \le \bar{b} +1\), such that the total processing time \(p(B_i )\) in \(B_i\) is at most \( \bar{p}(B_i^{'})\) and the total size \(s(B_i)\) is at most one for all \(1 \le i \le \bar{b} \), and \(p(B_b) \le \frac{\varepsilon \bar{C}}{2} \) if \(b= \bar{b} +1\), where \(\varepsilon >0\) is a given parameter. The details are given in the following algorithm.

figure b

Linear program for assigning jobs into \(\bar{b}\) batches: after assigning the large jobs into \(\bar{b}\) batches, let \(S_i\) be the total size of the large jobs in \(B_i\) and let \(P_i\) be their total processing time. Given \(n'\) small jobs, the following linear program is used to assign the small jobs to the \(\bar{b} \) batches.

$$\begin{aligned}&\sum _{j=1}^{\bar{b}} x_{ij} =1,&i=1,2,\ldots , n^{'} \end{aligned}$$
(2)
$$\begin{aligned}&\sum _{i=1}^{n'} x_{ij}p_i + P_j \le \bar{p}(B_j^{'}),&j=1,2,\ldots ,\bar{b} \end{aligned}$$
(3)
$$\begin{aligned}&\sum _{i=1}^{n'} x_{ij}s_i + S_j \le 1,&j=1,2,\ldots ,\bar{b} \nonumber \\&x_{ij} \ge 0&\forall i,j \ge 1 \end{aligned}$$
(4)

In an integral solution, variable \(x_{ij} =1\) means that small job i is assigned to batch j.

Constraints (2) ensure that each small job is assigned to some batch. Constraints (3) ensure that the total processing time of the jobs assigned to batch j is bounded by \(\bar{p}(B_j^{'})\). Constraints (4) ensure that the total size of the jobs assigned to each batch is bounded by 1.
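For illustration, the LP (2)–(4) can be assembled and handed to an off-the-shelf solver. The sketch below uses scipy.optimize.linprog with a zero objective, since only feasibility matters; the function name and data layout are ours:

```python
import numpy as np
from scipy.optimize import linprog

def assign_small_jobs(p, s, P, S, p_bar):
    """Solve LP (2)-(4): fractionally assign n' small jobs (processing
    times p, sizes s) to b_bar batches already holding large jobs with
    processing loads P, size loads S, and budgets p_bar.
    Returns x as an (n', b_bar) array, or None if infeasible."""
    n, b = len(p), len(p_bar)
    # Variables x[i][j], flattened row-major: index i*b + j.
    A_eq = np.zeros((n, n * b)); b_eq = np.ones(n)
    for i in range(n):
        A_eq[i, i * b:(i + 1) * b] = 1.0          # (2) sum_j x_ij = 1
    A_ub = np.zeros((2 * b, n * b)); b_ub = np.zeros(2 * b)
    for j in range(b):
        for i in range(n):
            A_ub[j, i * b + j] = p[i]             # (3) processing budget
            A_ub[b + j, i * b + j] = s[i]         # (4) size budget
        b_ub[j] = p_bar[j] - P[j]
        b_ub[b + j] = 1.0 - S[j]
    res = linprog(np.zeros(n * b), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(n, b) if res.status == 0 else None
```

On a toy instance with two small jobs and two batches that each still have room, the solver returns an assignment matrix whose rows sum to 1, as required by constraints (2).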

Lemma 7

If \(\bar{b} \le 7\), there are at most \(64/\epsilon + 112\) large jobs.

Proof

If there were more than 112 large jobs with size at least 1/16, then, since each batch of capacity 1 can hold at most 16 such jobs, \(\bar{b}\) would be larger than 7. If there were more than \(64/\epsilon \) large jobs with processing time at least \(\epsilon \bar{C}/32\), then the total processing time would exceed \(2\bar{C}\), so \(C_M^* \ge P/2 > \bar{C}\) and the makespan of an optimal solution would be larger than \(\bar{C}\). \(\square \)
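The two counts behind this proof are easy to check directly: seven unit-capacity batches hold at most \(7 \cdot 16 = 112\) jobs of size at least 1/16, and \(64/\epsilon\) jobs of processing time \(\epsilon \bar{C}/32\) already have total processing time \(2\bar{C}\). A sketch in exact arithmetic (the sample values of \(\epsilon\) are ours, chosen only for illustration):

```python
from fractions import Fraction

def large_job_bound(eps):
    """Lemma 7 bound on the number of large jobs when b_bar <= 7."""
    by_size = 7 * 16                 # jobs of size >= 1/16 in 7 unit batches
    by_time = Fraction(64, 1) / eps  # jobs of length >= eps*C_bar/32
    return by_time + by_size

# 64/eps jobs of length eps*C_bar/32 sum to exactly 2*C_bar, so one more
# such job forces C_M^* >= P/2 > C_bar.
for eps in (Fraction(1, 10), Fraction(1, 100)):
    C_bar = Fraction(1)
    assert (64 / eps) * (eps * C_bar / 32) == 2 * C_bar
```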

Lemma 8

In a basic feasible solution of the above LP, there are at most \(2\bar{b}\) small jobs such that \( 0< x_{ij} < 1\) for some j.

Proof

In a basic feasible solution, let x be the number of small jobs such that \(x_{ij} =1\) for some j, and let y be the number of small jobs such that \( 0< x_{ij} < 1\) for some j. It is not difficult to see that

$$\begin{aligned} x + y =n^{'}. \end{aligned}$$

On the other hand, the linear program contains at most \(n^{'} + 2\bar{b}\) constraints (besides nonnegativity) and \(\bar{b}n^{'}\) variables. So in a basic feasible solution, there are at most \(n^{'} + 2\bar{b}\) nonzero variables, and each of the y fractional jobs contributes at least two of them, i.e.,

$$\begin{aligned} x + 2 y \le n^{'} + 2\bar{b}. \end{aligned}$$

Hence we have \(y \le 2\bar{b}\). \(\square \)

Lemma 9

If there are some fractional jobs in a basic feasible solution of the linear program, then one extra batch is enough, and \(p(B_b) \le \frac{\varepsilon \bar{C}}{2}\).

Proof

By Lemma 8, there are at most \(2\bar{b}\) small jobs with \(0< x_{ij} <1 \) for some j. Since each small job has size at most 1/16 and \(\bar{b} \le 7\), their total size is at most \(14/16 < 1\), so all these small jobs can be packed into one batch. Hence one extra batch for the fractional small jobs is enough.

By the definition of small jobs and Lemma 8, \(\bar{b} \le 7\), we have

$$\begin{aligned} p(B_b) \le 2\bar{b} \cdot \frac{\varepsilon \bar{C}}{32} < \frac{\varepsilon \bar{C}}{2}. \end{aligned}$$

Hence we have this lemma. \(\square \)

4.2 A new upper bound \(1.5 + \epsilon \)

figure c

Lemma 10

The above algorithm returns at least one feasible solution.

Proof

It is not difficult to see that an optimal solution with \(b^*\) batches and batching vector \((p(B_1^*), p(B_2^*),\ldots , p(B_{b^*}^*))\) yields a feasible solution of LP(\(\bar{C}, b^*, \bar{p}\) ), where \(\bar{p}\) is the vector whose i-th component is \( (k+1)\frac{\epsilon }{32} \bar{C}\) with k chosen so that \(k \cdot \frac{\epsilon }{32} \bar{C} < p(B_i^*) \le (k+1)\frac{\epsilon }{32} \bar{C}\). Since our algorithm tries all possible values for each component of \(\bar{p}\), the optimal solution must be covered by one of the linear programs LP(*,*,*) defined in the last subsection. \(\square \)
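The rounding of the optimal batching vector used in this proof (each \(p(B_i^*)\) rounded up to the next multiple of \(\frac{\epsilon }{32}\bar{C}\)) can be written as follows; the function name is ours:

```python
import math

def round_up_vector(p_star, eps, C_bar):
    """Round each optimal batch load up to the next multiple of the grid
    step eps * C_bar / 32; the result is one of the enumerated vectors."""
    step = eps * C_bar / 32
    return [math.ceil(p / step) * step for p in p_star]
```

Each component exceeds the original load by less than one grid step, which is exactly what the proof of Lemma 12 relies on.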

Lemma 11

The above algorithm runs in polynomial time if \(b^* \le 7\).

Proof

By Lemma 7, the number of large jobs is bounded by \(O(1/\epsilon )\). Given a vector \(\bar{p}\), the number of possible assignments of the large jobs to batches is at most \((b^*)^{O(1/\epsilon )} \le 7^{O(1/\epsilon )} \). The number of possible vectors \(\bar{p}\) is at most \(O((1/\epsilon )^{b^*}) \le O(1/\epsilon ^7) \). Moreover, it takes \(O(n^c)\) time to solve a linear program, for some constant c. Hence we have this lemma. \(\square \)

Lemma 12

If \(2 \le b ^* \le 7\), then the makespan by the above algorithm is at most \((1.5 + \epsilon )C^*\).

Proof

Given an optimal solution with batching vector \((p(B_1^* ), p(B_2^*),\ldots , p(B_{b^*}^*))\), we consider the linear program LP\((\bar{C}, b^*, \bar{p})\), where \(\bar{C} \le 2C^*\) and the i-th component \(\bar{p}_i\) is at least \(p(B_i^*)\) and at most \(p(B_i^*) + \epsilon \bar{C}/32\). Then we consider a feasible solution of this LP\((\bar{C}, b^*, \bar{p})\) and let C be the makespan of the schedule generated from it. By Lemma 2, at least one of the following equations holds.

  • \(C =C(B_b)+T \)

  • \(C = C(B_{b-1}) +2T \)

  • \( C = C(B_{1})+ b \cdot T \)

  • \(C = C(B_{b-2})+3T \)

  • \(C = C(B_{2})+(b -1) \cdot T \)

Case 1: \(C= C(B_b)+T \). We can prove \(C \le \frac{3}{2}C^*\) by an argument similar to Case 1 in Lemma 6.

Case 2: \( C= C(B_{b-1}) +2T\). We can also prove \(C \le \frac{3}{2}C^*\) by an argument similar to Case 2 in Lemma 6.

Case 3: \(C= C(B_{1})+ b \cdot T\). In this case, we can assume \(b \ge 3\), otherwise Case 1 or Case 2 will occur. Then we have

$$\begin{aligned} C(B_1) = p(B_1) \le \frac{P}{3} \le \frac{2C_M^*}{3}\le \frac{2C^*}{3} \end{aligned}$$

and \(C^* \ge \frac{p(B_1^*)}{2} + b^* \cdot T\) since \(C^*(B_1^*) \ge \frac{p(B_1^*)}{2}\). If \(b =b^*\), it means that we don’t use an extra batch in Algorithm \(A_2\). Due to \(\bar{C} \le 2 C^*\),

$$\begin{aligned} C(B_1) = p(B_1) \le p(B_1^*) + \epsilon \bar{C}/32 < p(B_1^*) + \epsilon C^*, \end{aligned}$$

due to \( C^* \ge \frac{p(B_1^*)}{2} + b^* \cdot T \), we have

$$\begin{aligned} \frac{C(B_1) + b^* T}{C^*} \le \frac{p(B_1)/2 - \epsilon C^* + b^* T}{\frac{p(B_1^*)}{2} + b^* \cdot T } + \frac{p(B_1)/2 + \epsilon C^* }{C^*} \le 1 + \frac{1}{3} + \epsilon . \end{aligned}$$

Otherwise, by Lemma 9, we have \(b= b^* +1\) and the extra batch has total processing time at most \(\varepsilon \bar{C}/2\). Since the batches are scheduled in non-decreasing order of processing time and \(\bar{C} \le 2 C^*\), we have

$$\begin{aligned} C(B_1) = p(B_1) \le \epsilon C^*. \end{aligned}$$

Due to \(b^* \ge 2\) and \(C^* \ge b^* T\), we have

$$\begin{aligned} \frac{C(B_1) + (b^* +1) T}{C^*} \le \epsilon + 1.5. \end{aligned}$$

Case 4: \( C = C(B_{b-2})+3T \). In this case, we have \(b \ge 4\), otherwise one of the first three cases happens. If \(b^* \ge 4\), then we can prove \( C/C^* \le 1.5\) by an argument similar to Case 3 in Lemma 6. Otherwise \(b^* = 3\) and, by Lemma 9, \(b=4\). By the algorithm we use, \(C(B_2) = p(B_2) \le \frac{2C_M^*}{3}\). Hence we have:

$$\begin{aligned} \dfrac{C(B_{2}) +3T}{C^{*}} \le \dfrac{2C_M^*/3 + 2T/3}{C_M^*+T} + \dfrac{7T/3}{b^{*} T} \le \frac{2}{3} + \frac{7}{9} = \frac{13}{9} < \frac{3}{2}, \end{aligned}$$

since \(b^{*} = 3 \).

Case 5: \(C = C(B_{2})+(b -1) \cdot T\). In this case we have \(b \ge 5\), otherwise one of Case 1, Case 2 and Case 4 will happen. By Lemma 3, we have \(p(B_2)\le 2T\).

By Lemma 9, we can get:

$$\begin{aligned} \dfrac{C(B_{2})+(b-1)T}{C^{*}} \le \dfrac{(b+1)T}{b^{*}T} \le \frac{b^* +2}{b^*} \le \frac{3}{2}, \end{aligned}$$

where \(b^* \ge b -1 \ge 4\).

From the previous analysis, we can see that the lemma holds. \(\square \)

Theorem 1

There is a polynomial-time algorithm with makespan at most \((1.5 + \epsilon )C^*\), where \(\epsilon >0\) can be arbitrarily small.

Proof

Consider the following algorithm.

figure d

Computing schedule \(\delta _1\) takes \(O( n \log n)\) time. By Lemma 11, it takes polynomial time to compute \(\delta _i\) for \( 2 \le i \le 7\). So the above algorithm runs in polynomial time.

By Lemmas 5, 6, and 12, the makespan of the above algorithm is at most \((1.5 + \epsilon ) C^*\). \(\square \)

5 Concluding remarks

In this paper, we showed that the approximation ratio of the old algorithms is 1.5 for the case where there are at least eight batches in an optimal solution, and we gave a new algorithm with approximation ratio \(1.5+\varepsilon \) for the remaining case. It is known that there is no algorithm with approximation ratio less than 1.5 unless P=NP. Whether there is a 1.5-approximation algorithm for the case where there are at most seven batches in an optimal solution is left as an open question.