1 Introduction

We consider a scheduling model with m identical machines \(\mathcal{M} = \{M_1, M_2,\ldots , M_{m}\}\) and n jobs \(\mathcal{J}=\{J_{1}, J_{2}, \ldots , J_{n}\}\). Each job has a processing time \(p_j\) and a weight \(w_j\), and each job has to be assigned to some machine. The objective is to minimize the maximum weighted completion time of the jobs. Let \(C_{j}\) be the completion time of job \(J_{j}\). For a parameter \(p\in \mathbb {R}\), the \(L_{p}\)-norm of the weighted completion time vector \((w_{1}C_{1},\ldots ,w_{n}C_{n})\) is defined as \((\sum _{j=1}^{n}(w_{j}C_{j})^{p})^{\frac{1}{p}}\). When \(p = 1\), this is the classical total weighted completion time problem, which dates back to the work of Smith (1956). For minimizing the total weighted completion time, Sahni (1976) gave a fully polynomial time approximation scheme (FPTAS) in the identical parallel machine setting with fixed m. Kawaguchi and Kyan (1986) proved that list scheduling in nonincreasing order of \(w_{j}/p_{j}\) is an approximation algorithm with approximation ratio \(\frac{1}{2}(1+\sqrt{2})\) when m is not fixed. Skutella and Woeginger (2000) designed a polynomial time approximation scheme (PTAS) for this problem when m is not fixed. In the setting with release dates, Phillips et al. (1998) showed that a given preemptive schedule on a single machine can be converted into a nonpreemptive schedule while increasing the total weighted completion time by at most a factor of 2. Afrati et al. (1999) proposed a PTAS for minimizing the total weighted completion time with release dates on identical parallel machines. For the more general setting with unrelated parallel machines, i.e., where job \(J_{j}\) has processing time \(p_{ij}\) on machine \(M_i\), Skutella (2001) designed a 3/2-approximation algorithm based on a convex programming relaxation. Bansal et al. (2016) proposed the first \((3/2 - c)\)-approximation algorithm for this problem, for a constant \(c=10^{-7}>0\); they introduced a novel rounding scheme yielding strong negative correlation for the first time. Li (2020) used strong negative correlation as a black box and improved the approximation ratio to \(3/2 - 1/6000\) based on a simpler time-indexed linear programming relaxation. Im and Shadloo (2020) achieved a significantly stronger negative correlation via iterative fair contention-resolution techniques and improved the approximation ratio to 1.448. Baveja et al. (2023) improved the negative correlation value of Bansal et al. (2016) from 1/108 to 1/27, thereby further improving the constant c in the algorithm of Bansal et al. (2016). When \(p = +\infty \), the objective is the maximum value of \(w_{j}C_{j}\). For minimizing the weighted makespan (the maximum weighted completion time), Feng and Yuan (2007) first introduced the weighted makespan \(WC_{\max }\) in a single-machine scheduling problem. Li (2015) studied the online single machine scheduling problem of minimizing the weighted makespan \(WC_{\max }\) in which jobs arrive over time. For the general online problem in Li (2015), Chai et al. (2018) provided two online algorithms with the best-possible competitive ratio 2. Recently, Lu et al. (2021) studied the single machine scheduling problem with rejection to minimize the weighted makespan; they showed that this problem is NP-hard and proposed an FPTAS for it.

When all job weights are equal to 1, minimizing the weighted makespan on parallel machines is exactly minimizing the makespan. This is one of the most classical scheduling problems on parallel machines; Graham (1966, 1969) provided the first approximation algorithms. For earlier approximation schemes for makespan minimization on parallel machines, see Hochbaum and Shmoys (1987), Hochbaum (1997), Alon et al. (1998). Jansen et al. (2020) presented an efficient polynomial time approximation scheme (EPTAS) for scheduling on identical parallel machines with an improved asymptotic running time. Recently, Kones and Levin (2019) devised a unified framework for designing EPTASes for general makespan minimization problems on parallel machines.

A \(\rho \)-approximation algorithm for a minimization problem is a polynomial time algorithm that always finds a feasible solution with objective value at most \(\rho \) times the optimal value. The infimum of the values \(\rho \) for which an algorithm is a \(\rho \)-approximation is called the approximation ratio or the performance guarantee of the algorithm. A PTAS for a given problem is a family of approximation algorithms that contains a \((1 + \varepsilon )\)-approximation algorithm for every \(\varepsilon > 0\). An EPTAS is a PTAS whose time complexity is bounded by \(f(\frac{1}{\varepsilon })\cdot poly(n)\), where f is some computable (not necessarily polynomial) function and poly(n) is a polynomial in the length of the (binary) encoding of the input. An FPTAS is an EPTAS in which f is a polynomial in \(\frac{1}{\varepsilon }\).

Motivated by the study of Lu et al. (2021), in this paper we first present a (\(2-\frac{1}{m}\))-approximation algorithm for the weighted makespan scheduling problem on identical parallel machines. Then we develop a randomized EPTAS for this problem, and we also design a randomized FPTAS for the special case in which m is fixed. The remainder of this paper is organized as follows. In Sect. 2, we give the definition of the weighted makespan scheduling problem and some preliminaries that will be used throughout the paper. In Sect. 3, we present a (\(2-\frac{1}{m}\))-approximation algorithm for the problem. In Sect. 4, we present approximation schemes for the problem with a constant number of weight types. In Sect. 5, we present the randomized approximation schemes for this problem. We give some conclusions and possible future research directions in the last section.

2 Problem formulation and preliminaries

In this paper, we consider the scheduling problem with m identical machines \(\mathcal{M} = \{M_1, M_2,\ldots , M_{m}\}\) and n jobs \(\mathcal{J}=\{J_{1}, J_{2}, \ldots , J_{n}\}\). Each job has a processing time \(p_j\) and a weight \(w_j\). Without loss of generality, we assume that all \(p_j\) and \(w_j\) values are positive integers. A schedule is an assignment of the n jobs to the m machines, i.e., a mapping \(\sigma : \mathcal{J}\rightarrow \mathcal{M}\). Furthermore, let \(C_j\) be the completion time of \(J_j\). Let \(C_{\max }=\max \{C_{j}|J_{j}\in \mathcal{J}\}\) and \(WC_{\max } = \max \{w_{j}C_{j}|J_{j}\in \mathcal{J}\}\) denote the makespan and the weighted makespan (the maximum weighted completion time) of the jobs, respectively. The objective is to minimize the weighted makespan of the jobs. Using the standard notation for scheduling problems, our problem can be denoted by \(P||WC_{\max }\), and we let \(P_{m}||WC_{\max }\) denote the special case in which m is fixed. When \(m=1\), Li (2015) showed that the LW (Largest Weight first) rule yields an optimal schedule: whenever the machine becomes idle, schedule the remaining job with the largest weight.
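To make the LW rule concrete, here is a minimal Python sketch (our own illustration, not code from Li (2015)); on a single machine without release dates, the rule amounts to processing jobs in nonincreasing order of weight:

```python
def lw_schedule(jobs):
    """LW (Largest Weight first) rule on a single machine.
    jobs: list of (p_j, w_j) pairs; returns the weighted makespan WC_max."""
    t, wc_max = 0, 0
    for p, w in sorted(jobs, key=lambda pw: -pw[1]):  # largest weight first
        t += p                       # completion time C_j of the current job
        wc_max = max(wc_max, w * t)  # track max_j w_j * C_j
    return wc_max
```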

Clearly, our problem is NP-hard, since it is a generalization of makespan minimization. For a set of jobs \(\mathcal J^{\prime }\), let \(p(\mathcal{J^{\prime }})=\sum _{J_{j}\in \mathcal{J^{\prime }}}p_{j}\) be the total processing time of \(\mathcal J^{\prime }\). Let \(w_{1}>w_{2}>\cdots >w_{\tau }\) be the \(\tau \) different job weights appearing among the jobs \(J_{j}\in \mathcal{J}\), and let \(\mathcal{J}^{t}=\{J_{j}\in \mathcal{J}|w_{j}=w_{t}\}\) be the set of jobs with weight \(w_{t}\). Then we have the following lower bound for our problem:

$$\begin{aligned} WC_{\max }\ge w_{t}\frac{p(\mathcal{J}^{t})}{m}, ~\mathrm{~for~each}~t=1,\ldots ,\tau . \end{aligned}$$
(1)

We give an example to illustrate the problem. Consider an instance I of \(P||WC_{\max }\) with two machines and four jobs: \(J_{1}=(w_{1},p_{1})=(2,1)\), \(J_{2}=(w_{2},p_{2})=(2,3)\), \(J_{3}=(w_{3},p_{3})=(3,2)\), \(J_{4}=(w_{4},p_{4})=(4,1)\). An optimal solution assigns \(J_{4}\) followed by \(J_{2}\) to the first machine and \(J_{3}\) followed by \(J_{1}\) to the second machine. The objective value is \(WC_{\max }=w_{2}C_{2}=2\cdot 4=8\). In this paper, let \(\mathbb {N}\) and \(\mathbb {N}_0\) denote the sets of positive and nonnegative integers, respectively. Let \(\sigma ^{*}\) be an optimal schedule and OPT the corresponding objective value of \(\sigma ^{*}\).
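As a quick sanity check, the following Python snippet (our own illustration; the function and variable names are hypothetical) evaluates the objective value of the schedule above and the lower bound (1) for instance I:

```python
def wc_max(assignment):
    """assignment: per-machine job sequences; each job is a (w, p) pair."""
    best = 0
    for machine in assignment:
        t = 0
        for w, p in machine:   # jobs are processed in the given order
            t += p             # completion time of this job
            best = max(best, w * t)
    return best

# instance I: J1=(2,1), J2=(2,3), J3=(3,2), J4=(4,1)
sigma = [[(4, 1), (2, 3)],     # M1: J4 then J2
         [(3, 2), (2, 1)]]     # M2: J3 then J1
print(wc_max(sigma))           # 8, attained by w_2 * C_2

# lower bound (1): WC_max >= w_t * p(J^t) / m for each weight class
jobs, m = [(2, 1), (2, 3), (3, 2), (4, 1)], 2
lb = max(w * sum(p for w2, p in jobs if w2 == w) / m for w in {w for w, _ in jobs})
print(lb)                      # 4.0, consistent with the optimal value 8
```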

3 A (\(2-\frac{1}{m}\))-approximation algorithm

In this section, we provide a (\(2-\frac{1}{m}\))-approximation algorithm for problem \(P||WC_{\max }\).

Algorithm \(A_{1}\)

Step 1: Sort all jobs such that \(w_{1} \ge w_{2} \ge \cdots \ge w_{n}\).

Step 2: Whenever a machine becomes idle, assign the next job in the list to that machine (i.e., the list scheduling (LS) algorithm).
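A minimal Python sketch of algorithm \(A_{1}\) (our own illustration): after sorting by nonincreasing weight, the LS step always gives the next job to the currently least loaded machine, which a min-heap implements directly.

```python
import heapq

def algorithm_a1(jobs, m):
    """jobs: list of (w_j, p_j) pairs; returns the weighted makespan OUT."""
    jobs = sorted(jobs, key=lambda wp: -wp[0])  # Step 1: w_1 >= ... >= w_n
    loads = [(0, i) for i in range(m)]          # min-heap of (load, machine)
    heapq.heapify(loads)
    out = 0
    for w, p in jobs:                           # Step 2: list scheduling
        t, i = heapq.heappop(loads)             # least loaded machine
        t += p                                  # the job completes at time t
        out = max(out, w * t)
        heapq.heappush(loads, (t, i))
    return out
```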

Let I be the original instance. Let \(\sigma \) be the schedule obtained from algorithm \(A_{1}\) for instance I, and let OUT be the corresponding objective value of \(\sigma \). We have the following theorem.

Theorem 3.1

\(OUT\le (2-\frac{1}{m})OPT \).

Proof

For any job \(J_{j}\), let \(C_j\) and \(C_{j}^{*}\) be the completion time of \(J_j\) in \(\sigma \) and \(\sigma ^{*}\), respectively. Also, let \(C_{\max }=\max _{j}C_{j}\) and \(C_{\max }^{*}=\max _{j}C_{j}^{*}\) be the makespan of \(\sigma \) and \(\sigma ^{*}\), respectively. Let job \(J_c\) be the last job assigned to the maximum load machine. Clearly, we have

$$\begin{aligned} C_{\max }\le \frac{\sum _{j=1}^{n}p_{j}-p_{c}}{m}+p_{c}= \frac{\sum _{j=1}^{n}p_{j}}{m}+\left( 1-\frac{1}{m}\right) p_{c}\le \frac{\sum _{j=1}^{n}p_{j}}{m}+\left( 1-\frac{1}{m}\right) \max _{j}p_{j}\le \left( 2-\frac{1}{m}\right) C_{\max }^{*}. \end{aligned}$$

In the schedule \(\sigma \), let \(J_{l}\) be the job such that \(w_{l}C_{l}=OUT\), and let \(S_{l}\) be the starting time of job \(J_{l}\). Without loss of generality, we may suppose that all jobs start no later than \(S_{l}\): any job starting after \(S_{l}\) can be deleted from I. Let the resulting instance be \(I^{1}\), and let \(OUT^{1}\) and \(OPT^{1}\) be the value of the schedule obtained from algorithm \(A_{1}\) and the optimal value for instance \(I^{1}\), respectively. Then \(OUT=OUT^{1}\) and \(OPT\ge OPT^{1}\), so \(OUT/OPT\le OUT^{1}/OPT^{1}\); that is, the deletion does not decrease the ratio OUT/OPT. See Fig. 1 for a visualization, where the shaded rectangle represents the set of jobs whose starting times are after \(S_{l}\). Since every job starts no later than \(S_{l}\) and algorithm \(A_{1}\) assigns jobs in nonincreasing order of weight, we have \(w_{l}\le w_{j}\) for every job \(J_{j}\). Therefore, we have

$$\begin{aligned} OUT=w_{l}C_{l}\le w_{l}C_{\max }\le (2-\frac{1}{m})w_{l}C_{\max }^{*}\le (2-\frac{1}{m})w_{l^{*}}C_{\max }^{*}\le (2-\frac{1}{m})OPT, \end{aligned}$$

where \(w_{l^{*}}\) is the weight of the job with largest completion time in schedule \({\sigma }^{*}\). This completes the proof of Theorem 3.1. \(\square \)

Fig. 1 A visualization of instances I and \(I^{1}\)

4 Approximation schemes for a constant number of weight types

In many practical settings, a parameter takes only a constant number of distinct values. For example, it is natural that many machines are of the same type. Gehrke et al. (2018) considered scheduling on unrelated machines of few different types, where each job \(J_j\) has processing time \(p_{ij}\) on machine \(M_i\). For the case where the number K of machine types is constant, i.e., any two machines \(M_i\) and \(M_{i^{\prime }}\) of the same type satisfy \(p_{ij}=p_{i^{\prime }j}\) for all jobs \(J_j\), they presented a PTAS. Subsequently, Jansen and Maack (2019) presented an EPTAS for scheduling on unrelated machines of few different types. For the case \(K = 2\), Imreh (2003) designed greedy algorithms with approximation ratios \(2 + (m_1-1)/m_2\) and \(4-2/m_1\), and Bleuse et al. (2015) presented an algorithm with approximation ratio \(4/3+3/m_2\), where \(m_i\) is the number of machines of type \(i=1,2\). Moreover, Raravi and Nélis (2012) designed a PTAS for the case \(K=2\). In this section, we develop an EPTAS for \(P||WC_{\max }\) and an FPTAS for \(P_{m}||WC_{\max }\) when the jobs have a constant number of weight types.

We can assume that \(w_j \le 1\) for all jobs \(J_j\in \mathcal{J}\), since all weights can be divided by the maximum weight value. In this section, we consider jobs with a constant number of weight types. Let \(0< \delta < 1\) be a real constant and let \(\Delta \in \mathbb {N}\) be a constant. Without loss of generality, we assume that all job weights \(w_j\) are of the form \(\delta ^h\) with \(h \in \{0, 1, \ldots , \Delta \}\) in the instance I of problem \(P||WC_{\max }\) and \(P_{m}||WC_{\max }\). Let \(\gamma \) be an arbitrarily small constant with \(0< \gamma < 1\), and let \({\hat{\gamma }}=\frac{\delta ^{\Delta }\cdot \gamma }{4\Delta +5}\) be chosen such that \(\frac{1}{{\hat{\gamma }}}\) is an integer.

The main idea of our approximation schemes for a constant number of weight types is as follows. First, we round the processing time of each job and merge the small jobs of the original instance I; the rounded instance is \({\hat{I}}\). All jobs with the same weight and the same rounded processing time then belong to the same job type, and by the construction of \({\hat{I}}\) the number of different job types is a constant. Next, we determine an optimal solution to instance \({\hat{I}}\). Based on the optimal solution to \({\hat{I}}\), we construct a feasible solution to the original instance I. We show that the value of the constructed feasible solution is at most \(1+\gamma \) times the optimal value of the original problem.

4.1 EPTAS

In our EPTAS, we first apply algorithm \(A_{1}\) to obtain a solution with value OUT. By Theorem 3.1, we have \(OPT\le OUT\le (2-\frac{1}{m})OPT \le 2OPT\), or equivalently \(OUT/2\le OPT\le OUT\). Let \(L=OUT\); then L is an upper bound on OPT. Moreover, \(t^{*}\le \frac{1}{\delta ^{\Delta }}WC_{t^{*}}\le \frac{1}{\delta ^{\Delta }}OPT\le \frac{1}{\delta ^{\Delta }}L\), where \(t^{*}\) is the makespan of \(\sigma ^{*}\) and \(WC_{t^{*}}\) is the weighted completion time of the job attaining this makespan in \(\sigma ^{*}\). Let \(f=\frac{1}{\delta ^{\Delta }}\), which is also a constant. Hence, we may assume that \(p_{j}\le fL\) for \(j=1,\ldots ,n\).

Let \(\mathcal{J}_{0}=\{J_{j}\in \mathcal{J}|p_{j}\le {\hat{\gamma }}{f}L\}\) be the set of small jobs and \(\mathcal{J}\setminus \mathcal{J}_{0}\) be the set of big jobs. Furthermore, we divide the big jobs in \(\mathcal{J}{\setminus } \mathcal{J}_{0}\) into \(\kappa \) subsets \(\mathcal{J}_{1}, \mathcal{J}_{2}, \ldots , \mathcal{J}_{\kappa }\) such that

$$\begin{aligned} \kappa =\frac{1-{\hat{\gamma }}}{{\hat{\gamma }}^{2}} \quad \textrm{and}\quad \mathcal{J}_{k}=\{J_{j}\in \mathcal{J}\,|\, {\hat{\gamma }}{f}L+(k-1){\hat{\gamma }}^{2}{f}L<p_{j}\le {\hat{\gamma }}{f}L+k{\hat{\gamma }}^{2}{f}L\},\quad \textrm{for}~k=1,2,\ldots , \kappa . \end{aligned}$$

Based on the instance I, we construct a new instance \({\hat{I}}\). For the jobs in \(\mathcal{J}_{0}\), we construct a job set \(\hat{\mathcal{J}}_{0}\) which, for each weight \(w_{h}=\delta ^{h}\) \((h=0,1,\ldots ,\Delta )\), contains

$$\begin{aligned} \left\lceil \frac{p(S_{h})}{{\hat{\gamma }}{f}L}\right\rceil \end{aligned}$$

small jobs with processing time \({\hat{\gamma }}{f}L\) and weight \(w_{h}\), where \(S_{h}\) denotes the set of small jobs of weight \(w_{h}\) (see the partition below). For each job \(J_{j}\in \mathcal{J}_{k}\) \((k=1,2,\ldots ,\kappa )\), we construct a job \({\hat{J}}_{j}\in \hat{\mathcal{J}}_{k}\) with the same weight and with processing time

$$\begin{aligned} {\hat{p}}_{j}=\left\lceil \frac{p_{j}}{{\hat{\gamma }}^{2}{f}L}\right\rceil \cdot {\hat{\gamma }}^{2}{f}L. \end{aligned}$$

By the construction of \(\mathcal{J}_{k}\), we have \(\frac{1}{{\hat{\gamma }}}+(k-1)<\frac{p_{j}}{{\hat{\gamma }}^{2}{f}L}\le \frac{1}{{\hat{\gamma }}}+k\). Thus, since \(\frac{1}{{\hat{\gamma }}}\) is an integer, we have

$$\begin{aligned} {\hat{p}}_{j}={\hat{\gamma }}{f}L+k{\hat{\gamma }}^{2}{f}L \end{aligned}$$

for every job \({\hat{J}}_{j}\in \hat{\mathcal{J}}_{k}\).

Let \(\hat{\mathcal{J}}=\cup _{k=0}^{\kappa }\hat{\mathcal{J}}_{k}\) be the new job set and let \({\hat{n}}=|\hat{\mathcal{J}}|\) be the corresponding number of jobs in \(\hat{\mathcal{J}}\). By the definition of \(\mathcal{J}_{k}\) \((k=1,2,\ldots ,\kappa )\), we also have

$$\begin{aligned} p_{j}\le {\hat{p}}_{j}\le p_{j}+{\hat{\gamma }}^{2}{f}L\le (1+{\hat{\gamma }})p_{j}, \end{aligned}$$

where the last inequality follows from \(p_{j}>{\hat{\gamma }}{f}L\) for each job \(J_{j}\in \mathcal{J}_{k}\). In the following, we partition the small jobs into \(\Delta +1\) subsets \({S}_0, {S}_1,\ldots ,{S}_{\Delta }\) (respectively \({\hat{S}}_0, {\hat{S}}_1,\ldots ,{\hat{S}}_{\Delta }\)), where the jobs in \({S}_{h}\) (respectively \({\hat{S}}_{h}\)) have the same weight \(w_{h}=\delta ^{h}\) \((h = 0, 1,\ldots ,\Delta )\) in instance \(I\) (respectively \({\hat{I}}\)).
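The construction of \({\hat{I}}\) can be summarized by the following Python sketch (our own illustration; `gamma_hat`, `f` and `L` are the quantities defined above, and the per-weight merging of small jobs follows the partition into the sets \(S_{h}\)):

```python
import math

def build_rounded_instance(jobs, gamma_hat, f, L):
    """jobs: list of (w_j, p_j) pairs; returns the job list of instance I-hat."""
    unit = gamma_hat * f * L                 # length of a merged small job
    small_total, rounded = {}, []
    for w, p in jobs:
        if p <= unit:                        # small job: accumulate per weight
            small_total[w] = small_total.get(w, 0) + p
        else:                                # big job: round p up to the grid
            k = math.ceil((p - unit) / (gamma_hat * unit))
            rounded.append((w, unit + k * gamma_hat * unit))  # = p-hat_j
    for w, total in small_total.items():     # replace small jobs of weight w
        rounded.extend([(w, unit)] * math.ceil(total / unit))
    return rounded
```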

Lemma 4.1

Let \({\hat{OPT}}\) and OPT denote the optimal objective values of instances \({\hat{I}}\) and I, respectively. Then the following holds:

$$\begin{aligned} OPT\le {\hat{OPT}}+(\Delta +1){\hat{\gamma }}{f}L. \end{aligned}$$

Proof

Let \({\hat{\sigma }}\) denote an optimal solution to instance \({\hat{I}}\). It suffices to show that there exists a feasible solution \(\sigma ^{\prime }\) to I such that \(OPT^{\prime }\le {\hat{OPT}}+(\Delta +1){\hat{\gamma }}{f}L\), where \(OPT^{\prime }\) is the objective value of \(\sigma ^{\prime }\). Note that replacing every big job \({\hat{J}}_{j}\) by its corresponding job \(J_j\) of I cannot increase any machine completion time.

Let \({\hat{S}}_{i,h}\) denote the total processing time of the type-h small jobs that are processed on machine \(M_i\) in \({\hat{\sigma }}\). Consider a small job set \(S_{h}\); without loss of generality, we may assume that on each machine the corresponding jobs are processed consecutively in schedule \({\hat{\sigma }}\). For \(i = 1, 2, \ldots , m\), we schedule jobs of \(S_{h}\) on machine \(M_i\) until their total processing time first exceeds \({\hat{S}}_{i,h}\). Finally, we schedule the remaining jobs on any machine, and for the jobs assigned to the same machine we sequence them in order of nonincreasing weights. This completes the construction of \(\sigma ^{\prime }\), and it is easy to verify that \(\sigma ^{\prime }\) is a feasible solution.

By the construction of \({\sigma }^{\prime }\), the completion time of each job on machine \(M_{i}\) \((i=1,\ldots ,m)\) in \(\sigma ^{\prime }\) exceeds the completion time of the corresponding job on machine \(M_{i}\) in \({\hat{\sigma }}\) by at most \((\Delta +1)\cdot {\hat{\gamma }}{f}L\). In solution \(\sigma ^{\prime }\), assume that \(J_{j}\), with weight \(w_{h}\) for some fixed h, is the job such that \(w_{j}C_{j}=OPT^{\prime }\), and that it is processed on machine \(M_i\). Recall that the last scheduled job of a subset has the maximal weighted completion time among all jobs of the subset. Thus, in the optimal solution \({\hat{\sigma }}\) of the new instance \({\hat{I}}\), we may consider the job \(J_{j^{\prime }}\) with the same weight \(w_{h}\) that is processed last in subset \({\hat{S}}_{h}\) on machine \(M_i\). Thus, we have

$$\begin{aligned} OPT^{\prime }\le & {} w_{j^{\prime }}(C_{j^{\prime }}({\hat{\sigma }})+(\Delta +1){\hat{\gamma }}{f}L)\\\le & {} {\hat{OPT}}+\max _{0\le h\le \Delta }{\delta }^{h}(\Delta +1){\hat{\gamma }}{f}L \le {\hat{OPT}}+(\Delta +1){\hat{\gamma }}{f}L. \end{aligned}$$

By the construction of \(\sigma ^{\prime }\) and the above assumptions, job \(J_{j}\) has the same weight as \(J_{j^{\prime }}\), and the completion time of job \(J_{j}\) exceeds the completion time of job \(J_{j^{\prime }}\) by at most \((\Delta +1){\hat{\gamma }}{f}L\); this yields the first inequality. The second inequality follows from the fact that \({\hat{OPT}}\) is the weighted makespan of \({\hat{\sigma }}\). This completes the proof. \(\square \)

Lemma 4.2

Let OPT and \({\hat{OPT}}\) denote the optimal objective values of instances I and \({\hat{I}}\), respectively. We have

$$\begin{aligned} {\hat{OPT}}\le (1+{\hat{\gamma }})OPT+(\Delta +1){\hat{\gamma }}{f}L. \end{aligned}$$

Proof

Let \(\sigma ^{*}\) denote an optimal solution to instance I. We construct a feasible solution \({\hat{\sigma }}\) to \({\hat{I}}\); clearly \({\hat{OPT}}\) is at most the weighted makespan of \({\hat{\sigma }}\). First, replace every big job \(J_j\) by its corresponding big job \({\hat{J}}_{j}\); this increases each machine completion time by a factor of at most \(1+{\hat{\gamma }}\). Next, let \(S_{i,h}\) denote the total size of the small jobs of weight \(w_{h}\) that are processed on machine \(M_i\) in schedule \(\sigma ^{*}\). For instance \({\hat{I}}\), we schedule jobs of length \({\hat{\gamma }}{f}L\) and weight \(w_{h}\) on machine \(M_{i}\) until their total length first covers \(S_{i,h}\). Finally, for the jobs assigned to the same machine, we sequence them in order of nonincreasing weights. This accommodates all jobs of instance \({\hat{I}}\).

By the construction of \({\hat{\sigma }}\), the completion time of each job on machine \(M_{i}\) \((i=1,\ldots ,m)\) in \({\hat{\sigma }}\) exceeds the completion time of the corresponding job on machine \(M_{i}\) in \(\sigma ^{*}\) by at most a factor of \(1+{\hat{\gamma }}\) plus an additive term of \((\Delta +1)\cdot {\hat{\gamma }}{f}L\). In solution \({\hat{\sigma }}\), assume that \(J_{j}\), with weight \(w_{h}\) for some fixed h, is the job achieving the weighted makespan of \({\hat{\sigma }}\), and that it is processed on machine \(M_i\); recall that \({\hat{OPT}}\) is at most this weighted makespan. Since the last scheduled job of a subset has the maximal weighted completion time among all jobs of the subset, in the optimal solution \(\sigma ^{*}\) of the original instance I we may consider the job \(J_{j^{\prime }}\) with the same weight \(w_{h}\) that is processed last in subset \(S_{h}\) on machine \(M_i\). Thus, we have

$$\begin{aligned} {\hat{OPT}}\le & {} w_{j^{\prime }}((1+{\hat{\gamma }})C_{j^{\prime }}(\sigma ^{*})+(\Delta +1){\hat{\gamma }}fL)\\\le & {} (1+{\hat{\gamma }})OPT+\max _{0\le h\le \Delta }{\delta }^{h}(\Delta +1){\hat{\gamma }}{f}L\le (1+{\hat{\gamma }})OPT+(\Delta +1){\hat{\gamma }}{f}L. \end{aligned}$$

By the construction of \({\hat{\sigma }}\) and the above assumptions, job \(J_{j}\) has the same weight as \(J_{j^{\prime }}\), and the completion time of job \(J_{j}\) exceeds the completion time of job \(J_{j^{\prime }}\) by at most a factor of \(1+{\hat{\gamma }}\) plus an additive term of \((\Delta +1)\cdot {\hat{\gamma }}{f}L\); this yields the first inequality. The second inequality follows from the fact that OPT is the weighted makespan of \(\sigma ^{*}\). This completes the proof. \(\square \)

Lemma 4.3

Let \(\tau \) be the total number of different job types. Then \(\tau \le (1+\Delta )\cdot (1+\kappa )\), where \(\kappa =\frac{1-{\hat{\gamma }}}{{\hat{\gamma }}^{2}}\).

We index the \((1+\Delta )\cdot (\kappa +1)\) job types as \(1,2,\ldots ,(1+\Delta )\cdot (\kappa +1)\), in nonincreasing order of job weights. The jobs within each type can be sorted arbitrarily, since they all have the same processing time and weight. We represent the input jobs by the vector \(\textbf{n}=(n_{0,0},n_{0,1},\ldots ,n_{\Delta ,\kappa })\), where \(n_{h,k}\) denotes the number of jobs of weight \(\delta ^{h}\) and size \({\hat{\gamma }}{f}L+k{\hat{\gamma }}^{2}{f}L\), for \(h=0,\ldots ,\Delta \) and \(k=0,\ldots ,\kappa \). Note that \(\sum _{h=0}^{\Delta }\sum _{k=0}^{\kappa }n_{h,k}\le n\). An assignment to a machine is a vector \(\textbf{u}=(u_{0,0},u_{0,1},\ldots ,u_{\Delta ,\kappa })\), where \(u_{h,k}\) is the number of jobs with weight \(\delta ^{h}\) and size \({\hat{\gamma }}{f}L+k{\hat{\gamma }}^{2}{f}L\) that are assigned to the machine. The load of the machine is \(C(\textbf{u})=\sum _{h=0}^{\Delta }\sum _{k=0}^{\kappa }u_{h,k}\cdot ({\hat{\gamma }}{f}L+k{\hat{\gamma }}^{2}{f}L)\). Let \({\hat{t}}\) denote the optimal makespan value of the new instance \({\hat{I}}\). Since \(t^{*}\le fOPT\le fL\), by the construction of instance \({\hat{I}}\) we have \({\hat{t}}\le f{\hat{OPT}}\le (1+{\hat{\gamma }})fL+(\Delta +1){\hat{\gamma }}f^{2}L\). Hence every assignment \(\textbf{u}\) used in such a schedule satisfies

$$\begin{aligned} C(\textbf{u})=\sum _{h=0}^{\Delta }\sum _{k=0}^{\kappa }u_{h,k}\cdot ({\hat{\gamma }}fL+k{\hat{\gamma }}^{2}fL)\le (1+{\hat{\gamma }})fL+(\Delta +1){\hat{\gamma }}f^{2}L, \end{aligned}$$

implying that

$$\begin{aligned} \sum _{h=0}^{\Delta }\sum _{k=0}^{\kappa }u_{h,k}\le 1+\frac{1}{{\hat{\gamma }}}+(\Delta +1)f. \end{aligned}$$

Denote by U the set of vectors \(\textbf{u}\) with \(C(\textbf{u})\le (1+{\hat{\gamma }})fL+(\Delta +1){\hat{\gamma }}f^{2}L\). For a vector \(\textbf{u}\in U\), each entry \(u_{h,k}\) is bounded by a number that depends only on the constants \({\hat{\gamma }}\), \(\Delta \) and f, and is thus independent of the input. Therefore, \(|U|\le (\frac{1}{{\hat{\gamma }}}+1+(1+\Delta )f)^{\tau }\), i.e., the set U is of constant size.

For each vector \(\textbf{u}\in U\), let \(g(\textbf{u})=\max \limits _{k_{1}=0,\ldots ,\Delta }\delta ^{k_{1}}\sum _{h=0}^{k_{1}}\sum _{k=0}^{\kappa }u_{h,k}\cdot ({\hat{\gamma }}fL+k{\hat{\gamma }}^{2}fL)\) denote the contribution to the objective function of a machine that is scheduled according to \(\textbf{u}\); this is correct because, with jobs sequenced in nonincreasing order of weight, the weighted makespan on a machine is attained by the last job of some weight class.
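For concreteness, \(C(\textbf{u})\) and \(g(\textbf{u})\) can be computed as in the following Python sketch (our own illustration; here `u` is indexed as `u[h][k]` rather than as a flat vector). The prefix structure reflects the nonincreasing weight order: all jobs of weight classes \(0,\ldots ,k_{1}\) precede the last job of class \(k_{1}\) on the machine.

```python
def job_size(k, gamma_hat, f, L):
    return gamma_hat * f * L + k * gamma_hat ** 2 * f * L

def load(u, gamma_hat, f, L):
    """C(u): total load of a machine assigned jobs according to u[h][k]."""
    return sum(u[h][k] * job_size(k, gamma_hat, f, L)
               for h in range(len(u)) for k in range(len(u[h])))

def g(u, delta, gamma_hat, f, L):
    """g(u): contribution of the machine to the weighted makespan."""
    best, prefix = 0.0, 0.0
    for k1 in range(len(u)):   # weight classes in nonincreasing weight order
        prefix += sum(u[k1][k] * job_size(k, gamma_hat, f, L)
                      for k in range(len(u[k1])))
        best = max(best, delta ** k1 * prefix)  # last job of class k1 ends here
    return best
```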

We can now formulate the problem of finding an optimal schedule as an integer linear program with a constant number of variables. For each vector \(\textbf{u}\in U\) we introduce a variable \(x_{\textbf{u}}\) which denotes the number of machines that are assigned jobs according to \(\textbf{u}\). An optimal schedule is then given by the following program:

$$\begin{aligned} \min \quad&z\\ s.t. \quad&\sum _{\textbf{u}\in U}x_{\textbf{u}}=m\\&\sum _{\textbf{u}\in U}x_{\textbf{u}}\cdot \textbf{u}=\textbf{n}\\&y_{\textbf{u}}\le x_{\textbf{u}}\le m\cdot y_{\textbf{u}}&\forall \textbf{u}\in U\\&z\ge y_{\textbf{u}}\cdot g(\textbf{u})&\forall \textbf{u}\in U\\&x_{\textbf{u}}\in \{0,1,\ldots ,m\}&\forall \textbf{u}\in U \\&y_{\textbf{u}}\in \{0,1\}&\forall \textbf{u}\in U. \\ \nonumber \end{aligned}$$

The 0–1 variables \(y_{\textbf{u}}\) indicate whether the assignment \(\textbf{u}\) is used \((y_{\textbf{u}}=1)\) or not \((y_{\textbf{u}}=0)\). If the assignment \(\textbf{u}\) is used, then the term \(y_{\textbf{u}}\cdot g(\textbf{u})\) contributes to the objective function. The number of variables of this integer linear program is \(2|U|+1\), which is a constant. Therefore, the above integer linear program can be solved optimally within time

$$\begin{aligned} O(|U|^{O(|U|)}\log ^{O(1)}n)=O(g(\frac{1}{{\hat{\gamma }}})n), \end{aligned}$$

where \(|U|=O((\frac{1}{{\hat{\gamma }}}+\Delta f)^{\frac{\Delta }{{\hat{\gamma }}^{2}}})\), f and \(\Delta \) are constants, and \(g(\frac{1}{{\hat{\gamma }}})\) is an exponential function of \(\frac{1}{{\hat{\gamma }}}\). Indeed, it has been shown by Lenstra (1983) that an integer linear program in constant dimension can be solved in polynomial time; the running time is exponential in the dimension of the program but polynomial in the logarithms of the coefficients.
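The integer program above can be written down directly for an off-the-shelf MILP solver. Below is a hedged sketch using the PuLP library (assumed to be available; `U`, `n_vec` and `g` follow the notation above and are supplied by the caller). Note that a generic branch-and-bound solver replaces Lenstra's algorithm here, so the theoretical running-time guarantee does not carry over to this sketch.

```python
import pulp

def solve_assignment_ilp(U, n_vec, m, g):
    """U: list of assignment vectors u (tuples over the job types);
    n_vec[t]: number of jobs of type t; g(u): contribution of assignment u."""
    prob = pulp.LpProblem("weighted_makespan", pulp.LpMinimize)
    z = pulp.LpVariable("z", lowBound=0)
    x = {u: pulp.LpVariable(f"x_{i}", 0, m, cat="Integer") for i, u in enumerate(U)}
    y = {u: pulp.LpVariable(f"y_{i}", cat="Binary") for i, u in enumerate(U)}
    prob += z                                            # objective: minimize z
    prob += pulp.lpSum(x.values()) == m                  # one assignment per machine
    for t in range(len(n_vec)):                          # every job is covered
        prob += pulp.lpSum(x[u] * u[t] for u in U) == n_vec[t]
    for u in U:
        prob += y[u] <= x[u]                             # y_u = 1 iff u is used
        prob += x[u] <= m * y[u]
        prob += z >= g(u) * y[u]                         # z dominates used assignments
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(z), {u: int(x[u].value()) for u in U}
```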

Based on Lemmas 4.1 and 4.2, we can obtain a feasible schedule \(\sigma \) whose objective value \(OUT_{\gamma }\) is at most \((1+\gamma )OPT\), i.e.

Lemma 4.4

\(OUT_{\gamma }\le (1+\gamma )OPT\).

Recall that \({\hat{\gamma }}=\frac{\delta ^{\Delta }\cdot \gamma }{4\Delta +5}\). Hence,

$$\begin{aligned} OUT_{\gamma }\le & {} {\hat{OPT}}+(\Delta +1){\hat{\gamma }}{f}L\\\le & {} (1+{\hat{\gamma }})OPT+(\Delta +1){\hat{\gamma }}{f}L+(\Delta +1){\hat{\gamma }}{f}L\\\le & {} (1+\gamma )OPT, \end{aligned}$$

where the last inequality follows from \(L\le 2OPT\) and \(f=\frac{1}{\delta ^{\Delta }}\).

We now consider the time complexity of our EPTAS for \(P||WC_{\max }\). We first apply the \(O(n\log n)\)-time algorithm \(A_{1}\) to determine the value of L. It takes only O(n) time to construct instance \({\hat{I}}\). An optimal solution for instance \({\hat{I}}\) can be found within time \(O(g(\frac{1}{{\hat{\gamma }}})n)\) by solving the above integer program. Based on the optimal solution to \({\hat{I}}\), the solution \(\sigma \) is easily generated in \(O(n\log n)\) time. Therefore, the overall running time is \(O(n\log n+g(\frac{1}{{\hat{\gamma }}})n)\), where \(g(\frac{1}{{\hat{\gamma }}})\) is an exponential function of \(\frac{1}{{\hat{\gamma }}}\). We thus have the following theorem.

Theorem 4.5

There exists an EPTAS with running time \(O(n\log n+g(\frac{1}{\gamma })n)\) for problem \(P||WC_{\max }\) with a constant number of weight types, where \(g(\frac{1}{\gamma })\) is an exponential function of \(\frac{1}{\gamma }\).

4.2 FPTAS when m is fixed

In the following, we present an FPTAS for \(P_{m}||WC_{\max }\). Since the processing time of each job in \(\hat{\mathcal{J}}\) is at least \({\hat{\gamma }}{f}L\) and \(C(\textbf{u})\le (1+{\hat{\gamma }})fL+(\Delta +1){\hat{\gamma }}f^{2}L\), we have the following lemma.

Lemma 4.6

The number of jobs in \(\hat{\mathcal{J}}\) satisfies

$$\begin{aligned} {\hat{n}}\le \tau \cdot \left( \frac{1}{{\hat{\gamma }}}+1+(1+\Delta )f\right) m. \end{aligned}$$

We now develop a different exact algorithm to solve problem instance \({\hat{I}}\). We present a dynamic programming algorithm for \(P_{m}||WC_{\max }\) as follows.

Sort all jobs such that \(w_{1}\ge \cdots \ge w_{{\hat{n}}}\). Let \(f_{j}(t_{1},\ldots ,t_{m})\) be the minimum \(WC_{\max }\) value when (1) the jobs under consideration are \({\hat{J}}_{1},\ldots , {\hat{J}}_{j}\); and (2) the total processing time of the jobs assigned to \(M_{i}\) is exactly \(t_{i}\). Note that each \(t_{i}\) takes values in \(\{0,{\hat{\gamma }}fL,{\hat{\gamma }}fL+{\hat{\gamma }}^{2}fL,\ldots ,(1+{\hat{\gamma }})fL+(\Delta +1){\hat{\gamma }}f^{2}L\}\).

If \({\hat{J}}_{j}\) is selected to be processed by machine \(M_{i}\), then the load of machine \(M_{i}\) before this assignment is \(t_{i} - {\hat{p}}_{j}\) and \({\hat{J}}_{j}\) completes at time \(t_{i}\). Thus, for this choice of i we have

$$\begin{aligned} f_{j}(t_{1},\ldots ,t_{m})=\max \{f_{j-1}(t_{1},\ldots ,t_{i-1},t_{i}-{\hat{p}}_{j},t_{i+1},\ldots ,t_{m}),w_{j}\cdot t_{i}\}. \end{aligned}$$

We have the following dynamic programming DP:

Boundary condition:

$$\begin{aligned} f_{1}(t_{1},\ldots ,t_{m})=\left\{ \begin{aligned}&w_{1}{\hat{p}}_{1},{} & {} \textrm{if}~t_{i}={\hat{p}}_{1}~\mathrm{for~some}~i~\textrm{and}~t_{j}=0~\mathrm{for~all}~j\ne i; \\&+\infty ,{} & {} \text {otherwise}. \end{aligned} \right. \end{aligned}$$

The recursive function:

$$\begin{aligned} f_{j}(t_{1},\ldots ,t_{m})= \min \left\{ \begin{aligned}&\max \{f_{j-1}(t_{1}-{\hat{p}}_{j},\ldots ,t_{m}),w_{j}\cdot t_{1}\}; \\&\max \{f_{j-1}(t_{1},t_{2}-{\hat{p}}_{j},\ldots ,t_{m}),w_{j}\cdot t_{2}\}; \\&\cdots \\&\max \{f_{j-1}(t_{1},\ldots ,t_{m}-{\hat{p}}_{j}),w_{j}\cdot t_{m}\}. \end{aligned} \right. \end{aligned}$$

The optimal value:

$$\begin{aligned} \min \left\{ f_{{\hat{n}}}(t_{1},\ldots ,t_{m})\,\Big |\, t_{i}\in \{0,{\hat{\gamma }}fL,{\hat{\gamma }}fL+{\hat{\gamma }}^{2}fL,\ldots ,(1+{\hat{\gamma }})fL+(\Delta +1){\hat{\gamma }}f^{2}L\},~i=1,\ldots ,m\right\} . \end{aligned}$$
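A compact Python sketch of DP is given below (our own illustration). It stores the machine loads as a sorted tuple to exploit machine symmetry, and it assumes the jobs are fed in nonincreasing order of weight, so that each job's completion time equals its machine's load immediately after the assignment.

```python
def dp_weighted_makespan(jobs, m, max_load):
    """jobs: list of (p_j, w_j) pairs sorted by nonincreasing w_j;
    p_j and max_load are values on the rounded grid of instance I-hat."""
    INF = float("inf")
    f = {(0,) * m: 0}          # f[loads] = min WC_max over the processed jobs
    for p, w in jobs:
        g = {}
        for state, val in f.items():
            for i in range(m):             # try assigning the job to machine M_i
                t = state[i] + p
                if t > max_load:
                    continue
                new_state = tuple(sorted(state[:i] + (t,) + state[i + 1:]))
                cand = max(val, w * t)     # the job completes at time t
                if cand < g.get(new_state, INF):
                    g[new_state] = cand
        f = g
    return min(f.values()) if f else INF
```

On the example instance of Sect. 2 (jobs fed as [(1, 4), (2, 3), (1, 2), (3, 2)] with m = 2 and a sufficiently large max_load), this returns the optimal value 8.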

We now consider the time complexity of our FPTAS. We first apply the \(O(n\log n)\)-time algorithm \(A_{1}\) to determine the value of L. It takes only O(n) time to construct instance \({\hat{I}}\). An optimal solution for instance \({\hat{I}}\) is then found by algorithm DP, whose running time is

$$\begin{aligned} O\left( {\hat{n}}\left( 1+\frac{1+(\Delta +1){\hat{\gamma }}f}{{\hat{\gamma }}^{2}}\right) ^{m}\right) =O\left( \tau \cdot \left( \frac{1}{{\hat{\gamma }}}+(\Delta +1)f\right) m\left( 1+\frac{1+(\Delta +1){\hat{\gamma }}f}{{\hat{\gamma }}^{2}}\right) ^{m}\right) =O\left( \frac{1}{{\hat{\gamma }}^{2m+3}}\right) , \end{aligned}$$

where m, \(\Delta \), f are three constants, \(\tau \le (1+\Delta )(1+\kappa )\) and \(\kappa =\frac{1-{\hat{\gamma }}}{{\hat{\gamma }}^{2}}\). Based on the optimal solution to \({\hat{I}}\), solution \(\sigma ^{\prime }\) is easily generated in \(O(n\log n)\) time. Therefore, the overall running time is \(O(n\log n+\frac{1}{{\hat{\gamma }}^{2m+3}})\). We thus have the following theorem.

Theorem 4.7

There exists an FPTAS with running time \(O(n\log n+\frac{1}{\gamma ^{2m+3}})\) for problem \(P_{m}||WC_{\max }\) with a constant number of weight types.

5 The randomized polynomial time approximation schemes

In this section, we present the randomized approximation schemes for \(P||WC_{\max }\) and \(P_{m}||WC_{\max }\). Their analysis is based on ideas from Skutella and Woeginger (2000); however, we use some novel lower bounds on the optimal solution value of our problem. Our randomized approximation schemes consist of three steps. The first step divides the job set according to the weight values \(w_j\). The second step applies the corresponding approximation schemes to the job subsets obtained in the first step. The third step glues the approximate solutions obtained in the second step into a feasible solution to the original problem.

For the approximation ratio analysis, we may assume that \(w_{j}\le 1\) for all jobs \(J_{j}\in \mathcal{J}\), since all weights can be divided by the maximum weight value. In the following analysis, using Theorem 4.5 yields the EPTAS for \(P||WC_{\max }\) with a constant number of weight types, and using Theorem 4.7 yields the FPTAS for \(P_{m}||WC_{\max }\) with a constant number of weight types. Thus, we only present the randomized EPTAS for \(P||WC_{\max }\); the randomized FPTAS for \(P_{m}||WC_{\max }\) can be obtained analogously.

5.1 The randomized approximation scheme

Our randomized approximation scheme is divided into the following steps.

(I) Let \(\Gamma \) be a positive integer and let \(\gamma =\frac{1}{\Gamma }\). We will choose \(\Gamma \) large so that \(\gamma \) is small. We divide the job set \(\mathcal J\) as follows. First, let \(\mathcal{J}^{h}:=\{J_{j}\in \mathcal{J}|\gamma ^{h}< w_{j}\le \gamma ^{h-1}\}\) for \(h\in \mathbb {N}\). Then, pick q from \(\{1,2,\ldots ,\Gamma \}\) uniformly at random, set \(W_{0,q}=\{1,2,\ldots ,q-1\}\), and for \(s\in \mathbb {N}\):

$$\begin{aligned} W_{s,q}= \{(s -1) \cdot \Gamma + q,\ldots , s\cdot \Gamma -1 + q\}; \end{aligned}$$

for \(s\in \mathbb {N}_{0}\), let \(\mathcal{J}_{s,q}=\cup _{h\in W_{s,q}}\mathcal{J}^{h}\).

For fixed q, the number of nonempty subsets \(\mathcal{J}_{s,q}, s\in \mathbb {N}_{0}\) is at most n. We only consider those subsets in this randomized approximation scheme. A visualization of the weight distribution is shown in Fig. 2.
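Step (I) can be implemented as in the following Python sketch (our own illustration; the class index computation is subject to the usual floating-point caveats):

```python
import math
import random
from collections import defaultdict

def partition_jobs(jobs, Gamma):
    """jobs: list of (w_j, p_j) pairs with 0 < w_j <= 1; returns (q, {s: J_{s,q}})."""
    gamma = 1.0 / Gamma
    q = random.randint(1, Gamma)                  # uniform random offset
    groups = defaultdict(list)
    for w, p in jobs:
        h = math.floor(math.log(w, gamma)) + 1    # class: gamma^h < w <= gamma^{h-1}
        s = 0 if h < q else (h - q) // Gamma + 1  # index s with h in W_{s,q}
        groups[s].append((w, p))
    return q, dict(groups)
```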

Fig. 2 A visualization of the weight distribution with \(\Gamma =4\), \(q=3\)

(II) For any nonempty job subset \(\mathcal{J}_{s,q}\), the ratio of the minimum weight to the maximum weight is at least \(\gamma ^{\Gamma }\). Thus, we may assume that the weights lie in the range \([\gamma ^{\Gamma }\rho ,\rho ]\) for some \(\rho > 0\); by rescaling the weights we can restrict to the case \(\rho =1\). We then choose the constant \(0<\delta <1\) arbitrarily close to 1, which makes \(\frac{1}{\delta }\) arbitrarily close to 1. As in Sect. 4, we can round up the weights of all jobs in \(\mathcal{J}_{s,q}\) to the form \(\delta ^{h}\), where h is a nonnegative integer bounded by a constant (since \(\Gamma , \gamma , \delta \) are constants); this increases the objective value by a factor of at most \(\frac{1}{\delta }\), which is arbitrarily close to 1. Thus, we can apply the approximation scheme of Theorem 4.5 to \(\mathcal{J}_{s,q}\); let \(OUT_{s,q}\) denote the objective value it computes, and let \(OUT_{s,q*}\) denote the value of an optimal schedule for \(\mathcal{J}_{s,q}\). The schedules produced in this step are called the (job) subset schedules.

(III) The subset schedules are glued together in the final step of the algorithm. We first permute the machines in each subset schedule uniformly at random and then glue those schedules. The random permutation is necessary: for instance, each subset \(\mathcal{J}_{s,q}\) might contain only one job, which would always be scheduled on machine \(M_{1}\) in its subset schedule, so gluing without permuting could place all jobs on a single machine. With the random permutations, the probability that two jobs from different subsets are processed on the same machine in the resulting schedule is \(\frac{1}{m}\).
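Step (III) is sketched below (our own illustration). Each subset schedule is represented by m machine queues; relabeling the machines by a fresh uniform random permutation before concatenating makes the machine of any fixed job uniform over \(\{1,\ldots ,m\}\), which is exactly what the analysis in Sect. 5.2 uses.

```python
import random

def glue_schedules(subset_schedules, m):
    """subset_schedules: per-subset schedules (each a list of m job queues),
    ordered by nonincreasing subset weight; returns the glued schedule."""
    glued = [[] for _ in range(m)]
    for schedule in subset_schedules:
        perm = random.sample(range(m), m)       # uniform random machine relabeling
        for i in range(m):
            glued[i].extend(schedule[perm[i]])  # append subset queue to machine i
    return glued
```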

5.2 The analysis of the randomized approximation scheme

Note that the objective value of the schedule produced by the randomized approximation scheme equals some job's weighted completion time in its subset schedule plus the extra loss caused by the delay of this job in the gluing step. This job's weighted completion time is at most the weighted makespan of its subset schedule, which in turn is at most the maximum weighted makespan over all subset schedules. Below, we show that the maximum weighted makespan over the subset schedules is at most \((1+\gamma )\) times the optimal value, and that the job delays incurred in the gluing step are small enough to be absorbed.

Lemma 5.1

For each \(q \in \{1, \ldots ,\Gamma \}\) we have

$$\begin{aligned} OPT\ge \max _{s\in \mathbb {N}_{0}}OUT_{s,q*}\quad \textrm{and}\quad OPT\ge \gamma ^{h}\frac{p(\mathcal{J}^{h})}{m}~\mathrm{for~all}~h\in \mathbb {N}. \end{aligned}$$

Proof

Take an optimal schedule of the job set \(\mathcal{J}\) and denote the completion time of job \(J_{j}\) by \(C^{*}_{j}\). For each subset \(\mathcal{J}_{s,q}\), the completion times \(C_{j}^{*}\) of the jobs \(J_{j}\in \mathcal{J}_{s,q}\) in the optimal schedule define a feasible schedule for the job subset \(\mathcal{J}_{s,q}\) on m machines starting at time 0. This yields \(OPT=\max _{s\in \mathbb {N}_{0}, J_{j}\in \mathcal{J}_{s,q}}w_{j}C^{*}_{j}\ge \max _{s\in \mathbb {N}_{0}}OUT_{s,q*}\); that is, OPT is at least the optimal value for each subset \(\mathcal{J}_{s,q}\). For the second lower bound, note that rounding the weights of the jobs \(J_{j}\in \mathcal{J}^{h}\) down to \(\gamma ^{h}\), for \(h\in \mathbb {N}\), can only decrease the value of an optimal schedule. The result then follows from (1). \(\square \)

Without loss of generality, we assume job \(J_{j}\in \mathcal{J}^{k}\) achieves the weighted makespan in the schedule of the randomized approximation scheme, for some fixed \(k\in \mathbb {N}\). Let \(E[WC_{\max }]\) be the expected value of the schedule in the randomized approximation scheme. We have the following lemma.

Lemma 5.2

The expected objective value of the schedule produced by the randomized approximation scheme satisfies

$$\begin{aligned} E[WC_{\max }]\le E[\max _{s\in \mathbb {N}_{0}}OUT_{s,q}]+ w_{j}\sum _{h<k}\min \left\{ 1,\frac{k-h}{\Gamma }\right\} \frac{p(\mathcal{J}^{h})}{m}. \end{aligned}$$
(2)

Proof

We first keep q fixed and analyze the conditional expectation \(E_{q}[WC_{\max }]\) given that some job \(J_{j}\in \mathcal{J}^{k}\) achieves the weighted makespan in the schedule of the randomized approximation scheme. Recall that a job's weighted completion time in its subset schedule is at most the weighted makespan of that subset schedule, which in turn is at most the maximum weighted makespan over all subset schedules. Thus, this conditional expectation is at most \(\max _{s\in \mathbb {N}_{0}}OUT_{s,q}\) plus the expected delay of job \(J_{j}\) in the gluing step.

For fixed q, we next analyze the expected delay of a job \(J_{j}\in \mathcal{J}\). Let \(\mathcal{J}_{t,q}\) with \(J_{j} \in \mathcal{J}^{k}\subseteq \mathcal{J}_{t,q}\) be the job subset of \(J_{j}\), so that \(t\in \mathbb {N}_{0}\) and \(k\in W_{t,q}\). The machines are permuted uniformly at random in the gluing step. Thus, the expected delay of job \(J_{j}\) equals the average load produced by the preceding job subsets, i.e., \(\cup _{s=0}^{t-1}\mathcal{J}_{s,q}\), on the machine on which job \(J_{j}\) is processed. Hence, the expected delay contribution of class \(\mathcal{J}^{h}\) is \(\frac{p(\mathcal{J}^{h})}{m}\) if the indices h and k fall in different subsets \(W_{s,q}\) \((s\in \mathbb {N}_{0})\), and it is 0 if h and k fall in the same subset \(W_{s,q}\).

Since q is chosen uniformly at random, the expected delay contribution of class \(\mathcal{J}^{h}\) equals \(\frac{p(\mathcal{J}^{h})}{m}\) times the probability that the indices h and k lie in different subsets \(W_{s,q}\) \((s\in \mathbb {N}_{0})\). By the random choice of q, this probability equals \(\frac{k-h}{\Gamma }\) for \(k - h \le \Gamma \), and it is 1 for \(k - h> \Gamma \). Thus, we have

$$\begin{aligned} E[WC_{\max }]\le E[\max _{s\in \mathbb {N}_{0}}OUT_{s,q}] + w_{j}\sum _{h<k}\min \{1,\frac{k-h}{\Gamma }\}\frac{p(\mathcal{J}^{h})}{m}. \end{aligned}$$

\(\square \)

Theorem 5.3

For a given \(0<\varepsilon < 1\), let \(\Gamma =\lceil \frac{4}{\varepsilon }\rceil \) and \(\gamma =\frac{1}{\Gamma }\). Then the expected objective value of the schedule produced by the randomized approximation scheme satisfies

$$\begin{aligned} E[WC_{\max }]\le (1+\varepsilon )OPT. \end{aligned}$$

Proof

Without loss of generality, we also assume that job \(J_{j}\in \mathcal{J}^{k}\) achieves the weighted makespan in the randomized approximation scheme schedule, for some fixed \(k\in \mathbb {N}\). Based on Theorem 4.5, we have \(OUT_{s,q}\le (1+\gamma )\cdot OUT_{s,q*}\) for each q, where \(OUT_{s,q}\) and \(OUT_{s,q*}\) are the computed weighted makespan value using Theorem 4.5 and the optimal value in subset \(\mathcal{J}_{s,q}\), respectively. Based on Lemma 5.1, we have \(\max _{s\in \mathbb {N}_{0}}OUT_{s,q*}\le OPT\). Thus

$$\begin{aligned} E[\max _{{s\in \mathbb {N}_{0}}}OUT_{s,q}]\le (1+\gamma )\cdot OPT. \end{aligned}$$

In the last term of (2), we bound the sum over h for fixed k. Note that for \(J_{j}\in \mathcal{J}^{k}\) with \(k\in \mathbb {N}\), we have

$$\begin{aligned} w_{j}\le \gamma ^{k-1}. \end{aligned}$$

For fixed \(k\in \mathbb {N}\), we split the sum over \(h<k\) into two partial sums \(Sum_{1}+Sum_{2}\). The first partial sum \(Sum_{1}\) covers \(h=k-1\) and is bounded by

$$\begin{aligned} Sum_{1}\le \gamma ^{k-1-h}\cdot \frac{1}{\Gamma }\cdot \frac{p(\mathcal{J}^{h})}{m}\gamma ^{h} \le \gamma OPT. \end{aligned}$$

The second partial sum \(Sum_{2}\) covers \(h\le k-2\). In this case we simply bound \(\min \{1,\frac{k-h}{\Gamma }\}\) by 1, which yields

$$\begin{aligned} Sum_{2} \le \sum _{h\le k-2}\gamma ^{k-1-h} \cdot \frac{p(\mathcal{J}^{h})}{m}\gamma ^{h} \le \sum _{h\in \mathbb {N}}\gamma ^{h} OPT \le \frac{\gamma }{1-\gamma } OPT. \end{aligned}$$

Adding the two partial sums \(Sum_{1}+Sum_{2}\), the expected objective value of the schedule produced by the randomized approximation scheme is

$$\begin{aligned} E[WC_{\max }]\le & {} E[\max _{s\in \mathbb {N}_{0}}OUT_{s,q}]+ w_{j}\sum _{h<k}\min \{1,\frac{k-h}{\Gamma }\}\frac{p(\mathcal{J}^{h})}{m}\\\le & {} (1+2\gamma +\frac{\gamma }{1-\gamma }) OPT\le (1+\varepsilon )OPT. \end{aligned}$$

The last inequality follows from \(\gamma =\frac{1}{\Gamma }\) and \(\Gamma =\lceil \frac{4}{\varepsilon }\rceil \). \(\square \)

6 Conclusion

In this paper, we consider the problem \(P||WC_{\max }\). We first design a (\(2-\frac{1}{m}\))-approximation algorithm for this problem. Then, we design an EPTAS and an FPTAS for the problem with a constant number of weight types. Finally, based on these approximation schemes, we propose a randomized EPTAS for the problem and a randomized FPTAS for the special case in which m is fixed. It would be interesting to consider the online or semi-online versions of this problem. Designing a deterministic EPTAS for the general problem is another direction for future work.