1 Introduction

In production settings, past-sequence-dependent delivery times (denoted by psddt) arise, for example, in the electronics industry (Koulamas and Kyparisis 2010). Koulamas and Kyparisis (2010) introduced psddt into scheduling problems: the delivery time of a job depends on its waiting time before processing. Using the three-field notation of Graham et al. (1979), they proved that the single-machine problem 1|psddt|X can be solved in polynomial time, where \(X\in \{C_{\max },\sum _{i=1}^n C_{i},L_{\max },T_{\max }, \sum _{i=1}^n U_{i}\}\); \(C_{\max }\) is the makespan (i.e., the maximal completion time, representing the time interval needed to finish all jobs); \(\sum _{i=1}^n C_{i}\) is the total completion time (representing the work-in-process inventory cost); \(L_{\max }\) (\(T_{\max }\)) is the maximum lateness (tardiness), representing the penalty cost when a job is delayed beyond its due date; and \(\sum _{i=1}^n U_{i}\) is the number of tardy jobs, representing the situation where the penalty incurred by a tardy job does not depend on how late it is, but only on the fact that it is late. Liu et al. (2012a) showed that the problem \(1|psddt,r_i,prmp|\sum _{i=1}^n C_{i}\) is polynomial-time solvable if jobs can be interrupted (i.e., job preemption is allowed), where \(r_i\) is the ready time of job \(J_i\) and prmp denotes that preemption is allowed. Liu et al. (2012a) also proved that the non-preemptive problem \(1|psddt,r_i|\sum _{i=1}^n C_{i}\) is NP-hard, and proposed an approximation algorithm for it. Liu et al. (2012b) studied the problem \(1|psddt|\rho \), where \(\rho \in \{\sum _{i=1}^n w_iC_{i},\sum _{i=1}^n w_i(1-e^{-\gamma C_{i}}),TADC=\sum _{i=1}^n\sum _{j=1}^i|C_{i}-C_j|\}\) and \(\gamma \) (\(0<\gamma <1\)) is the discount factor; they proved that these problems remain polynomially solvable.

Meeting due dates has always been one of the most important scheduling objectives. Two commonly used due-date assignment methods are the following: under the con method, all jobs have a common due date, i.e., the due date of job \(J_i\) is \(d_i=d_{opt}\), where \(d_{opt}\) is a decision variable; under the slk method, all jobs have a common flow allowance, i.e., \(d_i=p_i+q_{opt}\), where \(p_i\) is the processing time of job \(J_i\) and \(q_{opt}\) is a decision variable. Liu et al. (2012b) also considered the con due-date assignment problem \(1|psddt,con|\sum _{i=1}^n (\alpha _1E_i+\alpha _2T_i+\alpha _3d_{opt})\) and proved that it is polynomial-time solvable, where \(E_i=\max \{0, d_{opt}-C_i\}\) (\(T_i=\max \{0, C_i-d_{opt}\}\)) is the earliness (tardiness) of job \(J_i\), \(\alpha _1,\alpha _2,\alpha _3\) are given constants, and \(C_{i}\) is the completion time of job \(J_i\).

Liu (2013) and Wu and Wang (2016) studied learning-effect problems with psddt. Under a parallel-machine setting with a position-dependent learning effect, Liu (2013) proved that the problems Pm|psddt|X, \(X\in \{TADC,\sum _{h=1}^m C_{\max }^h,\sum _{i=1}^n C_{i}\}\), are polynomial-time solvable, where \(\sum _{h=1}^m C_{\max }^h\) is the sum of machine loads and \(C_{\max }^h\) is the makespan of machine \(M_h\), \(h=1,2,\ldots ,m\). Under a truncated sum-of-processing-times-based learning effect on a single machine, Wu and Wang (2016) showed that several minimization problems (i.e., \(C_{\max }\), \(\sum _{i=1}^n C_{i}^\eta \) (\(\eta >0\)), \(\sum _{i=1}^n L_{i}\), \(\sum _{i=1}^n w_iC_{i},\) \(L_{\max }\)) are polynomial-time solvable. Liu et al. (2013) and Yin et al. (2013) tackled problems with psd delivery times and deterioration effects. Under the parallel-machine setting, Liu et al. (2013) presented polynomial algorithms for the objectives TADC, \(\sum _{h=1}^m C_{\max }^h\), and \(\sum _{i=1}^n C_{i}\). Under the single-machine setting, Yin et al. (2013) showed that the makespan (total completion time) minimization is polynomial-time solvable; they also proved that some special cases of the \(\sum _{i=1}^n w_iC_{i}\) and \(L_{\max }\) minimizations remain polynomially solvable. Yang and Yang (2012), and Zhao and Tang (2014) investigated problems with psddt and position-dependent processing times.

On the other hand, Kahlbacher (1992), Brucker (2001), Liu et al. (2017), Liu and Jiang (2020), and Jiang et al. (2020) considered scheduling models with position-dependent weights (pdw), i.e., the weight is attached not to the job but to the position in which the job is scheduled. If the common due date d is a given constant, Kahlbacher (1992) showed that the problem \(1\left| d,pdw\right| \sum _{i=1}^n\nu _{i}|L_{S_{(i)}}|\) is NP-hard, where \(\nu _{i}\) is the weight of the ith position in a sequence (i.e., the pdw), \(L_{S_{(i)}}=C_{S_{(i)}}-d\) is the lateness of job \(J_{S_{(i)}}\), and \(S_{(i)}\) denotes the job scheduled in the ith position under sequence S. Brucker (2001) concentrated on the con method and proved that the problem \(1\left| pdw,con,d_{opt}\right| \sum _{i=1}^n\nu _{i}|L_{S_{(i)}}|+\nu _{0}d_{opt}\) (where \(d_{opt}\) is a decision variable and \(\nu _{0}\ge 0\)) is polynomial-time solvable. Liu et al. (2017) addressed the slk method and showed that \(1\left| pdw,slk,q_{opt}\right| \sum _{i=1}^n\nu _{i}|L_{S_{(i)}}|+\nu _{0}q_{opt}\) (\(\nu _{0}\ge 0\) is the weight of \(q_{opt}\)) can also be solved in polynomial time. Liu and Jiang (2020) dealt with resource allocation scheduling with learning effects; for the single-machine setting, they showed that the con and slk methods remain polynomial-time solvable. Jiang et al. (2020) investigated problems with pdw under the proportionate flowshop setting and proved that the con and slk methods remain polynomial-time solvable.

In this article, we focus on scheduling under a model with psddt and pdw. Motivated by practical settings, such as service production systems, in which the importance of a position makes the weights position-dependent, we introduce position-dependent weights into the psddt model. To the best of our knowledge, there are no results on scheduling with pdw and psddt in the literature. The remainder of the article is organized as follows: Sect. 2 formulates the model. Section 3 considers two scheduling problems without due-date constraints. Section 4 studies two due-date assignment problems. Section 5 provides some extensions. Conclusions are presented in Sect. 6.

2 Model description

Consider a set \(\breve{N}=\{J_1,J_2,\ldots ,J_n\}\) of simultaneously available jobs to be processed on a single machine. Let \(p_{i}\), \(s_{i}\), and \(q_{i}\) be the processing time, starting time, and psddt of job \(J_{i},\) respectively, \(i=1,2,\ldots ,n\), and let \(S_{(i)}\) denote the job scheduled in the ith position under sequence S. As in Koulamas and Kyparisis (2010), we have

$$\begin{aligned} {s_{S_{(i)}}}=\sum _{l=1}^{i-1}p_{S_{(l)}},q_{S_{(1)}}=0\ \text{ and }\ q_{S_{(i)}}= b {s_{S_{(i)}}}=b \sum _{l=1}^{i-1}p_{S_{(l)}}, \end{aligned}$$
(1)

where \(\sum _{l=1}^{0}p_{S_{(l)}}:=0\), and \(b \ge 0\) is a normalizing constant. Let \(C_{i}\) denote the completion time of job \(J_{i}\); \(C_{S_{(i)}} \) is defined analogously. Then

$$\begin{aligned} {C_{S_{(i)}}} = {s_{S_{(i)}}} + p_{S_{(i)}}+ {q_{S_{(i)}}}=(1+b)\sum _{h=1}^{i-1}p_{S_{(h)}}+p_{S_{(i)}},i = 1,2,\ldots ,n. \end{aligned}$$
(2)
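As a quick illustration, Eq. (2) can be evaluated in a single pass over a sequence. The following sketch (function and variable names are ours) returns the completion times of a given job order under psddt.

```python
def completion_times(p, b):
    """Completion times under psddt, per Eq. (2):
    C_{S(i)} = (1+b) * sum_{h<i} p_{S(h)} + p_{S(i)}.

    p: processing times in sequence order; b >= 0 is the normalizing constant.
    """
    C, elapsed = [], 0.0
    for pi in p:
        # the delivery time q = b * (starting time) is added on top of start + processing
        C.append((1 + b) * elapsed + pi)
        elapsed += pi  # the machine itself is occupied only for the processing time
    return C
```

For example, for processing times (2, 3, 1) and b = 0.5 this returns [2.0, 6.0, 8.5], matching Eq. (2) term by term.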

3 Scheduling without due-date

The aim of this section is to study \(1|psddt,pdw|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\) and \(1|psddt,pdw|\sum _{i=1}^n \varpi _i (1-e^{-\gamma C_{S_{(i)}}})\), where \(\sum _{i=1}^n \varpi _i C_{S_{(i)}}\) is the total weighted completion time, \(\sum _{i=1}^n \varpi _i (1-e^{-\gamma C_{S_{(i)}}}) \) is the total discounted weighted completion time, \(\varpi _{i}\) is the weight of the ith position in a sequence (i.e., the pdw), and \(\gamma \) (\(0<\gamma <1\)) is the discount factor.

Theorem 1

For \(1|psddt,pdw|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\), an optimal sequence can be obtained in \(O(n\log n)\) time by the smallest processing time (SPT) first rule.

Proof

Let \(S = ({\tilde{S}}_1,J_i, J_j,{\tilde{S}}_2)\) and \(S^{\perp }=({\tilde{S}}_1, J_j, J_i,{\tilde{S}}_2)\) denote two sequences, where \(p_i\le p_j\), \({\tilde{S}}_1\) and \({\tilde{S}}_2\) are partial sequences (there are \(r-1\) jobs in \({\tilde{S}}_1\)). Under S and \(S^{\perp }\), we have

$$\begin{aligned}&\sum _{i=1}^n \varpi _i C_{S_{(i)}} = \sum _{i=1}^{r-1} \varpi _i C_{S_{(i)}}+\varpi _r[(1+b)\sum _{l=1}^{r-1}p_{[l]}+p_{i}]\nonumber \\&\quad + \, \varpi _{r+1}[(1+b) \sum _{l=1}^{r-1}p_{[l]} +(1+b)p_{i}+p_j]+\sum _{i=r+2}^{n} \varpi _i C_{S_{(i)}}. \end{aligned}$$
(3)
$$\begin{aligned}&\sum _{i=1}^n \varpi _i C_{S^{\perp }_{(i)}} = \sum _{i=1}^{r-1} \varpi _i C_{S^{\perp }_{(i)}}+\varpi _r[(1+b)\sum _{l=1}^{r-1}p_{[l]}+p_{j}]+ \varpi _{r+1}[(1+b)\sum _{l=1}^{r-1}p_{[l]}\nonumber \\&\quad +\, (1+b)p_{j}+p_i]+\sum _{i=r+2}^{n} \varpi _i C_{S^{\perp }_{(i)}}. \end{aligned}$$
(4)

From (3) and (4), we have \(\sum _{i=1}^{r-1} \varpi _i C_{S_{(i)}}=\sum _{i=1}^{r-1} \varpi _i C_{S^{\perp }_{(i)}}\), \(\sum _{i=r+2}^{n} \varpi _i C_{S_{(i)}}=\sum _{i=r+2}^{n} \varpi _i C_{S^{\perp }_{(i)}}\),

$$\begin{aligned} \sum _{i=1}^n \varpi _i C_{S_{(i)}} -\sum _{i=1}^n \varpi _i C_{S^{\perp }_{(i)}}= & {} (\varpi _r+b\varpi _{r+1})(p_i-p_j). \end{aligned}$$

Since \(p_i\le p_j\), it follows that \(\sum _{i=1}^n \varpi _i C_{S_{(i)}} -\sum _{i=1}^n \varpi _i C_{S^{\perp }_{(i)}}\le 0\); repeating this pairwise interchange argument yields the SPT sequence. \(\square \)

Theorem 2

For \(1|psddt,pdw|\sum _{i=1}^n \varpi _i (1-e^{-\gamma C_{S_{(i)}}})\), an optimal sequence can be obtained in \(O(n\log n)\) time, i.e., by the SPT rule.

Proof

The proof is the same as that of Theorem 1, except that, if \(p_i\le p_j\), we have

$$\begin{aligned}&\sum _{i=1}^n \varpi _i (1-e^{-\gamma C_{S_{(i)}}}) -\sum _{i=1}^n \varpi _i (1-e^{-\gamma C_{S^{\perp }_{(i)}}})\\&\quad =\varpi _r(1-e^{-\gamma [(1+b)\sum _{l=1}^{r-1}p_{[l]}+p_{i}]}) +\varpi _{r+1}(1-e^{-\gamma [(1+b) \sum _{l=1}^{r-1}p_{[l]} +(1+b)p_{i}+p_j]})\\&\qquad -\varpi _r(1-e^{-\gamma [(1+b)\sum _{l=1}^{r-1}p_{[l]}+p_{j}]}) - \varpi _{r+1}(1-e^{-\gamma [(1+b) \sum _{l=1}^{r-1}p_{[l]} +(1+b)p_{j}+p_i]})\\&\quad = \varpi _re^{-\gamma (1+b)\sum _{l=1}^{r-1}p_{[l]}}\left( e^{-\gamma p_{j}}-e^{-\gamma p_{i}} \right) \\&\qquad +\, \varpi _{r+1}e^{-\gamma [(1+b)\sum _{l=1}^{r-1}p_{[l]}+p_i+p_j]}\left( e^{-\gamma b p_{j}}-e^{-\gamma b p_{i}} \right) \\&\quad \le 0. \end{aligned}$$

\(\square \)

4 Due-date assignment problem

4.1 Con due-date

For the con method, we have \(d_i=d_{opt}\) for all jobs; the aim is to determine \(d_{opt}\) and a job sequence such that

$$\begin{aligned} \sum _{i=1}^n\varpi _{i}|L_{{S_{(i)}}}|+\varpi _{0}d_{opt} =\sum _{i=1}^n\varpi _{i}|C_{S_{(i)}}-d_{opt}|+\varpi _{0}d_{opt}, \end{aligned}$$
(5)

is minimized, where \(\varpi _i>0\) (\(i =0, 1,2,\ldots ,n\)) are the pdw. Brucker (2001) considered the problem \(1\left| pdw,con,d_{opt}\right| \sum _{i=1}^n\nu _{i}|L_{S_{(i)}}|+\nu _{0}d_{opt}\), i.e., without psddt. Obviously, for the problem \(1\left| psddt,pdw,con,d_{opt}\right| \) \(\sum _{i=1}^n \varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}\), there is no idle time between the jobs and the first job starts at time 0 (see Lemma 7.1 in Brucker 2001).

Now a dummy job \(J_0\) is introduced with processing time \(p_{0}=0\), position-dependent weight \(\varpi _{0}\), and starting time 0; we then have

$$\begin{aligned} \sum _{i=1}^n\varpi _{i}|C_{S_{(i)}}-d_{opt}|+\varpi _{0}d_{opt} =\sum _{i=0}^n\varpi _{i}|C_{S_{(i)}}-d_{opt}|, \end{aligned}$$
(6)

and an optimal schedule can be written as \(S=(S_{(0)},S_{(1)},\ldots ,S_{(n)})\), where \(S_{(0)}=J_0\).

Lemma 1

For a given sequence \(S=(S_{(0)},S_{(1)},\ldots ,S_{(n)})\) of the problem \(1\left| psddt,pdw,con,d_{opt}\right| \) \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}| \) \(+\varpi _{0}d_{opt}\), \(d_{opt}=C_{S_{(k)}}=(1+b)\sum _{i=1}^{k-1}p_{S_{(i)}}+p_{S_{(k)}}\), where k is a median for the sequence \(\varpi _{0},\varpi _{1},\ldots ,\varpi _{n}\), i.e.,

$$\begin{aligned} \sum _{i=0}^{k-1}\varpi _{i}\le \sum _{i=k}^{n}\varpi _{i} \text{ and } \sum _{i=0}^{k}\varpi _{i}\ge \sum _{i=k+1}^{n}\varpi _{i}. \end{aligned}$$
(7)

Proof

Similar to the proof of Lemma 7.2 in Brucker (2001). \(\square \)

Lemma 2

For the problem \(1\left| psddt,pdw,con,d_{opt}\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}\), we have

$$\begin{aligned} \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}=\sum _{i=1}^n\vartheta _{i} p_{{S_{(i)}}}, \end{aligned}$$
(8)

where

$$\begin{aligned} \vartheta _{i}=\left\{ \begin{array}{lll} \sum \limits _{v = 0}^i {b{\varpi _v}} + \sum \limits _{v = 0}^{i - 1} {{\varpi _v}}, &{} { i=1,2,\ldots ,k-1;} \\ \sum \limits _{v = 0}^{k-1} {{\varpi _v}} + \sum \limits _{v = k+1}^n {b{\varpi _v}}, &{} { i=k;} \\ \sum \limits _{v = i}^n {{\varpi _v}} + \sum \limits _{v = i + 1}^n {b{\varpi _v}}, &{} { i=k+1,k+2,\ldots ,n.} \\ \end{array} \right. \end{aligned}$$
(9)

Proof

For \(S=(S_{(0)},S_{(1)},\ldots ,S_{(n)})\), from Lemma 1, \(d_{opt}=C_{S_{(k)}}\),

$$\begin{aligned}&\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}\\&\quad = \sum _{i=0}^{k-1}\varpi _{i}(C_{S_{(k)}}-C_{S_{(i)}}) +\sum _{i=k+1}^{n}\varpi _{i}(C_{S_{(i)}}-C_{S_{(k)}}) \\&\quad =\sum _{i=0}^{k-1}\varpi _{i}\left( bp_{S_{(i)}}\right. \\&\qquad \left. +(1+b)\sum _{v=i+1}^{k-1}p_{S_{(v)}}+p_{S_{(k)}}\right) +\sum \limits _{i = k + 1}^n \varpi _{i}\left( bp_{S_{(k)}} +(1+b)\sum _{v=k+1}^{i-1}p_{S_{(v)}}+p_{S_{(i)}}\right) \\&\quad =\sum \limits _{v = 1}^{k - 1} {{p_{S_{\left( v \right) }}}} \left( {\sum \limits _{i = 0}^v {b{\varpi _i}} + \sum \limits _{i = 0}^{v - 1} {{\varpi _i}} } \right) \\&\qquad + \, {p_{S_{\left( k \right) }}}\left( {\sum \limits _{i = 0}^{k-1} {{\varpi _i}} + \sum \limits _{i = k+1}^n {b{\varpi _i}} } \right) + \sum \limits _{v = k + 1}^n {{p_{S_{\left( v \right) }}}} \left( {\sum \limits _{i = v}^n {{\varpi _i}} + \sum \limits _{i = v + 1}^n {b{\varpi _i}} } \right) \\&\quad =\sum _{i=1}^{n}\vartheta _{i}p_{S_{(i)}}, \end{aligned}$$

where \( \vartheta _{i}\) (\(i=1,2,\ldots ,n\)) are given by (9). \(\square \)
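Lemma 1 is a weighted-median rule: the objective is convex and piecewise linear in \(d\), so an optimal \(d_{opt}\) lies at \(0\) or at one of the completion times, and the median condition (7) picks it out using the weights alone. A numerical sketch (names are ours) that checks the rule against a scan over all candidate due dates:

```python
def con_cost(p, w, b, d):
    """sum_{i=1}^n w[i]*|C_{S(i)} - d| + w[0]*d for a fixed sequence.

    p: processing times in sequence order (positions 1..n); w = [w0, w1, ..., wn].
    """
    elapsed, cost = 0.0, w[0] * d
    for pos, pi in enumerate(p, start=1):
        C = (1 + b) * elapsed + pi  # Eq. (2)
        cost += w[pos] * abs(C - d)
        elapsed += pi
    return cost

def median_index(w):
    """Smallest k with w[0]+...+w[k] >= w[k+1]+...+w[n], as in Lemma 1 / Eq. (7)."""
    total, acc = sum(w), 0.0
    for k in range(len(w)):
        acc += w[k]
        if acc >= total - acc:
            return k
```

Here k = 0 corresponds to the dummy job \(J_0\), i.e., \(d_{opt}=0\).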

4.2 Slack due-date assignment

For the slk method, we have \(d_{S_{(i)}}=p_{S_{(i)}}+q_{opt}\), where \(q_{opt}\) is a decision variable. The problem is to determine \(q_{opt}\) and a job sequence such that

$$\begin{aligned} \sum _{i=1}^n\varpi _{i}|L_{{S_{(i)}}}|+\varpi _{0}q_{opt} =\sum _{i=1}^n\varpi _{i}|C_{{S_{(i)}}}-d_{{S_{(i)}}}|+\varpi _{0}q_{opt} \end{aligned}$$
(10)

is minimized. Liu et al. (2017) considered the problem \(1\left| pdw,slk,q_{opt}\right| \sum _{i=1}^n\nu _{i}|L_{S_{(i)}}|+\nu _{0}q_{opt}\), i.e., without psddt. Obviously, for the problem \(1\left| psddt,pdw,slk,q_{opt}\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\), there is no idle time between the jobs and the first job starts at time 0 (see Liu et al. 2017).

Similar to Sect. 4.1, a dummy job \(J_0\) is introduced with processing time \(p_{0}=0\), position-dependent weight \(\varpi _{0}\), and starting time 0; then

$$\begin{aligned} \sum _{i=1}^n\varpi _{i}|C_{{S_{(i)}}}-d_{{S_{(i)}}}|+\varpi _{0}q_{opt} =\sum _{i=0}^n\varpi _{i}|C_{{S_{(i)}}}-d_{{S_{(i)}}}|, \end{aligned}$$
(11)

and an optimal schedule is \(S=(S_{(0)},S_{(1)},\ldots ,S_{(n)})\).

Lemma 3

For a given sequence \(S=(S_{(0)},S_{(1)},\ldots ,S_{(n)})\) of the problem \(1\left| psddt,pdw,slk,q_{opt}\right| \) \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\), \(q_{opt}=C_{{S_{(l)}}}=(1+b)\sum _{i=1}^{l-1}p_{S_{(i)}}+p_{S_{(l)}}\), where l is a median for the sequence \(\varpi _{0},\varpi _{1},\ldots ,\varpi _{n}\), i.e.,

$$\begin{aligned} \sum _{i=0}^{l}\varpi _{i}\le \sum _{i=l+1}^{n}\varpi _{i} \text{ and } \sum _{i=0}^{l+1}\varpi _{i}\ge \sum _{i=l+2}^{n}\varpi _{i}. \end{aligned}$$
(12)

Proof

Similar to the proof of Liu et al. (2017). \(\square \)

Lemma 4

For the problem \(1\left| psddt,pdw,slk,q_{opt}\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\), the optimal total cost can be written as:

$$\begin{aligned} \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}= \sum _{i=1}^n\varpi _{i}|C_{S_{(i)}}-d_{S_{(i)}}|+\varpi _{0}q_{opt}=\sum _{i=1}^{n} \vartheta _{i}p_{S_{(i)}}, \end{aligned}$$
(13)

where

$$\begin{aligned} \vartheta _{i}=\left\{ \begin{array}{llll} \sum \limits _{v = 0}^{i + 1} {b{\varpi _v}} + \sum \limits _{v = 0}^i {{\varpi _v}}, &{} { i=1,2,\ldots ,l-1;} \\ \sum \limits _{v = 0}^{l + 1} {{\varpi _v}} + \sum \limits _{v = l + 1}^n {b{\varpi _v}} , &{} { i=l;} \\ \sum \limits _{v = i + 1}^n {{\varpi _v}} + \sum \limits _{v = i + 2}^n {b{\varpi _v}}, &{} { i=l+1,l+2,\ldots ,n-1;} \\ 0,&{} {i=n.} \end{array} \right. \end{aligned}$$
(14)

Proof

For \(S=(S_{(0)},S_{(1)},\ldots ,S_{(n)})\), from Lemma 3, \(q_{opt}=C_{S_{(l)}}\),

$$\begin{aligned}&\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\\&\quad =\sum \limits _{i = 0}^{l + 1} {{\varpi _i}} \left( {{C_{S\left( l \right) }} - {C_{S\left( {i - 1} \right) }}} \right) + \sum \limits _{i = l + 2}^n {{\varpi _i}} \left( {{C_{S\left( {i - 1} \right) }} - {C_{S\left( l \right) }}} \right) \\&\quad =\sum \limits _{i = 0}^{l + 1} {{\varpi _i}} \left( {b{p_{S\left( {i - 1} \right) }} + \left( {1 + b} \right) \sum \limits _{v = i}^{l - 1} {{p_{S\left( v \right) }}} + {p_{S\left( l \right) }}} \right) \\&\qquad + \sum \limits _{i = l + 2}^n {{\varpi _i}} \left( {b{p_{S\left( l \right) }} + \left( {1 + b} \right) \sum \limits _{v = l + 1}^{i - 2} {{p_{S\left( v \right) }}} + {p_{S\left( {i - 1} \right) }}} \right) \\&\quad =\sum \limits _{v = 1}^{l - 1} {{p_{S\left( v \right) }}} \left( {\sum \limits _{i = 0}^{v + 1} {b{\varpi _i}} + \sum \limits _{i = 0}^v {{\varpi _i}} } \right) + {p_{S\left( l \right) }}\left( {\sum \limits _{i = 0}^{l + 1} {{\varpi _i}} + \sum \limits _{i = l + 1}^n {b{\varpi _i}} } \right) \\&\qquad + \sum \limits _{v = l + 1}^{n - 1} {{p_{S\left( v \right) }}} \left( {\sum \limits _{i = v + 1}^n {{\varpi _i}} + \sum \limits _{i = v + 2}^n {b{\varpi _i}} } \right) \\&\quad =\sum _{i=1}^{n}\vartheta _{i}p_{S_{(i)}}, \end{aligned}$$

where \( \vartheta _{i}\) (\(i=1,2,\ldots ,n\)) are given by (14). \(\square \)

4.3 Optimal solution

The sums (8) and (13) can be minimized by sequencing the vectors \(\vartheta _{i}\) and \(p_{S_{(i)}}\) in opposite order (see Hardy et al. 1967) in \(O(n\log n)\) time; therefore \(1\left| psddt,pdw,con,d_{opt}\right| \) \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}\) and \(1\left| psddt,pdw,slk,q_{opt}\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\) can be solved by the following algorithm:

Algorithm 1

Step 1. By Lemma 1 (Lemma 3), calculate k (l).

Step 2. Sequence the vectors \(\vartheta _{i}\) and \(p_{i}\) in opposite order (see (9) and (14)) to obtain an optimal sequence.

Step 3. For the con (slk) due-date assignment, set \(d_{opt}=C_{S_{(k)}}=(1+b)\sum _{h=1}^{k-1}p_{S_{(h)}}+p_{S_{(k)}}\) (\(q_{opt}=C_{S_{(l)}}=(1+b)\sum _{h=1}^{l-1}p_{S_{(h)}}+p_{S_{(l)}}\)).

Theorem 3

Algorithm 1 solves \(1\left| psddt,pdw,con,d_{opt}\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}\) and \(1\left| psddt,pdw,slk,q_{opt}\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\) in \(O(n\log n)\) time, respectively.

Proof

Steps 1 and 3 require O(n) time each, and Step 2 needs \(O(n\log n)\) time; thus the complexity of Algorithm 1 is \(O(n\log n)\). \(\square \)

Note: If the common due date d is a given constant, Kahlbacher (1992) proved that the problem \(1\left| d,pdw\right| \) \( \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|\) is NP-hard; hence, \(1\left| psddt,pdw,d\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|\) is also NP-hard. Hall and Posner (1991) showed that the con due-date assignment problem \(1\left| con,d_{opt}\right| \) \( \sum _{i=1}^nw_{i}|L_{i}|\) is NP-hard, where \(w_{i}\) is the job-dependent weight of \(J_i\); hence, the job-dependent-weight problem \(1\left| psddt,con, d_{opt}\right| \sum _{i=1}^nw_{i}|L_{i}|\) \(+w_{0}d_{opt}\) is also NP-hard, where \(w_{0}\) is a given constant.

5 Extensions

5.1 Position-dependent processing times

Position-dependent processing times were given by Mosheiov (2011): the processing time of job \(J_i\), if scheduled in position r, is given by a function f(i,r) (i.e., \(p_{i}^A=f(i,r), i,r=1,\ldots ,n\)). Biskup (1999) considered the model \(f(i,r)=p_ir^{a}\), Mosheiov and Sidney (2003) the model \(f(i,r)=p_ir^{a_i}\), Cheng et al. (2013) the model \(f(i,r)=p_i\max \{r^{a},\beta \}\), and Wang et al. (2014) the model \(f(i,r)=p_i\max \{r^{a_i},\beta \}\), where \(a\le 0\) is the learning index (Biskup 1999; Wang and Zhang 2015; Wang et al. 2020), \(a_i\le 0\) is the job-dependent learning index, and \(\beta \) is a truncation parameter (\(0<\beta <1\)) (Cheng et al. 2013; Wang et al. 2014, 2019; Lu et al. 2015).

Obviously, from (2), we have

$$\begin{aligned} \sum _{i=1}^n\varpi _i C_{S_{(i)}}=\sum _{i=1}^n\varpi _i \left( (1+b)\sum _{h=1}^{i-1}p_{S_{(h)}}^A+p_{S_{(i)}}^A\right) =\sum _{i=1}^{n}\vartheta _{i}p_{S_{(i)}}^A=\sum _{i=1}^{n}\vartheta _{i}f(i,r),\nonumber \\ \end{aligned}$$
(15)

where

$$\begin{aligned} \vartheta _{i}=\left\{ \begin{array}{llll} \varpi _1 +\sum \limits _{v = 2}^{n} {(1+b){\varpi _v}}, &{} \hbox {for}\, i=1; \\ \varpi _2 +\sum \limits _{v = 3}^{n} {(1+b){\varpi _v}}, &{} \hbox {for}\, i=2; \\ \ldots &{}\\ \varpi _{n-1} +(1+b){\varpi _n}, &{} \hbox {for}\, i=n-1; \\ \varpi _n, &{} \hbox {for}\, i=n; \\ \end{array} \right. \end{aligned}$$
(16)

Let

$$\begin{aligned} y_{i,r}=\left\{ \begin{array}{ll} 1, &{} \quad \hbox {if}\, J_i\, \hbox {is assigned to}\, r\, \hbox {th position}, \\ 0, &{} \quad \hbox {otherwise.} \\ \end{array} \right. \end{aligned}$$

Then the sequencing problem of \(1|psddt,pdw,p_{i}^A=f(i,r)|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\) can be formulated as the following assignment problem:

$$\begin{aligned} \text{ min }&\sum _{i=1}^n\sum _{r=1}^n\vartheta _{r}f(i,r)y_{i,r} \end{aligned}$$
(17)
$$\begin{aligned} s.t.&\sum _{i=1}^n y_{i,r}=1;r=1,2,\ldots ,n \end{aligned}$$
(18)
$$\begin{aligned}&\sum _{r=1}^n y_{i,r}=1;i=1,2,\ldots ,n \end{aligned}$$
(19)
$$\begin{aligned}&y_{i,r}\in \left\{ 0, 1 \right\} ,\quad i,r=1,2,\ldots ,n, \end{aligned}$$
(20)

where

$$\begin{aligned} \vartheta _{r}=\left\{ \begin{array}{llll} \varpi _1 +\sum \limits _{v = 2}^{n} {(1+b){\varpi _v}}, &{} \hbox {for}\, r=1; \\ \varpi _2 +\sum \limits _{v = 3}^{n} {(1+b){\varpi _v}}, &{} \hbox {for}\, r=2; \\ \ldots &{}\\ \varpi _{n-1} +(1+b){\varpi _n}, &{} \hbox {for}\, r=n-1; \\ \varpi _n, &{} \hbox {for}\, r=n. \\ \end{array} \right. \end{aligned}$$
(21)
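The formulation (17)–(20) is a standard linear assignment problem; solved with the Hungarian algorithm it gives the \(O(n^3)\) bound of Theorem 4. The sketch below (names are ours) builds the cost matrix \(\vartheta _r f(i,r)\) and, for brevity, solves tiny instances exhaustively; a production implementation would substitute a Hungarian-algorithm solver.

```python
from itertools import permutations

def assignment_costs(theta, f, n):
    """Cost matrix of (17): entry [i][r] = theta[r] * f(i, r) (0-based positions)."""
    return [[theta[r] * f(i, r) for r in range(n)] for i in range(n)]

def solve_assignment(cost):
    """Exhaustive solver for (17)-(20); only for tiny n.  perm[r] = job in position r."""
    n = len(cost)
    best, best_val = None, float("inf")
    for perm in permutations(range(n)):
        val = sum(cost[perm[r]][r] for r in range(n))
        if val < best_val:
            best, best_val = perm, val
    return best, best_val
```

For instance, with Biskup's learning model \(f(i,r)=p_ir^{a}\) (0-based r) the matrix would be built as `assignment_costs(theta, lambda i, r: p[i] * (r + 1) ** a, n)`.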

Obviously, Lemmas 1, 2, 3 and 4 still hold when position-dependent processing times are introduced; hence

$$\begin{aligned} \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}/q_{opt} =\sum _{i=1}^{n}\vartheta _{i}p_{S_{(i)}}^A =\sum _{i=1}^n\vartheta _{i} f(i,r), \end{aligned}$$
(22)

where, for the con and slk models, \(\vartheta _{i}\) (\(i=1,2,\ldots ,n\)) are given by (9) and (14), respectively.

Similarly, we can formulate the sequence problem of \(1|psddt,pdw,con/slk,d_{opt}/q_{opt},p_{i}^A=f(i,r)| \) \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}/q_{opt}\) as the assignment problem (17)–(20), where, for the con due-date assignment,

$$\begin{aligned} \vartheta _{r}=\left\{ \begin{array}{lll} \sum \limits _{v = 0}^r {b{\varpi _v}} + \sum \limits _{v = 0}^{r - 1} {{\varpi _v}}, &{} { r=1,2,\ldots ,k-1;} \\ \sum \limits _{v = 0}^{k-1} {{\varpi _v}} + \sum \limits _{v = k+1}^n {b{\varpi _v}}, &{} { r=k;} \\ \sum \limits _{v = r}^n {{\varpi _v}} + \sum \limits _{v = r + 1}^n {b{\varpi _v}}, &{} { r=k+1,k+2,\ldots ,n;} \\ \end{array} \right. \end{aligned}$$
(23)

for the slk due-date assignment,

$$\begin{aligned} \vartheta _{r}=\left\{ \begin{array}{llll} \sum \limits _{v = 0}^{r + 1} {b{\varpi _v}} + \sum \limits _{v = 0}^r {{\varpi _v}}, &{} { r=1,2,\ldots ,l-1;} \\ \sum \limits _{v = 0}^{l + 1} {{\varpi _v}} + \sum \limits _{v = l + 1}^n {b{\varpi _v}} , &{} { r=l;} \\ \sum \limits _{v = r + 1}^n {{\varpi _v}} + \sum \limits _{v = r + 2}^n {b{\varpi _v}}, &{} { r=l+1,l+2,\ldots ,n-1;} \\ 0,&{} {r=n.} \end{array} \right. \end{aligned}$$
(24)

Based on the above analysis, we have

Theorem 4

\(1|psddt,pdw,p_{i}^A=f(i,r)|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\), \(1|psddt,pdw,con,d_{opt},p_{i}^A=f(i,r)| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|\) \(+\varpi _{0}d_{opt}\) and \(1|psddt,pdw,slk,q_{opt},p_{i}^A=f(i,r)| \) \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\) are solvable in \(O(n^3)\) time, respectively.

If \(p_{i}^A=p_i\max \{r^{a},\beta \}\), the objective reduces to \(\sum _{i=1}^{n}\vartheta _{i}\max \{i^{a},\beta \}p_{S_{(i)}} \), which can be minimized by sequencing the vectors \(\vartheta _{i}\max \{i^{a},\beta \}\) and \(p_{S_{(i)}}\) in opposite order (see Hardy et al. 1967); hence, we have

Theorem 5

\(1|psddt,pdw,p_{i}^A=p_i\max \{r^{a},\beta \}|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\), \(1|psddt,pdw,con,d_{opt},p_{i}^A=p_i\max \{r^{a},\beta \}|\) \( \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|\) \(+\varpi _{0}d_{opt}\) and \(1|psddt,pdw,slk,q_{opt},p_{i}^A=p_i\max \{r^{a},\beta \}| \) \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\) are solvable in \(O(n\log n)\) time, respectively.

5.2 Time-dependent processing times

Time-dependent processing times (Gawiejnowicz 2008; Lu 2016) are now incorporated into the model, i.e.,

$$\begin{aligned} p_{i}^A=p_i+cs_i, i=1,\ldots ,n, \end{aligned}$$
(25)

where \(c\ge 0\) is the common deterioration rate and \(s_i\) is the starting time of \(J_i\) (Gawiejnowicz 2008; Lu 2016).

Obviously, for the problem \(1|psddt,pdw,p_{i}^A=p_i+cs_i|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\), we have

$$\begin{aligned} s_{[1]}= & {} 0,s_{[2]}=0+p_{[1]}+c\times 0=p_{[1]}, \\ s_{[3]}= & {} s_{[2]}+p_{[2]}+c\times s_{[2]}=p_{[2]}+(1+c)p_{[1]}, \\&\ldots , \\ s_{[i]}= & {} p_{[i-1]}+(1+c)p_{[i-2]}+\ldots +(1+c)^{i-2}p_{[1]}, \\ p_{[i]}^A=p_{[i]}+cs_{[i]}= & {} p_{[i]}+cp_{[i-1]}+c(1+c)p_{[i-2]}+\ldots +c(1+c)^{i-2}p_{[1]}. \end{aligned}$$

For the objective functions \(\sum _{i=1}^n\varpi _i C_{S_{(i)}} =\sum _{i=1}^{n}\vartheta _{i}p_{S_{(i)}}^A\) and \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0} d_{opt}/q_{opt}=\sum _{i=1}^{n}\vartheta _{i}p_{S_{(i)}}^A\), from the above analysis we have

$$\begin{aligned} \sum _{i=1}^{n}\vartheta _{i}p_{S_{(i)}}^A =\sum _{i=1}^n\psi _{i} p_{S_{(i)}}, \end{aligned}$$
(26)

where

$$\begin{aligned} \psi _1= & {} \vartheta _1+c\vartheta _2+c(1+c)\vartheta _3+\ldots +c(1+c)^{n-2}\vartheta _n \nonumber \\ \psi _2= & {} \vartheta _2+c\vartheta _3+c(1+c)\vartheta _4+\ldots +c(1+c)^{n-3}\vartheta _n \nonumber \\ \psi _3= & {} \vartheta _3+c\vartheta _4+c(1+c)\vartheta _5+\ldots +c(1+c)^{n-4}\vartheta _n \nonumber \\&\qquad \ldots \nonumber \\ \psi _{n-1}= & {} \vartheta _{n-1}+c\vartheta _n \nonumber \\ \psi _n= & {} \vartheta _n, \end{aligned}$$
(27)

where, for \(\sum _{i=1}^n\varpi _i C_{S_{(i)}}\), \(\vartheta _{i}\) (\(i=1,2,\ldots ,n\)) are given by (16); for the con due-date model, \(\vartheta _{i}\) (\(i=1,2,\ldots ,n\)) are given by (9); for the slk due-date model, \(\vartheta _{i}\) (\(i=1,2,\ldots ,n\)) are given by (14).
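The coefficients (27) can be accumulated in one right-to-left pass: writing \(\phi _i=\vartheta _i+(1+c)\phi _{i+1}\), we get \(\psi _i=\vartheta _i+c\,\phi _{i+1}\). A sketch (names are ours):

```python
def psi_from_theta(theta, c):
    """Eq. (27): psi_i = theta_i + c*theta_{i+1} + c(1+c)*theta_{i+2} + ...

    theta: positional coefficients (from (9), (14) or (16)); c >= 0 is the
    common deterioration rate of p_i^A = p_i + c*s_i.
    """
    n = len(theta)
    psi, phi = [0.0] * n, 0.0  # phi = theta_{i+1} + (1+c)*theta_{i+2} + ...
    for i in range(n - 1, -1, -1):
        psi[i] = theta[i] + c * phi
        phi = theta[i] + (1 + c) * phi
    return psi
```

Minimizing \(\sum _{i=1}^n\psi _{i} p_{S_{(i)}}\) then again reduces to sequencing the vectors \(\psi _i\) and \(p_{S_{(i)}}\) in opposite order, which gives the \(O(n\log n)\) bound of Theorem 6.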

Similarly, we have:

Theorem 6

The problems \(1|psddt,pdw,p_{i}^A=p_i+cs_i|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\), \(1|psddt,pdw,con,d_{opt},p_{i}^A=p_i+cs_i| \sum _{i=1}^n\varpi _{i}\) \(|L_{S_{(i)}}|+\varpi _{0}d_{opt}\) and \(1|psddt,pdw,slk,q_{opt},p_{i}^A=p_i+cs_i| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}q_{opt}\) can be solved in \(O(n\log n)\) time, respectively.

6 Conclusions

We addressed scheduling problems with psddt and pdw (see Table 1). We proved that the problems \(1|psddt,pdw|\sum _{i=1}^n \varpi _i C_{S_{(i)}}\), \(1|psddt,pdw|\sum _{i=1}^n \varpi _i (1-e^{-\gamma C_{S_{(i)}}})\), \(1\left| psddt,pdw,con,d_{opt}\right| \sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}\), and \(1\left| psddt,pdw,slk,q_{opt}\right| \sum _{i=1}^n\varpi _{i} |L_{S_{(i)}}|+\varpi _{0}q_{opt}\) are solvable in \(O(n\log n)\) time. For the position-dependent and time-dependent processing-time extensions, we showed that the objective functions \(\sum _{i=1}^n \varpi _i C_{S_{(i)}}\) and \(\sum _{i=1}^n\varpi _{i}|L_{S_{(i)}}|+\varpi _{0}d_{opt}/q_{opt}\) remain polynomially solvable. Future research may consider the problem \(1|psddt,pdw,B|\sum _{i=1}^n \varpi _i (1-e^{-\gamma C_{S_{(i)}}})\), \(B\in \{p_{i}^A=f(i,r), p_{i}^A=p_i+cs_i\}\), study multi-machine settings (such as flow shop and job shop scheduling), investigate resource allocation scheduling (Lu and Liu 2018; Li et al. 2018), or optimize other objective functions with deterioration and learning effects (see Wang et al. 2012a, b; Lu et al. 2016).

Table 1 Results of this paper