1 Introduction

In traditional scheduling problems, job processing times are assumed to be fixed constants. However, in many real-life situations it is not rare that the actual processing time of a job is shorter when it is scheduled later in the sequence (for example, the performance of machines and workers improves by repeating the production operations); this phenomenon is known as the "learning effect" [1]. An extensive survey of scheduling problems with learning effects is provided by Biskup [2]. More recently, Cheng et al. [3] considered scheduling problems with deteriorating jobs and learning effects in which the setup times are proportional to the actual processing times of the already scheduled jobs; for some single-machine problems, they derived polynomial-time optimal solutions. Cheng et al. [4] considered two-agent scheduling with a truncated sum-of-processing-times-based learning effect on a single machine. Wang and Wang [25, 26], Sun et al. [21, 22] and Wang et al. [33] considered flow-shop scheduling with learning effects. Wang et al. [23] considered two-machine flow-shop scheduling to minimize the makespan under the effects of deterioration and learning. Wang et al. [31] considered single-machine scheduling with a time-dependent learning effect and deteriorating jobs, and obtained results for the total completion time minimization problem. Wang et al. [32] considered single-machine scheduling problems with a truncated exponential learning effect, and proved that some regular objective functions can be minimized in polynomial time. Wu et al. [35] considered scheduling jobs with a truncated sum-of-processing-times-based learning effect, and proved that some single-machine scheduling problems can be solved in polynomial time. Wu et al. [36] considered scheduling jobs with a truncated position-based learning effect, and proved that some single-machine and two-machine flow-shop scheduling problems can be solved in polynomial time. Other types of scheduling problems and models with learning effects have also been discussed; further references can be found in the papers by Zhang et al. [44], Rudek [18, 19], Cheng et al. [5, 6], Wang et al. [24, 32], Lu et al. [16], Wang and Wang [27–30], Yang et al. [37], Niu et al. [17], and Li et al. [10].

Koulamas and Kyparisis [9] and Liu et al. [14, 15] considered scheduling with past-sequence-dependent (p-s-d) job delivery times on a single machine. For some regular and non-regular objective functions, Koulamas and Kyparisis [9] and Liu et al. [15] showed that the problems can be solved by simple polynomial-time algorithms. Liu et al. [14] considered the problem with p-s-d job delivery times and release times. Liu et al. [13] and Yin et al. [41] studied scheduling with p-s-d job delivery times and deteriorating jobs. However, to the best of our knowledge, apart from the recent papers of Yang et al. [38, 40], Yang and Yang [39], Liu [12], Yin et al. [42], Shen and Wu [20], and Zhao and Tang [43], scheduling models that consider the learning effect and p-s-d delivery times simultaneously have not been investigated. Yang et al. [38] studied scheduling with p-s-d job delivery times and a job-independent learning effect (i.e., the actual processing time of job \(J_j\) scheduled in position \(r\) on a single machine is \(p_{jr}^A = p_jr^a\), where \(a\le 0\) is a constant learning index). Yang and Yang [39] studied scheduling with p-s-d job delivery times and position-dependent processing times (i.e., \(p_{jr}^A = p_jr^{a_j}\), where \(a_j\) is a job-dependent factor). Liu [12] considered the identical parallel-machine scheduling problem with p-s-d delivery times and the job-independent learning effect of Yang et al. [38]; for the minimization of the total absolute deviation of job completion times, the total load on all machines and the total completion time, polynomial algorithms were proposed to solve these problems optimally. Yin et al. [42] considered the single-machine scheduling problem with p-s-d delivery times and the same job-independent learning effect; for four due date determination decisions, they proved that the proposed problems can be solved in polynomial time. Yang et al. [40] considered p-s-d delivery time scheduling problems with deterioration and learning effects simultaneously on a single machine. Shen and Wu [20] considered single-machine scheduling with p-s-d job delivery times and general position- and time-dependent learning effects (i.e., the actual processing time of job \(J_j\) scheduled in position \(r\) is \(p_{j r}^A = p_jf (\sum _{i=1}^{r-1} p_{[i]})g(r)\), where \(f\) is a differentiable non-increasing function with \(f'\) non-decreasing on \([0,+\infty )\) and \(f(0) = 1\), and \(g\) is a non-increasing function with \(g(1) = 1\)). Under the proposed learning model, they provided optimal solutions for some regular objective functions. Zhao and Tang [43] considered single-machine scheduling with p-s-d job delivery times and general position-dependent processing times (i.e., \(p_{j r}^A = p_jg_j(r)\), where \(g_j(r)\) specifies a job-dependent positional effect), and provided optimal solutions for some objective functions.

However, under an uncontrolled learning effect, the actual processing time of a job can drop dramatically toward zero as the number of already processed jobs grows or when jobs with long processing times appear. Hence, Cheng et al. [4], Wu et al. [35, 36], Wang et al. [24] and Li et al. [11] considered the truncated learning effect model. In this paper, we therefore extend the results of Yang et al. [38, 40], Yang and Yang [39], Liu [12], Yin et al. [42], Shen and Wu [20], and Zhao and Tang [43] to the single-machine scheduling problem with a truncated sum-of-processing-times-based learning effect and p-s-d delivery times. The combination of a truncated sum-of-processing-times-based learning effect and past-sequence-dependent (p-s-d) job delivery times arises in production situations: "for example, an electronic component may be exposed to certain electromagnetic and/or radioactive fields while waiting in the machine's preprocessing area, and regulatory authorities require the component to be 'treated' (e.g., in a chemical solution capable of removing/neutralizing certain effects of electro-magnetic/radioactive fields) for an amount of time proportional to the job's exposure time to these fields. This treatment can be performed after the component has been processed by the machine, but before it is delivered to the customer so it can be delivered with a 'guarantee.' Such a postprocessing operation is usually called the job 'delivery time,' i.e., the past-sequence-dependent (p-s-d) job delivery time. In addition, if human interactions (the machines or workers) have a significant impact during the processing of the job, the processing time will add to the workers' experience and cause an uncontrolled learning effect" [9, 20].

The rest of the paper is organized as follows: Sect. 2 formulates the problem. Section 3 considers several single-machine scheduling problems. Section 4 provides some extensions. Section 5 concludes the paper.

2 Problem description

The notations used throughout this paper are defined as follows.

\(n\): The total number of jobs

\(p_{j}\): The normal processing time of job \(J_{j}\), \(j=1,2,\ldots ,n\). The normal processing time of a job is incurred when the job is scheduled first in a sequence

\(v_j\): The weight of job \(J_{j}\), \(j=1,2,\ldots ,n\)

\(d_j\): The due date of job \(J_{j}\), \(j=1,2,\ldots ,n\)

\(p_{[r]}\): The normal processing time of the job scheduled in the \(r\)th position in a sequence, \(r=1,2,\ldots ,n\)

\(q_{[r]}\): The past-sequence-dependent (p-s-d) delivery time of the job scheduled in the \(r\)th position in a sequence, \(r=1,2,\ldots ,n\)

\(w_{[r]}\): The waiting time of the job scheduled in the \(r\)th position in a sequence, \(r=1,2,\ldots ,n\)

\(p_{jr}^A\): The actual processing time of job \(J_{j}\) if it is scheduled in the \(r\)th position in a sequence, \(r,j=1,2,\ldots ,n\)

\(p_{[r]}^A\): The actual processing time of the job scheduled in the \(r\)th position in a sequence, \(r=1,2,\ldots ,n\)

\(\pi \): A job sequence of the \(n\) jobs

\(C_{j} = C_{j}(\pi )\): The completion time of job \(J_{j}\) in \(\pi \)

\(C_{\max }\): The makespan of a given permutation, i.e., \(C_{\max }=\max \{C_{j}|j=1,2,\ldots ,n\}\)

\(\sum C_{j}\): The sum of completion times of the jobs

\(\sum C_j^\theta \): The sum of the \(\theta \)th (\(\theta \ge 0\)) power of the job completion times

\(\sum v_jC_j\): The total weighted completion time

\(\sum L_j\): The total lateness

\(L_{\max }=\max \{C_j-d_j|j=1,2,\ldots ,n\}\): The maximum lateness

The model is described as follows. There are \(n\) independent and non-preemptive jobs \(J=\{J_1,J_2,\ldots ,J_n\}\) to be processed on a single machine. All the jobs are available at time 0, the machine can handle only one job at a time, and there is no idle time between consecutive jobs. As in Cheng et al. [4], Wu et al. [35] and Li et al. [11], we assume that the actual processing time of job \(J_j\), scheduled in position \(r\), is given by:

$$\begin{aligned} p_{j r}^A=p_j\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\}, \quad r, j=1,2,\ldots ,n, \end{aligned}$$
(1)

where \(a\le 0\) is a constant learning effect, \(p_{[0]}=0\), \(\sum _{i=1}^{0}p_{[i]}=0\) and \(\beta \) is a truncation parameter with \(0 <\beta < 1\). Moreover, following Koulamas and Kyparisis [9], Yang et al. [38], Yang and Yang [39] and Shen and Wu [20], the processing of the job \(J_{[r]}\) must be followed by the p-s-d delivery time \(q_{[r]}\), i.e.,

$$\begin{aligned} q_{[1]}=0\ \text{ and }\ q_{[r]}= \gamma {w_{[r]}}=\gamma \sum _{i=1}^{r-1}p_{[i]}^A, \quad r = 2,3,\ldots ,n, \end{aligned}$$
(2)

where \(\gamma \ge 0\) is a normalizing constant and \(w_{[r]}\) denotes the waiting time of job \(J_{[r]}\). Let \(C_{[j]}\) denote the completion time of job \(J_{[j]}\) and \(C_j \) is defined analogously. We assume that the postprocessing operation of any job \(J_{[j]}\) modeled by its delivery time \(q_{[j]}\) is performed “off-line,” i.e., \({C_{[1]}} = p_{[1]}\) and

$$\begin{aligned} {C_{[j]}} = {w_{[j]}} + p_{[j]}^A + {q_{[j]}}, \quad j = 2,3,\ldots,n, \end{aligned}$$

where \({w_{[1]}} =0\) and \({w_{[j]}} = \sum _{i=1}^{j-1}p_{[i]}^A, \; j = 2,3,\ldots,n\).

For convenience, we denote by \(TTLE\) the truncated sum-of-processing-times-based learning effect given by (1) and by \(q_{psd}\) the p-s-d delivery time given by (2) [9]. In this paper, we consider minimizing the makespan \(C_{\max }={\max }\{C_j| \; j=1,2,\ldots ,n\}\), the total completion time \(\sum C_j\), the sum of the \(\theta \)th (\(\theta \ge 0\)) power of job completion times \(\sum C_j^\theta \), the total lateness \(\sum L_j\), the total weighted completion time \(\sum v_jC_j\) and the maximum lateness \(L_{\max }=\max \{C_j-d_j| \; j=1,2,\ldots ,n\}\), where \(v_j\) and \(d_j\) denote the weight and due date of job \(J_{j}\). Using the three-field notation of Graham et al. [8], the problems considered in this paper are denoted by \(1|TTLE, q_{psd}|F\), where \(F\in \{C_{\max },\sum C_j^\theta , \sum L_j, \sum v_jC_j, L_{\max }\}\).
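To make the model concrete, the following minimal Python sketch (our illustration, not part of the original formulation; function and variable names are our own) computes the completion times of a given job sequence directly from Eqs. (1) and (2) and the off-line completion-time recursion.

```python
def completion_times(p, a, beta, gamma):
    """Completion times C_[1], ..., C_[n] for jobs in schedule order.

    p     : normal processing times p_[1], ..., p_[n]
    a     : learning index, a <= 0
    beta  : truncation parameter, 0 < beta < 1
    gamma : delivery-time normalizing constant, gamma >= 0
    """
    C = []
    sum_p = 0.0    # sum of normal processing times of the earlier jobs
    waiting = 0.0  # w_[r]: sum of actual processing times of the earlier jobs
    for pj in p:
        # Eq. (1): truncated sum-of-processing-times-based learning effect
        p_actual = pj * max((1.0 + sum_p) ** a, beta)
        # Eq. (2): q_[r] = gamma * w_[r] (so q_[1] = 0); C = w + p^A + q
        C.append(waiting + p_actual + gamma * waiting)
        sum_p += pj
        waiting += p_actual
    return C
```

For instance, `completion_times([10, 12, 14], -0.3, 0.8, 0.5)` evaluates the schedule used in Example 1 below.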

3 Single-machine scheduling problems

Lemma 1

For the \(1|TTLE|C_{\max }\) problem, an optimal schedule can be obtained by the SPT rule (i.e., by sequencing the jobs in non-decreasing order of \(p_j\) ).

Proof

See the proof in Wu et al. [35]. \(\square \)

Theorem 1

For the \(1|TTLE, q_{psd}|C_{\max }\) problem, an optimal schedule can be obtained by the SPT rule (i.e., by sequencing the jobs in non-decreasing order of \(p_j\) ).

Proof

Let \(\pi = [S_1,J_j, J_k,S_2]\) and \(\pi '=[S_1, J_k, J_j,S_2]\) be two job schedules by interchanging two adjacent jobs \(J_j\) and \(J_k\), where \(S_1\) and \(S_2\) are partial sequences and \(S_1\) or \(S_2\) may be empty. We also assume that there are \(r- 1\) jobs in \(S_1\). To show \(\pi \) dominates \(\pi '\), it suffices to show that the \((r + 1)\)th jobs in \(\pi \) and \(\pi '\) satisfy the condition that \(C_k(\pi )\le C_j (\pi ')\) (since for any \(J_u\) in \(S_2\), from \(C_k(\pi )\le C_j (\pi ')\) we have \(C_u(\pi )\le C_u (\pi ')\), hence \(C_{[n]}(\pi )\le C_{[n]} (\pi ')\)). Under \(\pi \), the completion times of jobs \(J_j\) and \(J_k\) are

$$\begin{aligned} {C_j}(\pi )= & {} \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + \gamma \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A}.\nonumber \\ {C_k}(\pi )= & {} \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} +p_j\right) ^a,\beta \right\} \nonumber \\&+ \; \gamma \left[ \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} \right] , \end{aligned}$$
(3)

whereas under \(\pi '\), they are

$$\begin{aligned} {C_k}(\pi ') = \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + \gamma \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} \end{aligned}$$

and

$$\begin{aligned} {C_j}(\pi ')= & {} \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} \nonumber \\&+ \; {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} +p_k\right) ^a,\beta \right\} \nonumber \\&+ \; \gamma \left[ \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} \right] . \end{aligned}$$
(4)

From Eqs. (3) and (4), we have

$$\begin{aligned} C_j(\pi ')-C_k(\pi )= & {} \gamma \ (p_k-p_j)\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} \\& + \; p_k\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + p_j\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} +p_k\right) ^a,\beta \right\} \\& - \; p_j\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} - p_k\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} +p_j\right) ^a,\beta \right\} . \end{aligned}$$

From \(p_j \le p_k\) and Lemma 1, we have \(\gamma \ (p_k-p_j)\max \left\{ \left( 1+{\sum _{i=1}^{r-1}p_{[i]}}\right) ^a,\beta \right\} \ge 0\) and \(p_k\max \left\{ (1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} )^a,\beta \right\} + p_j\max \left\{ (1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} +p_k)^a,\beta \right\} -p_j\max \left\{ (1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} )^a,\beta \right\} - p_k\max \left\{ (1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} +p_j)^a,\beta \right\} \ge 0\); hence, \(C_j(\pi ')-C_k(\pi )\ge 0\). \(\square \)
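As an illustrative sanity check (not a substitute for the proof), one can enumerate all permutations of a small random instance under Eqs. (1)–(2) and confirm that the SPT order attains the minimum makespan; the parameter values below are arbitrary and the code is ours.

```python
from itertools import permutations
import random

def cmax(p, a=-0.3, beta=0.8, gamma=0.5):
    """Makespan of a schedule under Eqs. (1)-(2), p in schedule order."""
    sum_p = waiting = 0.0
    C = 0.0
    for pj in p:
        pA = pj * max((1.0 + sum_p) ** a, beta)   # Eq. (1)
        C = max(C, waiting + pA + gamma * waiting)  # C_[r] = w + p^A + q
        sum_p += pj
        waiting += pA
    return C

random.seed(1)
p = [random.randint(1, 20) for _ in range(6)]
best = min(cmax(list(perm)) for perm in permutations(p))
assert abs(cmax(sorted(p)) - best) < 1e-9  # SPT order is optimal here
```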

Theorem 2

For the problem \(1|TTLE, q_{psd}|\sum C_ j^\theta \) , an optimal schedule can be obtained by the SPT rule.

Proof

We also use the same notations as in Theorem 1. In order to show \(\pi \) dominates \(\pi '\), we should prove that (i) \(C_k(\pi )\le C_j (\pi ')\) and (ii) \(C_j^\theta (\pi )+C_k^\theta (\pi )\le C_k^\theta (\pi ')+C_j^\theta (\pi ')\) (since for any \(J_u\) in \(S_2\), from \(C_k(\pi )\le C_j (\pi ')\) we have \(C_u(\pi )\le C_u (\pi ')\), hence \(\sum C_j^\theta (\pi )\le \sum C_j^\theta (\pi ')\)).

Case (i) was proved in Theorem 1. For case (ii), from \(p_j \le p_k\) we also have \(C_j(\pi )\le C_k(\pi ')\). Since \(\theta \ge 0\) and \(x^\theta \) is non-decreasing for \(x\ge 0\), the inequalities \(C_j(\pi )\le C_k(\pi ')\) and \(C_k(\pi )\le C_j(\pi ')\) yield

$$\begin{aligned} C_j^\theta (\pi )+C_k^\theta (\pi )\le C_k^\theta (\pi ')+C_j^\theta (\pi '). \end{aligned}$$

This completes the proof of case (ii). \(\square \)

Corollary 1

For the \(1|TTLE, q_{psd}|\sum C_j\) problem, an optimal schedule can be obtained by the SPT rule.

Theorem 3

For the \(1|TTLE, q_{psd}|\sum L_j\) problem, an optimal schedule can be obtained by the SPT rule.

Proof

Obviously,

$$\begin{aligned} \sum _{j=1}^n L_j=\sum _{j=1}^n (C_j-d_j)=\sum _{j=1}^n C_j-\sum _{j=1}^n d_j. \end{aligned}$$

Since \(\sum _{j=1}^n d_j\) is a constant, \(\sum _{j=1}^n L_j\) is minimized whenever \(\sum _{j=1}^n C_j\) is minimized. The result then follows from Corollary 1. \(\square \)

Theorem 4

For the \(1|TTLE, q_{psd}|\sum v_jC_j\) problem, if all the jobs have agreeable weights, i.e., \(p_j \le p_k\) implies \(v_j \ge v_k\) for all the jobs \(J_j\) and \(J_k\) , an optimal schedule can be obtained by the WSPT rule (i.e., by sequencing the jobs in non-decreasing order of \(p_j/v_j\) ).

Proof

As in Theorem 1, the same notations are also used. In order to show \(\pi \) dominates \(\pi '\), we should prove that (1) \(C_k(\pi )\le C_j (\pi ')\) and (2) \(v_jC_j(\pi )+v_kC_k(\pi )\le v_kC_k (\pi ')+v_jC_j (\pi ')\) with \(p_j/v_j \le p_k/ v_k\), \(p_j \le p_k\) and \(v_j \ge v_k\).

Case (1) has been proved by Theorem 1. We provide the proof of case (2) as follows.

From Theorem 1, we have \(C_j(\pi )\le C_k (\pi ')\), hence

$$\begin{aligned}&v_kC_k (\pi ')+v_j C_j(\pi ')-v_j C_j(\pi )-v_kC_k (\pi ) \\&\quad \ge v_kC_j (\pi )+v_j C_k(\pi )-v_j C_j(\pi )-v_kC_k (\pi )\\&\quad =(v_j-v_k)(C_k (\pi )-C_j (\pi ))\\&\quad \ge 0. \end{aligned}$$

Hence, \(v_j C_j(\pi )+v_kC_k (\pi )\le v_kC_k (\pi ')+v_j C_j(\pi ')\). This completes the proof of part (2) and thus the theorem. \(\square \)
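Theorem 4 can likewise be checked by enumeration on a small instance with agreeable weights (an illustrative sketch with arbitrary data, ours and not part of the proof):

```python
from itertools import permutations

def total_weighted_C(jobs, a=-0.3, beta=0.8, gamma=0.5):
    """Sum of v_j * C_j for jobs given as (p_j, v_j) pairs in schedule order."""
    sum_p = waiting = total = 0.0
    for pj, vj in jobs:
        pA = pj * max((1.0 + sum_p) ** a, beta)   # Eq. (1)
        total += vj * (waiting + pA + gamma * waiting)
        sum_p += pj
        waiting += pA
    return total

jobs = [(10, 8), (12, 5), (14, 3)]  # agreeable: p_j increases, v_j decreases
wspt = sorted(jobs, key=lambda j: j[0] / j[1])  # non-decreasing p_j / v_j
best = min(total_weighted_C(list(s)) for s in permutations(jobs))
assert abs(total_weighted_C(wspt) - best) < 1e-9  # WSPT order is optimal here
```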

Theorem 5

For the problem \(1|TTLE, q_{psd}|L_{\max }\) , if the jobs have agreeable conditions, i.e., \(p_j\le p_k\) implies \(d_j\le d_k\) for all the jobs \(J_j\) and \(J_k\) , an optimal schedule can be obtained by the EDD rule (i.e., by sequencing the jobs in non-decreasing order of \(d_j\) ).

Proof

We use the same notation as above. From the proof of Theorem 1, under \(\pi \), the lateness values of jobs \(J_{j}\) and \(J_{k}\) are

$$\begin{aligned} L_{j}(\pi )= & {} \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + \gamma \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A}-d_j,\\ L_{k}(\pi )= & {} \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} +p_j\right) ^a,\beta \right\} \\&+ \, \gamma \left[ \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1}{{p_{[i]}}} \right) ^a,\beta \right\} \right] -d_{k}, \end{aligned}$$

whereas under \(\pi '\), they are

$$\begin{aligned} L_{k}(\pi ')= & {} \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + \gamma \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A}-d_{k},\\ L_{j}(\pi ')= & {} \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} + {p_j}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}}+p_k \right) ^a,\beta \right\} \\&+ \, \gamma \left[ \sum \limits _{i = 1}^{r - 1} {p_{[i]}^A} + {p_k}\max \left\{ \left( 1 + \sum \limits _{i = 1}^{r - 1} {{p_{[i]}}} \right) ^a,\beta \right\} \right] -d_{j}. \end{aligned}$$

If \(d_{j}\le d_{k}\), we have \(L_k(\pi ')\le L_j(\pi ')\). In addition, if \(d_{j}\le d_{k}\) and \(p_{j}\le p_{k}\), according to Theorem 1, we get \(L_k(\pi )\le L_j(\pi ')\) and \(L_j(\pi )\le L_j(\pi ')\). Therefore, we obtain

$$\begin{aligned} \max \{L_{j}(\pi '),L_{k}({\pi '})\}\ge \max \{L_{j}(\pi ),L_{k}(\pi )\}. \end{aligned}$$

This completes the proof of the theorem. \(\square \)
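An analogous illustrative check for Theorem 5 (again ours, with arbitrary data satisfying the agreeable condition \(p_j\le p_k \Rightarrow d_j\le d_k\)): enumeration confirms that the EDD order minimizes the maximum lateness under Eqs. (1)–(2).

```python
from itertools import permutations

def l_max(jobs, a=-0.3, beta=0.8, gamma=0.5):
    """Maximum lateness for jobs given as (p_j, d_j) pairs in schedule order."""
    sum_p = waiting = 0.0
    worst = float("-inf")
    for pj, dj in jobs:
        pA = pj * max((1.0 + sum_p) ** a, beta)   # Eq. (1)
        worst = max(worst, waiting + pA + gamma * waiting - dj)
        sum_p += pj
        waiting += pA
    return worst

jobs = [(10, 11), (12, 14), (14, 16)]  # agreeable: p_j and d_j both increasing
edd = sorted(jobs, key=lambda j: j[1])  # non-decreasing due dates
best = min(l_max(list(s)) for s in permutations(jobs))
assert abs(l_max(edd) - best) < 1e-9  # EDD order is optimal here
```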

The following example illustrates the results of the theorems.

Example 1

Consider the instance with \(n=3,\; p_1=10,\; p_2=12,\; p_3=14,\; v_1=8,\; v_2=5,\; v_3=3,\; d_1=11,\; d_2=14,\; d_3=16\), \(a=-0.3\), \(\beta =0.8,\) \(\gamma =0.5\) and \(\theta =2\). Applying Theorems 1–5, the optimal schedule is \([J_1,J_2,J_3]\) for each of the objective functions \(C_{\max }\), \(\sum C_j^2\), \(\sum L_j\), \(\sum v_jC_j\) and \(L_{\max }\). In addition, we have

$$\begin{aligned}&C_{1}=10, \quad C_{2}=10+12*\max \left\{ \left( 1+10\right) ^{-0.3},0.8\right\} +0.5*10=24.6,\\&C_{3}=(10+9.6)+14*\max \left\{ \left( 1+10+12\right) ^{-0.3},0.8\right\} +0.5*(10+9.6)=40.6. \end{aligned}$$
  • \(C_{\max }=C_{3}= 40.6\),

  • \(\sum C_j^2=10^2+24.6^2+40.6^2=2353.52\),

  • \(\sum L_j=10+24.6+40.6-11-14-16= 34.2\),

  • \(\sum v_jC_j=8*10+5*24.6+3*40.6= 324.8\) and

  • \(L_{\max }=\max \{10-11,24.6-14,40.6-16\}=24.6\).
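Example 1 can be checked mechanically; the following short script (ours, for verification only) recomputes the completion times and the five objective values directly from Eqs. (1) and (2) and the completion-time recursion of Sect. 2.

```python
p, v, d = [10, 12, 14], [8, 5, 3], [11, 14, 16]
a, beta, gamma = -0.3, 0.8, 0.5

C, sum_p, waiting = [], 0.0, 0.0
for pj in p:
    pA = pj * max((1.0 + sum_p) ** a, beta)   # Eq. (1)
    C.append(waiting + pA + gamma * waiting)  # C_[r] = w + p^A + q, q = gamma*w
    sum_p += pj
    waiting += pA

cmax = max(C)                                  # makespan
sum_C_theta = sum(c ** 2 for c in C)           # theta = 2
sum_L = sum(C) - sum(d)                        # total lateness
sum_vC = sum(vj * cj for vj, cj in zip(v, C))  # total weighted completion time
L_max = max(cj - dj for cj, dj in zip(C, d))   # maximum lateness
```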

4 Extensions

Similar to the proofs in Sect. 3 and in Shen and Wu [20], our results (Theorems 1–5) can be extended to the following learning effect models:

  1. (a)

    The model of Zhang et al. [44]:

    $$\begin{aligned} p_{jr}^A=p_{j}\left( q_r+\sum _{l=1}^{r-1}\beta _l\ln p_{[l]}\right) ^a, \end{aligned}$$
    (5)

    where \(a\le 0\) is the learning index, \(\beta _r\) is the weight of position \(r\), \(0\le \beta _{1}\le \beta _{2}\le \cdots \le \beta _{n}\) and \(0\le q_{1}\le q_{2}\le \cdots \le q_{n}\).

  2. (b)

    The model of Rudek [18]:

    $$\begin{aligned} p_{jr}^A=p_{j}f\left( \sum _{l=1}^{r-1} p_{[l]}\right) , \end{aligned}$$
    (6)

    where \(f: [1, +\infty )\rightarrow (0, 1]\) is the non-increasing function (i.e., learning curve) common for all jobs.

  3. (c)

    The model of Cheng et al. [5]:

    $$\begin{aligned} p_{jr}^A=p_{j}\left( 1+\sum _{l=1}^{r-1}\beta _l\ln p_{[l]}\right) ^a r^b, \end{aligned}$$
    (7)

    where \(\sum _{l=1}^{0}\beta _l\ln p_{[l]}:=0\), \(a\) and \(b\) are the learning indices with \(a\le 0\) and \(b\le 0\), and \(\beta _r\) is the weight of position \(r\).

  4. (d)

    The model of Wang et al. [32]:

    $$\begin{aligned} p_{j r}^A=p_j\max \left\{ \left( \mu a^{-\frac{\sum _{i = 1}^{r - 1} p_{[i]}}{\sum _{i = 1}^{n} p_{[i]}}}+\nu \right) ,\beta \right\} , \quad r, j=1,2,\ldots ,n, \end{aligned}$$
    (8)

    where \(\mu \ge 0\), \(\nu \ge 0\), \(\mu +\nu =1\), \(\sum _{i=1}^{0}p_{[i]}=0\) and \(\beta \) is a truncation parameter with \(0 <\beta < 1\).

  5. (e)

    The model of Cheng et al. [6]:

    $$\begin{aligned} p_{jr}^A=p_{j}\left( 1+\sum _{l=1}^{r-1} \varphi _{l} p_{[l]}\right) ^a r^b, \end{aligned}$$
    (9)

    where \(a\le 0\) and \(b\le 0\) and \(\varphi _{l}>0\) is the associated weight of the \(l\)th position.

  6. (f)

    The model of Niu et al. [17]:

    $$\begin{aligned} p_{jr}^A=p_{j}f\left( \sum _{l=1}^{r-1}g( p_{[l]})\right) h(r), \end{aligned}$$
    (10)

    where \(\frac{{\rm d}f(x)}{{\rm d}x}\le 0,\frac{{\rm d}^2f(x)}{{\rm d}x^2}\ge 0\), \(\frac{{\rm d}^2g(x)}{{\rm d}x^2}\le 0\) and \(h(r)\) is a non-increasing function with \(g(0)=0\) and \(h(1)=1\).
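The six models share a common multiplicative shape \(p_{jr}^A = p_j\,f(\cdot )\,h(r)\). The following generic sketch (the decomposition into \(f\), \(g\), \(h\) follows Eq. (10) of Niu et al. [17]; the code and names are ours, and models (a) and (c) with their position weights \(\beta _l\) would need \(g\) to depend on the position as well) shows how such models can be evaluated uniformly:

```python
def actual_times(p, f=lambda s: 1.0, g=lambda x: x, h=lambda r: 1.0):
    """p^A_[r] = p_[r] * f(sum_{l<r} g(p_[l])) * h(r), as in Eq. (10)."""
    out, acc = [], 0.0
    for r, pj in enumerate(p, start=1):
        out.append(pj * f(acc) * h(r))
        acc += g(pj)
    return out

# e.g., the truncated model of Eq. (1) is recovered with
# f(s) = max((1 + s)**a, beta), g(x) = x and h(r) = 1:
a, beta = -0.3, 0.8
times = actual_times([10, 12, 14], f=lambda s: max((1.0 + s) ** a, beta))
```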

5 Conclusions

This research studied scheduling problems with a truncated sum-of-processing-times-based learning effect and past-sequence-dependent job delivery times on a single machine. For several single-machine minimization problems, we proved that the proposed problems are solvable in polynomial time. In future research, more realistic settings such as job-shop scheduling [7, 34] and unrelated parallel-machine scheduling, or other performance measures with a truncated sum-of-processing-times-based learning effect and past-sequence-dependent job delivery times, will be explored.