1 Introduction

In many real manufacturing systems, job processing times can be shortened by learning effects: as firms and employees perform a job (task) over and over, they learn how to perform it more efficiently (Biskup 1999; Wang 2010; Wang et al. 2010; Wu et al. 2013; Cheng et al. 2014; Wang et al. 2022). For details on scheduling problems with learning effects, the reader may refer to the comprehensive survey of Azzouz et al. (2018). Kuo and Yang (2007), Lee (2011), and Soroush (2014) considered single-machine scheduling with learning effects and past-sequence-dependent setup times. Hsu et al. (2011) studied unrelated parallel-machine scheduling with learning effects and past-sequence-dependent setup times. Niu et al. (2015) addressed scheduling problems with a more general learning effect; they proved that some single-machine scheduling problems, as well as some special cases of the flow shop scheduling problems, remain polynomially solvable. Liang et al. (2019) considered scheduling problems with learning effects in the flow shop setting and proposed heuristic algorithms for some regular objectives. Wang et al. (2020) examined single-machine total completion time minimization with a position-weighted learning effect; under job release dates, they proposed a branch-and-bound algorithm and a heuristic algorithm to solve the problem. Sun et al. (2021) studied the flow shop problem with general position-weighted learning effects; for total weighted completion time minimization, they proposed several heuristics and a branch-and-bound algorithm. Recently, Liu and Jiang (2020b) and Lu et al. (2021) studied resource allocation single-machine scheduling problems with learning effects, while Geng et al. (2019), Liu and Jiang (2020a), Zhao (2021) and Lv and Wang (2021) considered resource allocation flow shop scheduling problems with learning effects. Zhao (2022) addressed single-machine scheduling with general truncated learning effects and past-sequence-dependent setup times and proved that some regular function minimizations remain polynomially solvable.

However, under uncontrolled learning effects (see the above articles), the actual job processing time drops dramatically towards zero as the number of jobs already processed increases. Hence, Wu et al. (2011), Wu et al. (2012), Li et al. (2013), Wang et al. (2013), Wu et al. (2013), and Lu et al. (2015) proposed scheduling models with truncated learning effects. Wang et al. (2019) studied flow shop problems with truncated learning effects; for the makespan and total weighted completion time minimizations, they proposed several heuristics and a branch-and-bound algorithm. Wang et al. (2021) addressed single-machine resource allocation scheduling with truncated learning effects; under convex resource allocation, a bicriteria analysis was provided for the total weighted flow time and the total resource consumption cost. Recently, Jiang et al. (2022) investigated single-machine scheduling problems with learning effects based on the sum of the actual processing times and a job-dependent truncation parameter. The objective is to determine the optimal job schedule such that the makespan, the total weighted completion time or the maximum lateness is minimized. They proved that these three problems are NP-hard, showed that some special cases remain polynomially solvable, and, for the general case, proposed only approximation algorithms. It is therefore natural and interesting to continue the work of Jiang et al. (2022) and to study heuristic and exact algorithms for the general problems with learning effects based on the sum of the actual processing times and a job-dependent truncation parameter. The contributions of this research are summarized as follows: (1) we study single-machine scheduling problems with general truncated learning effects; (2) for three regular objective costs (i.e., the makespan, the total weighted completion time and the maximum lateness), we propose several solution algorithms (including heuristic, tabu search, simulated annealing and branch-and-bound algorithms); (3) extensive numerical experiments are conducted to evaluate the accuracy and efficiency of the proposed algorithms.

This paper is organized as follows. The scheduling model and notations are presented in Sect. 2. Sections 3 and 4 present the branch-and-bound algorithm and the meta-heuristic algorithms, respectively. Section 5 reports the numerical experiments. The conclusions are presented in the last section.

2 Problem description

There is a set of \(\ddot{n}\) independent and non-preemptive jobs \({\widetilde{J}}=\{{\ddot{J}}_1,{\ddot{J}}_2,\ldots ,{\ddot{J}}_{\ddot{n}}\}\) to be processed on a single machine; all jobs are available for processing at time zero. Let [h] denote the hth position in a schedule and \(P_{[h]}^A\) be the actual processing time of the job in that position. As in Jiang et al. (2022), if job \({\ddot{J}}_h\) is scheduled in the lth position, its actual processing time is:

$$\begin{aligned} P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} , \end{aligned}$$
(1)

where \({\tilde{P}}_h\) is the normal processing time of job \({\ddot{J}}_h\) (i.e., the processing time of a job without any learning effects), \(a_h\) (\(a_h\le 0\)) is a learning factor of job \({\ddot{J}}_h\) and \(b_h \) (\(0 \le b_h\le 1\)) is a truncation parameter of job \({\ddot{J}}_h\).

For a given sequence \({\tilde{\pi }}=[{\ddot{J}}_1,{\ddot{J}}_2,\ldots ,{\ddot{J}}_{\ddot{n}}]\), let \({\tilde{C}}_h = {\tilde{C}}_h({\tilde{\pi }})\) (\(d_h\), \(w_h\)) be the completion time (due date, weight) of job \({{\ddot{J}}_h}\); then \({\tilde{C}}_{\max }=\max \{{\tilde{C}}_h|h=1,2,\ldots ,\ddot{n}\}\) is the makespan, \(\sum w_h {\tilde{C}}_h\) is the total weighted completion time and \(L_{\max }=\max \{{\tilde{C}}_h-d_h|h=1,2,\ldots ,\ddot{n}\}\) is the maximum lateness. Our goal is to determine a schedule such that \(\gamma \) (\(\gamma \in \{{\tilde{C}}_{\max },\sum w_h{\tilde{C}}_h,L_{\max }\}\)) is minimized. Using the three-field notation, the problem is denoted by \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| \gamma \), where \(\gamma \in \{{\tilde{C}}_{\max },\) \(\sum w_h{\tilde{C}}_h,L_{\max }\}\): the first field (i.e., 1) denotes a single machine, the second field (i.e., \(P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \)) denotes the job characteristics, and the last field (i.e., \(\gamma \)) is the objective cost.
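To make the model concrete, the following C++ sketch (ours, not the authors' code; the job data and names are illustrative) evaluates a given sequence under Eq. (1) and computes the three objective values considered in this paper.

```cpp
// Minimal sketch: evaluating a job sequence under the truncated learning
// effect of Eq. (1). Job fields and all data below are illustrative.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Job {
    double p;  // normal processing time \tilde{P}_h
    double a;  // learning factor a_h (a_h <= 0)
    double b;  // truncation parameter b_h (0 <= b_h <= 1)
    double w;  // weight w_h
    double d;  // due date d_h
};

// Completion times of the jobs in the given order, using
// P^A = \tilde{P} * max((1 + sum of previous actual times)^a, b).
std::vector<double> completionTimes(const std::vector<Job>& seq) {
    std::vector<double> C;
    double sumA = 0.0;  // sum of actual processing times already scheduled
    double t = 0.0;     // current completion time (jobs start at time zero)
    for (const Job& j : seq) {
        double pA = j.p * std::max(std::pow(1.0 + sumA, j.a), j.b);
        sumA += pA;
        t += pA;
        C.push_back(t);
    }
    return C;
}

int main() {
    // Illustrative data only (not Table 1 of the paper).
    std::vector<Job> seq = {{4, -0.25, 0.65, 2, 10},
                            {7, -0.25, 0.65, 1, 12},
                            {8, -0.25, 0.65, 3, 15}};
    std::vector<double> C = completionTimes(seq);
    double cmax = C.back(), twc = 0.0, lmax = -1e100;
    for (std::size_t h = 0; h < seq.size(); ++h) {
        twc += seq[h].w * C[h];                  // total weighted completion time
        lmax = std::max(lmax, C[h] - seq[h].d);  // maximum lateness
    }
    std::printf("Cmax=%.4f  sum wC=%.4f  Lmax=%.4f\n", cmax, twc, lmax);
}
```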

3 Branch-and-bound algorithm

From Jiang et al. (2022), the problem \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| \gamma \) (\(\gamma \in \{{\tilde{C}}_{\max },\) \(\sum {{w_h}} {{{{\tilde{C}}}}_h},L_{\max }\}\)) is NP-hard. Hence, a branch-and-bound algorithm (denoted by B&B, which requires a lower bound and an upper bound) and heuristic algorithms are chosen to solve it. For the special case \(a_h=a\) and \(b_h=b\), the following lemmas are available.

Lemma 1

(Jiang et al. 2022). For the problem \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a},b\right\} \right| {\tilde{C}}_{\max }\), an optimal sequence can be obtained by the SPT rule, i.e., sequencing the jobs in non-decreasing order of \({\tilde{P}}_h\).

Lemma 2

(Jiang et al. 2022). For the problem \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a},b\right\} \right| \sum {\tilde{C}}_{h}\), an optimal sequence can be obtained by the SPT rule, i.e., sequencing the jobs in non-decreasing order of \({\tilde{P}}_h\).

By Lemma 1 and the pairwise interchange method, we have the following results.

Lemma 3

For the problem \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a},b\right\} \right| \sum w_h{\tilde{C}}_{h}\), if \({\tilde{P}}_h\le {\tilde{P}}_j\) implies \(w_h\ge w_j\) for all jobs \({\ddot{J}}_h\) and \({\ddot{J}}_j\), an optimal sequence can be obtained by the WSPT rule, i.e., sequencing the jobs in non-decreasing order of \(\frac{{\tilde{P}}_h}{w_h}\).

Lemma 4

For the problem \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a},b\right\} \right| L_{\max }\), if \({\tilde{P}}_h\le {\tilde{P}}_j\) implies \(d_h\le d_j\) for all jobs \({\ddot{J}}_h\) and \({\ddot{J}}_j\), an optimal sequence can be obtained by the EDD rule, i.e., sequencing the jobs in non-decreasing order of \(d_h\).

Lemma 5

(Hardy et al. 1967). The sum \(\sum \nolimits _{j = 1}^n {{x_j}} {y_j}\) is minimized when the sequences \({x_j}\) and \({y_j}\) are arranged in opposite monotonic orders, i.e., one in non-decreasing order and the other in non-increasing order.
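As a quick numerical check of Lemma 5 (a toy example of ours, not taken from the paper), the following C++ snippet pairs the smallest \(x_j\) with the largest \(y_j\):

```cpp
// Lemma 5 (Hardy et al. 1967): the sum of products x_j * y_j is smallest when
// one sequence is sorted non-decreasingly and the other non-increasingly.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> x = {3, 1, 4, 2};
    std::vector<double> y = {5, 8, 6, 7};
    std::sort(x.begin(), x.end());                    // non-decreasing
    std::sort(y.begin(), y.end(), std::greater<>());  // non-increasing
    double s = std::inner_product(x.begin(), x.end(), y.begin(), 0.0);
    std::printf("minimum sum of products = %.1f\n", s);  // 1*8+2*7+3*6+4*5 = 60
}
```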

3.1 Lower bounds

3.1.1 Criterion \(C_{\max }\)

Let \({\tilde{\pi }}=({\tilde{\pi }}_{SP}, {\tilde{\pi }}_{UP})\) be a sequence, where \({\tilde{\pi }}_{SP}\) (\({\tilde{\pi }}_{UP}\)) is the scheduled (unscheduled) part and there are \(\theta \) jobs in \({\tilde{\pi }}_{SP}\); hence, the completion time of the \((\theta +1)\)th job is

$$\begin{aligned} {\tilde{C}}_{[\theta +1]}({\tilde{\pi }}_{UP})= & {} {\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})+{\tilde{P}}_{[\theta +1]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A\right) ^{a_{[\theta +1]}},b_{[\theta +1]}\right\} . \end{aligned}$$

Similarly,

$$\begin{aligned}{} & {} {\tilde{C}}_{[\theta +j]}({\tilde{\pi }}_{UP})={\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})+\sum _{v=1}^{j} {\tilde{P}}_{[\theta +v]}\nonumber \\{} & {} \quad \times \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{[h]}^A\right) ^{a_{[\theta +v]}},b_{[\theta +v]}\right\} {,}\nonumber \\ \end{aligned}$$
(2)

where \(\sum _{h=k}^{k-1} P_{[h]}^A=0\) and \(1\le j\le \ddot{ n}-\theta .\)

Let \(a_{\min }=\min \{a_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\) and \(b_{\min }=\min \{b_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\); then, from Eq. (2), we have

$$\begin{aligned} {\tilde{C}}_{\max }({\tilde{\pi }})= & {} {\tilde{C}}_{[\ddot{ n}]}({\tilde{\pi }})\nonumber \\\ge & {} {{{{\tilde{C}}}}_{[\theta ]}}({{{{\tilde{\pi }}} }_{SP}})+\sum _{v=1}^{\ddot{ n}-\theta }{\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{[h]}^A\right) ^{a_{\min }},b_{\min }\right\} . \end{aligned}$$
(3)

In Eq. (3), the terms \( {\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})\) and \(\sum _{h=1}^{\theta }P_{[h]}^A\) are fixed constants, and the term \(\sum _{v=1}^{\ddot{ n}-\theta }{\tilde{P}}_{[\theta +v]} \max \) \(\left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{[h]}^A\right) ^{a_{\min }},b_{\min }\right\} \) can be minimized by applying the SPT rule to the unscheduled jobs (see Lemma 1). Therefore, the lower bound of \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| {\tilde{C}}_{\max }\) can be obtained:

$$\begin{aligned} {LB}_{{\tilde{C}}_{\max }}= {{{{\tilde{C}}}}_{[\theta ]}}({{{{\tilde{\pi }}} }_{SP}})+\sum _{v=1}^{\ddot{ n}-\theta }{\tilde{P}}_{<\theta +v>} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{<h>}^A\right) ^{a_{\min }},b_{\min }\right\} , \end{aligned}$$
(4)

where \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\) and \(P_{<h>}^A\) (\(h=\theta +1,\theta +2,\ldots ,\ddot{n}\)) is the actual processing time of hth position by the order of \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\).
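A possible implementation of Eq. (4) is sketched below in C++ (our own illustrative code and data, not the authors'). With the parameters chosen here (\(a_{\min }=-0.25\), \(b_{\min }=0.65\), unscheduled normal times 7, 8, 9, 10, and a scheduled part with completion time 4), it reproduces the value 26.2312 computed for \(LB({\ddot{J}}_1)\) in Example 1 below.

```cpp
// Sketch of the lower bound of Eq. (4): the unscheduled jobs are sorted by
// SPT and their actual times are computed with a_min and b_min.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Job { double p, a, b; };  // normal time, learning factor, truncation

// cSched  : completion time of the scheduled part, \tilde{C}_[theta]
// sumSched: sum of the actual processing times of the scheduled part
double lowerBoundCmax(double cSched, double sumSched, std::vector<Job> unsched) {
    double aMin = 0.0, bMin = 1.0;  // valid starts since a_h <= 0 and b_h <= 1
    for (const Job& j : unsched) {
        aMin = std::min(aMin, j.a);
        bMin = std::min(bMin, j.b);
    }
    // SPT order of the unscheduled normal processing times.
    std::sort(unsched.begin(), unsched.end(),
              [](const Job& x, const Job& y) { return x.p < y.p; });
    double lb = cSched, sumA = sumSched;
    for (const Job& j : unsched) {
        double pA = j.p * std::max(std::pow(1.0 + sumA, aMin), bMin);
        lb += pA;
        sumA += pA;
    }
    return lb;
}

int main() {
    // Level-1 node with one scheduled job of actual time 4 (illustrative data;
    // the per-job a_h and b_h are invented, only their minima matter here).
    std::vector<Job> rest = {{7, -0.25, 0.65}, {8, -0.2, 0.7},
                             {9, -0.15, 0.8}, {10, -0.1, 0.9}};
    std::printf("LB_Cmax = %.4f\n", lowerBoundCmax(4.0, 4.0, rest));  // 26.2312
}
```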

3.1.2 Criterion \(\sum {{w_h}} {{{{\tilde{C}}}}_h}\)

From Eq. (2), we have

$$\begin{aligned}{} & {} \sum w_h{\tilde{C}}_{h}({\tilde{\pi }})\nonumber \\{} & {} \quad =\sum _{h=1}^\theta w_{[h]} {\tilde{C}}_{[h]}({\tilde{\pi }}_{SP})+\sum _{h=\theta +1}^{\ddot{n}}w_{[h]}{\tilde{C}}_{[h]}({\tilde{\pi }}_{UP})\nonumber \\{} & {} \quad = \sum _{h=1}^\theta w_{[h]} {\tilde{C}}_{[h]}({\tilde{\pi }}_{SP})+\sum _{h=\theta +1}^{\ddot{n}}{\tilde{C}}_{[\theta ]} w_{[h]} \nonumber \\{} & {} \qquad +\sum _{h=\theta +1}^{\ddot{n}}w_{[h]}\sum _{v=1}^{h-\theta }{\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{j=\theta +1}^{\theta +v-1} P_{[j]}^A\right) ^{a_{[\theta +v]}},b_{[\theta +v]}\right\} . \end{aligned}$$
(5)

Let \(a_{\min }=\min \{a_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\), \(b_{\min }=\min \{b_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\) and \(w_{\min }=\min \{w_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\); then, from Eq. (5), we have

$$\begin{aligned}{} & {} \sum w_h{\tilde{C}}_{h}({\tilde{\pi }})\nonumber \\{} & {} \quad \ge \sum _{h=1}^\theta w_{[h]} {\tilde{C}}_{[h]}({\tilde{\pi }}_{SP})+\sum _{h=\theta +1}^{\ddot{n}}{\tilde{C}}_{[\theta ]} w_{[h]} \nonumber \\{} & {} \qquad +w_{\min }\sum _{h=\theta +1}^{\ddot{n}}\sum _{v=1}^{h-\theta }{\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{j=\theta +1}^{\theta +v-1} P_{[j]}^A\right) ^{a_{\min }},b_{\min }\right\} . \end{aligned}$$
(6)

In Eq. (6), the terms \(\sum _{h=1}^\theta w_{[h]} {\tilde{C}}_{[h]}({\tilde{\pi }}_{SP})\), \(\sum _{h=\theta +1}^{\ddot{n}}{\tilde{C}}_{[\theta ]} w_{[h]} \) and \(\sum _{h=1}^{\theta }P_{[h]}^A\) are fixed constants, and the term \(\sum _{h=\theta +1}^{\ddot{n}}\sum _{v=1}^{h-\theta }{\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{j=\theta +1}^{\theta +v-1} P_{[j]}^A\right) ^{a_{\min }},b_{\min }\right\} \) can be minimized by applying the SPT rule to the unscheduled jobs (see Lemma 2). Therefore, the first lower bound of \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| \sum w_{h} {\tilde{C}}_{h}\) can be obtained:

$$\begin{aligned}{} & {} {LB}_{\sum w_h{\tilde{C}}_{h}}^1\nonumber \\{} & {} \quad =\sum _{h=1}^\theta w_{[h]} {\tilde{C}}_{[h]}({\tilde{\pi }}_{SP})+{\tilde{C}}_{[\theta ]}\sum _{h=\theta +1}^{\ddot{n}} w_{h} \nonumber \\{} & {} \qquad +w_{\min }\sum _{h=\theta +1}^{\ddot{n}}\sum _{v=1}^{h-\theta }{\tilde{P}}_{<\theta +v>} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{j=\theta +1}^{\theta +v-1} P_{<j>}^A\right) ^{a_{\min }},b_{\min }\right\} , \end{aligned}$$
(7)

where \(_{<h>}\) denotes hth position in a sequence such that \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\) and \(P_{<h>}^A\) (\(h=\theta +1,\theta +2,\ldots ,\ddot{n}\)) is the actual processing time of hth position by the order of \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\).

From Eq. (5), the term \(\sum \nolimits _{h = \theta + 1}^{\ddot{n}}{{w_{[h]}}} \sum \nolimits _{v = 1}^{h - \theta } {\tilde{P}}_{[\theta + v]} \max \left\{ \left( 1 + \sum \nolimits _{h = 1}^\theta P_{[h]}^A + \sum \nolimits _{j = \theta + 1}^{\theta + v - 1}\right. \right. \) \( \left. \left. {P_{[j]}^A}\right) ^{{a_{[\theta + v]}}}, b_{[\theta + v]} \right\} = \sum \nolimits _{h = \theta + 1}^{\ddot{n}} {{w_{[h]}}} \sum \nolimits _{v = 1}^{h - \theta } {P_{[\theta + v]}^A} \). According to HLP rule (see Lemma 5), we have

$$\begin{aligned}{} & {} {\sum \limits _{h = \theta + 1}^{\ddot{n}} {{w_{[h]}}} \sum \limits _{v = 1}^{h - \theta } {P_{[\theta + v]}^A} }\\{} & {} \quad = {w_{[\theta + 1]}}P_{[\theta + 1]}^A + {w_{[\theta + 2]}}\left( {P_{[\theta + 1]}^A + P_{[\theta + 2]}^A} \right) + \cdots \\{} & {} \qquad + {w_{[\ddot{n}]}}\left( {P_{[\theta + 1]}^A + P_{[\theta + 2]}^A + \cdots + P_{[\ddot{n}]}^A} \right) \\{} & {} \quad \ge {w_{(\theta + 1)}}P_{[\theta + 1]}^A + {w_{(\theta + 2)}}\left( {P_{[\theta + 1]}^A + P_{[\theta + 2]}^A} \right) + \cdots \\{} & {} \qquad + {w_{(\ddot{n})}}\left( {P_{[\theta + 1]}^A + P_{[\theta + 2]}^A + \cdots + P_{[\ddot{n}]}^A} \right) , \end{aligned}$$

where \(w_{(\theta +1)}\ge w_{(\theta +2)}\ge \cdots \ge w_{(\ddot{n})}\).

Let \(a_{\min }=\min \{a_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\) and \(b_{\min }=\min \{b_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\); then, according to the HLP rule (see Lemma 5), we have

$$\begin{aligned}{} & {} {w_{(\theta + 1)}}P_{[\theta + 1]}^A + {w_{(\theta + 2)}}\left( {P_{[\theta + 1]}^A + P_{[\theta + 2]}^A} \right) + \cdots + {w_{(\ddot{n})}}\left( {P_{[\theta + 1]}^A + P_{[\theta + 2]}^A + \cdots + P_{[\ddot{n}]}^A} \right) \\{} & {} \quad = P_{[\theta + 1]}^A\left( {{w_{(\theta + 1)}} + {w_{(\theta + 2)}} + \cdots + {w_{(\ddot{n})}}} \right) \\{} & {} \qquad + P_{[\theta + 2]}^A\left( {{w_{(\theta + 2)}} +{ {w_{(\theta + 3)}}} + \cdots + {w_{(\ddot{n})}}} \right) + \cdots + P_{[\ddot{n}]}^A{w_{(\ddot{n})}}\\{} & {} \quad \ge P_{< \theta + 1> }^A\left( {{w_{(\theta + 1)}} + {w_{(\theta + 2)}} + \cdots + {w_{(\ddot{n})}}} \right) \\{} & {} \qquad + P_{< \theta + 2> }^A\left( {{w_{(\theta + 2)}} +{ {w_{(\theta + 3)}}} + \cdots + {w_{(\ddot{n})}}} \right) + \cdots + P_{ < \ddot{n} > }^A{w_{(\ddot{n})}}, \end{aligned}$$

where \(P_{< \theta + v> }^A = {{{{\tilde{P}}}}_{< \theta + v> }}\max \left\{ {{{\left( {1 + \sum \nolimits _{h = 1}^\theta {P_{[h]}^A} + \sum \nolimits _{j = \theta + 1}^{\theta + v - 1} {P_{ < j > }^A} } \right) }^{{a_{\min }}}},{b_{\min }}} \right\} ,v = 1,2, \ldots ,\ddot{n} - \theta \), \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\).

Therefore, the second lower bound of \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| \sum w_{h} {\tilde{C}}_{h}\) can be obtained:

$$\begin{aligned}{} & {} {LB}_{\sum w_h{\tilde{C}}_{h}}^2\nonumber \\{} & {} \quad =\sum _{h=1}^\theta w_{[h]} {\tilde{C}}_{[h]}({\tilde{\pi }}_{SP})+{\tilde{C}}_{[\theta ]}\sum _{h=\theta +1}^{\ddot{n}} w_{h}\nonumber \\{} & {} \qquad +\sum _{h=\theta +1}^{\ddot{n}}w_{(h)}\sum _{v=1}^{h-\theta }{\tilde{P}}_{<\theta +v>} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{j=\theta +1}^{\theta +v-1} P_{<j>}^A\right) ^{a_{\min }},b_{\min }\right\} , \end{aligned}$$
(8)

where \(w_{(\theta +1)}\ge w_{(\theta +2)}\ge \cdots \ge w_{(\ddot{n})}\), \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\) (note that \({\tilde{P}}_{<h>}\) and \(w_{(h)}\) do not necessarily correspond to the same job), \(P_{< h> }^A = {{{{\tilde{P}}}}_{< h> }}\max \left\{ {{{\left( {1 + \sum \nolimits _{j = 1}^\theta {P_{[j]}^A} + \sum \nolimits _{j = \theta + 1}^{h - 1} {P_{ < j > }^A} } \right) }^{{a_{\min }}}},{b_{\min }}} \right\} \), \(h = \theta + 1,\theta + 2,\ldots ,\ddot{n} \), and \(P_{<h>}^A\) (\(h=\theta +1,\theta +2,\ldots ,\ddot{n}\)) is the actual processing time of hth position by the order of \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\).

Remark

If \({\tilde{P}}_{h}\) and \(w_{h}\) \((h=\theta + 1,\theta + 2,\ldots ,\ddot{n})\) satisfy the agreeable condition (i.e., \({\tilde{P}}_{h}\le {\tilde{P}}_{j}\) implies \(w_{h}\ge w_{j}\)), then \({\tilde{P}}_{<h>}\) and \(w_{(h)}\) correspond to the same job. Otherwise, \({\tilde{P}}_{<h>}\) and \(w_{(h)}\) do not necessarily correspond to the same job.

Combining \({LB}_{\sum w_h{\tilde{C}}_{h}}^1\) and \({LB}_{\sum w_h{\tilde{C}}_{h}}^2\), the lower bound of \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}\right. \right. \right. \left. \left. \left. P_{[h]}^A\right) ^{a_h},b_h\right\} \right| \) \(\sum w_{h} {\tilde{C}}_{h}\) is

$$\begin{aligned} {LB}_{\sum w_h{\tilde{C}}_{h}}=\max \left\{ {LB}_{\sum w_h{\tilde{C}}_{h}}^1,{LB}_{\sum w_h{\tilde{C}}_{h}}^2\right\} . \end{aligned}$$
(9)
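The two bounds of Eqs. (7) and (8) and their combination in Eq. (9) can be computed as sketched below (our illustrative C++ code and data, following the paper's notation; the scheduled-part quantities are passed in as arguments).

```cpp
// Sketch of the lower bound of Eq. (9) = max{Eq. (7), Eq. (8)}.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

struct Job { double p, w, a, b; };

// wcSched : sum_{h<=theta} w_[h] * C_[h] of the scheduled part
// cSched  : \tilde{C}_[theta], completion time of the scheduled part
// sumSched: sum of actual processing times of the scheduled part
double lowerBoundTWC(double wcSched, double cSched, double sumSched,
                     std::vector<Job> unsched) {
    double aMin = 0.0, bMin = 1.0, wMin = 1e100, wSum = 0.0;
    std::vector<double> weights;
    for (const Job& j : unsched) {
        aMin = std::min(aMin, j.a);
        bMin = std::min(bMin, j.b);
        wMin = std::min(wMin, j.w);
        wSum += j.w;
        weights.push_back(j.w);
    }
    // SPT order of normal times; weights sorted non-increasingly (they need
    // not correspond to the same jobs, cf. the Remark after Eq. (8)).
    std::sort(unsched.begin(), unsched.end(),
              [](const Job& x, const Job& y) { return x.p < y.p; });
    std::sort(weights.begin(), weights.end(), std::greater<>());

    // Actual times under a_min/b_min in SPT order, and their running sums.
    std::vector<double> cum;
    double sumA = sumSched, run = 0.0;
    for (const Job& j : unsched) {
        double pA = j.p * std::max(std::pow(1.0 + sumA, aMin), bMin);
        sumA += pA;
        run += pA;
        cum.push_back(run);
    }
    double lb1 = wcSched + cSched * wSum, lb2 = lb1;
    for (std::size_t k = 0; k < cum.size(); ++k) {
        lb1 += wMin * cum[k];        // Eq. (7)
        lb2 += weights[k] * cum[k];  // Eq. (8): HLP pairing of sums and weights
    }
    return std::max(lb1, lb2);       // Eq. (9)
}

int main() {
    // Illustrative node data only.
    std::vector<Job> rest = {{7, 3, -0.25, 0.65}, {8, 5, -0.2, 0.7},
                             {9, 2, -0.15, 0.8}};
    std::printf("LB = %.4f\n", lowerBoundTWC(8.0, 4.0, 4.0, rest));
}
```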

3.1.3 Criterion \(L_{\max }\)

Similar to Sects. 3.1.1 and 3.1.2, let \(a_{\min }=\min \{a_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\), \(b_{\min }=\min \{b_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\) and \(d_{\max }=\max \{d_{h}|h=\theta +1,\theta +2,\ldots ,\ddot{n}\}\); it follows that

$$\begin{aligned} L_{[\theta +j]}({\tilde{\pi }})= & {} {\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})+\sum _{v=1}^{j} {\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{[h]}^A\right) ^{a_{[\theta +v]}},b_{[\theta +v]}\right\} \nonumber \\{} & {} -d_{[\theta +j]}, \end{aligned}$$
(10)

where \(\sum _{h=k}^{k-1} P_{[h]}^A=0\) and \(1\le j\le \ddot{ n}-\theta .\)

Hence,

$$\begin{aligned}{} & {} L_{[\theta +j]}({\tilde{\pi }})\nonumber \\{} & {} \quad ={\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})+\sum _{v=1}^{j} {\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{[h]}^A\right) ^{a_{[\theta +v]}},b_{[\theta +v]}\right\} -d_{[\theta +j]}\nonumber \\{} & {} \quad \ge {\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})+\sum _{v=1}^{j} {\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{[h]}^A\right) ^{a_{\min }},b_{\min }\right\} -d_{\max }, \end{aligned}$$
(11)

where \(1\le j\le \ddot{ n}-\theta .\)

It is noticed that \(\sum _{v=1}^{j} {\tilde{P}}_{[\theta +v]} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{[h]}^A\right) ^{a_{\min }},b_{\min }\right\} \) can be minimized by the SPT rule (see Lemma 1). Therefore, the first lower bound of the problem \(1|P_{h,l}^A={\tilde{P}}_h\) \(\times \max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} |L_{\max }\) is:

$$\begin{aligned} LB_{L_{\max }}^1= & {} \max \left\{ {{{{{\tilde{C}}}}}_{[1]}}-d_{[1]},{{{\tilde{C}}_{[2]}}}-d_{[2]}, \ldots ,{{{{{\tilde{C}}}}_{[\theta ]}}}-d_{[\theta ]},{\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})\right. \nonumber \\{} & {} \left. +\sum _{v=1}^{{\ddot{n}-\theta }} {\tilde{P}}_{<\theta +v>} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{<h>}^A\right) ^{a_{\min }},b_{\min }\right\} -d_{\max }\right\} ,\nonumber \\ \end{aligned}$$
(12)

where \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\) and \(P_{<h>}^A\) (\(h=\theta +1,\theta +2,\ldots ,\ddot{n}\)) is the actual processing time of hth position by the order of \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\).

According to the definition of maximum lateness, we have \({L_{\max }} = \max \{ {{{{\tilde{C}}}}_{[1]}} - {d_{[1]}},{{{{\tilde{C}}}}_{[2]}} - {d_{[2]}}, \ldots ,{{\tilde{C}}_{[\theta ]}} - {d_{[\theta ]}},{{{{\tilde{C}}}}_{[\theta + 1]}} - {d_{[\theta + 1]}}, \ldots ,{{{{\tilde{C}}}}_{[\ddot{n}]}} - {d_{[\ddot{n}]}}\}\). Assuming that the completion time \({{\tilde{C}}_{[\theta + j]}}\) \((j = 1,2, \ldots ,\ddot{n} - \theta )\) is a fixed value, according to EDD rule (see Lawler 1973), we have

$$\begin{aligned}{} & {} \max \{ {{{{\tilde{C}}}}_{[\theta + 1]}} - {d_{[\theta + 1]}},{{{{\tilde{C}}}}_{[\theta + 2]}} - {d_{[\theta + 2]}}, \ldots ,{{{{\tilde{C}}}}_{[\ddot{n}]}} - {d_{[\ddot{n}]}}\} \\{} & {} \quad \ge \max \{ {{{{\tilde{C}}}}_{[\theta + 1]}} - {d_{(\theta + 1)}},{{{{\tilde{C}}}}_{[\theta + 2]}} - {d_{(\theta + 2)}}, \ldots ,{{{{\tilde{C}}}}_{[\ddot{n}]}} - {d_{(\ddot{n})}}\}, \end{aligned}$$

where \({d_{(\theta + 1)}} \le {d_{(\theta + 2)}} \le \cdots \le {d_{(\ddot{n})}}\).

Let \(a_{\min }=\min \{a_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\) and \(b_{\min }=\min \{b_{h}|h=\theta +1,\theta +2,\ldots ,{\ddot{n}}\}\); then, according to Lemma 1, we have

$$\begin{aligned}{} & {} \mathop {\max }\limits _{j = 1}^{\ddot{n} - \theta } \left\{ {{{{{\tilde{C}}}}_{[\theta + j]}} - {d_{(\theta + j)}}} \right\} \\{} & {} \quad = \mathop {\max }\limits _{j = 1}^{\ddot{n} - \theta } \left\{ {{{{{\tilde{C}}}}_{[\theta ]}} + \sum \limits _{v = 1}^j {P_{[\theta + v]}^A} - {d_{(\theta + j)}}} \right\} \\{} & {} \quad \ge {{{{\tilde{C}}}}_{[\theta ]}} + \mathop {\max }\limits _{j = 1}^{\ddot{n} - \theta } \left\{ {\sum \limits _{v = 1}^j {P_{ < \theta + v > }^A} - {d_{(\theta + j)}}} \right\} , \end{aligned}$$

where \(P_{< \theta + v> }^A = {{{{\tilde{P}}}}_{< \theta + v> }}\max \left\{ {{{\left( {1 + \sum \nolimits _{h = 1}^\theta {P_{[h]}^A} + \sum \nolimits _{h = \theta + 1}^{\theta + v - 1} {P_{ < h > }^A} } \right) }^{{a_{\min }}}},{b_{\min }}} \right\} \), \(v = 1,2, \ldots ,j\), \(j = 1,2, \ldots ,\ddot{n} - \theta \) and \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\).

Therefore, the second lower bound of the problem \(1\Bigg |P_{h,l}^A={\tilde{P}}_h\max \Bigg \{\left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\Bigg \} \Bigg |L_{\max }\) is

$$\begin{aligned} LB_{L_{\max }}^2= & {} \max \left\{ {{{{{\tilde{C}}}}_{[1]}}}-d_{[1]},{{{\tilde{C}}_{[2]}}}-d_{[2]}, \ldots ,{{{{{\tilde{C}}}}_{[\theta ]}}}-d_{[\theta ]},{\tilde{C}}_{[\theta ]}({\tilde{\pi }}_{SP})\right. \nonumber \\{} & {} \left. +{\max _{j=1}^{\ddot{n}-\theta }\left\{ \sum _{v=1}^{j} {\tilde{P}}_{<\theta +v>} \max \left\{ \left( 1+\sum _{h=1}^{\theta }P_{[h]}^A+\sum _{h=\theta +1}^{\theta +v-1} P_{<h>}^A\right) ^{a_{\min }},b_{\min }\right\} -d_{(\theta +j)}\right\} }\right\} ,\nonumber \\ \end{aligned}$$
(13)

where \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\), \(d_{(\theta +1)}\le d_{(\theta +2)}\le \cdots \le d_{(\ddot{n})}\) (note that \({\tilde{P}}_{<h>}\) and \(d_{(h)}\) do not necessarily correspond to the same job) and \(P_{<h>}^A\) (\(h=\theta +1,\theta +2,\ldots ,\ddot{n}\)) is the actual processing time of hth position by the order of \({\tilde{P}}_{<\theta +1>}\le {\tilde{P}}_{<\theta +2>}\le \cdots \le {\tilde{P}}_{<\ddot{n}>}\).

Combining \({LB}_{L_{\max }}^1\) and \({LB}_{L_{\max }}^2\), the lower bound of \(1\Bigg |P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},\right. \) \( \left. b_h\Bigg \} \right| \) \(L_{\max }\) is

$$\begin{aligned} {L{B_{{L_{\max }}}} = \max \left\{ LB_{{L_{\max }}}^1,LB_{{L_{\max }}}^2\right\} }. \end{aligned}$$
(14)
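A compact C++ sketch of Eqs. (12)-(14) is given below (our illustrative code; the argument lSched denotes the maximum lateness already incurred by the scheduled part).

```cpp
// Sketch of the lower bound of Eq. (14) = max{Eq. (12), Eq. (13)}.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Job { double p, d, a, b; };

// lSched  : max over scheduled jobs of C_[h] - d_[h]
// cSched  : \tilde{C}_[theta]
// sumSched: sum of scheduled actual processing times
double lowerBoundLmax(double lSched, double cSched, double sumSched,
                      std::vector<Job> unsched) {
    double aMin = 0.0, bMin = 1.0, dMax = -1e100;
    std::vector<double> due;
    for (const Job& j : unsched) {
        aMin = std::min(aMin, j.a);
        bMin = std::min(bMin, j.b);
        dMax = std::max(dMax, j.d);
        due.push_back(j.d);
    }
    std::sort(unsched.begin(), unsched.end(),
              [](const Job& x, const Job& y) { return x.p < y.p; });  // SPT
    std::sort(due.begin(), due.end());                                // EDD
    double sumA = sumSched, run = 0.0, lb2tail = -1e100;
    for (std::size_t j = 0; j < unsched.size(); ++j) {
        double pA = unsched[j].p * std::max(std::pow(1.0 + sumA, aMin), bMin);
        sumA += pA;
        run += pA;
        lb2tail = std::max(lb2tail, run - due[j]);  // inner max of Eq. (13)
    }
    double lb1 = std::max(lSched, cSched + run - dMax);  // Eq. (12)
    double lb2 = std::max(lSched, cSched + lb2tail);     // Eq. (13)
    return std::max(lb1, lb2);                           // Eq. (14)
}

int main() {
    // Illustrative node data only.
    std::vector<Job> rest = {{7, 12, -0.25, 0.65}, {8, 9, -0.2, 0.7},
                             {9, 20, -0.15, 0.8}};
    std::printf("LB_Lmax = %.4f\n", lowerBoundLmax(-2.0, 4.0, 4.0, rest));
}
```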

3.2 Upper bound

For the minimization of regular objective costs, any feasible solution provides an upper bound. Obviously, the SPT rule is a heuristic for the problem \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| \gamma \) with \(\gamma ={\tilde{C}}_{\max }\), the WSPT rule is a heuristic for \(\gamma =\sum w_h{\tilde{C}}_{h}\), and the EDD rule is a heuristic for \(\gamma =L_{\max }\). In addition, an improvement scheme (i.e., the efficient constructive scheme for the m-machine flow shop problem \(Fm||\sum {\tilde{C}}_h\), see Framinan and Leisten 2003) can be adopted for each of the three criteria. Details of the upper bound (UB) algorithm are presented as follows (a C++ sketch is given after the steps):

Table 1 Data for Example 1

Algorithm 1 (UB)

  • Step 1. Arrange the jobs by the SPT/WSPT/EDD rule (SPT for \({\tilde{C}}_{\max }\), WSPT for \(\sum w_h{\tilde{C}}_h\), and EDD for \(L_{\max }\)).

  • Step 2. Set \(r = 2\). Select the first two jobs from the sorted list and select the better of the two possible sequences.

  • Step 3. Increment \(r, r=r+1\) (where r denotes the rth job from the sorted list using Step 1). Select the rth job from the sorted list and insert it into r possible positions of the best partial sequence obtained so far. Among the r sequences, the best r-job partial sequence is selected based on minimum \({\tilde{C}}_{\max }\) (\(\sum w_h{\tilde{C}}_h\), \(L_{\max }\)). Next, determine all possible sequences by interchanging jobs in positions i and j of the above partial sequence for all ij \((1\le i<r, i < j \le r)\). Select the best partial sequence among \(r(r-1)/2\) sequences having minimum \({\tilde{C}}_{\max }\) (\(\sum w_h{\tilde{C}}_h\), \(L_{\max }\)).

  • Step 4. If \(r = \ddot{n}\), then STOP; otherwise, go to Step 3.
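The following C++ sketch (ours, with illustrative names and data) implements Algorithm 1 for a generic objective; the sorting rule and the objective evaluator are passed in so that the same routine can serve \({\tilde{C}}_{\max }\), \(\sum w_h{\tilde{C}}_h\) and \(L_{\max }\).

```cpp
// Sketch of Algorithm 1 (UB): NEH-style insertion (Steps 2-3) followed by a
// pairwise-interchange improvement step applied to the inserted sequence.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

struct Job { double p, w, d, a, b; };

double cmaxOf(const std::vector<Job>& seq) {
    double sumA = 0.0;
    for (const Job& j : seq)
        sumA += j.p * std::max(std::pow(1.0 + sumA, j.a), j.b);
    return sumA;  // all jobs start at time zero with no idle time
}

std::vector<Job> upperBoundHeuristic(
        std::vector<Job> jobs,
        const std::function<double(const std::vector<Job>&)>& obj,
        const std::function<bool(const Job&, const Job&)>& rule) {
    std::sort(jobs.begin(), jobs.end(), rule);  // Step 1: SPT/WSPT/EDD
    std::vector<Job> best(jobs.begin(), jobs.begin() + 1);
    for (std::size_t r = 1; r < jobs.size(); ++r) {
        // Steps 2-3: insert the next job into every position of 'best'.
        std::vector<Job> cand;
        double candVal = 1e100;
        for (std::size_t pos = 0; pos <= best.size(); ++pos) {
            std::vector<Job> s = best;
            s.insert(s.begin() + pos, jobs[r]);
            if (double v = obj(s); v < candVal) { candVal = v; cand = s; }
        }
        // Step 3: pairwise interchanges on the best partial sequence.
        best = cand;
        for (std::size_t i = 0; i + 1 < best.size(); ++i)
            for (std::size_t j = i + 1; j < best.size(); ++j) {
                std::vector<Job> s = cand;
                std::swap(s[i], s[j]);
                if (double v = obj(s); v < candVal) { candVal = v; best = s; }
            }
    }
    return best;
}

int main() {
    // Illustrative data only.
    std::vector<Job> jobs = {{4, 2, 10, -0.25, 0.65}, {7, 1, 12, -0.2, 0.7},
                             {8, 3, 15, -0.15, 0.8}, {9, 2, 8, -0.1, 0.9}};
    auto spt = [](const Job& x, const Job& y) { return x.p < y.p; };
    std::vector<Job> seq = upperBoundHeuristic(jobs, cmaxOf, spt);
    std::printf("UB for Cmax = %.4f\n", cmaxOf(seq));
}
```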

Fig. 1 Search tree of the B&B algorithm for Example 1 (\(\times \) denotes pruning)

3.3 Branch and bound algorithm

The B&B algorithm follows a depth-first search strategy: it assigns jobs in a forward manner starting from the first position, where each node corresponds to assigning a job to the next position (a C++ sketch is given after the steps of Algorithm 2).

Algorithm 2 (B&B)

  • Step 1. (Upper bound) Calculate the initial upper bound by Algorithm 1 (UB).

  • Step 2. (Bounding) Calculate the lower bound (see Eq. (4) for \({\tilde{C}}_{\max }\), Eq. (9) for \(\sum w_h{\tilde{C}}_h\), and Eq. (14) for \(L_{\max }\)) for the node. If the lower bound of an unfathomed partial sequence of jobs is larger than or equal to the objective value \({\tilde{C}}_{\max }\) (\(\sum w_h{\tilde{C}}_h\), \(L_{\max }\)) of the incumbent solution, eliminate the node and all the nodes following it in the branch. When a complete sequence is reached, calculate its objective value \({\tilde{C}}_{\max }\) (\(\sum w_h{\tilde{C}}_h\), \(L_{\max }\)); if it is less than that of the incumbent solution, take it as the new incumbent; otherwise, eliminate it.

  • Step 3. (Termination) Continue until all nodes have been explored.
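A minimal depth-first sketch of Algorithm 2 for the \({\tilde{C}}_{\max }\) criterion is shown below (our illustrative C++ code; a production version would initialize the incumbent with Algorithm 1 and also record the best sequence).

```cpp
// Depth-first B&B sketch for Cmax, pruning with the SPT-based bound of Eq. (4).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Job { double p, a, b; };

double actual(const Job& j, double sumA) {
    return j.p * std::max(std::pow(1.0 + sumA, j.a), j.b);
}

// Eq. (4): SPT sequence of the unscheduled jobs with a_min and b_min.
double lbCmax(double cSched, std::vector<Job> rest) {
    double aMin = 0.0, bMin = 1.0;
    for (const Job& j : rest) { aMin = std::min(aMin, j.a); bMin = std::min(bMin, j.b); }
    std::sort(rest.begin(), rest.end(),
              [](const Job& x, const Job& y) { return x.p < y.p; });
    // Starting at time zero with no idle time, C_[theta] equals the sum of
    // the scheduled actual processing times.
    double lb = cSched, sumA = cSched;
    for (const Job& j : rest) {
        double pA = j.p * std::max(std::pow(1.0 + sumA, aMin), bMin);
        lb += pA; sumA += pA;
    }
    return lb;
}

void branch(std::vector<Job>& fixed, std::vector<Job>& rest,
            double cSched, double& incumbent) {
    if (rest.empty()) { incumbent = std::min(incumbent, cSched); return; }
    if (lbCmax(cSched, rest) >= incumbent) return;   // Step 2: prune the node
    for (std::size_t i = 0; i < rest.size(); ++i) {  // branch on the next position
        Job j = rest[i];
        rest.erase(rest.begin() + i);
        fixed.push_back(j);
        branch(fixed, rest, cSched + actual(j, cSched), incumbent);
        fixed.pop_back();
        rest.insert(rest.begin() + i, j);
    }
}

int main() {
    // Illustrative data only.
    std::vector<Job> jobs = {{4, -0.25, 0.65}, {7, -0.2, 0.7},
                             {8, -0.15, 0.8}, {9, -0.1, 0.9}};
    double incumbent = 1e100;  // Step 1 would use Algorithm 1 (UB) here
    std::vector<Job> fixed;
    branch(fixed, jobs, 0.0, incumbent);
    std::printf("optimal Cmax = %.4f\n", incumbent);
}
```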

To better understand the B&B algorithm, the following example is provided.

Example 1

Consider the problem \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| {\tilde{C}}_{\max }\), where \(\ddot{n}=5\) and the other corresponding parameters are given in Table 1.

Table 2 CPU time (ms) results of the algorithms for \({\tilde{C}}_{\max }\)
Table 3 Error results of the algorithms for \({\tilde{C}}_{\max }\)
Table 4 CPU time (ms) results of the algorithms for \(\sum w_h{\tilde{C}}_{h}\)
Table 5 Error results of the algorithms for \(\sum w_h{\tilde{C}}_{h}\)
Table 6 CPU time (ms) results of the algorithms for \(L_{\max }\)
Table 7 Error results of the algorithms for \(L_{\max }\)
Table 8 Computational results of the heuristics for \({\tilde{C}}_{\max }\)
Table 9 Computational results of the heuristics for \(\sum w_h{\tilde{C}}_{h}\)
Table 10 Computational results of the heuristics for \(L_{\max }\)

Solution

According to the SPT rule, the sequence is \({\ddot{J}}_1\rightarrow {\ddot{J}}_2\rightarrow {\ddot{J}}_3\rightarrow {\ddot{J}}_4\rightarrow {\ddot{J}}_5\), and the corresponding objective function value is \({\tilde{C}}_{\max }=28.4000\). For the \({\tilde{C}}_{\max }\) minimization, the search tree of the B&B algorithm is shown in Fig. 1. The numbers in Fig. 1 represent the lower bound values, and the root node \({J_0}\) is defined as level 0.

At level 1, for job \({{\ddot{J}}_1}\): \(\theta = 1\), from Eq. (4), we have

$$\begin{aligned} LB({\ddot{J}}_1)= & {} {{\tilde{C}}_{[1]}}+\sum _{v=1}^{4}{\tilde{P}}_{<\theta +v>} \max \left\{ \left( 1+\sum _{h=1}^{1}P_{[h]}^A+\sum _{h=2}^{v} P_{<h>}^A\right) ^{a_{\min }},b_{\min }\right\} \\= & {} 4+7\times \max \left\{ (1+4)^{-0.25},0.65\right\} +8\times \max \left\{ (1+4+4.6812)^{-0.25},0.65\right\} \\{} & {} +9\times \max \left\{ (1 + 4 + 4.6812 + 5.2)^{-0.25},0.65\right\} \\{} & {} +10\times \max \left\{ (1 + 4 + 4.6812 + 5.2 + 5.85)^{-0.25},0.65\right\} \\= & {} 4+4.6812+5.2+5.85+6.5\\= & {} 26.2312. \end{aligned}$$

The calculation process of the remaining lower bounds and values of \({\tilde{C}}_{\max }\) is similar. From Fig. 1, the optimal job sequences are \(\pi ^* =({\ddot{J}}_1,{\ddot{J}}_2,{\ddot{J}}_3,{\ddot{J}}_4,{\ddot{J}}_5)\), \(({\ddot{J}}_1,{\ddot{J}}_2,{\ddot{J}}_3,{\ddot{J}}_5,{\ddot{J}}_4)\), \(({\ddot{J}}_1,{\ddot{J}}_2,{\ddot{J}}_5,{\ddot{J}}_3,{\ddot{J}}_4)\), \(({\ddot{J}}_1,{\ddot{J}}_2,{\ddot{J}}_5,{\ddot{J}}_4,{\ddot{J}}_3)\), \(({\ddot{J}}_1,{\ddot{J}}_5,{\ddot{J}}_2,{\ddot{J}}_3,{\ddot{J}}_4)\), \(({\ddot{J}}_1,{\ddot{J}}_5,{\ddot{J}}_2,{\ddot{J}}_4,{\ddot{J}}_3)\), \(({\ddot{J}}_1,{\ddot{J}}_5,{\ddot{J}}_3,{\ddot{J}}_2,{\ddot{J}}_4)\), \(({\ddot{J}}_1,{\ddot{J}}_5,{\ddot{J}}_3,{\ddot{J}}_4,{\ddot{J}}_2)\), \(({\ddot{J}}_1,{\ddot{J}}_5,{\ddot{J}}_4, {\ddot{J}}_2,{\ddot{J}}_3)\) and \(({\ddot{J}}_1,{\ddot{J}}_5,{\ddot{J}}_4,{\ddot{J}}_3,{\ddot{J}}_2)\), each attaining the optimal value \({\tilde{C}}_{\max }=28.4000\).

4 Meta-heuristic algorithms

4.1 Tabu search

In this subsection, the tabu search (TS) algorithm is used to find a near-optimal solution. The initial sequence used in the TS algorithm is obtained by arranging the jobs by the SPT/WSPT/EDD rule (SPT for \({\tilde{C}}_{\max }\), WSPT for \(\sum w_h{\tilde{C}}_h\), and EDD for \(L_{\max }\)), and the maximum number of iterations is set at \(1000 \ddot{n}\). Following Wu et al. (2014), the implementation of the TS algorithm is given below (a C++ sketch follows the steps):

Algorithm 3 (TS)

  • Step 1. Let the tabu list be empty and the iteration number be zero.

  • Step 2. Set the initial sequence of the TS algorithm, calculate its objective function and set the current sequence as the best solution \({\tilde{\pi }}^*\).

  • Step 3. Search the associated neighborhood of the current sequence (the neighborhood is generated by the random exchange of any two jobs) and determine whether there is a sequence \({\tilde{\pi }}^{**}\) that has the smallest objective value in the neighborhood and is not in the tabu list.

  • Step 4. If \({\tilde{\pi }}^{**}\) is better than \({\tilde{\pi }}^{*}\), then let \({\tilde{\pi }}^*={\tilde{\pi }}^{**}\). Update the tabu list and the iteration number.

  • Step 5. If there is no sequence in the associated neighborhood that is not in the tabu list, or the maximum number of iterations is reached, then output the best sequence found \({\tilde{\pi }}^*\) and its objective value \({\tilde{C}}_{\max }\) (\(\sum w_h{\tilde{C}}_h\), \(L_{\max }\)). Otherwise, update the tabu list and go to Step 3.
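A compact C++ sketch of Algorithm 3 is given below (our illustrative code; the sampled neighborhood size of 20 and the tabu-list length of 7 are our own choices, not specified in the paper).

```cpp
// Tabu search sketch: the tabu attribute is the swapped pair of positions.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <deque>
#include <functional>
#include <random>
#include <vector>

struct Job { double p, a, b; };

double cmaxOf(const std::vector<Job>& seq) {
    double sumA = 0.0;
    for (const Job& j : seq)
        sumA += j.p * std::max(std::pow(1.0 + sumA, j.a), j.b);
    return sumA;
}

std::vector<Job> tabuSearch(std::vector<Job> cur,
                            const std::function<double(const std::vector<Job>&)>& obj) {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<std::size_t> pick(0, cur.size() - 1);
    std::deque<std::pair<std::size_t, std::size_t>> tabu;     // Step 1
    const std::size_t tabuSize = 7, maxIter = 1000 * cur.size();
    std::vector<Job> best = cur;                               // Step 2
    for (std::size_t it = 0; it < maxIter; ++it) {             // Steps 3-5
        double bestVal = 1e100;
        std::pair<std::size_t, std::size_t> bestMove{0, 0};
        std::vector<Job> bestNeigh;
        for (int k = 0; k < 20; ++k) {  // sampled random pairwise exchanges
            std::size_t i = pick(rng), j = pick(rng);
            if (i == j) continue;
            std::pair<std::size_t, std::size_t> move{std::min(i, j), std::max(i, j)};
            if (std::find(tabu.begin(), tabu.end(), move) != tabu.end()) continue;
            std::vector<Job> s = cur;
            std::swap(s[i], s[j]);
            if (double v = obj(s); v < bestVal) { bestVal = v; bestMove = move; bestNeigh = s; }
        }
        if (bestNeigh.empty()) continue;   // no admissible neighbor in this sample
        cur = bestNeigh;
        if (bestVal < obj(best)) best = cur;                   // Step 4
        tabu.push_back(bestMove);
        if (tabu.size() > tabuSize) tabu.pop_front();
    }
    return best;
}

int main() {
    // Illustrative data only.
    std::vector<Job> jobs = {{4, -0.25, 0.65}, {7, -0.2, 0.7},
                             {8, -0.15, 0.8}, {9, -0.1, 0.9}};
    std::sort(jobs.begin(), jobs.end(),
              [](const Job& x, const Job& y) { return x.p < y.p; });  // SPT seed
    std::printf("TS Cmax = %.4f\n", cmaxOf(tabuSearch(jobs, cmaxOf)));
}
```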

4.2 Simulated annealing

Simulated annealing (SA) is another good choice for solving \(1\left| P_{h,l}^A={\tilde{P}}_h\max \left\{ \left( 1+\sum _{h=1}^{l-1}P_{[h]}^A\right) ^{a_h},b_h\right\} \right| \gamma \), where \(\gamma \in \{{\tilde{C}}_{\max },\sum w_h{\tilde{C}}_h,L_{\max }\}\). The details of the SA algorithm are summarized as follows (a C++ sketch follows the steps):

Algorithm 4 (SA)

  • Step 1. Initial schedule: arrange the jobs by the SPT/WSPT/EDD rule (SPT for \({\tilde{C}}_{\max }\), WSPT for \(\sum w_h{\tilde{C}}_h\), and EDD for \(L_{\max }\)).

  • Step 2. Neighborhood generation: the pairwise interchange (PI) method is used to generate a new schedule.

  • Step 3. Acceptance probability: When a new schedule is generated, it is accepted if its objective value is smaller than that of the original schedule; otherwise, it is accepted with some probability that decreases as the process evolves. The probability of acceptance is generated from an exponential distribution,

    $$\begin{aligned} P(accept) = exp(-\alpha \times \bigtriangleup WC/L), \end{aligned}$$

    where \(\alpha \) is a control parameter and \(\bigtriangleup WC/L\) is the change in the objective value (i.e., \({\tilde{C}}_{\max },\sum w_h{\tilde{C}}_h\) or \(L_{\max }\)). In addition, \(\alpha \) is changed at the kth iteration as follows:

    $$\begin{aligned} \alpha =\frac{k}{\delta } \end{aligned}$$

    where \(\delta \) is an experimental constant. After preliminary trials, \(\delta =1\) was used in our experiments.

    If \({\tilde{C}}_{\max }\) (\(\sum w_h{\tilde{C}}_h\), \(L_{\max }\)) increases as a result of a random pairwise interchange, the new sequence is accepted when \(P(accept) > \theta \), where \(\theta \) is randomly sampled from the uniform distribution U(0, 1).

  • Step 4. Stopping condition: the algorithm stops after \(1000\ddot{n}\) iterations, after which the schedule is stable (see Lai et al. 2014).
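The acceptance rule of Step 3 is illustrated in the following C++ sketch (ours; the random seed and the data are illustrative, and \({\tilde{C}}_{\max }\) is used as the objective).

```cpp
// SA sketch with P(accept) = exp(-alpha * diff), alpha = k / delta, delta = 1.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Job { double p, a, b; };

double cmaxOf(const std::vector<Job>& seq) {
    double sumA = 0.0;
    for (const Job& j : seq)
        sumA += j.p * std::max(std::pow(1.0 + sumA, j.a), j.b);
    return sumA;
}

std::vector<Job> simulatedAnnealing(std::vector<Job> cur) {
    std::mt19937 rng(2023);
    std::uniform_int_distribution<std::size_t> pick(0, cur.size() - 1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    const double delta = 1.0;                       // experimental constant
    const std::size_t maxIter = 1000 * cur.size();  // Step 4 stopping rule
    std::vector<Job> best = cur;
    double curVal = cmaxOf(cur), bestVal = curVal;
    for (std::size_t k = 1; k <= maxIter; ++k) {
        // Step 2: pairwise-interchange (PI) neighbor of the current schedule.
        std::size_t i = pick(rng), j = pick(rng);
        if (i == j) continue;
        std::vector<Job> cand = cur;
        std::swap(cand[i], cand[j]);
        double candVal = cmaxOf(cand);
        double diff = candVal - curVal;
        double alpha = static_cast<double>(k) / delta;
        // Step 3: accept improvements; accept worse moves with exp(-alpha*diff).
        if (diff <= 0.0 || std::exp(-alpha * diff) > unif(rng)) {
            cur = cand;
            curVal = candVal;
            if (curVal < bestVal) { bestVal = curVal; best = cur; }
        }
    }
    return best;
}

int main() {
    // Illustrative data only.
    std::vector<Job> jobs = {{9, -0.1, 0.9}, {4, -0.25, 0.65},
                             {8, -0.15, 0.8}, {7, -0.2, 0.7}};
    std::sort(jobs.begin(), jobs.end(),
              [](const Job& x, const Job& y) { return x.p < y.p; });  // Step 1: SPT seed
    std::printf("SA Cmax = %.4f\n", cmaxOf(simulatedAnnealing(jobs)));
}
```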

5 Computational experiments

In this section, computational experiments are conducted to evaluate the accuracy and efficiency of the SPT, WSPT, EDD, UB, TS, SA and B&B algorithms. The C++ language was used to implement all of the heuristic, meta-heuristic and B&B algorithms. Detailed programming and testing configurations are as follows.

  • C++ environment: Visual Studio 2019 on a Windows 10 system; the installation occupies 800 MB–210 GB of free hard-disk space depending on the features installed, with typical installations requiring 20–50 GB.

  • Testing computer: one desktop workstation with an AMD R5-5500U CPU (2.1–4.0 GHz, 6 cores, 12 threads), 16 GB of RAM and a 512 GB solid-state drive.

Similar to Jiang et al. (2022), the following test parameters are randomly generated:

  (1) \({\tilde{P}}_h\) is an integer uniformly distributed over [1, 10];

  (2) \(a_h=a\), where \(a\in \{-0.05,-0.1,-0.15,-0.2,-0.25\}\);

  (3) \(b_h\) is uniformly distributed over (0, 1);

  (4) \(w_h\) is an integer uniformly distributed over [1, 10];

  (5) \(d_h\) is an integer uniformly distributed over [1, 20];

  • Small-sized problem instances: \(\ddot{n}=9\), 10, 11, 12, 13; the B&B algorithm is also used to solve these instances to global optimality;

  • Large-sized problem instances: \(\ddot{n}=100\), 125, 150, 175, 200; the B&B algorithm is not applied.

For each parameter combination (\(\ddot{n}\) and \(a\)), 20 randomly generated instances were evaluated (a sketch of the instance generator is given below).

5.1 Small-sized instances

For \({\tilde{C}}_{\max }\), the error of the solution produced by heuristics is calculated as

$$\begin{aligned} \frac{{\tilde{C}}_{\max }{(H)}}{{\tilde{C}}_{\max }(H^*)}, \end{aligned}$$

where H is a sequence obtained by the SPT, UB, TS, SA and the optimal sequence \(H^*\) is obtained by the B&B algorithm. For \(\sum w_h{\tilde{C}}_h\), the error of the solution produced by heuristics is calculated as

$$\begin{aligned} \frac{\sum w_h{\tilde{C}}_h{(H)}}{\sum w_h{\tilde{C}}_h(H^*)}, \end{aligned}$$

where H is a sequence obtained by the WSPT, UB, TS, SA and the optimal sequence \(H^*\) is obtained by the B&B algorithm. For \(L_{\max }\), the error of the solution produced by heuristics is calculated as

$$\begin{aligned} \frac{L_{\max }{(H)}}{L_{\max }(H^*)}, \end{aligned}$$

where H is a sequence obtained by the EDD, UB, TS, SA and the optimal schedule \(H^*\) is obtained by the B&B algorithm. The "CPU time" is defined as the running time of the SPT (WSPT, EDD), UB, TS, SA and B&B algorithms; the time unit is milliseconds (1/1000 s). The corresponding results are summarized in Tables 2, 3, 4, 5, 6 and 7. From Tables 2, 4 and 6, we can see that the CPU time increases rapidly, i.e., grows exponentially with the number of jobs (\(\ddot{n}\)). From Table 3, for the \({\tilde{C}}_{\max }\) minimization, the results of SA appear to be more accurate than those of SPT, UB and TS. For the \(\sum w_h{\tilde{C}}_{h}\) and \(L_{\max }\) minimizations, Tables 5 and 7 show that UB appears to be more accurate than WSPT (EDD), SA and TS.

5.2 Large-sized instances

The performance of the heuristics UB, TS and SA, relative to the SPT (WSPT, EDD) rule, was evaluated through the ratios \(\frac{{\tilde{C}}_{\max }{(H)}}{{\tilde{C}}_{\max }(SPT)},\) \(\frac{\sum w_h{\tilde{C}}_h(H)}{\sum w_h{\tilde{C}}_h(WSPT)}\) and \(\frac{L_{\max }{(H)}}{L_{\max }(EDD)},\) where H is a sequence obtained by the UB, TS and SA. The corresponding results are summarized in Tables 8, 9 and 10, which show that UB outperforms TS and SA in terms of error. Tables 8, 9 and 10 also show that the CPU time (ms) of TS is much longer than that of UB and SA. The running times of the heuristics SPT, WSPT and EDD are almost the same for \(\ddot{n}=100, 125, 150,175,200\).

6 Conclusions

In this article, we addressed single-machine scheduling problems with truncated learning effects, namely the minimization of the makespan, the total weighted completion time and the maximum lateness. We proposed a branch-and-bound algorithm and several heuristic algorithms. Numerical studies showed that UB performs best for large-sized instances. Future research may focus on flow shop or job shop scheduling problems with truncated learning effects or deterioration effects (see Huang and Wang 2015; Huang 2019). For the single-machine problems studied here, more effective exact algorithms will also be considered.