1 Introduction

In the actual production and processing environment, workers, machines and other processing conditions have a great impact on the time when the job is handed over to the customer (i.e., the delivery time, see Zhang et al. 2022; Maecker et al. 2023). As more jobs are processed, the processing conditions gradually deteriorate, and the total time needed to deliver a job to the customer increases. A setting in which the delivery time depends on the previously processed jobs is called past-sequence-dependent delivery time. Koulamas and Kyparisis (2010) considered single machine scheduling problems with past-sequence-dependent delivery times (denoted by \(Q_{psddt}\)); they proved that the \(1|Q_{psddt}|\varphi \) problem can be solved in polynomial time, where \(\varphi \) is the objective function, \(\varphi \in \{C_{\max } (\text {makespan})\), \(TCT(=\sum C_j, \text {total completion time})\), \(L_{\max } (\text {maximum lateness})\), \(T_{\max }\) \( (\text {maximum tardiness})\), \(\sum U_j (\text {number of tardy jobs})\}\), and \(C_j\) is the completion time of job \(J_j\). Liu et al. (2012) showed that the problem \(1|Q_{psddt}|\varphi \) can be solved in polynomial time, where \(\varphi \in \{TWCT(=\sum v_jC_j\), total weighted completion time, where \(v_j\) is the weight of job \(J_j\)), \(\sum v_j\left( 1-e^{-{\eta } C_{j}}\right) \) \( (0<\eta <1, \text {total discounted} \) \(\text { weighted completion time}),TADC(=\sum _{i=1}^n\sum _{j=i}^n|C_{i}-C_{j}|),\sum (A_1E_j+A_2T_j+A_3d)\}\), where \(E_j=\max \{0,d-C_{j}\}\) and \(T_j=\max \{0,C_j-d\}\) are the earliness and tardiness of job \(J_j\), d is the common due date, and \(A_1\ge 0,A_2\ge 0,A_3\ge 0\) are given constants (i.e., the unit costs of earliness, tardiness and the common due date, respectively). Wang et al. 
(2021) proved that both the problems \(1|Q_{psddt}|\sum \varpi _jC_{[j]}\) and \(1|Q_{psddt}|\sum \varpi _j\left( 1-e^{-{\eta } C_{[j]}}\right) \) can be solved in \(O(n\log n)\) time, where \(\varpi _j\) is the weight of the jth position (position-dependent weight). Qian and Han (2022) integrated single-machine due date assignment scheduling with \(Q_{psddt}\) and simple linear deterioration (SLD). For the earliness-tardiness cost (i.e., \(\sum (A_1 E_j+A_2 T_j+A_3 d_j)\), where \(d_j\) is the due date of job \(J_j\)) minimization, they proved that the problem is polynomially solvable. Qian and Zhan (2022) and Mao et al. (2023) studied single-machine due window assignment scheduling with \(Q_{psddt}\) and SLD. For the earliness-tardiness cost (i.e., \(\sum (A_1 E_j+A_2 T_j+A_3 d_j^1+A_4 D)\), where \(d_j^1\) is the starting time of the due window of job \(J_j\), D is the size of the common due window, and \(A_4\) is the unit cost of D) minimization, Qian and Zhan (2022) proved that the problem is polynomially solvable. For the general total weighted earliness-tardiness cost minimization, Mao et al. (2023) proved that the problem is polynomially solvable, where the weights are position-dependent.

In addition, a related research area focuses on learning effects in scheduling (see Azzouz et al. 2018; Wang et al. 2022b; Heuser and Tauer 2023; Lei et al. 2023; Ma et al. 2023; Paredes-Astudillo et al. 2023; Pei et al. 2023; Wang and Wang 2023; Xin et al. 2023). Yang and Yang (2012) studied the single-machine scheduling problems with \(Q_{psddt}\) and learning effects. They proved that \(1|p_{i,[j]}=p_ij^{a_i},Q_{psddt}|\sum (A_1 E_j+A_2 T_j+A_3 d_1+A_4 D)\) and \(1|p_{i,[j]}=p_ij^{a_i},Q_{psddt}|TADC\) can be solved in polynomial time, where \(p_{i,[j]}=p_ij^{a_i}\) is the actual processing time of job \(J_i\) when it is scheduled at the jth position, and \(p_i\) (resp. \(a_i\)) is the basic processing time (resp. learning rate) of job \(J_i\). Shen and Wu (2013) considered single-machine scheduling with \(Q_{psddt}\) and general learning effects. They proved that several regular objective function minimizations can be solved in polynomial time. Zhao and Tang (2014) examined the single-machine scheduling problems with \(Q_{psddt}\) and general position-dependent processing times. Recently, Toksari et al. (2022) considered the single-machine scheduling problems with general \(Q_{psddt}\) and learning effects. For the objective functions \(C_{\max }\) and TCT, they proved that the problem remains polynomially solvable, and they proved the same for special cases of \(T_{\max }\) and TWCT. Ren et al. (2023) studied the same problem as Toksari et al. (2022); for the general cases of \(T_{\max }\) and TWCT, they proposed a branch-and-bound algorithm and some heuristic algorithms. Wang et al. (2023) combined \(Q_{psddt}\) with a truncated learning effect. They proved that three due date assignment problems can be solved in polynomial time.

However, under uncontrolled learning effects, the actual job processing times may drop dramatically toward zero as the number of already processed jobs grows or when jobs with long processing times appear. Hence, Wu et al. (2012, 2013), Wu and Wang (2016), Wang et al. (2022a) and Zhao (2022) studied truncated learning effects. In this article, we extend the results of Toksari et al. (2022), Ren et al. (2023) and Wang et al. (2023) by considering a scheduling model with general \(Q_{psddt}\) and truncated learning effects. The objective functions are \(C_{\max }\), TCT and TWCT. We prove that the \(C_{\max }\) and TCT minimizations are polynomially solvable. For a special case of the TWCT minimization, we prove that it remains polynomially solvable. To solve the general case of the TWCT minimization, heuristic, tabu search and branch-and-bound algorithms are proposed. We also examine the computational performance of these algorithms for the general case of the TWCT minimization.

The structure of this paper is as follows: Sect. 2 describes the problem; Sect. 3 puts forward several useful properties; Sect. 4 analyzes the single-machine \(C_{\max }\) and TCT problems; Sect. 5 solves the TWCT problem in the general case. Conclusions are given in the last section.

2 Problem description

Consider n jobs to be processed on a single machine; all jobs are available at time 0, and the machine can process at most one job at a time. Let \(p_i\) be the basic processing time (i.e., the processing time without any learning effects) of job \(J_i\in \{J_1,J_2,\ldots ,J_n\}\). Toksari et al. (2022) studied the following model: the actual processing time of job \(J_i\) scheduled at position j is

$$\begin{aligned} p_{i,[j]}=p_i\left( 1+\sum _{s=1}^{j-1}\ln p_{[s]}\right) ^\vartheta , (i=1,\ldots ,n), \end{aligned}$$

where \(\vartheta \le 0\) is the learning index. Wang et al. (2023) studied the following model: the actual processing time of job \(J_i\) scheduled at position j is

$$\begin{aligned} p_{i,[j]}=p_i\max \left\{ \left( 1+\sum _{s=1}^{j-1} p_{[s]} \right) ^\vartheta ,\delta \right\} ,(i=1,\ldots ,n). \end{aligned}$$

This paper generalizes the model of Toksari et al. (2022) and Wang et al. (2023), i.e., the logarithm truncation time function with position-dependent weight is:

$$\begin{aligned} p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}} +\beta ,\delta \right\} , \end{aligned}$$
(1)

where \(0<a\le 1\), \(\alpha \ge 0\), \(\beta \ge 0\), \(\alpha +\beta =1\), \(\delta \) is a truncation parameter (\(0<\delta \le 1\)), and \(w_s\ge 1\) is the position-dependent weight (\(s=1,2,\ldots ,n\)). As in Toksari et al. (2022), the processing of each job is followed by the general \(Q_{psddt}\) delivery time:

$$\begin{aligned} Q_{i,[j]}=\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}\right) ^b, \end{aligned}$$
(2)

where \(\gamma >0\) is a constant and \(b\ge 1\) is the index of delivery times. The objective is to minimize \(C_{\max }\), TCT, and TWCT. As in Brucker (2007) and Pinedo (2015), the corresponding problem can be expressed as:

$$\begin{aligned} 1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s \ln p_{[s]}}+\beta ,\delta \right\} |\varphi , \end{aligned}$$

where \(\varphi \in \{C_{\max },TCT,TWCT\}\), and \(GQ_{psddt}\) denotes the general delivery time (2).
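The model (1)-(2) can be made concrete with a short computational sketch. The following Python functions (a minimal illustration of our own; the function and parameter names are not from the literature) compute the delivered completion times and the three objective values for a job sequence given in processing order:

```python
import math

def delivered_times(p, w, alpha=0.5, beta=0.5, a=0.8, delta=0.5, gamma=0.5, b=1.0):
    """Delivered completion times: machine completion plus delivery time.

    p : basic processing times in sequence order (p_[1], ..., p_[n])
    w : position-dependent weights w_s >= 1
    Actual processing times follow Eq. (1); delivery times follow Eq. (2).
    """
    C, t = [], 0.0
    log_sum = 0.0   # sum_{s=1}^{j-1} w_s * ln p_[s]
    p_sum = 0.0     # sum_{s=1}^{j-1} p_[s]
    for j, pj in enumerate(p):
        t += pj * max(alpha * a ** log_sum + beta, delta)   # Eq. (1)
        C.append(t + gamma * (1.0 + p_sum) ** b)            # add Eq. (2)
        log_sum += w[j] * math.log(pj)
        p_sum += pj
    return C

def objectives(p, v, w, **prm):
    """C_max, TCT and TWCT of the sequence (v: job weights in sequence order)."""
    C = delivered_times(p, w, **prm)
    return max(C), sum(C), sum(vj * cj for vj, cj in zip(v, C))
```

Note two sanity checks implied by the model: for the first job the learning factor is \(\alpha a^0+\beta =1\), so its actual processing time equals its basic time; and with \(\delta =1\) the truncation is always active (since \(\alpha a^x+\beta \le 1\) for \(x\ge 0\)), so the model reduces to basic processing times plus delivery times.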

3 Basic properties

Lemma 1

\(\lambda (\alpha a^x+Y^{-1}\beta )-(\alpha a^{w_j\ln \lambda +x}+Y^{-1}\beta )\le 0\) if \(0<a\le 1\), \(\lambda \ge 1\), \(Y\ge 0\), \(w_j\ge 1\) and \(x\ge 1\).

Proof

Let \(F(\lambda )=\lambda (\alpha a^x+Y^{-1}\beta )-(\alpha a^{w_j\ln \lambda +x}+Y^{-1}\beta )\), and taking the first and second partial derivative with respect to \(\lambda \), it follows that

$$\begin{aligned}{} & {} F'(\lambda )=(\alpha a^x+Y^{-1}\beta )-{\frac{\alpha w_j\ln a \cdot a^{w_j\ln \lambda +x}}{\lambda }},\\{} & {} F''(\lambda )=-\frac{\alpha w_j^2(\ln a)^2 \cdot a^{w_j\ln \lambda +x}-\alpha w_j\ln a \cdot a^{w_j\ln \lambda +x}}{\lambda ^2}. \end{aligned}$$

According to the given conditions \(0<a\le 1\), \(\lambda \ge 1\), \(w_j\ge 1\) and \(x\ge 1\), we have \(F''(\lambda )\le 0\); that is, \(F'(\lambda )\) is a decreasing function with respect to \(\lambda \ge 1\), and \(F'(\lambda )\le F'(1)\le 0\) can be obtained. Hence, \(F(\lambda )\) is a decreasing function for \(0<a\le 1\), \(\lambda \ge 1\), \(w_j\ge 1\) and \(x\ge 1\), that is

$$\begin{aligned} F(\lambda )\le F(1)=(\alpha a^x+Y^{-1}\beta ) -(\alpha a^{x}+Y^{-1}\beta )=0. \end{aligned}$$

\(\square \)

Lemma 2

$$\begin{aligned} {\left( \begin{array}{l} (\lambda _1-\lambda _2\lambda )\left( e^x\right) ^{\frac{1}{w_j}} (\alpha +Y^{-1}\beta )+(\lambda _1-\lambda _2)\gamma \varphi ^bY^{-1} +\lambda _2\gamma Y^{-1}\left( \varphi +\left( e^x \right) ^{\frac{1}{w_j}}\right) ^b\\ -\lambda _1\gamma Y^{-1} \left( \varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}}\right) ^b +\lambda _2\lambda \left( e^x\right) ^{\frac{1}{w_j}}(\alpha a^{x} +Y^{-1}\beta )-\lambda _1\left( e^x\right) ^{\frac{1}{w_j}}(\alpha a^{w_j\ln \lambda +x}+Y^{-1}\beta ) \end{array}\right) } \le 0 \end{aligned}$$

if \(0<a\le 1\), \(b\ge 1\), \(\lambda \ge 1\), \(0\le \lambda _1\le 1,0\le \lambda _2 \le 1\), \(Y\ge 0\), \(w_j\ge 1\) and \(x\ge 1\).

Proof

The proof is similar to Lemma 1. Let

$$\begin{aligned} F=\left( \begin{array}{l} (\lambda _1-\lambda _2\lambda )\left( e^x\right) ^{\frac{1}{w_j}} (\alpha +Y^{-1}\beta )+(\lambda _1-\lambda _2)\gamma \varphi ^b Y^{-1}+\lambda _2\gamma Y^{-1}\left( \varphi +\left( e^x\right) ^{\frac{1}{w_j}}\right) ^b\\ -\lambda _1\gamma Y^{-1} \left( \varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}}\right) ^b +\lambda _2\lambda \left( e^x\right) ^{\frac{1}{w_j}} (\alpha a^{x}+Y^{-1}\beta )-\lambda _1 \left( e^x\right) ^{\frac{1}{w_j}}(\alpha a^{w_j \ln \lambda +x}+Y^{-1}\beta ) \end{array}\right) , \end{aligned}$$

and take the first derivative with respect to \(\lambda \):

$$\begin{aligned} F'(\lambda )={\left( \begin{array}{l}-\lambda _2 \left( e^x\right) ^{\frac{1}{w_j}} (\alpha +Y^{-1}\beta )-\lambda _1 b\gamma Y^{-1} \left( e^x\right) ^{\frac{1}{w_j}}(\varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}}) ^{b-1}\\ +\lambda _2\left( e^x\right) ^{\frac{1}{w_j}} (\alpha a^x+Y^{-1}\beta )-\frac{\lambda _1 \left( e^x\right) ^{\frac{1}{w_j}}\alpha w_j \ln a \cdot a^{w_j \ln \lambda +x}}{\lambda } \end{array}\right) } \end{aligned}$$

Continue to take the second derivative with respect to \(\lambda \),

$$\begin{aligned} F''(\lambda )={\left( \begin{array}{l} -\lambda _1\gamma Y^{-1}b(b-1)\left( e^x\right) ^{\frac{2}{w_j}} (\varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}})^{b-2}\\ -\frac{\lambda _1\left( e^x\right) ^{\frac{1}{w_j}}\alpha w_j^2(\ln a)^2\cdot a^{w_j\ln \lambda +x} -\lambda _1\left( e^x\right) ^{\frac{1}{w_j}}\alpha w_j\ln a \cdot a^{w_j\ln \lambda +x}}{\lambda ^2}\end{array}\right) }. \end{aligned}$$

Obviously, when \(0<a\le 1\), \(b\ge 1\), \(\lambda \ge 1\), \(0\le \lambda _1,\lambda _2 \le 1\), \(w_j\ge 1\) and \(x\ge 1\) are satisfied, \(F''(\lambda )\le 0\), that is, \(F'(\lambda )\) is a decreasing function of \(\lambda \ge 1\), and \(F'(\lambda )\le F'(1)\le 0\).

Therefore, \(F(\lambda )\) is decreasing for \(0<a\le 1\), \(b\ge 1\), \(\lambda \ge 1\), \(0\le \lambda _1,\lambda _2 \le 1\), \(w_j\ge 1\) and \(x\ge 1\), that is \(F(\lambda )\le F(1)\le 0\). \(\square \)

4 Main results

Theorem 4.1

For the problem \(1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |C_{\max }\), the optimal sequence can be obtained according to the SPT (smallest processing time first) rule.

Proof

Assume that an optimal sequence is \(\sigma =[\sigma _1,J_h,J_k,\sigma _2]\), where \(J_h\) and \(J_k\) satisfy \(p_h \le p_k\), \(J_h\) is at the jth position of the sequence, and \(J_k\) immediately follows \(J_h\). The sequence \(\sigma '=[\sigma _1,J_k,J_h,\sigma _2]\) is obtained by exchanging \(J_h\) and \(J_k\).

Let X denote the machine completion time just before \(J_h\) starts; then we have

$$\begin{aligned} C_{h,[j]}(\sigma )= & {} X+p_h\max \left\{ \alpha a^{\sum _{s=1}^{j-1} w_s\ln p_{[s]}}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}\right) ^b, \end{aligned}$$
(3)

and

$$\begin{aligned} C_{k,[j+1]}(\sigma )= & {} X+p_h\max \left\{ \alpha a^{\sum _{s=1}^{j-1} w_s\ln p_{[s]}}+\beta ,\delta \right\} \nonumber \\{} & {} +p_k\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]} +w_j\ln p_h}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_h\right) ^b.\nonumber \\ \end{aligned}$$
(4)

For sequence \(\sigma '=[\sigma _1,J_k,J_h,\sigma _2]\), we have

$$\begin{aligned} C_{k,[j]}(\sigma ')= & {} X+p_k\max \left\{ \alpha a^{\sum _{s=1}^{j-1} w_s\ln p_{[s]}}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}\right) ^b, \end{aligned}$$
(5)

and

$$\begin{aligned} C_{h,[j+1]}(\sigma ')= & {} X+p_k\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} \nonumber \\{} & {} +p_h\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k} +\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]} +p_k\right) ^b.\nonumber \\ \end{aligned}$$
(6)

According to the truncated learning effects, the theorem can be proved by considering the following two cases:

Case 1. When \(\alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k}+\beta \ge \delta \), the difference between formulas (4) and (6) is

$$\begin{aligned} C_{k,[j+1]}(\sigma )-C_{h,[j+1]}(\sigma ')= & {} (p_h-p_k) \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta \right) \\{} & {} +p_k \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_h} +\beta \right) \\{} & {} -p_h \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k} +\beta \right) \\{} & {} +\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_h\right) ^b-\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_k\right) ^b. \end{aligned}$$

From \(p_h\le p_k\), and \((p_h-p_k)\left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta \right) \le 0\), we have

$$\begin{aligned} \gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_h\right) ^b-\gamma \left( 1+\sum _{s=1}^ {j-1}p_{[s]}+p_k\right) ^b\le 0. \end{aligned}$$

Let \(\lambda =p_k/p_h\), \(x=w_j\ln p_h\) and \(Y=a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}\), then \(\ln \lambda =\ln p_k-\ln p_h\) and \(e^x=(p_h)^{w_j}\). According to Lemma 1, we have

$$\begin{aligned}{} & {} \left( p_k \left( \alpha a^{\sum _{s=1}^{j-1}w_s \ln p_{[s]}+w_j\ln p_h}+\beta \right) -p_h \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k}+\beta \right) \right) /(p_hY)\\{} & {} \quad =\lambda (\alpha a^x+Y^{-1}\beta )-(\alpha a^{w_j \ln \lambda +x}+Y^{-1}\beta )\\{} & {} \quad \le 0. \end{aligned}$$

To sum up, when \(\alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k}+\beta \ge \delta \), we have \(C_{k,[j+1]}(\sigma )-C_{h,[j+1]}(\sigma ')\le 0\).

Case 2. When \(\alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k}+\beta < \delta \), let \(T=\sum _{s=1}^{j-1}w_s \ln p_{[s]}\), from (4) and (6), we have

$$\begin{aligned}{} & {} \left( C_{k,[j+1]}(\sigma )-C_{h,[j+1]}(\sigma ')\right) /p_h \nonumber \\{} & {} \quad =\max \{\alpha a^T+\beta ,\delta \}-\delta +\lambda \left[ \max \{\alpha a^{T+x}+\beta ,\delta \} -\max \{\alpha a^T+\beta ,\delta \}\right] . \end{aligned}$$
(7)

Taking the first derivative with respect to \(\lambda \) shows that formula (7) is a decreasing function of \(\lambda \ge 1\). In this case, we also have \(C_{k,[j+1]}(\sigma )-C_{h,[j+1]}(\sigma ')\le 0\).

From the above cases, the optimal sequence of \(1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |C_{\max }\) can be obtained by the SPT rule. \(\square \)

Theorem 4.2

For the problem \(1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |TCT\), the optimal sequence can be obtained by the SPT rule.

Proof

From Theorem 4.1, if \(p_h \le p_k\), we have \(C_{h,[j]}(\sigma )\le C_{k,[j]}(\sigma ')\) and \(C_{k,[j+1]}(\sigma )\le C_{h,[j+1]}(\sigma ')\), then \(C_{h,[j]}(\sigma )+C_{k,[j+1]}(\sigma )\le C_{k,[j]}(\sigma ')+C_{h,[j+1]}(\sigma ')\). That is, the optimal sequence of the problem \(1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |TCT\) can be obtained by the SPT rule. \(\square \)
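As a quick numerical illustration of Theorems 4.1 and 4.2, the sketch below (our own code; the instance and parameter values are chosen arbitrarily, not taken from the paper) evaluates the delivered completion times of an SPT sequence and of the reverse (LPT) sequence; on this instance, SPT yields both the smaller makespan and the smaller TCT:

```python
import math

def delivered_times(p, w, alpha, beta, a, delta, gamma, b):
    # Delivered completion times under Eqs. (1)-(2); p in sequence order.
    C, t, log_sum, p_sum = [], 0.0, 0.0, 0.0
    for j, pj in enumerate(p):
        t += pj * max(alpha * a ** log_sum + beta, delta)
        C.append(t + gamma * (1.0 + p_sum) ** b)
        log_sum += w[j] * math.log(pj)
        p_sum += pj
    return C

prm = dict(alpha=0.5, beta=0.5, a=0.8, delta=0.5, gamma=0.5, b=1.0)
w = [1, 1, 1]
spt = [3.0, 5.0, 9.0]           # non-decreasing basic processing times
lpt = list(reversed(spt))

C_spt = delivered_times(spt, w, **prm)
C_lpt = delivered_times(lpt, w, **prm)
print(max(C_spt), max(C_lpt))   # SPT gives the smaller C_max here
print(sum(C_spt), sum(C_lpt))   # ... and the smaller TCT
```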

Theorem 4.3

For the problem \(1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |TWCT\), if the processing times and weights are agreeable, i.e., \(p_h\le p_k\) implies \(v_h\ge v_k\) for all jobs \(J_h\) and \(J_k\), the optimal sequence can be obtained by sequencing the jobs in non-decreasing order of \(p_h/v_h\) (the WSPT rule).

Proof

Similar to the above theorems, for sequence \(\sigma =[\sigma _1,J_h,J_k,\sigma _2]\), we have

$$\begin{aligned}{} & {} v_hC_{h,[j]}(\sigma )+v_kC_{k,[j+1]}(\sigma )\\{} & {} \quad = v_h\left( X+p_h\max \left\{ \alpha a^{\sum _{s=1}^{j-1} w_s\ln p_{[s]}}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{j-1} p_{[s]}\right) ^b\right) \\{} & {} \qquad +v_k \left( \begin{array}{l} X+p_h\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} \\ +p_k\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j \ln p_h}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_h\right) ^b \end{array}\right) . \end{aligned}$$

For the sequence \(\sigma '=[\sigma _1,J_k,J_h,\sigma _2]\), we have

$$\begin{aligned}{} & {} v_kC_{k,[j]}(\sigma ')+v_hC_{h,[j+1]}(\sigma ')\\{} & {} \quad = v_k\left( X+p_k\max \left\{ \alpha a^{\sum _{s=1}^{j-1} w_s\ln p_{[s]}}+\beta ,\delta \right\} +\gamma (1+\sum _{s=1}^{j-1} p_{[s]})^b\right) \\{} & {} \qquad +v_h\left( \begin{array}{l} X+p_k\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} \\ +p_h\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j \ln p_k}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_k\right) ^b \end{array}\right) . \end{aligned}$$

The proof is similar to Theorem 4.1 and can be divided into the following two cases:

Case 1. When \(\alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k}+\beta \ge \delta \), we have

$$\begin{aligned}{} & {} TWCT(\sigma )-TWCT(\sigma ')\\{} & {} \quad = (v_h p_h-v_k p_k)\left( \alpha a^{\sum _{s=1}^{j-1} w_s\ln p_{[s]}}+\beta \right) +(v_k p_h-v_h p_k) \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta \right) \\{} & {} \qquad +(v_h-v_k)\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}\right) ^b +v_k\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_h\right) ^b-v_h \gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_k\right) ^b\\{} & {} \qquad +v_k p_k\left( \alpha a^{\sum _{s=1}^{j-1} w_s\ln p_{[s]}+w_j\ln p_h}+\beta \right) -v_h p_h \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]} +w_j\ln p_k}+\beta \right) . \end{aligned}$$

Let \(\lambda =p_k/p_h\), \(\lambda _1=\frac{v_h}{v_h+v_k}\), \(\lambda _2=\frac{v_k}{v_h+v_k}\), \(x=w_j\ln p_h\), \(Y=a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}\) and \(\varphi =(1+\sum _{s=1}^{j-1}p_{[s]})\), then according to Lemma 2, we have

$$\begin{aligned}{} & {} \frac{{\left( \begin{array}{l} (v_h p_h-v_k p_k) \left( \alpha a^{\sum _{s=1}^{j-1}w_s \ln p_{[s]}}+\beta \right) +(v_h-v_k)\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}\right) ^b\\ +v_k \gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_h\right) ^b -v_h\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_k\right) ^b\\ +v_k p_k \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]} +w_j\ln p_h}+\beta \right) -v_h p_h \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k}+\beta \right) \end{array}\right) }}{(v_h+v_k)Y}\\{} & {} \quad = {\left( \begin{array}{l} (\lambda _1-\lambda _2\lambda )\left( e^x\right) ^{\frac{1}{w_j}} (\alpha +Y^{-1}\beta )+(\lambda _1-\lambda _2)\gamma \varphi ^bY^{-1} +\lambda _2\gamma Y^{-1}\left( \varphi +\left( e^x \right) ^{\frac{1}{w_j}}\right) ^b\\ -\lambda _1\gamma Y^{-1} \left( \varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}}\right) ^b +\lambda _2\lambda \left( e^x\right) ^{\frac{1}{w_j}}(\alpha a^{x} +Y^{-1}\beta )-\lambda _1\left( e^x\right) ^{\frac{1}{w_j}}(\alpha a^{w_j\ln \lambda +x}+Y^{-1}\beta ) \end{array}\right) }\\{} & {} \quad \le 0. \end{aligned}$$

According to the WSPT rule (i.e., \(p_h/v_h \le p_k/v_k\)), we have

$$\begin{aligned} (v_k p_h-v_h p_k)\cdot \left( \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta \right) \le 0, \end{aligned}$$

that is, \(v_hC_{h,[j]}(\sigma )+v_kC_{k,[j+1]}(\sigma )\le v_kC_{k,[j]}(\sigma ')+v_hC_{h,[j+1]}(\sigma ')\).

Case 2. When \(\alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}+w_j\ln p_k}+\beta < \delta \), similarly, let \(T=\sum _{s=1}^{j-1}w_s\ln p_{[s]}\), we have

$$\begin{aligned}{} & {} TWCT(\sigma )-TWCT(\sigma ')\\{} & {} \quad = (v_h p_h-v_k p_k)\max \{\alpha a^{T}+\beta ,\delta \} +(v_k p_h-v_h p_k)\max \{\alpha a^{T}+\beta ,\delta \} \\{} & {} \qquad +(v_h-v_k)\gamma \varphi ^b +v_k \gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_h\right) ^b -v_h\gamma \left( 1+\sum _{s=1}^{j-1}p_{[s]}+p_k\right) ^b \\{} & {} \qquad +v_k p_k\max \{\alpha a^{T+x}+\beta ,\delta \} -v_h p_h\delta . \end{aligned}$$

According to the above formula, we have

$$\begin{aligned}{} & {} \frac{{\left( \begin{array}{l}(v_h p_h-v_k p_k) \max \{\alpha a^{T}+\beta ,\delta \}+v_k p_k\max \{\alpha a^{T+x}+\beta ,\delta \} \\ -v_h p_h \delta - v_h\gamma \left( \varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}}\right) ^b \end{array}\right) }}{(v_h+v_k)p_h}\\{} & {} \quad = \lambda _1(1-\delta )-\lambda _1 \left( e^x\right) ^{\frac{1}{w_j}}\gamma \left( \varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}}\right) ^b. \end{aligned}$$

Similarly, by taking the derivative of this formula with respect to \(\lambda \), it can be obtained that this formula is a decreasing function with respect to \(\lambda \ge 1\), and

$$\begin{aligned} \lambda _1(1-\delta )-\lambda _1\left( e^x\right) ^{\frac{1}{w_j}} \gamma \left( \varphi +\lambda \left( e^x\right) ^{\frac{1}{w_j}}\right) ^b \le 0 \end{aligned}$$

can be easily obtained. According to the WSPT rule, if \(p_h/v_h \le p_k/v_k\), we have \((v_k p_h-v_h p_k)\max \{\alpha a^T+\beta ,\delta \}\le 0\), that is, \(v_hC_{h,[j]}(\sigma )+v_kC_{k,[j+1]}(\sigma )\le v_kC_{k,[j]}(\sigma ')+v_hC_{h,[j+1]}(\sigma ')\).

In summary, the theorem holds. \(\square \)

5 General case of TWCT

For the general case, the complexity of the \(1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |TWCT\) problem is still open. We propose the following heuristic, tabu search and branch-and-bound algorithms to solve it.

5.1 Heuristic algorithm

According to Theorem 4.3 and Nawaz et al. (1983), the following heuristic algorithm can be proposed.

Algorithm 1 (H-A)

Part 1.

  • Step 1. Sequence the jobs by the SPT rule.

  • Step 2. Sequence the jobs in non-increasing order of \(v_h\).

  • Step 3. Sequence the jobs by the WSPT rule.

  • Step 4. Choose the better solution from Steps 1–3.

Part 2.

  • Step 1. Let \(\sigma _0\) be the sequence obtained in Part 1.

  • Step 2. Set \(t=2\), select the first two jobs in \(\sigma _0\), and select the partial sequence with the best objective function value among the two possible sequences.

  • Step 3. Set \(t=t+1\), take the tth job of \(\sigma _0\), insert it into each of the t possible positions of the current partial sequence, and keep the sequence among these t candidates that minimizes TWCT. Next, exchange the jobs at positions j and i, where \(1\le j <t\), \(j <i\le t\); among the resulting \(t(t-1)/2\) sequences, keep the one that minimizes TWCT.

  • Step 4. If \(t=n\), stop the algorithm; otherwise, go back to Step 3.

Part 1 requires \(O(n\log n)\) time for sorting, and Part 2 considers \(O(n^2)\) candidate sequences (see Nawaz et al. 1983); hence Algorithm 1 performs \(O(n^2)\) sequence evaluations in total.
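A compact implementation of Algorithm 1 might look as follows (a sketch in Python; the helper `twct`, the parameter names and the tie-breaking choices are ours, not from the paper). Part 1 keeps the best of the three candidate orders, and Part 2 performs the insertion and pairwise-swap steps; by construction the final answer is never worse than the Part 1 seed.

```python
import math

def twct(seq, p, v, w, alpha, beta, a, delta, gamma, b):
    """Total weighted (delivered) completion time of the (partial) sequence seq."""
    total, t, log_sum, p_sum = 0.0, 0.0, 0.0, 0.0
    for j, i in enumerate(seq):
        t += p[i] * max(alpha * a ** log_sum + beta, delta)     # Eq. (1)
        total += v[i] * (t + gamma * (1.0 + p_sum) ** b)        # plus Eq. (2)
        log_sum += w[j] * math.log(p[i])
        p_sum += p[i]
    return total

def heuristic_HA(p, v, w, **prm):
    n = len(p)
    cost = lambda s: twct(s, p, v, w, **prm)
    # Part 1: SPT, non-increasing weights, WSPT -- keep the best (Steps 1-4).
    cands = [sorted(range(n), key=lambda i: p[i]),
             sorted(range(n), key=lambda i: -v[i]),
             sorted(range(n), key=lambda i: p[i] / v[i])]
    sigma0 = min(cands, key=cost)
    # Part 2: insert the t-th job of sigma0 at every position, then try all
    # pairwise swaps inside the current partial sequence (Steps 2-4).
    seq = [sigma0[0]]
    for t in range(1, n):
        job = sigma0[t]
        seq = min((seq[:k] + [job] + seq[k:] for k in range(t + 1)), key=cost)
        for j in range(t):
            for i in range(j + 1, t + 1):
                s2 = seq[:]
                s2[j], s2[i] = s2[i], s2[j]
                if cost(s2) < cost(seq):
                    seq = s2
    return min([seq, sigma0], key=cost)   # never worse than the Part 1 seed
```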

5.2 Tabu search algorithm

The problem \(1|GQ_{psddt},p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |TWCT\) can also be solved by a tabu search algorithm. The initial sequence is obtained by sorting the jobs in non-decreasing order of \(p_h/v_h\), and the maximum number of iterations is set to 100n. The steps of the tabu search algorithm are as follows:

Algorithm 2 (T-S)

  • Step 1. Set the tabu list to be empty and the iteration counter to 0.

  • Step 2. Obtain the initial sequence according to the WSPT rule, calculate its value of TWCT, and let the current sequence be the incumbent optimal sequence \(\sigma ^*\).

  • Step 3. Search the neighborhood of the current sequence and determine whether it contains a sequence \(\sigma \) that has the smallest objective function value and is not in the tabu list.

  • Step 4. If \(TWCT(\sigma )<TWCT(\sigma ^*)\), then set \(\sigma ^*=\sigma \), and update the tabu list and the iteration counter.

  • Step 5. If no sequence in the neighborhood lies outside the tabu list, or the given maximum number of iterations is reached, output the final sequence. Otherwise, update the tabu list and return to Step 3.
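The steps above can be sketched as follows (our own Python; the pairwise-swap neighborhood and the tabu-list length are implementation choices that the description leaves open):

```python
import math

def twct(seq, p, v, w, alpha, beta, a, delta, gamma, b):
    # Total weighted delivered completion time under Eqs. (1)-(2).
    total, t, log_sum, p_sum = 0.0, 0.0, 0.0, 0.0
    for j, i in enumerate(seq):
        t += p[i] * max(alpha * a ** log_sum + beta, delta)
        total += v[i] * (t + gamma * (1.0 + p_sum) ** b)
        log_sum += w[j] * math.log(p[i])
        p_sum += p[i]
    return total

def tabu_search(p, v, w, tabu_len=7, **prm):
    n = len(p)
    cost = lambda s: twct(s, p, v, w, **prm)
    cur = sorted(range(n), key=lambda i: p[i] / v[i])   # WSPT start (Step 2)
    best, best_val = cur[:], cost(cur)
    tabu = []                                           # Step 1
    for _ in range(100 * n):                            # iteration limit
        neigh = []
        for j in range(n - 1):                          # swap neighborhood (Step 3)
            for i in range(j + 1, n):
                s = cur[:]
                s[j], s[i] = s[i], s[j]
                if tuple(s) not in tabu:
                    neigh.append(s)
        if not neigh:                                   # Step 5
            break
        cur = min(neigh, key=cost)
        tabu = (tabu + [tuple(cur)])[-tabu_len:]        # update the tabu list
        if cost(cur) < best_val:                        # Step 4
            best, best_val = cur[:], cost(cur)
    return best, best_val
```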

5.3 Branch-and-bound algorithm

Let \(\sigma =[\sigma _v,\sigma _u]\) be a job sequence, where \(\sigma _v\) is the scheduled part and \(\sigma _u\) is the unscheduled part. Suppose that there are l jobs in \(\sigma _v\); then the completion time of the job placed at the \((l+1)\)th position is:

$$\begin{aligned} C_{[l+1]}=C_{[l]}+p_{[l+1]}\max \left\{ \alpha a^{\sum _{s=1}^{l} w_s\ln p_{[s]}}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{l}p_{[s]} \right) ^b, \end{aligned}$$

and for the \((l+2)\)th job, the completion time is:

$$\begin{aligned} C_{[l+2]}= & {} C_{[l]}+p_{[l+1]}\max \left\{ \alpha a^{\sum _{s=1}^{l}w_s\ln p_{[s]}}+\beta ,\delta \right\} \\{} & {} +p_{[l+2]}\max \left\{ \alpha a^{\sum _{s=1}^{l}w_s \ln p_{[s]}+w_{l+1}\ln p_{[l+1]}}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{l}p_{[s]}+p_{[l+1]}\right) ^b. \end{aligned}$$

By analogy, the formula for the completion time of the nth job can be obtained as follows:

$$\begin{aligned} C_{[n]}= & {} C_{[l]}+p_{[l+1]}\max \left\{ \alpha a^{\sum _{s=1}^{l}w_s\ln p_{[s]}}+\beta ,\delta \right\} +\cdots \\{} & {} +p_{[n]}\max \left\{ \alpha a^{\sum _{s=1}^{l}w_s\ln p_{[s]} +\sum _{s=l+1}^{n-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} +\gamma \left( 1+\sum _{s=1}^{l}p_{[s]}+\sum _{s=l+1}^{n-1}p_{[s]}\right) ^b, \end{aligned}$$

then the objective function can be expanded as follows:

$$\begin{aligned} TWCT= & {} \sum _{i=1}^lv_{i}C_{i}+\sum _{i=l+1}^nv_{[i]}C_{[i]}=\sum _{i=1}^lv_{i}C_{i}\nonumber \\{} & {} \quad +\sum _{i=l+1}^n\left[ \begin{array}{l} (v_{[i]}+\cdots +v_{[n]})p_{[i]}\max \left\{ \alpha a^{\sum _{s=1}^{l} w_s\ln p_{[s]}+\sum _{s=l+1}^{i-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} \\ +v_{[i]}\gamma \left( 1+\sum _{s=1}^{l}p_{[s]}+\sum _{s=l+1}^{i-1} p_{[s]}\right) ^b\end{array}\right] ,\nonumber \\ \end{aligned}$$
(8)

where \(\sum _{i=1}^lv_{i}C_{i}\) is known and the second term is the objective value of the unscheduled part. Let \(v_{\min }=\min \{v_{j}|j\in \sigma _u\}\); from Theorems 4.1 and 4.2, a lower bound of formula (8) can be obtained by:

$$\begin{aligned} LB_1= & {} \sum _{i=1}^lv_{i}C_{i}\nonumber \\{} & {} \quad +\sum _{i=l+1}^n\left( \begin{array}{l} (n-i+1)v_{\min }p_{(i)}\max \left\{ \alpha a^{\sum _{s=1}^{l}w_s\ln p_{s}+\sum _{s=l+1}^{i-1}w_s\ln p_{(s)}}+\beta ,\delta \right\} \\ +v_{\min }\gamma \left( 1+\sum _{s=1}^{l}p_{s}+\sum _{s=l+1}^{i-1} p_{(s)}\right) ^b\end{array}\right) ,\nonumber \\ \end{aligned}$$
(9)

where \(p_{(l+1)}\le p_{(l+2)}\le \cdots \le p_{(n)}\).

Let \(p_{\min }=\min \{p_{j}|j\in \sigma _u\}\), then the second lower bound can be obtained as follows:

$$\begin{aligned} LB_2= & {} \sum _{i=1}^lv_{i}C_{i}\nonumber \\{} & {} \quad +\sum _{i=l+1}^n\left( \begin{array}{l} (v_{(i)}+\cdots +v_{(n)})p_{\min }\max \left\{ \alpha a^{\sum _{s=1}^{l}w_s\ln p_{s}+\sum _{s=l+1}^{i-1}w_s\ln p_{\min }}+\beta ,\delta \right\} \\ +v_{(i)}\gamma \left( 1+\sum _{s=1}^{l}p_{s}+(i-l-1)p_{\min }\right) ^b \end{array}\right) ,\nonumber \\ \end{aligned}$$
(10)

where \(v_{(l+1)}\ge v_{(l+2)}\ge \cdots \ge v_{(n)}\).

When \(p_{<l+1>}\le p_{<l+2>}\le \cdots \le p_{<n>}\) and \(v_{(l+1)}\ge v_{(l+2)}\ge \cdots \ge v_{(n)}\) (where \(p_{<s>}\) and \(v_{(s)}\) do not necessarily correspond to the same job), from Theorem 4.3, the following lower bound can be obtained:

$$\begin{aligned} LB_3= & {} \sum _{i=1}^lv_{i}C_{i}\nonumber \\{} & {} \quad +\sum _{i=l+1}^n\left( \begin{array}{l} (v_{(i)}+\cdots +v_{(n)})p_{<i>}\max \left\{ \alpha a^{\sum _{s=1}^{l}w_s\ln p_{s}+\sum _{s=l+1}^{i-1}w_s\ln p_{<s>}}+\beta ,\delta \right\} \\ +(v_{(i)}+\cdots +v_{(n)})\gamma \left( 1+\sum _{s=1}^{l}p_{s} +\sum _{s=l+1}^{i-1} p_{<s>}\right) ^b \end{array}\right) .\nonumber \\ \end{aligned}$$
(11)

Now, in order to get a tighter lower bound, the maximum value of (9), (10) and (11) can be taken as the lower bound, namely:

$$\begin{aligned} LB=\max \{LB_1,LB_2,LB_3\}. \end{aligned}$$
(12)

The following branch-and-bound algorithm is proposed to specifically solve the problem \(1|GQ_{psddt}, p_{i,[j]}=p_i\max \left\{ \alpha a^{\sum _{s=1}^{j-1}w_s\ln p_{[s]}}+\beta ,\delta \right\} |TWCT\).

Algorithm 3 (B-B)

  • Step 1. The sequence obtained by Algorithm 1 is taken as the initial sequence, and its objective function value is taken as the upper bound.

  • Step 2. Calculate the lower bound of each node according to (12). If the lower bound of a node is greater than or equal to the objective function value of the incumbent solution, delete the node and all nodes after it. Calculate the objective function value of each completed sequence; if it is smaller than that of the incumbent, the sequence replaces the incumbent, otherwise the sequence is deleted.

  • Step 3. Traverse all nodes.
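A minimal depth-first version of this scheme can be sketched as follows (our own Python). For brevity it prunes with a simpler admissible bound than (12): the weighted delivered completion cost of the scheduled prefix, which never exceeds the cost of any completion of that prefix because every remaining term of (8) is nonnegative.

```python
import math
from itertools import permutations

def twct(seq, p, v, w, alpha, beta, a, delta, gamma, b):
    # Total weighted delivered completion time under Eqs. (1)-(2).
    total, t, log_sum, p_sum = 0.0, 0.0, 0.0, 0.0
    for j, i in enumerate(seq):
        t += p[i] * max(alpha * a ** log_sum + beta, delta)
        total += v[i] * (t + gamma * (1.0 + p_sum) ** b)
        log_sum += w[j] * math.log(p[i])
        p_sum += p[i]
    return total

def branch_and_bound(p, v, w, **prm):
    n = len(p)
    cost = lambda s: twct(s, p, v, w, **prm)
    best = sorted(range(n), key=lambda i: p[i] / v[i])  # initial sequence (Step 1)
    ub = cost(best)

    def dfs(prefix, remaining):
        nonlocal best, ub
        if not remaining:
            if cost(prefix) < ub:
                best, ub = prefix[:], cost(prefix)
            return
        for job in sorted(remaining):
            child = prefix + [job]
            # prune: the cost of the scheduled part already reaches the bound
            if cost(child) >= ub:                       # Step 2
                continue
            remaining.remove(job)
            dfs(child, remaining)
            remaining.add(job)

    dfs([], set(range(n)))                              # Step 3: traverse
    return best, ub
```

Because the pruning bound is admissible, this sketch returns the exact optimum; the bounds (9)-(11) only prune more aggressively.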

5.4 An example for B-B algorithm

Now consider an example with \(n=5\), where the processing time and weight vectors are \((p_1,p_2,p_3,p_4,p_5)=(12,4,7,11,6)\), \((v_1,v_2,v_3,v_4,v_5)=(4,2,5,3,6)\) and \((w_1,w_2,w_3,w_4,w_5)=(1,1,1,1,1)\), with \(\alpha =\beta =0.5\), \(\gamma =\delta =0.5\), \(a=0.8\) and \(b=1\). The example is first solved by Algorithm 1 (H-A) in order to obtain the initial sequence required by the B-B algorithm.

Part 1.

  • Step 1. The sequence \(\sigma _1=[J_2,J_5,J_3,J_4,J_1]\) is obtained according to the SPT rule, and the objective function value is \(TWCT=462.0505\).

  • Step 2. According to the descending order of \(v_h\), the sequence \(\sigma _2=[J_5,J_3,J_1,J_4,J_2]\) is obtained. And the corresponding value of objective function is \(TWCT=442.7471\).

  • Step 3. The sequence \(\sigma _3=[J_5,J_3,J_2,J_1,J_4]\) is obtained by sorting in non-decreasing order of \(p_h/v_h\), and the objective function value is \(TWCT=418.0759\).

  • Step 4. By comparison, the sequence corresponding to the value of the minimum objective function is \(\sigma _3=[J_5,J_3,J_2,J_1,J_4]\), that is, \(\sigma _0=[J_5,J_3,J_2,J_1,J_4]\).

Part 2.

  • Step 1. Let \(\sigma _0=[J_5,J_3,J_2,J_1,J_4]\) be the initial sequence.

  • Step 2. Set \(t=2\); the two possible sequences are \([J_5,J_3,J_2,J_1,J_4]\) and \([J_3,J_5,J_2,J_1,J_4]\), and the corresponding objective function values are \(TWCT=418.0759\) and \(TWCT=436.3744\), respectively. Then the better partial sequence is \([J_5,J_3,J_2,J_1,J_4]\).

  • Step 3. Set \(t=3\); the three possible sequences are \([J_2,J_5,J_3,J_1,J_4]\), \([J_5,J_2,J_3,J_1,J_4]\), and \([J_5,J_3,J_2,J_1,J_4]\), and the corresponding objective function values are \(TWCT=435.7654\), \(TWCT=422.8465\) and \(TWCT=418.0759\), respectively. Then the better one is \([J_5,J_3,J_2,J_1,J_4]\).

Now, swap the jobs at positions 1 and 2; 1 and 3; and 2 and 3 to obtain the sequences \([J_3,J_5,J_2,J_1,J_4]\), \([J_2,J_3,J_5,J_1,J_4]\) and \([J_5,J_2,J_3,J_1,J_4]\), with corresponding objective function values \(TWCT=436.3744\), \(TWCT=452.1878\) and \(TWCT=422.8465\). After comparison, the better sequence is \([J_5,J_3,J_2,J_1,J_4]\).

  • Step 4. Carry out the above steps for the remaining jobs; the best sequence obtained is \([J_5,J_3,J_2,J_1,J_4]\), and the objective function value is \(TWCT=418.0759\).
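The pairwise-interchange improvement used above can be sketched as a simple repeated-swap pass (a minimal Python sketch; the function name is ours, and `objective` is a placeholder for the TWCT evaluation of the model):

```python
import itertools

def swap_improve(seq, objective):
    """Pairwise-interchange improvement: repeatedly try every swap of two
    positions, keeping any strict improvement, until no swap helps."""
    best = list(seq)
    best_val = objective(best)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]  # swap positions i and j
            val = objective(cand)
            if val < best_val:                   # strict improvement only
                best, best_val = cand, val
                improved = True
    return best, best_val
```

Because only strict improvements are accepted, the loop terminates: the objective value decreases at every accepted swap and the number of sequences is finite.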

Then the objective function value \(TWCT=418.0759\) corresponding to this sequence is used as the upper bound of Algorithm 3 (B-B). For the given parameters of the problem, Algorithm 3 (B-B) produces the search tree shown in Fig. 1. The numbers in Fig. 1 represent the lower-bound values, and node \(J_0\) is the root at level 0.

Fig. 1

Search tree of the Algorithm 3 (B-B) for the example (\(\times \) denotes pruning)

At level 1, i.e., \(j=1\), for job \(J_1\), the lower bounds \(LB_1\), \(LB_2\) and \(LB_3\) can be obtained as follows according to formulas (9), (10) and (11), respectively:

$$\begin{aligned} LB_1= & {} \sum _{i=1}^1v_{i}C_{i} +\sum _{i=2}^5\left( \begin{array}{l} (5-i+1)v_{\min }p_{(i)}\max \left\{ \alpha a^{\sum _{s=1}^{1}\ln p_{s}+\sum _{s=2}^{i-1}\ln p_{[s]}}+\beta ,\delta \right\} \\ +v_{\min }\gamma \left( 1+\sum _{s=1}^{1}p_{s}+\sum _{s=2}^{i-1} p_{(s)}\right) ^b\end{array}\right) \\= & {} 12.5\times 4+21.6487\times 2+27.9133\times 2+35.4025 \times 2+45.4094\times 2=310.7478. \\ LB_2= & {} \sum _{i=1}^1v_{i}C_{i}+\sum _{i=2}^5\left( \begin{array}{l} (v_{(i)}+\cdots +v_{(5)})p_{\min }\max \left\{ \alpha a^{\sum _{s=1}^{1}\ln p_{s}+\sum _{s=2}^{i-1}\ln p_{\min }}+\beta ,\delta \right\} \\ +v_{(i)}\gamma \left( 1+\sum _{s=1}^{1}p_{s}+(i-1-1)p_{\min }\right) ^b \end{array}\right) \\= & {} 12.5\times 4+21.6487\times 6+26.4918\times 5+31.1106 \times 3+35.5647\times 2=476.8124. \\ LB_3= & {} \sum _{i=1}^1v_{i}C_{i}+\sum _{i=2}^5\left( \begin{array}{l} (v_{(i)}+\cdots +v_{(5)})p_{<i>}\max \left\{ \alpha a^{\sum _{s=1}^{1}w_s\ln p_{s}+\sum _{s=2}^{i-1}w_s\ln p_{<s>}}+\beta ,\delta \right\} \\ +(v_{(i)}+\cdots +v_{(5)})\gamma \left( 1+\sum _{s=1}^{1}p_{s} +\sum _{s=2}^{i-1} p_{<s>}\right) ^b\end{array}\right) \\= & {} 12.5\times 4+21.6487\times 6+27.9133\times 5+35.4025 \times 3+45.4094\times 2=516.4850. \end{aligned}$$

Then the lower bound \(LB=516.4850\) is obtained according to \(LB=\max \{LB_1,LB_2,LB_3\}\). The calculation at every other node proceeds in the same way. From Fig. 1, the optimal sequence is \([J_5,J_3,J_2,J_1,J_4]\), and the optimal value of the objective function is \(TWCT=418.0759\).

5.5 Data simulation

The H-A algorithm (i.e., Algorithm 1), T-S algorithm (i.e., Algorithm 2), and B-B algorithm (i.e., Algorithm 3) were implemented in CLion 2020.x64 and run on an Intel(R) Core(TM) i5-10500 3.10 GHz CPU with 8 GB RAM under the Windows 10 operating system. Instances with \(n=15, 16, 17, 18, 19, 20\) jobs were tested. The parameters were set as follows: \(w_s=1 (s=1,\ldots ,n)\), \(\alpha =\beta =0.5\), \(\gamma =\delta =0.5\), \(a=0.8\) and \(b=1\); the processing times, delivery times and weights were drawn from \(p_j\in [1,100]\), \(q_j\in [1,100]\) and \(v_j\in [1,100]\), respectively.

For the B-B algorithm, the maximum CPU time was set to 3600 s per instance. The percentage relative error of the solution produced by the H-A algorithm with respect to the B-B algorithm is calculated as

$$\begin{aligned} \frac{TWCT(\text {H-A})-TWCT(\text {B-B})}{TWCT (\text {B-B})}\times 100\%, \end{aligned}$$

and the percentage relative error of the solution produced by the T-S algorithm with respect to the B-B algorithm is

$$\begin{aligned} \frac{TWCT(\text {T-S})-TWCT(\text {B-B})}{TWCT(\text {B-B})}\times 100\%, \end{aligned}$$

where \(TWCT(\text {H-A})\), \(TWCT(\text {B-B})\) and \(TWCT(\text {T-S})\) are the values of the objective function TWCT generated by Algorithm 1, Algorithm 3 and Algorithm 2, respectively. The experimental results are shown in Table 1. It can be seen from Table 1 that when \(n\ge 20\), the average CPU time of the \(\text {B-B}\) algorithm reaches the 3600 s limit, and the CPU time of the \(\text {T-S}\) algorithm is greater than that of the \(\text {H-A}\) algorithm. Table 1 also shows that the H-A algorithm performs better, with a maximum error below 0.9%.
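The error measure used in Table 1 is straightforward to compute (a minimal Python sketch; the function name is ours):

```python
def percentage_relative_error(heuristic_val, bb_val):
    """Percentage relative error of a heuristic objective value against
    the branch-and-bound value: (heur - bb) / bb * 100."""
    return (heuristic_val - bb_val) / bb_val * 100.0
```

A value of zero means the heuristic found the same objective value as the B-B algorithm, as happened for the worked example above.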

Table 1 Results of Algorithms

6 Conclusion

In this paper, single-machine scheduling problems with delivery times and truncated logarithm processing times have been studied. The objectives considered are the makespan, the total completion time, and the total weighted completion time, and the results show that the problems of minimizing the makespan and the total completion time can be solved in polynomial time. In addition, the general problem of minimizing the total weighted completion time is analyzed in detail, and a heuristic algorithm (i.e., Algorithm 1), a tabu search algorithm (i.e., Algorithm 2) and a branch-and-bound algorithm (i.e., Algorithm 3) were proposed. The computational study demonstrated that the H-A algorithm is more effective than the T-S algorithm. Future research can consider (1) scheduling with group technology (Liu and Wang 2023); (2) job-rejection scheduling with setup times (Mor and Shapira 2020; Liu et al. 2024); (3) scheduling with deterioration effects (Sun et al. 2011; Wang et al. 2011; Sun et al. 2023; Lv and Wang 2024; Lv et al. 2024; Wang et al. 2024c; Zhang et al. 2024); (4) scheduling with position-dependent weights (Wang et al. 2024a); and (5) flow shop or parallel-machine scheduling (Wang et al. 2011; Wang and Wang 2012; Wang et al. 2019; Sun et al. 2021; Kovalev et al. 2023; Wang et al. 2024b) with \(GQ_{psddt}\).