1 Introduction

In classical scheduling theory, job processing times are assumed to be fixed and constant. In practice, however, we often encounter settings in which job processing times change because of deterioration and/or learning effects. Extensive surveys of scheduling problems involving deteriorating jobs (time-dependent processing times) can be found in Alidaee and Womer (1999), Cheng et al. (2004), Yin et al. (2015) and Gawiejnowicz (2020a, 2020b). More recently, Pei et al. (2019) considered parallel-batching scheduling with step-deteriorating jobs and proposed solution algorithms for maximising the total net revenue. Qian and Han (2022) (resp. Miao et al. 2023; Mao et al. 2023; Lu et al. 2024) studied due-date (resp. due-window) assignment scheduling with delivery times and deterioration effects. Sun et al. (2023) considered single-machine maintenance activity scheduling with deterioration effects. Wang et al. (2023b) and Wang et al. (2024d) addressed single-machine resource allocation scheduling with deterioration effects. Zhang et al. (2023) and Li et al. (2024b) studied single-machine two-agent scheduling with resource allocation and deterioration effects. Lv and Wang (2024a) considered no-idle flow shop scheduling with deterioration effects and proved that, under a common due date, some special cases are polynomially solvable. Lv et al. (2024) studied single-machine scheduling with ready times and deterioration effects; for total weighted completion time minimization, they proposed a branch-and-bound algorithm and several heuristics. Mao et al. (2024) considered single-machine scheduling with delivery times and general deterioration effects, and proved that makespan minimization is polynomially solvable. Zhang et al. (2024) studied single-machine scheduling with deteriorating jobs and proved that, for the slack due window, the minmax-type problem is polynomially solvable.

In addition, extensive surveys of research on scheduling with learning effects were presented by Biskup (2008) and Azzouz et al. (2018). More recently, Qian and Zhan (2021) and Wang et al. (2023a) studied single-machine scheduling with delivery times and a truncated learning effect; under due-date assignment, they proved that some problems can be solved in polynomial time. Sun et al. (2021) (resp. Li et al. 2024c) considered flow shop scheduling with learning effects (resp. truncated learning effects) and proposed solution algorithms for total weighted completion time (resp. makespan) minimization. Wang et al. (2021), Wang and Wang (2023) and Qian et al. (2024) considered single-machine resource allocation scheduling with learning effects. Zhao (2022) (resp. Wang et al. 2024b) studied single-machine scheduling problems with truncated learning effects and setup (resp. delivery) times. Ma et al. (2023) considered single-machine online scheduling with a learning effect and designed an optimal online algorithm for the sum of completion times. Lv and Wang (2024b) addressed two-machine flow shop scheduling with release dates and truncated learning effects; for total completion time minimization, they proposed a branch-and-bound algorithm and several heuristics.

On the other hand, production efficiency can be increased by grouping parts and products with similar designs and/or production processes; this practice is known as group technology. Group technology, which groups similar products into families, helps increase the efficiency of operations and decrease the requirement of facilities (Potts and Van Wassenhove 1992; Kuo and Yang 2006; Wu and Lee 2008a; Wu et al. 2008b; Wang et al. 2008; Yan et al. 2008; Wang et al. 2009; Lee and Wu 2009). He and Sun (2015) considered single-machine group scheduling problems with deterioration and learning effects. They showed that makespan minimization, and some special cases of total completion time minimization, remain polynomially solvable. Huang (2019) and Liu et al. (2019) studied single-machine group scheduling with deteriorating jobs. For the primary (resp. secondary) criterion of minimizing the total weighted completion time (resp. maximum cost), they proved that the bicriterion problem is polynomially solvable. Under ready times and makespan minimization, Liu et al. (2019) proposed some solution algorithms. Miao (2019) studied parallel-batch makespan minimization scheduling with deteriorating jobs and proved that, under group technology, two single-machine problems are polynomially solvable. Xu et al. (2021) investigated single-machine group scheduling with deteriorating jobs and nonperiodical maintenance. He et al. (2023) dealt with single-machine group scheduling with due-date assignment and resource allocation; to solve the generalized case of minimizing the earliness-tardiness cost, they proposed a branch-and-bound algorithm and heuristic algorithms. Yan et al. (2023) and Qian (2023) considered single-machine group scheduling problems with learning effects and resource allocation. Li et al. (2024a) addressed single-machine group scheduling with convex resource allocation, learning effects and due-date assignment. To solve the generalized case of minimizing the weighted sum of earliness, tardiness, common due date, resource consumption, and makespan, they proposed heuristic and branch-and-bound algorithms. Liu and Wang (2023) and Wang and Liu (2024) studied single-machine group scheduling with resource allocation and proved that, under the due date, some special cases of the problem are polynomially solvable.

Recently, Sun and Ma (2020) considered single-machine group scheduling with learning effects and deteriorating jobs. They proved that the makespan problem, and the total completion time problem under some special cases, remain polynomially solvable. This paper extends the results of Sun and Ma (2020) by assuming that the group setup times are general linear functions of their starting times and that the jobs in the same group are subject to general truncated learning effects. The main contributions of this article are as follows: i) single-machine group scheduling with general linear deterioration and truncated learning effects is modeled and studied; ii) for makespan minimization, we prove that the problem is polynomially solvable; iii) for the general case of total completion time minimization, we propose some heuristics and a branch-and-bound algorithm. The remainder of the paper is organized as follows. In the next section, a precise formulation is given. The problem of minimizing the makespan (resp. total completion time) is addressed in Sect. 3 (resp. Sect. 4). The last section contains conclusions.

2 Problem formulation

In this section, we first define the notation that is used throughout this paper (see the symbols before the Introduction), followed by the description of the problem.

There are n jobs grouped into m groups, and these n jobs are to be processed on a single machine. A setup time is required whenever the machine switches from one group to another, and the machine is available for processing from time \(t_0\ge 0\). The machine can handle one job at a time and job preemption is not allowed. Let \(n_i\) be the number of jobs belonging to group \(G_i\); thus, \(n_1+n_2+\cdots +n_m=n\). Let \(J_{ij}\) denote the jth job in group \(G_i\), \(i=1,2,\ldots ,m;j=1,2,\ldots ,n_i\). As in Browne and Yechiali (1990), the actual setup time of \(G_i\) is:

$$\begin{aligned} s_i^{AST} = a_i+b_it_i,i = 1, 2, \ldots , m, \end{aligned}$$
(1)

where \(a_i\) (resp. \(b_i\)) is the normal setup time (resp. setup deterioration rate) of \(G_i\) and \(t_i\) denotes the start setup time of \(G_i\). As in Zhao (2022), if job \(J_{ij}\) in group \(G_i\) is scheduled in the rth position, then the actual job processing time of it is

$$\begin{aligned} p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} , i=1,2, \ldots , m; r,j=1,2,\ldots ,n_i,\nonumber \\ \end{aligned}$$
(2)

where \(p_{ij}\) is the normal (basic) processing time of job \(J_{ij}\), \(\frac{df_i(x)}{dx}\le 0,\frac{d^2f_i(x)}{dx^2}\ge 0\), \(\frac{d^2h_i(x)}{dx^2}\le 0\), \(g_i(r)\) is a non-increasing function with \(f_i(0)=1\), \(h_i(0)=0\), \(g_i(1)=1\), \(0\le \theta _i\le 1\) is a truncation parameter of group \(G_i\), and \(\sum _{l=1}^{0}h_i(p_{i[l]}):=0\). Note that Yan et al. (2008) and Yang and Yang (2010) considered the following models: \( p_{ijr}^{APT}=p_{ij}r^{\alpha _{i1}},i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) and \( p_{ijr}^{APT}=p_{ij}\left( 1+p_{i[1]}+p_{i[2]}+\cdots +p_{i[r-1]}\right) ^{\alpha _{i2}},i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) where \(\alpha _{i1}\le 0\) (resp. \(\alpha _{i2}\le 0\)) is a constant learning index of group \(G_i\). As in Wang and Xia (2005) and Cheng et al. (2009), further job processing time models are: \( p_{ijr}^{APT}=p_{ij}\left( 1+\ln p_{i[1]}+\ln p_{i[2]}+\ldots +\ln p_{i[r-1]}\right) ^{\alpha _{i3}}, \) \( p_{ijr}^{APT}=p_{ij}{\beta _{i1}}^{r-1}\), \( p_{ijr}^{APT}=p_{ij}{\beta _{i2}}^{\left( p_{i[1]}+p_{i[2]}+\cdots +p_{i[r-1]}\right) }\), \(i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) and \( p_{ijr}^{APT}=p_{ij}{\beta _{i3}}^{\left( \ln p_{i[1]}+\ln p_{i[2]}+\cdots +\ln p_{i[r-1]}\right) },i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) where \(\alpha _{i3}\le 0\) and \(0<\beta _{i1},\beta _{i2},\beta _{i3}\le 1\) are constant learning indices of group \(G_i\).
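For concreteness, model (2) can be evaluated numerically. The sketch below is an illustration only, under the assumed instantiations \(f_i(x)=(1+x)^{\alpha _i}\), \(h_i(x)=\ln x\) and \(g_i(r)={\beta _i}^{r-1}\) (the choices later used in Example 1); the function name and the parameter values are hypothetical.

```python
import math

def actual_time(p, prev, alpha=-0.3, beta=0.9, theta=0.2):
    """Actual processing time (2) of a job with normal time p scheduled in
    position r = len(prev) + 1 of its group, where prev holds the normal
    times of the r-1 preceding jobs.  Hypothetical instantiation:
    f(x) = (1 + x)^alpha, h(x) = ln x, g(r) = beta^(r-1)."""
    r = len(prev) + 1
    x = sum(math.log(q) for q in prev)           # sum_{l=1}^{r-1} h(p_[l])
    factor = (1 + x) ** alpha * beta ** (r - 1)  # f(.) * g(r)
    return p * max(factor, theta)                # truncated at theta

# A job in the first position is unaffected, since f(0) * g(1) = 1:
print(actual_time(5.0, []))          # 5.0
# Later positions are shortened by learning, but never below theta * p:
print(actual_time(5.0, [2.0, 3.0]))
```

Note that with this instantiation the normal times should satisfy \(p_{i[l]}\ge 1\), so that \(1+\sum _{l}\ln p_{i[l]}\) stays positive.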

For a given schedule \(\varrho \), let \(C_{ij}(\varrho )\) represent the completion time of job \(J_{ij}\) in group \(G_i\). The objective is to find a schedule that minimizes the makespan \(C_{\max }={\max }\{C_{ij}|i=1,2,\ldots ,m; j=1,2,\ldots ,n_i\}\) or the total completion time \(\sum \sum C_{ij}=\sum _{i=1}^m \sum _{j=1}^{n_i} C_{ij}\). Using the three-field notation, the problems can be denoted by

$$\begin{aligned} 1\left| s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| C_{\max } \end{aligned}$$
(3)

and

$$\begin{aligned} 1\left| s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| \sum \sum C_{ij},\nonumber \\ \end{aligned}$$
(4)

where GT denotes the group technology. Sun and Ma (2020) considered the learning effect model \(p_{ij r}^{APT}=p_{ij} F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) \), where \(\chi _{i1}\le \chi _{i2} \le \cdots \le \chi _{in_i}\), \(0\le F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) \le 1\) is a non-increasing function on \(\sum _{l=1}^{r-1}\chi _{il}p_{i[l]}\) and r, where \( F_i(\sum _{l=1}^{0}\chi _{il}p_{i[l]},1)=1\). Sun and Ma (2020) proved that \(1\big |s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) ,GT\big |C_{\max }\) and a special case of \(1\big |s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) ,GT\big |\sum \sum C_{ij}\) are polynomially solvable.

3 Makespan minimization

Lemma 1

(Niu et al. 2015) \(M(x)=\frac{ f_i\left( V+h_i(x)\right) g_i(r+1)-f_i\left( V\right) g_i(r)}{x}\) is a non-decreasing function on x, where \(V>0\), \(\frac{df_i(x)}{dx}\le 0,\frac{d^2f_i(x)}{dx^2}\ge 0\), \(\frac{d^2h_i(x)}{dx^2}\le 0\), \(g_i(r)\) is a non-increasing function with \(h_i(0)=0\), \(g_i(1)=1\), and \(x \ge 0\).

Theorem 1

For \(1\left| s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| C_{\max }\), the optimal job sequence in each group is obtained by the nondecreasing order of \(p_{ij}\), i.e.,

$$\begin{aligned} p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }, i=1,2,\ldots ,m. \end{aligned}$$

Proof

For the group \(G_i\), let \(\pi _i = [s_1,J_{ij}, J_{ik},s_2], \pi '_i=[s_1, J_{ik}, J_{ij},s_2]\), where \(s_1\) and \(s_2\) are partial schedules, \(p_{ij} \le p_{ik}\), and there are \(r-1\) jobs in \(s_1\). Under \(\pi _i\) and \(\pi '_i\), we have

$$\begin{aligned} C_{ik}(\pi _i)= & W +p_{ij}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r),\theta _i\right\} \nonumber \\ & + p_{ik}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1),\theta _i\right\} , \end{aligned}$$
(5)

and

$$\begin{aligned} C_{ij}(\pi '_i)= & W +p_{ik}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r),\theta _i\right\} \nonumber \\ & +p_{ij}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1),\theta _i\right\} , \end{aligned}$$
(6)

where W is the completion time of the last job in \(s_1\).

From (5) and (6), we have

$$\begin{aligned} C_{ij}(\pi '_i)-C_{ik}(\pi _i)= & (p_{ik}-p_{ij})\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r),\theta _i\right\} \nonumber \\ & + p_{ij}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1),\theta _i\right\} \nonumber \\ & -p_{ik}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1),\theta _i\right\} . \end{aligned}$$
(7)

Case 1): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)\le \theta _i \), from (7), we have

$$\begin{aligned} C_{ij}(\pi '_i)-C_{ik}(\pi _i)= & (p_{ik}-p_{ij})\theta _i+ p_{ij}\theta _i-p_{ik}\theta _i=0. \end{aligned}$$
(8)

Case 2): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)>\theta _i\ge f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1)\), from (7), we have

$$\begin{aligned} C_{ij}(\pi '_i)-C_{ik}(\pi _i)= & (p_{ik}-p_{ij})f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)+ p_{ij}\theta _i-p_{ik}\theta _i\nonumber \\= & (p_{ik}-p_{ij})\left[ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)-\theta _i\right] \nonumber \\\ge & 0. \end{aligned}$$
(9)

Case 3): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1)>\theta _i\ge f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1)\), from (7), we have

$$\begin{aligned}&C_{ij}(\pi '_i)-C_{ik}(\pi _i)\nonumber \\&\quad =(p_{ik}-p_{ij})f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)\nonumber \\&\qquad + p_{ij}\theta _i-p_{ik}f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1)\nonumber \\&\quad \ge (p_{ik}-p_{ij})f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)+ p_{ij}f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1)\nonumber \\&\qquad -p_{ik}f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1)\nonumber \\&\quad = p_{ij}p_{ik}\bigg [\frac{f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1)-f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)}{p_{ik}} \nonumber \\&\qquad - \frac{f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1)-f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)}{p_{ij}} \bigg ]\nonumber \\&\quad = p_{ij}p_{ik}\bigg [\frac{f_i\left( V+h_i(p_{ik})\right) g_i(r+1)-f_i\left( V\right) g_i(r)}{p_{ik}}\nonumber \\&\qquad - \frac{f_i\left( V+h_i(p_{ij})\right) g_i(r+1)-f_i\left( V\right) g_i(r)}{p_{ij}} \bigg ], \end{aligned}$$
(10)

where \(V=\sum _{l=1}^{r-1}h_i( p_{i[l]})\). From Lemma 1, if \(p_{ij} \le p_{ik}\), \(C_{ij}(\pi '_i)-C_{ik}(\pi _i)\ge 0.\)

Case 4): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1)>\theta _i \), from (7), we have

$$\begin{aligned} C_{ij}(\pi '_i)-C_{ik}(\pi _i)= & (p_{ik}-p_{ij})f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)\\ & + p_{ij}f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1)\\ & -p_{ik}f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1). \end{aligned}$$

Similar to Case 3), we have \( C_{ij}(\pi '_i)-C_{ik}(\pi _i)\ge 0. \) \(\square \)

Theorem 2

For \(1\left| s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| C_{\max }\), the optimal group schedule is arranged in non-decreasing order of

$$\begin{aligned} \frac{a_{i}+\widetilde{P_i}}{b_i}, i=1,2,\ldots ,m, \end{aligned}$$
(11)

where

$$\begin{aligned} \widetilde{P_i}=\sum _{z=1}^{n_i}p_{i\langle z \rangle } \max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} . \end{aligned}$$
(12)

Proof

From Theorem 1, the optimal job sequence in each group is obtained by the nondecreasing order of \(p_{ij}\), i.e., \( p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }\), \(i=1,2,\ldots ,m\). Let \(\varrho \) and \(\varrho '\) be two group schedules where \(\varrho = [S_1,G_i, G_j,S_2]\), \(\varrho '=[S_1, G_j, G_i,S_2]\), and \(S_1\) and \(S_2\) are partial sequences. Let t be the completion time of the last job in \(S_1\); for \(\varrho \), we have

$$\begin{aligned} C_{i\langle 1 \rangle }(\varrho )= & t+a_i+b_it+p_{i\langle 1 \rangle }=a_i+t(1+b_i)+p_{i\langle 1 \rangle },\\ C_{i\langle 2 \rangle }(\varrho )= & C_{i\langle 1 \rangle }+p_{i\langle 2 \rangle }\max \left\{ f_i\left( h_i(p_{i\langle 1 \rangle })\right) g_i(2),\theta _i\right\} \\= & a_i+t(1+b_i)+p_{i\langle 1 \rangle }+p_{i\langle 2 \rangle }\max \left\{ f_i\left( h_i(p_{i\langle 1 \rangle })\right) g_i(2),\theta _i\right\} ,\\ & \ldots \ldots \\ C_{i\langle n_i \rangle }(\varrho )= & a_i+t(1+b_i)+\sum _{z=1}^{n_i}p_{i\langle z \rangle } \max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z), \theta _i\right\} . \end{aligned}$$
$$\begin{aligned} \begin{aligned} C_{j\langle 1 \rangle }(\varrho )&=C_{i\langle n_i \rangle }(\varrho )+a_j+b_jC_{i\langle n_i \rangle }(\varrho )+p_{j\langle 1 \rangle }\\&=a_j+C_{i\langle n_i \rangle }(\varrho )(1+b_j)+p_{j\langle 1 \rangle }\\&=a_j+a_i(1+b_j)+t(1+b_i)(1+b_j)\\&\quad +(1+b_j)\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z), \theta _i\right\} +p_{j\langle 1 \rangle },\\ C_{j\langle 2 \rangle }(\varrho )&=a_j+a_i(1+b_j)+t(1+b_i)(1+b_j)\\&\quad +(1+b_j)\sum _{z=1}^{n_i}p_{i\langle z \rangle } \max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} \\&\quad +p_{j\langle 1 \rangle } +p_{j\langle 2 \rangle }\max \left\{ f_j\left( h_j(p_{j\langle 1 \rangle })\right) g_j(2),\theta _j\right\} ,\\&\ldots \ldots \\ C_{j\langle n_j \rangle }(\varrho )&=a_j+a_i(1+b_j)+t(1+b_i)(1+b_j)\\&\quad +(1+b_j)\sum _{z=1}^{n_i}p_{i\langle z \rangle } \max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} \\&\quad +\sum _{z=1}^{n_j}p_{j\langle z \rangle }\max \left\{ f_j\left( \sum _{l=1}^{z-1}h_j(p_{j\langle l \rangle })\right) g_j(z),\theta _j\right\} . \end{aligned} \end{aligned}$$
(13)

Similarly, for \(\varrho '\), we have

$$\begin{aligned} C_{i\langle n_i \rangle }(\varrho ')= & a_i+a_j(1+b_i)+t(1+b_j)(1+b_i)\nonumber \\ & +(1+b_i)\sum _{z=1}^{n_j}p_{j\langle z \rangle }\max \left\{ f_j\left( \sum _{l=1}^{z-1}h_j(p_{j\langle l \rangle })\right) g_j(z),\theta _j\right\} \nonumber \\ & +\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} . \end{aligned}$$
(14)

From (13) and (14), we have

$$\begin{aligned} & C_{j\langle n_j \rangle }(\varrho )-C_{i\langle n_i \rangle }(\varrho ')\\ & \quad =a_ib_j+b_j\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z), \theta _i\right\} \\ & \qquad -a_jb_i-b_i\sum _{z=1}^{n_j}p_{j\langle z \rangle }\max \left\{ f_j\left( \sum _{l=1}^{z-1}h_j(p_{j\langle l \rangle })\right) g_j(z), \theta _j\right\} \\ & \quad = b_ib_j\left( \frac{a_{i}+\widetilde{P_i}}{b_i}-\frac{a_{j}+\widetilde{P_j}}{b_j}\right) \\ & \quad \le 0 \end{aligned}$$

if and only if

$$\begin{aligned} \frac{a_{i}+\widetilde{P_i}}{b_i}\le \frac{a_{j}+\widetilde{P_j}}{b_j}, \end{aligned}$$

where \(\widetilde{P_i}=\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} \) and \(\widetilde{P_j}=\sum _{z=1}^{n_j}p_{j\langle z \rangle }\max \left\{ f_j\left( \sum _{l=1}^{z-1}h_j(p_{j\langle l \rangle })\right) g_j(z),\theta _j\right\} \), this completes the proof. \(\square \)

From Theorems 1-2, \(1\Big |s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ \!f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\!\right\} \!, GT\Big |C_{\max }\) can be solved by the following algorithm:

Algorithm 1.

Step 1. Jobs in each group are scheduled in non-decreasing order of \(p_{ij}\), i.e.,

\(p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \ldots \le p_{i\langle n_i \rangle }, i=1,2,\ldots ,m.\)

Step 2. Calculate \(\rho (G_i)=\frac{a_{i}+\widetilde{P_i}}{b_i},i=1,2,\ldots ,m\), where

\(\widetilde{P_i}=\sum _{z=1}^{n_i}p_{i[z]}\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i[l]})\right) g_i(z),\theta _i\right\} .\)

Step 3. Groups are scheduled in non-decreasing order of \(\rho (G_i)\), i.e., \(\rho (G_1)\le \rho (G_2) \le \ldots \le \rho (G_m)\).
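The three steps of Algorithm 1 can be sketched as follows. This is only an illustration: the instance format (dictionaries with keys a, b, alpha, beta, theta, jobs) and the instantiations \(f_i(x)=(1+x)^{\alpha _i}\), \(h_i(x)=\ln x\), \(g_i(r)={\beta _i}^{r-1}\) are assumptions, not part of the formal results.

```python
import math

def group_load(g):
    """P~_i of Eq. (12): total actual processing time of group g when its
    jobs follow the non-decreasing (SPT) order of Theorem 1."""
    total, x = 0.0, 0.0                 # x accumulates sum of h(p) = ln p
    for r, p in enumerate(sorted(g["jobs"]), start=1):
        total += p * max((1 + x) ** g["alpha"] * g["beta"] ** (r - 1), g["theta"])
        x += math.log(p)
    return total

def algorithm1(groups):
    """Steps 1-3: sort jobs inside each group (done inside group_load),
    then sort the groups by rho(G_i) = (a_i + P~_i) / b_i (Theorem 2)."""
    return sorted(groups, key=lambda g: (g["a"] + group_load(g)) / g["b"])

def makespan(order, t0=0.0):
    """C_max of a group order: each setup deteriorates as s = a + b * t
    (Eq. (1)), then the group's actual processing load is added."""
    t = t0
    for g in order:
        t += g["a"] + g["b"] * t
        t += group_load(g)
    return t
```

On a two-group toy instance, the order returned by algorithm1 attains a makespan no larger than the reverse order; by Theorems 1-2 it is in fact optimal.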

Theorem 3

The problem \(1\left| s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| C_{\max }\) can be solved by Algorithm 1 in \(O(n {\log } n)\) time.

Proof

The optimal job sequence within a given group \(G_i\) can be obtained in \(O(n_i{\log }n_i)\) time, and the optimal group schedule can be obtained in \(O(m{\log }m)\) time. Since \(\sum _{i=1}^mO(n_i{\log }n_i)\le O(n {\log } n)\), the total time for Algorithm 1 is \(O(n {\log } n)\). \(\square \)

Example 1

There are \({n}=10\) jobs, where \(m=3\), \({n}_{1}={n}_{2}=3,{n}_{3}=4\), \(t_0=0\), \(f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) \) \(= \left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^{\alpha _i}\) (\(\alpha _i\le 0\)), \(g_i(r)= {\beta _i}^{r-1}\) (\( 0<\beta _i\le 1\)), and the other coefficients of all jobs are given in Table 1.

Table 1 The values of Example 1

Solution:

  • Step 1. By Theorem 1, the optimal internal job sequences are:

  • \({\pi }^*_{1}: [{J}_{13}\rightarrow {J}_{11}\rightarrow {J}_{12}]\),

  • \({\pi }^*_{2}: [{J}_{23}\rightarrow {J}_{22}\rightarrow {J}_{21}]\),

  • \({\pi }^*_{3}: [{J}_{34}\rightarrow {J}_{32}\rightarrow {J}_{31}\rightarrow {J}_{33}].\)

  • Step 2. By Eqs. (11) and (12), we have

  • \(\rho (G_1)=\frac{a_{1}+\widetilde{P_1}}{b_1}=5.865885\),

  • \(\rho (G_2)=\frac{a_{2}+\widetilde{P_2}}{b_2}=3.162026\),

  • \(\rho (G_3)=\frac{a_{3}+\widetilde{P_3}}{b_3}=20.77932\).

Step 3. By Theorem 2, it follows that the optimal group schedule is \(\varrho ^{*}: [G_{2}\rightarrow {G}_{1}\rightarrow {G}_{3}]\).

4 Total completion time minimization

Theorem 4

For \(1\Big |s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} , GT\Big |\sum \sum C_{ij}\), the optimal job sequence in each group is obtained by the nondecreasing order of \(p_{ij}\), i.e.,

$$\begin{aligned} p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }, i=1,2,\ldots ,m. \end{aligned}$$

Proof

Similar to Theorem 1. For the group \(G_i\), from schedules \(\pi _i = [s_1,J_{ij}, J_{ik},s_2]\), \( \pi '_i=[s_1, J_{ik}, J_{ij},s_2]\) and \(p_{ij} \le p_{ik}\), we have

$$\begin{aligned} C_{ij}(\pi _i)= & W +p_{ij}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r),\theta _i\right\} \\\le & C_{ik}(\pi _i')=W +p_{ik}\max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r),\theta _i\right\} \end{aligned}$$

and \(C_{ik}(\pi _i)\le C_{ij}(\pi '_i)\); hence \(C_{ik}(\pi _i)+C_{ij}(\pi _i)\le C_{ik}(\pi '_i)+ C_{ij}(\pi '_i)\). This completes the proof. \(\square \)

From Mosheiov (1991), the complexity of the problem \(1\left| p_j^{APT} =1+\beta _jt_j\right| \sum C_{j}\) remains open. In order to determine the optimal group sequence, we have the following result:

Theorem 5

For

$$\begin{aligned} 1\left| s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| \sum \sum C_{ij}, \end{aligned}$$

if \(\frac{b_{i}}{n_i(1+b_i)}\le \frac{b_{j}}{n_j(1+b_j)}\) \(\Rightarrow \frac{a_i+\widetilde{P_i}}{n_i(1+b_i)}\le \frac{a_j+\widetilde{P_j}}{n_j(1+b_j)}\), the optimal group schedule is arranged in non-decreasing order of \(\frac{b_{i}}{n_i(1+b_i)}\) or \(\frac{a_i+\widetilde{P_i}}{n_i(1+b_i)}\), \(i=1,2,\ldots ,m\), where

$$\begin{aligned} \widetilde{P_i}=\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} \end{aligned}$$

and

$$\begin{aligned} \widetilde{P_j}=\sum _{z=1}^{n_j}p_{j\langle z \rangle }\max \left\{ f_j\left( \sum _{l=1}^{z-1}h_j(p_{j\langle l \rangle })\right) g_j(z),\theta _j\right\} . \end{aligned}$$

Proof

From Theorem 2, for \(\varrho \) and \(\varrho '\), we have

$$\begin{aligned} & \sum \sum C_{ij}(\varrho )-\sum \sum C_{ij}(\varrho ')\\ & \quad = tn_in_j(1+b_i)(1+b_j)\left( \frac{b_{i}}{n_i(1+b_i)}-\frac{b_{j}}{n_j(1+b_j)}\right) \\ & \qquad +n_in_j(1+b_i)(1+b_j)\left( \frac{a_i+\widetilde{P_i}}{n_i(1+b_i)}-\frac{a_j+\widetilde{P_j}}{n_j(1+b_j)}\right) . \end{aligned}$$

If \(\frac{b_{i}}{n_i(1+b_i)}\le \frac{b_{j}}{n_j(1+b_j)}\) and \( \frac{a_i+\widetilde{P_i}}{n_i(1+b_i)}\le \frac{a_j+\widetilde{P_j}}{n_j(1+b_j)}\), we have \(\sum \sum C_{ij}(\varrho )\le \sum \sum C_{ij}(\varrho ')\). This completes the proof. \(\square \)

4.1 Branch-and-bound algorithm

From Theorems 4-5, to solve

$$\begin{aligned} 1\left| s_i^{AST} =a_i+ b_it_i, p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| \sum \sum C_{ij}, \end{aligned}$$

a heuristic algorithm is proposed (it also provides an upper bound for the branch-and-bound algorithm).

Algorithm 2.

Step 1. Jobs in each group are scheduled in non-decreasing order of \(p_{ij}\), i.e.,

\(p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \ldots \le p_{i\langle n_i \rangle }, i=1,2,\ldots ,m.\)

Step 2. Groups are scheduled in non-decreasing order of

\(\rho (G_i)=a_{i}, i=1,2,\ldots ,m.\)

Step 3. Groups are scheduled in non-decreasing order of

\(\rho (G_i)=b_{i}, i=1,2,\ldots ,m.\)

Step 4. Groups are scheduled in non-increasing order of

\(\rho (G_i)=n_{i}, i=1,2,\ldots ,m.\)

Step 5. Groups are scheduled in non-decreasing order of

\(\rho (G_i)=\frac{a_{i}}{n_i(1+b_i)}, i=1,2,\ldots ,m.\)

Step 6. Groups are scheduled in non-decreasing order of

\(\rho (G_i)=\frac{b_{i}}{n_i(1+b_i)}, i=1,2,\ldots ,m.\)

Step 7. Groups are scheduled in non-decreasing order of

\(\rho (G_i)=\frac{a_{i}+\sum _{h=1}^{n_i}p_{i[h]}\max \left\{ f_i\left( \sum _{l=1}^{h-1}h_i(p_{i[l]})\right) g_i(h),\theta _i\right\} }{n_i(1+b_i)}, i=1,2,\ldots ,m.\)

Step 8. Among the schedules generated in Steps 2-7, choose the one with the smallest \(\sum \sum C_{ij}\) as the solution.
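A compact sketch of Algorithm 2 follows. As before, the instance format and the instantiations \(f_i(x)=(1+x)^{\alpha _i}\), \(h_i(x)=\ln x\), \(g_i(r)={\beta _i}^{r-1}\) are hypothetical assumptions for illustration.

```python
import math

def group_load(g):
    """P~_i of Eq. (12) with jobs in SPT order (Theorem 4)."""
    total, x = 0.0, 0.0
    for r, p in enumerate(sorted(g["jobs"]), start=1):
        total += p * max((1 + x) ** g["alpha"] * g["beta"] ** (r - 1), g["theta"])
        x += math.log(p)
    return total

def total_completion_time(order, t0=0.0):
    """Sum of all job completion times for a given group order, with
    deteriorating setups s = a + b * t (Eq. (1))."""
    t, total = t0, 0.0
    for g in order:
        t += g["a"] + g["b"] * t
        x = 0.0
        for r, p in enumerate(sorted(g["jobs"]), start=1):
            t += p * max((1 + x) ** g["alpha"] * g["beta"] ** (r - 1), g["theta"])
            total += t
            x += math.log(p)
    return total

def algorithm2(groups):
    """Steps 2-7 sort the groups by six priority rules; Step 8 keeps the
    order with the smallest total completion time."""
    rules = [
        lambda g: g["a"],                                    # Step 2
        lambda g: g["b"],                                    # Step 3
        lambda g: -len(g["jobs"]),                           # Step 4 (non-increasing n_i)
        lambda g: g["a"] / (len(g["jobs"]) * (1 + g["b"])),  # Step 5
        lambda g: g["b"] / (len(g["jobs"]) * (1 + g["b"])),  # Step 6
        lambda g: (g["a"] + group_load(g)) / (len(g["jobs"]) * (1 + g["b"])),  # Step 7
    ]
    return min((sorted(groups, key=r) for r in rules), key=total_completion_time)
```

By construction the returned order is never worse than any single rule; it need not be globally optimal.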

Lemma 2

(Gawiejnowicz 2020a) The term \(\sum _{k=1}^{m}x_{k}\prod _{l=k+1}^m y_l\) can be minimized by the non-decreasing order of \(\frac{x_{i}}{y_i-1}\).

Lemma 3

The term \(\sum _{k=1}^{m}x_{k}\prod _{l=k+1}^{m+1} y_l\) can be minimized by the non-decreasing order of \(\frac{x_{i}}{y_i-1}\).

Proof

For \(\delta =[s_1,i,j,s_2]\) and \(\delta '=[s_1,j,i,s_2]\), there is

$$\begin{aligned} \sum _{k=1}^{m}x_{k}\prod _{l=k+1}^{m+1} y_l(\delta )-\sum _{k=1}^{m}x_{k}\prod _{l=k+1}^{m+1} y_l(\delta ')= & x_iy_j(y_{i+2}...y_{m+1})+x_j(y_{i+2}...y_{m+1})\\ & -x_jy_i(y_{i+2}...y_{m+1})-x_i(y_{i+2}...y_{m+1}) \\= & (y_{i+2}...y_{m+1})[x_i(y_j-1)-x_j(y_i-1)]\\= & (y_{i+2}...y_{m+1})(y_i-1)(y_j-1)\\ & \times \left( \frac{x_i}{y_i-1}-\frac{x_j}{y_j-1}\right) \\\le & 0, \end{aligned}$$

if and only if \(\frac{x_i}{y_i-1}\le \frac{x_j}{y_j-1}\). \(\square \)

Lemma 4

The term \(\sum _{k=1}^{m}x_{k}\prod _{l=1}^k y_l\) can be minimized by the non-decreasing order of \(\frac{y_{i}-1}{x_iy_i}\).

Proof

For \(\delta =[s_1,i,j,s_2]\) and \(\delta '=[s_1,j,i,s_2]\), there is

$$\begin{aligned} \sum _{k=1}^m x_k \prod _{l=1}^k y_l(\delta )-\sum _{k=1}^m x_k \prod _{l=1}^k y_l(\delta ')= & x_iy_1y_2...y_i+x_{j}y_1y_2...y_iy_j\\ & -(x_j y_1y_2...y_j+x_iy_1y_2...y_jy_i) \\= & y_1y_2...y_{i-1}[x_jy_j(y_i-1)-x_iy_i(y_j-1)] \\= & y_1y_2...y_{i-1}x_iy_ix_jy_j\left( \frac{y_i-1}{x_iy_i}-\frac{y_j-1}{x_jy_j}\right) \\\le & 0 \end{aligned}$$

if and only if \(\frac{y_i-1}{x_iy_i}\le \frac{y_j-1}{x_jy_j}\). \(\square \)
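The ordering rules of Lemmas 2 and 4 are easy to check by brute force on random data; the sketch below (illustrative only, with invented random inputs) verifies that sorting attains the minimum over all permutations. Lemma 3 differs from Lemma 2 only by the common factor \(y_{m+1}\), so the same order is optimal there.

```python
import itertools
import math
import random

def term_lemma2(xs, ys):
    """sum_{k=1}^{m} x_k * prod_{l=k+1}^{m} y_l  (Lemma 2)."""
    return sum(x * math.prod(ys[k + 1:]) for k, x in enumerate(xs))

def term_lemma4(xs, ys):
    """sum_{k=1}^{m} x_k * prod_{l=1}^{k} y_l  (Lemma 4)."""
    return sum(x * math.prod(ys[:k + 1]) for k, x in enumerate(xs))

def value(term, pairs):
    return term([x for x, _ in pairs], [y for _, y in pairs])

random.seed(0)
pairs = [(random.uniform(1, 5), random.uniform(1.1, 2)) for _ in range(6)]

# Lemma 2: non-decreasing x / (y - 1) is optimal.
v2 = value(term_lemma2, sorted(pairs, key=lambda p: p[0] / (p[1] - 1)))
# Lemma 4: non-decreasing (y - 1) / (x * y) is optimal.
v4 = value(term_lemma4, sorted(pairs, key=lambda p: (p[1] - 1) / (p[0] * p[1])))

best2 = min(value(term_lemma2, q) for q in itertools.permutations(pairs))
best4 = min(value(term_lemma4, q) for q in itertools.permutations(pairs))
```

With the adjacent-interchange arguments above, both sorted values coincide with the brute-force minima.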

From Theorem 4, the optimal job sequence within each group can be obtained, i.e., \( p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }, i=1,2,\ldots ,m. \). Let \(\varrho =(\chi ^{pp},\chi ^{uu})\) be a group schedule, where \(\chi ^{pp}\) (resp. \(\chi ^{uu}\)) is the scheduled (resp. unscheduled) part, and there are \(\psi \) groups in \(\chi ^{pp}\), from the proof of Theorem 2, we have

$$\begin{aligned} & \sum _{i=1}^{\psi }\sum _{j=1}^{n_i} C_{i[j]}(\chi ^{pp})+\sum _{i=\psi +1}^{m}\sum _{j=1}^{n_i} C_{[i][j]}(\chi ^{uu})\nonumber \\ & \quad =\sum _{i=1}^{\psi }\sum _{j=1}^{n_i} C_{i[j]}(\chi ^{pp})+\sum _{i=\psi +1}^{m}n_{[i]}\left( \sum _{k=\psi +1}^{i}a_{[k]}\prod _{l=k+1}^i(1+b_{[l]})\right) \nonumber \\ & \qquad +\, t\left( \sum _{k=\psi +1}^{m}n_{[k]}\prod _{l=\psi +1}^k(1+b_{[l]})\right) \nonumber \\ & \qquad +\sum _{i=\psi +1}^{m-1}n_{[i+1]}\left( \sum _{k=\psi +1}^{i}\widetilde{P_{[k]}}\prod _{l=k+1}^{i+1}(1+b_{[l]})\right) \nonumber \\ & \qquad +\sum _{i=\psi +1}^{m}\sum _{h=1}^{n_{[i]}}\left( n_{[i]}-h+1\right) p_{[i]\langle h \rangle }\max \left\{ f_{[i]}\left( \sum _{l=1}^{h-1}h_{[i]}(p_{[i]\langle l \rangle })\right) g_{[i]}(h),\theta _{[i]}\right\} ,\nonumber \\ \end{aligned}$$
(15)

where \(\prod _{l=k}^{k-1}(1+b_{[l]})=1\) and t is the completion time of the last job in \(\chi ^{pp}\), i.e., \(t=C_{\psi [n_\psi ]}(\chi ^{pp})\). From (15), \(\sum _{i=1}^{\psi }\sum _{j=1}^{n_i} C_{i[j]}(\chi ^{pp})\) and \(t=C_{\psi [n_\psi ]}(\chi ^{pp})\) are constants, \(\sum _{k=\psi +1}^{i}a_{[k]}\prod _{l=k+1}^i(1+b_{[l]})\) (resp. \(\sum _{k=\psi +1}^{i}\widetilde{P_{[k]}} \prod _{l=k+1}^{i+1}(1+b_{[l]})\)) can be minimized according to Lemma 2 (resp. Lemma 3) by the non-decreasing order of \(\frac{a_i}{b_i}\) (resp. \(\frac{\widetilde{P_i}}{b_i}\)), and \(\sum _{k=\psi +1}^{m}n_{[k]}\prod _{l=\psi +1}^k(1+b_{[l]})\) can be minimized by the non-decreasing order of \(\frac{b_{i}}{n_i(1+b_i)}\) according to Lemma 4 \((i\in \chi ^{uu})\).

Let \(n_{\min }=\min \{n_{[\psi +1]},n_{[\psi +2]},...,n_{[m]}\}\), hence, we have the following lower bound:

$$\begin{aligned} LB= & \sum _{i=1}^{\psi }\sum _{j=1}^{n_i} C_{i[j]}(\chi ^{pp}) + \sum _{i=\psi +1}^m n_{\min } \left( \sum _{k=\psi +1}^i a_{\langle k \rangle }\prod _{l=k+1}^i(1+b_{\langle l \rangle })\right) \nonumber \\ & +t\left( \sum _{k=\psi +1}^m n_{\langle \langle k \rangle \rangle }\prod _{l=\psi +1}^k (1+b_{\langle \langle l \rangle \rangle })\right) \nonumber \\ & +\sum _{i=\psi +1}^{m-1} n_{\min }\left( \sum _{k=\psi +1}^i\widetilde{P_{(k)}}\prod _{l=k+1}^{i+1} (1+b_{(l)})\right) \nonumber \\ & +\sum _{i=\psi +1}^{m}\sum _{h=1}^{n_{[i]}}\left( n_{[i]}-h+1\right) p_{[i]\langle h \rangle }\max \left\{ f_{[i]} \left( \sum _{l=1}^{h-1}h_{[i]}(p_{[i]\langle l \rangle })\right) g_{[i]}(h),\theta _{[i]}\right\} ,\nonumber \\ \end{aligned}$$
(16)

where \(\frac{a_{\langle \psi +1 \rangle }}{b_{\langle \psi +1 \rangle }}\le \frac{a_{\langle \psi +2 \rangle }}{b_{\langle \psi +2 \rangle }}\le \cdots \le \frac{a_{\langle m \rangle }}{b_{\langle m \rangle }}\), \(\frac{b_{\langle \langle \psi +1 \rangle \rangle }}{n_{\langle \langle \psi +1 \rangle \rangle }(1+b_{\langle \langle \psi +1 \rangle \rangle })} \le \frac{b_{\langle \langle \psi +2 \rangle \rangle }}{n_{\langle \langle \psi +2 \rangle \rangle }(1+b_{\langle \langle \psi +2 \rangle \rangle })} \le \cdots \le \frac{b_{\langle \langle m \rangle \rangle }}{n_{\langle \langle m \rangle \rangle }(1+b_{\langle \langle m \rangle \rangle })}\), and \(\frac{\widetilde{P_{(\psi +1)}}}{b_{(\psi +1)}}\le \frac{\widetilde{P_{(\psi +2)}}}{b_{(\psi +2)}}\le \cdots \le \frac{\widetilde{P_{(m)}}}{b_{(m)}}\) (where \(\frac{a_{\langle i \rangle }}{b_{\langle i \rangle }}\), \(\frac{b_{\langle \langle i \rangle \rangle }}{n_{\langle \langle i \rangle \rangle }(1+b_{\langle \langle i \rangle \rangle })}\), and \(\frac{\widetilde{P_{(i)}}}{b_{(i)}}\) do not necessarily correspond to the same group, \(i=\psi +1,\psi +2,\ldots ,m\)). For \(G_{[i]}\), \(p_{[i]\langle 1 \rangle }\le p_{[i]\langle 2 \rangle } \le \cdots \le p_{[i]\langle n_{[i]} \rangle }\), and the sequence of \(p_{(i)\langle j \rangle }\) involved in \(\widetilde{P_{(i)}}\) is the same as that of \(G_{[i]}\), that is, \(p_{(i)\langle 1 \rangle }\le p_{(i)\langle 2 \rangle } \le \cdots \le p_{(i)\langle n_{(i)} \rangle }\), \(j=1,2,\ldots ,n_{(i)}\).

Combining the upper bound (i.e., Algorithm 2) with the lower bound in Eq. (16), a branch-and-bound (B&B) algorithm is established as follows:

Algorithm 3: B&B

Step 1). Use Algorithm 2 to obtain an initial group sequence.

Step 2). Calculate the lower bound (see equation (16)) for each node (partial group schedule). If the lower bound of an unfathomed partial group schedule is greater than or equal to the objective value of the current best solution (see equation (15)), eliminate that node and all nodes following it in the branch. For each completed group schedule, calculate its objective value (see equation (15)); if it is less than that of the current best solution, take it as the new best solution, otherwise eliminate it.

Step 3). Continue until all nodes have been explored.
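Steps 1)-3) can be sketched as the following depth-first skeleton. This is an illustrative Python sketch, not the authors' C++ implementation: `objective` and `lower_bound` are simplified stand-ins for equations (15) and (16) (total completion time with fixed group times, and an SPT-based bound on the unscheduled groups).

```python
def objective(seq, p):
    # stand-in for Eq. (15): total completion time with fixed group times p
    t, total = 0, 0
    for i in seq:
        t += p[i]
        total += t
    return total

def lower_bound(partial, remaining, p):
    # stand-in for Eq. (16): cost of the fixed partial schedule plus an
    # optimistic completion of the remaining groups in SPT order
    t, total = 0, 0
    for i in partial:
        t += p[i]
        total += t
    for q in sorted(p[i] for i in remaining):
        t += q
        total += t
    return total

def branch_and_bound(p):
    n = len(p)
    best_seq = list(range(n))           # Step 1): initial sequence (Algorithm 2 stand-in)
    best_val = objective(best_seq, p)
    stack = [([], set(range(n)))]
    while stack:                        # Step 3): continue until all nodes are explored
        partial, remaining = stack.pop()
        if not remaining:
            val = objective(partial, p)
            if val < best_val:          # Step 2): keep improving complete schedules
                best_val, best_seq = val, partial
            continue
        for i in remaining:
            child = partial + [i]
            # Step 2): eliminate nodes whose bound meets or exceeds the best value
            if lower_bound(child, remaining - {i}, p) < best_val:
                stack.append((child, remaining - {i}))
    return best_seq, best_val
```

With the real bound (16) and objective (15) substituted in, the same depth-first pruning scheme applies unchanged.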

4.2 Other algorithms

As in Nawaz et al. (1983) and Paredes-Astudillo et al. (2024), the following heuristic algorithm (HA) is proposed.

Algorithm 4: HA

Step 1). Let \(\varrho ^A\) be the group sequence obtained from Algorithm 2.

Step 2). Set \(\lambda =2\). Select the first two groups from the sorted list and choose the better of the two possible sequences. Do not change the relative positions of these two groups with respect to each other in the remaining steps of the algorithm. Set \(\lambda =3\).

Step 3). Pick the group in the \(\lambda \)th position of the list generated in Step 1) and find the best group sequence by placing it at all possible \(\lambda \) positions in the partial sequence found in the previous step, without changing the relative positions of the already assigned groups. The number of enumerations at this step equals \(\lambda \).

Step 4). If \(\lambda =m\), STOP; otherwise, set \(\lambda =\lambda +1\) and go to Step 3).
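Steps 1)-4) follow the insertion scheme of Nawaz et al. (1983). A minimal sketch, with a stand-in objective (total completion time with fixed group times, in place of Eq. (15)):

```python
def total_completion_time(seq, p):
    # stand-in for Eq. (15): fixed group times, no setup or learning effects
    t, total = 0, 0
    for i in seq:
        t += p[i]
        total += t
    return total

def neh_heuristic(order, p):
    """order: group list from Step 1) (Algorithm 2); insert groups one by one."""
    seq = [order[0]]
    for lam in range(1, len(order)):            # Steps 2)-4): lambda = 2, ..., m
        g = order[lam]
        # try all lambda insertion positions without reordering placed groups
        candidates = [seq[:pos] + [g] + seq[pos:] for pos in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: total_completion_time(s, p))
    return seq
```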

As in He et al. (2023), a tabu search (TS) algorithm can be applied to the \(\sum \sum C_{ij}\) minimization problem. The initial group sequence of the TS algorithm is obtained by Algorithm 2, and the maximum number of iterations of the TS algorithm is 1000m.

Algorithm 5: TS

Step 1). Let the tabu list be empty and the iteration number be zero.

Step 2). Set the initial group sequence of the TS algorithm, calculate its objective cost (by equation (15)), and set the current group sequence as the best solution \(\varrho ^{A*}\).

Step 3). Search the associated neighborhood of the current group sequence and determine whether it contains a group sequence \(\varrho ^{A**}\) that has the smallest objective cost in the neighborhood and is not in the tabu list.

Step 4). If \(\varrho ^{A**}\) is better than \(\varrho ^{A*}\), then let \(\varrho ^{A*}=\varrho ^{A**}\). Update the tabu list and the iteration number.

Step 5). If there is no group sequence in the associated neighborhood that is not in the tabu list, or the maximum number of iterations (i.e., 1000m) is reached, output the final group sequence; otherwise, update the tabu list and go to Step 3).
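Steps 1)-5) can be sketched with a pairwise-swap neighborhood and a short-term tabu list of swap moves. Again, `tct` is a simplified stand-in for equation (15), and the tabu-list length is an assumed parameter:

```python
def tct(seq, p):
    # stand-in for Eq. (15): total completion time with fixed group times
    t, total = 0, 0
    for i in seq:
        t += p[i]
        total += t
    return total

def tabu_search(init, p, max_iter=None, tabu_size=7):
    m = len(init)
    max_iter = max_iter if max_iter is not None else 1000 * m   # Step 5) cap
    best = list(init)                                           # Step 2)
    cur = list(init)
    tabu = []                                                   # Step 1): empty list
    for _ in range(max_iter):
        # Step 3): best non-tabu pairwise-swap neighbour of the current sequence
        moves = [(i, j) for i in range(m) for j in range(i + 1, m)
                 if (i, j) not in tabu]
        if not moves:
            break                                               # Step 5)
        def value(mv):
            s = cur[:]
            s[mv[0]], s[mv[1]] = s[mv[1]], s[mv[0]]
            return tct(s, p)
        mv = min(moves, key=value)
        cur = cur[:]
        cur[mv[0]], cur[mv[1]] = cur[mv[1]], cur[mv[0]]
        if tct(cur, p) < tct(best, p):                          # Step 4)
            best = cur[:]
        tabu.append(mv)                                         # update tabu list
        tabu = tabu[-tabu_size:]
    return best
```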

As in Li et al. (2024a, 2024b), a simulated annealing (SA) algorithm can also be applied to the \(\sum \sum C_{ij}\) minimization problem.

Algorithm 6: SA

Step 1). Set the internal job sequence by Step 2) of Algorithm 2.

Step 2). Use the pairwise interchange (PI) neighborhood generation method.

Step 3). Calculate the objective value of the original schedule \(\varrho ^A\).

Step 4). Calculate the objective value of the new schedule \(\varrho ^{A*}\). If the objective value of \(\varrho ^{A*}\) is less than that of \(\varrho ^A\), the new schedule is accepted. If it is higher, the new schedule might still be accepted, with a probability that decreases as the process proceeds. This acceptance probability is determined by the exponential function \( P(accept)=\exp (-\alpha \times \Delta {TC}), \) where \(\alpha \) is a parameter and \(\Delta {TC}\) is the change in the objective function. In addition, \(\alpha \) is changed in the kth iteration as follows: \( \alpha =\frac{k}{\delta }, \) where \(\delta \) is a constant. After preliminary trials, \(\delta =1\) is used.

Step 5). If the objective value of \(\varrho ^{A*}\) is larger, the new sequence is accepted when \(P(accept)>\beta \), where \(\beta \) is randomly sampled from the uniform distribution on \((0,1)\).

Step 6). Stop after 1000m iterations and output the best schedule found.
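Steps 1)-6) reduce to the acceptance rule \(P(accept)=\exp (-\alpha \times \Delta {TC})\) with \(\alpha =k/\delta \). A minimal sketch with a stand-in objective in place of equation (15); the PI neighborhood and the 1000m iteration cap follow the steps above, and the random seed is an assumption added for reproducibility:

```python
import math
import random

def tct(seq, p):
    # stand-in for Eq. (15): total completion time with fixed group times
    t, total = 0, 0
    for i in seq:
        t += p[i]
        total += t
    return total

def simulated_annealing(init, p, delta=1.0, seed=0):
    rng = random.Random(seed)
    m = len(init)
    cur = list(init)                            # Step 1): initial sequence
    best = list(init)
    for k in range(1, 1000 * m + 1):            # Step 6): 1000m iterations
        i, j = rng.sample(range(m), 2)          # Step 2): pairwise interchange (PI)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        diff = tct(cand, p) - tct(cur, p)       # Delta TC
        alpha = k / delta                       # Step 4): alpha = k / delta
        # Steps 4)-5): always accept an improvement; accept a worse move only
        # when exp(-alpha * Delta TC) exceeds a uniformly sampled beta
        if diff < 0 or math.exp(-alpha * diff) > rng.random():
            cur = cand
            if tct(cur, p) < tct(best, p):
                best = cur[:]
    return best
```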

4.3 Numerical study

The heuristic algorithms (i.e., HA, TS, and SA) and the B&B algorithm were programmed in C++ (and run on a PC with an Intel Core i5-8250U 1.4 GHz CPU and 8.00 GB RAM), where \(n=100, 200, 300, 400\); \(m=8, 9, 10, 11\); \(p_{ij r}^{APT}=p_{ij} \max \left\{ \left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^{\alpha _i}{\beta _i}^{r-1},\theta _i\right\} \) (\(\alpha _i\le 0, 0<\beta _i\le 1\)) and \(n_i\ge 1\). The parameter settings are as follows:

1) Real-valued parameters: \(\alpha _i\in [-0.5,-0.1]\); \(\beta _i\in [0.4,0.9]\); \(\theta _i\in [0.3,0.6]\);

2) Integer-valued parameters: \(a_i\in [3,50]\) and \(p_{ij}\in [3,50]\); \(a_i\in [51,100]\) and \(p_{ij}\in [51,100]\); \(a_i\in [3,100]\) and \(p_{ij}\in [3,100]\);

3) \(b_i\in [0.01,0.05]\); \(b_i\in [0.05,0.15]\); \(b_i\in [0.15,0.3]\).
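A random instance under the ranges 1)-3) can be drawn as follows. The split of the n jobs into m non-empty groups is a hypothetical choice of this sketch, since the setup above only requires \(n_i\ge 1\):

```python
import random

def generate_instance(n, m, rng=None,
                      a_range=(3, 50), p_range=(3, 50), b_range=(0.01, 0.05)):
    """Draw one random test instance following the parameter ranges 1)-3)."""
    rng = rng or random.Random(0)
    # split n jobs into m non-empty groups via m-1 random cut points
    cuts = sorted(rng.sample(range(1, n), m - 1))
    sizes = [b - a for a, b in zip([0] + cuts, cuts + [n])]
    groups = []
    for n_i in sizes:
        groups.append({
            "a": rng.randint(*a_range),                       # setup parameter a_i
            "b": rng.uniform(*b_range),                       # deterioration rate b_i
            "alpha": rng.uniform(-0.5, -0.1),                 # learning index alpha_i
            "beta": rng.uniform(0.4, 0.9),                    # learning factor beta_i
            "theta": rng.uniform(0.3, 0.6),                   # truncation theta_i
            "p": [rng.randint(*p_range) for _ in range(n_i)]  # normal times p_ij
        })
    return groups
```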

For simulation accuracy, 20 random instances were run for each parameter combination. The error of algorithm H is calculated as

$$\begin{aligned} \frac{\sum \sum C_{ij}(H)}{\sum \sum C_{ij}^*}, \end{aligned}$$
(17)

where \(H\in \{HA,TS,SA\}\) and \(\sum \sum C_{ij}(H)\) (resp. \(\sum \sum C_{ij}^*\)) is the objective value (see (15)) generated by algorithm H (resp. B&B). In addition, the running time (in milliseconds, ms) of HA, TS, SA and B&B is reported. From Tables 2, 3 and 4, the maximum CPU time of B&B is 2,815,317 ms (i.e., \(n\times m = 400\times 11\)). For the CPU time of B&B, \(b_i\in [0.01,0.05]\) needs more time than \(b_i\in [0.05,0.15]\), and \(b_i\in [0.05,0.15]\) needs more time than \(b_i\in [0.15,0.3]\). From Tables 5, 6 and 7, the maximum error of SA is less than that of HA and TS for \(n\times m \le 400\times 11\), and the results for \(b_i\in [0.01,0.05]\) are more accurate than those for \(b_i\in [0.05,0.15]\) and \(b_i\in [0.15,0.3]\).

To further compare the algorithms TS, SA and B&B, the parameters \(n=300\), \(m=10\), \(b_i\in [0.01,0.05]\), \(b_i\in [0.05,0.15]\), and \(b_i\in [0.15,0.3]\) were used. The algorithms TS, SA and B&B were executed on the same instances, with 20 random instances for each combination. TS and SA were allotted the same CPU time as B&B, that is, all three algorithms were given equal CPU time. To calculate the error \(\frac{\sum \sum C_{ij}(H)}{\sum \sum C_{ij}^*}\) (\(H\in \{TS,SA\}\)), the total completion time values of TS and SA were obtained and compared with the values obtained by B&B. From the data in Tables 8, 9 and 10, it can be seen that SA is more accurate than TS, especially for \(n=300\), \(m=10\), \(b_i\in [0.01,0.05]\) (i.e., 16 instances attained \(\frac{\sum \sum C_{ij}(SA)}{\sum \sum C_{ij}^*}=1\)).

Table 2 CPU time for \(a_i\in [3,50]\) and \(p_{ij}\in [3,50]\) (ms)
Table 3 CPU time for \(a_i\in [51,100]\) and \(p_{ij}\in [51,100]\) (ms)
Table 4 CPU time for \(a_i\in [3,100]\) and \(p_{ij}\in [3,100]\) (ms)
Table 5 Error bound for \(a_i\in [3,50]\) and \(p_{ij}\in [3,50]\)
Table 6 Error bound for \(a_i\in [51,100]\) and \(p_{ij}\in [51,100]\)
Table 7 Error bound for \(a_i\in [3,100]\) and \(p_{ij}\in [3,100]\)
Table 8 Results of Algorithms of \(n=300,m=10\), \(b_i\in [0.01,0.05]\)
Table 9 Results of algorithms of \(n=300,m=10\), \(b_i\in [0.05,0.15]\)
Table 10 Results of Algorithms of \(n=300,m=10\), \(b_i\in [0.15,0.3]\)

5 Conclusions

We discussed single-machine group scheduling where the group setup times are increasing functions of their starting times and the jobs in the same group have general truncated learning effects. We showed that the makespan minimization problem remains polynomially solvable. For the total completion time minimization, we proposed heuristic algorithms and a branch-and-bound algorithm. The experimental study showed that the B&B algorithm can solve instances with \(n \times m = 400 \times 11\) in less than 2,815,317 ms, and that SA is more accurate than HA and TS. In future research, we expect to explore more general group models with deteriorating jobs and/or learning effects, extend the problems to due date (window) assignments (Geng et al. 2023; Wang et al. 2024a; Sun et al. 2024), or consider flow shop scheduling (Yu et al. 2023; Wang et al. 2024c) with group technology.