Abstract
In this paper we consider single-machine scheduling problems with group technology, where the group setup times are general linear functions of their starting times and the jobs in the same group have general truncated learning effects. The objective is to minimize the makespan and total completion time, respectively. We show that the makespan minimization remains polynomially solvable. For the total completion time minimization, optimal properties are presented and then we introduce some heuristic algorithms and a branch-and-bound algorithm.
1 Introduction
In classical scheduling theory, it is assumed that job processing times are fixed and constant. In practice, however, we often encounter settings in which job processing times may change due to the phenomenon of deterioration and/or learning. Extensive surveys of scheduling problems involving deteriorating jobs (time-dependent processing times) can be found in Alidaee and Womer (1999), Cheng et al. (2004), Yin et al. (2015) and Gawiejnowicz (2020a, 2020b). More recently, Pei et al. (2019) considered parallel-batching scheduling with step-deteriorating jobs. For the objective of maximising the total net revenue, they proposed some solution algorithms. Qian and Han (2022) (resp. Miao et al. 2023; Mao et al. 2023; Lu et al. 2024) studied due-date (resp. due-window) assignment scheduling with delivery times and deterioration effects. Sun et al. (2023) considered single-machine maintenance activity scheduling with deterioration effects. Wang et al. (2023b) and Wang et al. (2024d) addressed single-machine resource allocation scheduling with deterioration effects. Zhang et al. (2023) and Li et al. (2024b) studied single-machine two-agent scheduling with resource allocations and deterioration effects. Lv and Wang (2024a) considered no-idle flow shop scheduling with deterioration effects; under a common due date, they proved that some special cases are polynomially solvable. Lv et al. (2024) studied single-machine scheduling with ready times and deterioration effects. For the total weighted completion time minimization, they proposed a branch-and-bound algorithm and some heuristic algorithms. Mao et al. (2024) considered single-machine delivery-time scheduling with general deterioration effects; they proved that the makespan minimization is polynomially solvable. Zhang et al. (2024) studied single-machine scheduling problems with deteriorating jobs.
For the slack due window, they proved that the minmax type problem is polynomially solvable.
In addition, extensive surveys of research related to scheduling with learning effects were presented by Biskup (2008) and Azzouz et al. (2018). More recently, Qian and Zhan (2021) and Wang et al. (2023a) studied the single-machine scheduling with delivery times and truncated learning effect. Under the due date assignment, they proved that some problems can be solved in polynomial time. Sun et al. (2021) (resp. Li et al. 2024c) considered the flow shop scheduling with learning effects (resp. truncated learning effects). For the total weighted completion time (resp. makespan) minimization, Sun et al. (2021) (resp. Li et al. 2024c) proposed some solution algorithms. Wang et al. (2021), Wang and Wang (2023) and Qian et al. (2024) considered the single-machine resource allocation scheduling with learning effects. Zhao (2022) (resp. Wang et al. 2024b) studied the single-machine scheduling problems with truncated learning effects and setup (resp. delivery) times. Ma et al. (2023) considered the single-machine online scheduling with learning effect. For the sum of completion times, they designed an optimal online algorithm. Lv and Wang (2024b) addressed the two-machine flow shop scheduling with release dates and truncated learning effects. For the total completion time minimization, they proposed a branch-and-bound algorithm and some heuristic algorithms.
On the other hand, production efficiency can be increased by grouping parts and products with similar designs and/or production processes; this phenomenon is known as group technology. Group technology, which groups similar products into families, helps increase the efficiency of operations and decrease the requirement of facilities (Potts and Van Wassenhove 1992; Kuo and Yang 2006; Wu and Lee 2008a; Wu et al. 2008b; Wang et al. 2008; Yan et al. 2008; Wang et al. 2009; Lee and Wu 2009). He and Sun (2015) considered single-machine group scheduling problems with deterioration and learning effects. They showed that the makespan minimization and some special cases of the total completion time minimization remain polynomially solvable. Huang (2019) and Liu et al. (2019) studied single-machine group scheduling with deteriorating jobs. For the primary (resp. secondary) criterion of minimizing the total weighted completion time (resp. maximum cost), they proved that the bicriterion problem is polynomially solvable. Under ready times and makespan minimization, Liu et al. (2019) proposed some solution algorithms. Miao (2019) studied parallel-batch makespan minimization scheduling with deteriorating jobs; under group technology, she proved that two single-machine problems are polynomially solvable. Xu et al. (2021) investigated single-machine group scheduling with deteriorating jobs and nonperiodical maintenance. He et al. (2023) dealt with single-machine group scheduling with due-date assignment and resource allocation. To solve the generalized case of minimizing the earliness-tardiness cost, they proposed branch-and-bound and heuristic algorithms. Yan et al. (2023) and Qian (2023) considered single-machine group scheduling problems with learning effects and resource allocation. Li et al. (2024a) addressed single-machine group scheduling with convex resource allocation, learning effects and due date assignment.
To solve the generalized case of minimizing the weighted sum of earliness, tardiness, common due date, resource consumption, and makespan, they proposed the heuristic and branch-and-bound algorithms. Liu and Wang (2023) and Wang and Liu (2024) studied single-machine group scheduling with resource allocation. Under the due date, they proved that some special cases of the problem are polynomially solvable.
Recently, Sun and Ma (2020) considered single-machine group scheduling with learning effects and deteriorating jobs. They proved that the makespan problem and the total completion time problem (under some special cases) remain polynomially solvable. This paper extends the results of Sun and Ma (2020) by assuming that the group setup times are general linear functions of their starting times and that the jobs in the same group have general truncated learning effects. The main contributions of this article are as follows: i) single-machine group scheduling with general linear deterioration and truncated learning effects is modeled and studied; ii) for the makespan minimization, we prove that the problem is polynomially solvable; iii) for the general case of the total completion time minimization, we propose some heuristics and a branch-and-bound algorithm. The remainder of the paper is organized as follows. In the next section, a precise formulation is given. The problem of minimizing the makespan (resp. total completion time) is addressed in Sect. 3 (resp. Sect. 4). The last section contains conclusions.
2 Problem formulation
In this section, we first define the notation that is used throughout this paper (see the symbols before the Introduction), followed by the description of the problem.
There are n jobs grouped into m groups, and these n jobs are to be processed on a single machine. A setup time is required if the machine switches from one group to another, and all groups are available for processing at time \(t_0\ge 0\). The machine can handle one job at a time and job preemption is not allowed. Let \(n_i\) be the number of jobs belonging to group \(G_i\); thus, \(n_1+n_2+\cdots +n_m=n\). Let \(J_{ij}\) denote the jth job in group \(G_i\), \(i=1,2,\ldots ,m;j=1,2,\ldots ,n_i\). As in Browne and Yechiali (1990), the actual setup time of \(G_i\) is:

\(s_i^{AST} = a_i+b_it_i, \quad i=1,2,\ldots ,m,\)
where \(a_i\) (resp. \(b_i\)) is the normal setup time (resp. setup deterioration rate) of \(G_i\) and \(t_i\) denotes the start setup time of \(G_i\). As in Zhao (2022), if job \(J_{ij}\) in group \(G_i\) is scheduled in the rth position, then its actual processing time is

\(p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\}, \quad i=1,2,\ldots ,m;\ j,r=1,2,\ldots ,n_i,\)
where \(p_{ij}\) is the normal (basic) processing time of job \(J_{ij}\), \(\frac{df_i(x)}{dx}\le 0\), \(\frac{d^2f_i(x)}{dx^2}\ge 0\), \(\frac{d^2h_i(x)}{dx^2}\le 0\), \(g_i(r)\) is a non-increasing function with \(f_i(0)=1\), \(h_i(0)=0\), \(g_i(1)=1\), \(0\le \theta _i\le 1\) is a truncation parameter of group \(G_i\), and \(\sum _{l=1}^{0}h_i(p_{i[l]}):=0\). Note that Yan et al. (2008) and Yang and Yang (2010) considered the following models: \( p_{ijr}^{APT}=p_{ij}r^{\alpha _{i1}},i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) and \( p_{ijr}^{APT}=p_{ij}\left( 1+p_{i[1]}+p_{i[2]}+\cdots +p_{i[r-1]}\right) ^{\alpha _{i2}},i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) where \(\alpha _{i1}\le 0\) (resp. \(\alpha _{i2}\le 0\)) is a constant learning index of group \(G_i\). As in Wang and Xia (2005) and Cheng et al. (2009), further job processing time models are: \( p_{ijr}^{APT}=p_{ij}\left( 1+\ln p_{i[1]}+\ln p_{i[2]}+\cdots +\ln p_{i[r-1]}\right) ^{\alpha _{i3}}, \) \( p_{ijr}^{APT}=p_{ij}{\beta _{i1}}^{r-1}\), \( p_{ijr}^{APT}=p_{ij}{\beta _{i2}}^{\left( p_{i[1]}+p_{i[2]}+\cdots +p_{i[r-1]}\right) }\), \(i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) and \( p_{ijr}^{APT}=p_{ij}{\beta _{i3}}^{\left( \ln p_{i[1]}+\ln p_{i[2]}+\cdots +\ln p_{i[r-1]}\right) },i = 1, 2, \ldots , m; r,j=1,2,\ldots ,n_i, \) where \(\alpha _{i3}\le 0\) and \(0<\beta _{i1},\beta _{i2},\beta _{i3}\le 1\) are constant learning indices of group \(G_i\).
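To make the truncated learning model above concrete, the following Python sketch computes \(p_{ijr}^{APT}\) for the illustrative choices \(f_i(x)=(1+x)^{\alpha _i}\), \(h_i(x)=\ln x\) and \(g_i(r)={\beta _i}^{r-1}\) (with \(p_{i[l]}\ge 1\) so that \(h_i\ge 0\)); the parameter values are arbitrary assumptions, not values from the paper:

```python
import math

def actual_processing_time(p, prev_normals, r, alpha=-0.3, beta=0.9, theta=0.5):
    """Actual time of a job with normal time p scheduled in position r of its
    group, under p * max(f(sum of h over earlier jobs) * g(r), theta), with
    the sample choices f(x) = (1+x)**alpha, h(x) = ln x, g(r) = beta**(r-1)."""
    learn = (1.0 + sum(math.log(q) for q in prev_normals)) ** alpha * beta ** (r - 1)
    return p * max(learn, theta)
```

For an empty history the learning factor is \(f_i(0)g_i(1)=1\), so the actual time equals the normal time; once learning would push the factor below \(\theta _i\), the truncation takes over and the actual time stays at \(p_{ij}\theta _i\).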
For a given schedule \(\varrho \), let \(C_{ij}(\varrho )\) represent the completion time of job \(J_{ij}\) in group \(G_i\). The objective is to find a schedule that minimizes the makespan \(C_{\max }={\max }\{C_{ij}|i=1,2,\ldots ,m; j=1,2,\ldots ,n_i\}\) and the total completion time \(\sum \sum C_{ij}=\sum _{i=1}^m \sum _{j=1}^{n_i} C_{ij}\), respectively. Using the three-field notation, the problems can be denoted by

\(1\left| s_i^{AST} = a_i+b_it_i,\ p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\},GT\right| C_{\max }\)

and

\(1\left| s_i^{AST} = a_i+b_it_i,\ p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\},GT\right| \sum \sum C_{ij},\)

where GT denotes group technology. Sun and Ma (2020) considered the learning effect model \(p_{ij r}^{APT}=p_{ij} F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) \), where \(\chi _{i1}\le \chi _{i2} \le \cdots \le \chi _{in_i}\), \(0\le F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) \le 1\) is non-increasing in \(\sum _{l=1}^{r-1}\chi _{il}p_{i[l]}\) and r, and \( F_i(\sum _{l=1}^{0}\chi _{il}p_{i[l]},1)=1\). Sun and Ma (2020) proved that \(1\big |s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) ,GT\big |C_{\max }\) and a special case of \(1\big |s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} F_i\left( \sum _{l=1}^{r-1}\chi _{il}p_{i[l]},r\right) ,GT\big |\sum \sum C_{ij}\) are polynomially solvable.
3 Makespan minimization
Lemma 1
(Niu et al. 2015) \(M(x)=\frac{ f_i\left( V+h_i(x)\right) g_i(r+1)-f_i\left( V\right) g_i(r)}{x}\) is a non-decreasing function on x, where \(V>0\), \(\frac{df_i(x)}{dx}\le 0,\frac{d^2f_i(x)}{dx^2}\ge 0\), \(\frac{d^2h_i(x)}{dx^2}\le 0\), \(g_i(r)\) is a non-increasing function with \(h_i(0)=0\), \(g_i(1)=1\), and \(x \ge 0\).
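Lemma 1 can be checked numerically. The sketch below evaluates \(M(x)\) on a grid for sample choices that satisfy the lemma's conditions (\(f_i(u)=(1+u)^{\alpha }\) decreasing and convex, \(h_i(u)=\sqrt{u}\) concave with \(h_i(0)=0\), \(g_i(r)=\beta ^{r-1}\) non-increasing with \(g_i(1)=1\)); all concrete values are illustrative assumptions:

```python
import math

def M(x, V=2.0, r=3, alpha=-0.3, beta=0.9):
    """M(x) = (f(V + h(x)) g(r+1) - f(V) g(r)) / x for the sample choices
    f(u) = (1 + u)**alpha, h(u) = sqrt(u), g(k) = beta**(k - 1)."""
    f = lambda u: (1.0 + u) ** alpha
    g = lambda k: beta ** (k - 1)
    return (f(V + math.sqrt(x)) * g(r + 1) - f(V) * g(r)) / x

grid = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
values = [M(x) for x in grid]  # non-decreasing in x, as Lemma 1 asserts
```

Note that \(M(x)\) is negative here (the numerator is negative because \(f\) is decreasing and \(g(r+1)\le g(r)\)); the lemma only claims monotonicity, not a sign.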
Theorem 1
For \(1\left| s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| C_{\max }\), the optimal job sequence in each group is obtained by the non-decreasing order of \(p_{ij}\), i.e.,

\( p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }, \quad i=1,2,\ldots ,m. \)
Proof
For the group \(G_i\), let \(\pi _i = [s_1,J_{ij}, J_{ik},s_2]\) and \(\pi '_i=[s_1, J_{ik}, J_{ij},s_2]\), where \(s_1\) and \(s_2\) are partial schedules, \(p_{ij} \le p_{ik}\), and there are \(r-1\) jobs in \(s_1\). Under \(\pi _i\) and \(\pi '_i\), we have
and
where W is the completion time of the last job in \(s_1\).
Case 1): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)\le \theta _i \), from (7), we have
Case 2): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})\right) g_i(r)>\theta _i\ge f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1)\), from (7), we have
Case 3): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ij})\right) g_i(r+1)>\theta _i\ge f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1)\), from (7), we have
where \(V=\sum _{l=1}^{r-1}h_i( p_{i[l]})\). From Lemma 1, if \(p_{ij} \le p_{ik}\), \(C_{ij}(\pi '_i)-C_{ik}(\pi _i)\ge 0.\)
Case 4): If \(f_i\left( \sum _{l=1}^{r-1}h_i( p_{i[l]})+h_i(p_{ik})\right) g_i(r+1)>\theta _i \), from (7), we have
Similar to Case 3), we have \( C_{ij}(\pi '_i)-C_{ik}(\pi _i)\ge 0. \) \(\square \)
Theorem 2
For \(1\left| s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| C_{\max }\), the optimal group schedule is arranged in non-decreasing order of

\(\rho (G_i)=\frac{a_{i}+\widetilde{P_i}}{b_i}, \quad i=1,2,\ldots ,m,\)

where

\(\widetilde{P_i}=\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\}. \)
Proof
From Theorem 1, the optimal job sequence in each group is obtained by the non-decreasing order of \(p_{ij}\), i.e., \( p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }\), \(i=1,2,\ldots ,m\). Let \(\varrho \) and \(\varrho '\) be two group schedules, where \(\varrho = [S_1,G_i, G_j,S_2]\), \(\varrho '=[S_1, G_j, G_i,S_2]\), and \(S_1\) and \(S_2\) are partial sequences. Let t be the completion time of the last job in \(S_1\). For \(\varrho \), we have
Similarly, for \(\varrho '\), we have
It follows that \(C_{\max }(\varrho )\le C_{\max }(\varrho ')\) if and only if

\(\frac{a_i+\widetilde{P_i}}{b_i}\le \frac{a_j+\widetilde{P_j}}{b_j},\)
where \(\widetilde{P_i}=\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} \) and \(\widetilde{P_j}=\sum _{z=1}^{n_j}p_{j\langle z \rangle }\max \left\{ f_j\left( \sum _{l=1}^{z-1}h_j(p_{j\langle l \rangle })\right) g_j(z),\theta _j\right\} \), this completes the proof. \(\square \)
From Theorems 1 and 2, \(1\Big |s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ \!f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\!\right\} \!, GT\Big |C_{\max }\) can be solved by the following algorithm:
Algorithm 1.

Step 1. Schedule the jobs in each group in non-decreasing order of \(p_{ij}\), i.e., \(p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }\), \(i=1,2,\ldots ,m\).

Step 2. Calculate \(\rho (G_i)=\frac{a_{i}+\widetilde{P_i}}{b_i}\), \(i=1,2,\ldots ,m\), where \(\widetilde{P_i}=\sum _{z=1}^{n_i}p_{i[z]}\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i[l]})\right) g_i(z),\theta _i\right\} \).

Step 3. Schedule the groups in non-decreasing order of \(\rho (G_i)\), i.e., \(\rho (G_1)\le \rho (G_2) \le \cdots \le \rho (G_m)\).
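The three steps above can be sketched in a few lines of Python (groups as plain dicts; the learning functions \(f_i(x)=(1+x)^{\alpha }\), \(h_i(x)=\ln x\), \(g_i(r)=\beta ^{r-1}\) and the parameter values \(\alpha =-0.3\), \(\beta =0.9\), \(\theta =0.5\) are illustrative assumptions):

```python
import math

def p_tilde(ps, alpha=-0.3, beta=0.9, theta=0.5):
    """P~_i: total actual processing time of a group whose sorted normal
    times are ps, under p * max(f(sum of h) * g(r), theta)."""
    total, s = 0.0, 0.0
    for r, p in enumerate(ps, start=1):
        total += p * max((1.0 + s) ** alpha * beta ** (r - 1), theta)
        s += math.log(p)
    return total

def algorithm1(groups):
    """Step 1: SPT within each group; Steps 2-3: groups sorted by
    rho(G_i) = (a_i + P~_i) / b_i."""
    ordered = [dict(g, p=sorted(g['p'])) for g in groups]
    return sorted(ordered, key=lambda g: (g['a'] + p_tilde(g['p'])) / g['b'])
```

Since the actual processing times inside a group depend only on the positions and normal times, \(\widetilde{P_i}\) can be computed once per group, which is what makes the overall \(O(n\log n)\) bound of Theorem 3 possible.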
Theorem 3
Problem \(1\left| s_i^{AST} \!=\! a_i+b_it_i, p_{ij r}^{APT}\!=\!p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} ,GT\right| C_{\max }\) can be solved by Algorithm 1 in \(O(n {\log } n)\) time.
Proof
The optimal job sequence within a given group \(G_i\) can be obtained in \(O(n_i{\log }n_i)\) time and the optimal group sequence in \(O(m{\log }m)\) time. Since \(\sum _{i=1}^mO(n_i{\log }n_i)\le O(n {\log } n)\) and \(m\le n\), the total time of Algorithm 1 is \(O(n {\log } n)\). \(\square \)
Example 1
There are \({n}=10\) jobs, where \(m=3\), \({n}_{1}={n}_{2}=3,{n}_{3}=4\), \(t_0=0\), \(f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) \) \(= \left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^{\alpha _i}\) (\(\alpha _i\le 0\)), \(g_i(r)= {\beta _i}^{r-1}\) (\( 0<\beta _i\le 1\)), and the other coefficients of all jobs are given in Table 1.
Solution:

Step 1. By Theorem 1, the optimal internal job sequences are:

\({\pi }^*_{1}: [{J}_{13}\rightarrow {J}_{11}\rightarrow {J}_{12}]\),

\({\pi }^*_{2}: [{J}_{23}\rightarrow {J}_{22}\rightarrow {J}_{21}]\),

\({\pi }^*_{3}: [{J}_{34}\rightarrow {J}_{32}\rightarrow {J}_{31}\rightarrow {J}_{33}]\).

Step 2. Calculate \(\rho (G_i)\):

\(\rho (G_1)=\frac{a_{1}+\widetilde{P_1}}{b_1}=5.865885\),

\(\rho (G_2)=\frac{a_{2}+\widetilde{P_2}}{b_2}=3.162026\),

\(\rho (G_3)=\frac{a_{3}+\widetilde{P_3}}{b_3}=20.77932\).

Step 3. By Theorem 2, it follows that the optimal group schedule is \(\varrho ^{*}: [G_{2}\rightarrow {G}_{1}\rightarrow {G}_{3}]\).
4 Total completion time minimization
Theorem 4
For \(1\Big |s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} , GT\Big |\sum \sum C_{ij}\), the optimal job sequence in each group is obtained by the non-decreasing order of \(p_{ij}\), i.e.,

\( p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }, \quad i=1,2,\ldots ,m. \)
Proof
Similar to Theorem 1. For the group \(G_i\), from schedules \(\pi _i = [s_1,J_{ij}, J_{ik},s_2]\), \( \pi '_i=[s_1, J_{ik}, J_{ij},s_2]\) and \(p_{ij} \le p_{ik}\), we have
and \(C_{ik}(\pi _i)\le C_{ij}(\pi '_i)\), hence \(C_{ik}(\pi _i)+C_{ij}(\pi _i)\le C_{ik}(\pi _i')+ C_{ij}(\pi '_i)\), this completes the proof. \(\square \)
From Mosheiov (1991), the problem \(1\left| p_j^{APT} =1+\beta _jt_j\right| \sum C_{j}\) remains open. In order to determine the optimal group sequence, we have the following result:
Theorem 5
For \(1\Big |s_i^{AST} = a_i+b_it_i, p_{ij r}^{APT}=p_{ij} \max \left\{ f_i\left( \sum _{l=1}^{r-1}h_i(p_{i[l]})\right) g_i(r),\theta _i\right\} , GT\Big |\sum \sum C_{ij}\), if \(\frac{b_{i}}{n_i(1+b_i)}\le \frac{b_{j}}{n_j(1+b_j)}\) \(\Rightarrow \frac{a_i+\widetilde{P_i}}{n_i(1+b_i)}\le \frac{a_j+\widetilde{P_j}}{n_j(1+b_j)}\), the optimal group schedule is arranged in non-decreasing order of \(\frac{b_{i}}{n_i(1+b_i)}\) or \(\frac{a_i+\widetilde{P_i}}{n_i(1+b_i)}\), \(i=1,2,\ldots ,m\), where

\(\widetilde{P_i}=\sum _{z=1}^{n_i}p_{i\langle z \rangle }\max \left\{ f_i\left( \sum _{l=1}^{z-1}h_i(p_{i\langle l \rangle })\right) g_i(z),\theta _i\right\} \)

and

\(\widetilde{P_j}=\sum _{z=1}^{n_j}p_{j\langle z \rangle }\max \left\{ f_j\left( \sum _{l=1}^{z-1}h_j(p_{j\langle l \rangle })\right) g_j(z),\theta _j\right\}. \)
Proof
From Theorem 2, for \(\varrho \) and \(\varrho '\), we have
If \(\frac{b_{i}}{n_i(1+b_i)}\le \frac{b_{j}}{n_j(1+b_j)}\) and \( \frac{a_i+\widetilde{P_i}}{n_i(1+b_i)}\le \frac{a_j+\widetilde{P_j}}{n_j(1+b_j)}\), we have \(\sum \sum C_{ij}(\varrho )\le \sum \sum C_{ij}(\varrho ')\), this completes the proof. \(\square \)
4.1 Branch-and-bound algorithm
First, a heuristic algorithm is proposed; its objective value serves as an upper bound for the branch-and-bound algorithm.
Algorithm 2.

Step 1. Schedule the jobs in each group in non-decreasing order of \(p_{ij}\), i.e., \(p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }\), \(i=1,2,\ldots ,m\).

Step 2. Schedule the groups in non-decreasing order of \(\rho (G_i)=a_{i}\), \(i=1,2,\ldots ,m\).

Step 3. Schedule the groups in non-decreasing order of \(\rho (G_i)=b_{i}\), \(i=1,2,\ldots ,m\).

Step 4. Schedule the groups in non-increasing order of \(\rho (G_i)=n_{i}\), \(i=1,2,\ldots ,m\).

Step 5. Schedule the groups in non-decreasing order of \(\rho (G_i)=\frac{a_{i}}{n_i(1+b_i)}\), \(i=1,2,\ldots ,m\).

Step 6. Schedule the groups in non-decreasing order of \(\rho (G_i)=\frac{b_{i}}{n_i(1+b_i)}\), \(i=1,2,\ldots ,m\).

Step 7. Schedule the groups in non-decreasing order of \(\rho (G_i)=\frac{a_{i}+\sum _{h=1}^{n_i}p_{i[h]}\max \left\{ f_i\left( \sum _{l=1}^{h-1}h_i(p_{i[l]})\right) g_i(h),\theta _i\right\} }{n_i(1+b_i)}\), \(i=1,2,\ldots ,m\).

Step 8. Among the group sequences obtained in Steps 2-7, choose the one with the smallest \(\sum \sum C_{ij}\) as the solution.
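Algorithm 2 can be sketched as follows (Python; the objective evaluator uses the illustrative learning functions \(f_i(x)=(1+x)^{\alpha }\), \(h_i(x)=\ln x\), \(g_i(r)=\beta ^{r-1}\) with assumed parameter values, and groups are plain dicts):

```python
import math

ALPHA, BETA, THETA = -0.3, 0.9, 0.5   # illustrative learning parameters

def group_run(g, t):
    """Process group g starting at time t: actual setup a + b*t, then the jobs
    in SPT order under the truncated learning model. Returns the new time, the
    group's total completion time, and P~ (sum of actual processing times)."""
    t += g['a'] + g['b'] * t
    total = ptilde = s = 0.0
    for r, p in enumerate(sorted(g['p']), start=1):
        actual = p * max((1.0 + s) ** ALPHA * BETA ** (r - 1), THETA)
        t += actual
        total += t
        ptilde += actual
        s += math.log(p)
    return t, total, ptilde

def total_completion_time(seq, t0=0.0):
    t, total = t0, 0.0
    for g in seq:
        t, part, _ = group_run(g, t)
        total += part
    return total

def algorithm2(groups):
    """Steps 2-7 generate six candidate group orders; Step 8 keeps the best."""
    ptilde = lambda g: group_run(dict(g, a=0.0, b=0.0), 0.0)[2]  # P~ is time-free
    rules = [
        lambda g: g['a'],                                              # Step 2
        lambda g: g['b'],                                              # Step 3
        lambda g: -len(g['p']),                                        # Step 4
        lambda g: g['a'] / (len(g['p']) * (1 + g['b'])),               # Step 5
        lambda g: g['b'] / (len(g['p']) * (1 + g['b'])),               # Step 6
        lambda g: (g['a'] + ptilde(g)) / (len(g['p']) * (1 + g['b'])), # Step 7
    ]
    return min((sorted(groups, key=r) for r in rules), key=total_completion_time)
```

By construction the returned order is at least as good as every one of the six candidate orders, though not necessarily optimal, which is why it is only used as an upper bound below.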
Lemma 2
(Gawiejnowicz 2020a) The term \(\sum _{k=1}^{m}x_{k}\prod _{l=k+1}^m y_l\) can be minimized by the non-decreasing order of \(\frac{x_{i}}{y_i-1}\).
Lemma 3
The term \(\sum _{k=1}^{m}x_{k}\prod _{l=k+1}^{m+1} y_l\) can be minimized by the non-decreasing order of \(\frac{x_{i}}{y_i-1}\).
Proof
For \(\delta =[s_1,i,j,s_2]\) and \(\delta '=[s_1,j,i,s_2]\), there is
if and only if \(\frac{x_i}{y_i-1}\le \frac{x_j}{y_j-1}\). \(\square \)
Lemma 4
The term \(\sum _{k=1}^{m}x_{k}\prod _{l=1}^k y_l\) can be minimized by the non-decreasing order of \(\frac{y_{i}-1}{x_iy_i}\).
Proof
For \(\delta =[s_1,i,j,s_2]\) and \(\delta '=[s_1,j,i,s_2]\), there is
if and only if \(\frac{y_i-1}{x_iy_i}\le \frac{y_j-1}{x_jy_j}\). \(\square \)
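Lemmas 2-4 are classical exchange-argument results; the sketch below verifies the ordering rules of Lemmas 2 and 4 by exhaustive search on a small instance (the \((x_k, y_k)\) values are arbitrary, with \(y_k>1\)):

```python
from itertools import permutations

def lemma2_obj(seq):
    """sum_k x_k * prod_{l>k} y_l, as in Lemma 2."""
    total = 0.0
    for k, (x, _) in enumerate(seq):
        prod = 1.0
        for _, y in seq[k + 1:]:
            prod *= y
        total += x * prod
    return total

def lemma4_obj(seq):
    """sum_k x_k * prod_{l<=k} y_l, as in Lemma 4."""
    total, prod = 0.0, 1.0
    for x, y in seq:
        prod *= y
        total += x * prod
    return total

pairs = [(3.0, 1.5), (1.0, 1.2), (4.0, 2.0), (2.0, 1.1)]
rule2 = sorted(pairs, key=lambda xy: xy[0] / (xy[1] - 1.0))            # Lemma 2
rule4 = sorted(pairs, key=lambda xy: (xy[1] - 1.0) / (xy[0] * xy[1]))  # Lemma 4
```

Lemma 3 differs from Lemma 2 only by the extra factor \(y_{m+1}\), which multiplies every term equally and therefore leaves the optimal order unchanged.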
From Theorem 4, the optimal job sequence within each group can be obtained, i.e., \( p_{i\langle 1 \rangle }\le p_{i\langle 2 \rangle } \le \cdots \le p_{i\langle n_i \rangle }\), \(i=1,2,\ldots ,m\). Let \(\varrho =(\chi ^{pp},\chi ^{uu})\) be a group schedule, where \(\chi ^{pp}\) (resp. \(\chi ^{uu}\)) is the scheduled (resp. unscheduled) part, and suppose there are \(\psi \) groups in \(\chi ^{pp}\). From the proof of Theorem 2, we have
where \(\prod _{l=k}^{k-1}(1+b_{[l]}):=1\) and t is the completion time of the last job in \(\chi ^{pp}\), i.e., \(t=C_{\psi [n_\psi ]}(\chi ^{pp})\). From (15), \(\sum _{i=1}^{\psi }\sum _{j=1}^{n_i} C_{i[j]}(\chi ^{pp})\) and \(t=C_{\psi [n_\psi ]}(\chi ^{pp})\) are constants, so \(\sum _{k=\psi +1}^{i}a_{[k]}\prod _{l=k+1}^i(1+b_{[l]})\) (resp. \(\sum _{k=\psi +1}^{i}\widetilde{P_{[k]}} \prod _{l=k+1}^{i+1}(1+b_{[l]})\)) can be minimized according to Lemma 2 (resp. Lemma 3) by the non-decreasing order of \(\frac{a_i}{b_i}\) (resp. \(\frac{\widetilde{P_i}}{b_i}\)), and \(\sum _{k=\psi +1}^{m}n_{[k]}\prod _{l=\psi +1}^k(1+b_{[l]})\) can be minimized by the non-decreasing order of \(\frac{b_{i}}{n_i(1+b_i)}\) according to Lemma 4 \((i\in \chi ^{uu})\).
Let \(n_{\min }=\min \{n_{[\psi +1]},n_{[\psi +2]},...,n_{[m]}\}\), hence, we have the following lower bound:
where \(\frac{a_{\langle \psi +1 \rangle }}{b_{\langle \psi +1 \rangle }}\le \frac{a_{\langle \psi +2 \rangle }}{b_{\langle \psi +2 \rangle }}\le \cdots \le \frac{a_{\langle m \rangle }}{b_{\langle m \rangle }}\), \(\frac{b_{\langle \langle \psi +1 \rangle \rangle }}{n_{\langle \langle \psi +1 \rangle \rangle }(1+b_{\langle \langle \psi +1 \rangle \rangle })} \le \frac{b_{\langle \langle \psi +2 \rangle \rangle }}{n_{\langle \langle \psi +2 \rangle \rangle }(1+b_{\langle \langle \psi +2 \rangle \rangle })} \le \cdots \le \frac{b_{\langle \langle m \rangle \rangle }}{n_{\langle \langle m \rangle \rangle }(1+b_{\langle \langle m \rangle \rangle })}\), and \(\frac{\widetilde{P_{(\psi +1)}}}{b_{(\psi +1)}}\le \frac{\widetilde{P_{(\psi +2)}}}{b_{(\psi +2)}}\le \cdots \le \frac{\widetilde{P_{(m)}}}{b_{(m)}}\) (where \(\frac{a_{\langle i \rangle }}{b_{\langle i \rangle }}\), \(\frac{b_{\langle \langle i \rangle \rangle }}{n_{\langle \langle i \rangle \rangle }(1+b_{\langle \langle i \rangle \rangle })}\), and \(\frac{\widetilde{P_{(i)}}}{b_{(i)}}\) do not necessarily correspond to the same group, \(i=\psi +1,\psi +2,\ldots ,m\)). For \(G_{[i]}\), \(p_{[i]\langle 1 \rangle }\le p_{[i]\langle 2 \rangle } \le \cdots \le p_{[i]\langle n_{[i]} \rangle }\), and the sequence of \(p_{(i)\langle j \rangle }\) involved in \(\widetilde{P_{(i)}}\) is the same as that of \(G_{[i]}\), that is, \(p_{(i)\langle 1 \rangle }\le p_{(i)\langle 2 \rangle } \le \cdots \le p_{(i)\langle n_{(i)} \rangle }\), \(j=1,2,\ldots ,n_{(i)}\).
From the upper bound (i.e., Algorithm 2) and the lower bound (see Eq. (16)), a branch-and-bound (B&B) algorithm is established as follows:
Algorithm 3. (B&B)

Step 1). Use Algorithm 2 to obtain an initial group sequence.

Step 2). Calculate the lower bound (see Eq. (16)) for each node (partial group schedule). If the lower bound of an unfathomed partial group schedule is larger than or equal to the objective value of the incumbent solution (see Eq. (15)), eliminate the node and all nodes following it in the branch. Calculate the objective value of each completed group schedule (see Eq. (15)); if it is less than that of the incumbent solution, take it as the new incumbent; otherwise, eliminate it.

Step 3). Continue until all nodes have been explored.
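A compact version of this scheme can be sketched as follows (Python; it reuses the illustrative learning functions from before and, for brevity, a much weaker but still valid lower bound than Eq. (16): every unscheduled job completes no earlier than the current finish time):

```python
import math

def schedule_prefix(order, t0=0.0, alpha=-0.3, beta=0.9, theta=0.5):
    """Finish time and total completion time of the group order `order`
    (actual setup a + b*t, jobs in SPT order, truncated learning model)."""
    t, total = t0, 0.0
    for g in order:
        t += g['a'] + g['b'] * t
        s = 0.0
        for r, p in enumerate(sorted(g['p']), start=1):
            t += p * max((1.0 + s) ** alpha * beta ** (r - 1), theta)
            total += t
            s += math.log(p)
    return t, total

def branch_and_bound(groups):
    """DFS over group permutations with lower-bound pruning."""
    best = {'val': schedule_prefix(groups)[1], 'seq': list(groups)}  # initial UB
    def dfs(prefix, rest):
        if not rest:
            val = schedule_prefix(prefix)[1]
            if val < best['val']:
                best['val'], best['seq'] = val, prefix
            return
        t, partial = schedule_prefix(prefix)
        jobs_left = sum(len(g['p']) for g in rest)
        if partial + jobs_left * t >= best['val']:  # weak but valid lower bound
            return
        for i, g in enumerate(rest):
            dfs(prefix + [g], rest[:i] + rest[i + 1:])
    dfs([], list(groups))
    return best['seq'], best['val']
```

In the paper the initial upper bound comes from Algorithm 2 and the lower bound from Eq. (16); both are stronger than the stand-ins used here, so this sketch only illustrates the control flow of Algorithm 3.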
4.2 Other algorithms
As in Nawaz et al. (1983) and Paredes-Astudillo et al. (2024), the following heuristic algorithm (HA) can be proposed.
Algorithm 4: HA

Step 1). Let \(\varrho ^A\) be the group sequence obtained from Algorithm 2.

Step 2). Set \(\lambda =2\). Select the first two groups from the sorted list and choose the better of the two possible sequences. Do not change the relative positions of these two groups with respect to each other in the remaining steps of the algorithm. Set \(\lambda =3\).

Step 3). Pick the group in the \(\lambda \)th position of the list generated in Step 1) and find the best group sequence by placing it at all possible \(\lambda \) positions in the partial sequence found in the previous step, without changing the relative positions of the already assigned groups. The number of enumerations at this step equals \(\lambda \).

Step 4). If \(\lambda =m\), STOP; otherwise, set \(\lambda =\lambda +1\) and go to Step 3).
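The insertion scheme of Algorithm 4 is independent of the particular objective; a minimal Python sketch, with the objective passed in as a callable:

```python
def neh_insertion(order, objective):
    """NEH-style heuristic (Nawaz et al. 1983): take the elements of `order`
    one by one, try every insertion position in the growing partial sequence,
    and keep the cheapest placement; the relative order of already placed
    elements is never changed."""
    partial = []
    for g in order:
        partial = min(
            (partial[:i] + [g] + partial[i:] for i in range(len(partial) + 1)),
            key=objective,
        )
    return partial
```

With \(m\) groups this evaluates \(2+3+\cdots +m\), i.e., \(O(m^2)\), sequences instead of the \(m!\) needed for full enumeration.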
As in He et al. (2023), a tabu search (TS) algorithm can also be applied to minimize \(\sum \sum C_{ij}\). The initial group sequence of the TS algorithm is given by Algorithm 2, and the maximum number of iterations of the TS algorithm is \(1000m\).
Algorithm 5: TS

Step 1). Let the tabu list be empty and the iteration number be zero.

Step 2). Set the initial group sequence of the TS algorithm, calculate its objective cost (by Eq. (15)), and set the current group sequence as the best solution \(\varrho ^{A*}\).

Step 3). Search the associated neighborhood of the current group sequence and determine whether there is a group sequence \(\varrho ^{A**}\) with the smallest objective cost in the associated neighborhood that is not in the tabu list.

Step 4). If \(\varrho ^{A**}\) is better than \(\varrho ^{A*}\), then let \(\varrho ^{A*}=\varrho ^{A**}\). Update the tabu list and the iteration number.

Step 5). If there is no group sequence in the associated neighborhood that is not in the tabu list, or the maximum number of iterations (i.e., \(1000m\)) is reached, output the final group sequence; otherwise, update the tabu list and go to Step 3).
As in Li et al. (2024a, 2024b), a simulated annealing (SA) algorithm can also be applied to minimize \(\sum \sum C_{ij}\).
Algorithm 6: SA

Step 1). Set the internal job sequence in each group as in Algorithm 2, i.e., in non-decreasing order of \(p_{ij}\).

Step 2). Use the pairwise interchange (PI) neighborhood generation method.

Step 3). Calculate the objective value of the original schedule \(\varrho ^A\).

Step 4). Calculate the objective value of the new schedule \(\varrho ^{A*}\). If the objective value of \(\varrho ^{A*}\) is less than that of \(\varrho ^A\), the new schedule is accepted. If it is higher, the new schedule might still be accepted, with a probability that decreases as the process proceeds. This acceptance probability is determined by the exponential function \( P(\text{accept})=\exp (-\alpha \times \triangle {TC}), \) where \(\alpha \) is a parameter and \(\triangle {TC}\) is the change in the objective function. In addition, \(\alpha \) is changed in the kth iteration as follows: \( \alpha =\frac{k}{\delta }, \) where \(\delta \) is a constant. After preliminary trials, \(\delta =1\) is used.

Step 5). If \(\varrho ^{A*}\) is worse, the new sequence is accepted when \(P(\text{accept})>\beta \), where \(\beta \) is randomly sampled from the uniform distribution.

Step 6). Stop after \(1000m\) iterations and output the best schedule found.
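The acceptance rule of Steps 4)-5) can be sketched as follows (Python; \(\delta =1\) as in Step 4), and the random source is injectable for reproducibility):

```python
import math
import random

def sa_accept(delta_tc, k, delta=1.0, rng=random):
    """SA acceptance rule of Algorithm 6: an improving move (delta_tc <= 0)
    is always accepted; a worsening move is accepted with probability
    exp(-alpha * delta_tc), where alpha = k / delta grows with the iteration
    number k, so worsening moves get rarer as the search cools."""
    if delta_tc <= 0:
        return True
    alpha = k / delta
    return math.exp(-alpha * delta_tc) > rng.random()
```

At iteration \(k=0\) every move is accepted (\(\alpha =0\) gives probability 1); for large \(k\) and large \(\triangle TC\) the probability is effectively zero, so the search degenerates into pure descent.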
4.3 Numerical study
The heuristic algorithms (i.e., HA, TS, and SA) and the B&B algorithm were programmed in C++ and run on a PC with an Intel Core i5-8250U 1.4 GHz CPU and 8.00 GB RAM, where \(n=100, 200, 300, 400\); \(m= 8, 9, 10, 11\); \(p_{ij r}^{APT}=p_{ij} \max \left\{ \left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^{\alpha _i}{\beta _i}^{r-1},\theta _i\right\} \) (\(\alpha _i\le 0, 0<\beta _i\le 1\)) and \(n_i\ge 1\). The parameter settings are as follows:
1) Real number: \(\alpha _i\in [-0.5,-0.1]\); \(\beta _i\in [0.4,0.9]\); \(\theta _i\in [0.3,0.6]\);
2) Integer number: \(a_i\in [3,50]\) and \(p_{ij}\in [3,50]\); \(a_i\in [51,100]\) and \(p_{ij}\in [51,100]\); \(a_i\in [3,100]\) and \(p_{ij}\in [3,100]\);
3) \(b_i\in [0.01,0.05]\); \(b_i\in [0.05,0.15]\); \(b_i\in [0.15,0.3]\).
For simulation accuracy, each random instance was run 20 times. The error of algorithm H is calculated as

\(\frac{\sum \sum C_{ij}(H)}{\sum \sum C_{ij}^*},\)

where \(H\in \{HA,TS,SA\}\) and \(\sum \sum C_{ij}(H)\) (resp. \(\sum \sum C_{ij}^*\)) is the objective value (see (15)) generated by algorithm H (resp. B&B). In addition, the running times (in milliseconds, ms) of HA, TS, SA and B&B were recorded. From Tables 2, 3 and 4, the maximum CPU time of B&B is 2,815,317 ms (i.e., \(n\times m = 400\times 11\)). Regarding the CPU time of B&B, \(b_i\in [0.01,0.05]\) needs more time than \(b_i\in [0.05,0.15]\), and \(b_i\in [0.05,0.15]\) needs more time than \(b_i\in [0.15,0.3]\). From Tables 5, 6 and 7, the maximum error of SA is less than that of HA and TS for \(n\times m \le 400\times 11\), and the results for \(b_i\in [0.01,0.05]\) are more accurate than those for \(b_i\in [0.05,0.15]\) and \(b_i\in [0.15,0.3]\).
To further compare the algorithms TS, SA and B&B, the parameters \(n=300\), \(m=10\), and \(b_i\in [0.01,0.05]\), \(b_i\in [0.05,0.15]\), \(b_i\in [0.15,0.3]\) were used. The algorithms TS, SA and B&B were executed on the same instances. For each combination, 20 random instances were performed. The CPU time limits of TS and SA were set to the CPU time of B&B, that is, all three algorithms were given the same CPU time. In order to calculate the error \(\frac{\sum \sum C_{ij}(H)}{\sum \sum C_{ij}^*}\) (\(H\in \{TS,SA\}\)), the total completion time values of TS and SA were obtained and compared with the values obtained by B&B. From the data in Tables 8, 9 and 10, it can be seen that SA is more accurate than TS; in particular, for \(n=300\), \(m=10\), \(b_i\in [0.01,0.05]\), there were 16 instances with \(\frac{\sum \sum C_{ij}(SA)}{\sum \sum C_{ij}^*}=1\).
5 Conclusions
We discussed single-machine group scheduling where the group setup times are linear increasing functions of their starting times and the jobs in the same group have general truncated learning effects. We showed that the makespan minimization problem remains polynomially solvable. For the total completion time minimization, we proposed heuristic algorithms and a branch-and-bound algorithm. The experimental study showed that the B&B algorithm can solve instances of \(400 \times 11\) jobs in less than 2,815,317 ms, and that SA is more accurate than HA and TS. In future research, we expect to explore more general group models with deteriorating jobs and/or learning effects, extend the problems to due date (window) assignments (Geng et al. 2023; Wang et al. 2024a; Sun et al. 2024), or consider flow shop scheduling (Yu et al. 2023; Wang et al. 2024c) with group technology.
Data availability
The data used to support the findings of this paper are available from the corresponding author upon request.
Abbreviations
- GT: Group technology
- m: Number of groups (\(m\ge 2\))
- \(G_i\): Group i, \(i=1,2,\ldots ,m\)
- \(n_i\): Number of jobs belonging to \(G_i\), \(i=1,2,\ldots ,m\)
- n: Total number of jobs, i.e., \(n_1+n_2+\cdots +n_m=n\)
- \(J_{ij}\): Job j in \(G_i\), \(i=1,2,\ldots ,m\), \(j=1,2,\ldots ,n_i\)
- \(p_{ij}\): Normal processing time of \(J_{ij}\), \(i=1,2,\ldots ,m\), \(j=1,2,\ldots ,n_i\)
- \(p_{i[j]}\): Normal processing time of \(J_{i[j]}\) scheduled in the jth position of \(G_i\), \(i=1,2,\ldots ,m\), \(j=1,2,\ldots ,n_i\)
- \(a_i\): Normal setup time of \(G_i\), \(i=1,2,\ldots ,m\)
- \(b_i\): Setup deterioration rate of \(G_i\), \(i=1,2,\ldots ,m\)
- \(t_i\): Start setup time of \(G_i\), \(i=1,2,\ldots ,m\)
- \(\theta _i\): Truncation parameter of \(G_i\), \(i=1,2,\ldots ,m\)
- \(s_i^{AST}\): Actual setup time of \(G_i\), \(i=1,2,\ldots ,m\)
- \(p_{ijr}^{APT}\): Actual processing time of job \(J_{ij}\) scheduled in the rth position of \(G_i\), \(i=1,2,\ldots ,m\), \(j=1,2,\ldots ,n_i\)
- \(\alpha _i\) \((\alpha _{i1}, \alpha _{i2}, \alpha _{i3})\): Learning index of \(G_i\), \(i=1,2,\ldots ,m\)
- \(\beta _{i}\) \((\beta _{i1}, \beta _{i2}, \beta _{i3})\): Learning index of \(G_i\), \(i=1,2,\ldots ,m\)
- \(\pi \): A job schedule of the n jobs
- \(C_{ij} = C_{ij}(\pi )\): Completion time of job \(J_{ij}\) in \(\pi \)
- \(C_{\max }\): Makespan, i.e., \(C_{\max }=\max \{C_{ij}\,|\,i=1,2,\ldots ,m,\ j=1,2,\ldots ,n_i\}\)
- \(\sum \sum C_{ij}=\sum _{i=1}^m \sum _{j=1}^{n_i} C_{ij}\): Total completion time of all jobs
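For concreteness, the makespan of a fixed group and job sequence can be evaluated in a single forward pass. The sketch below is an illustration only: the paper considers general linear setup functions and general truncated learning effects, so the specific forms assumed here, \(s_i^{AST}=a_i+b_i t_i\) for setups and \(p_{ijr}^{APT}=p_{ij}\max \{r^{\alpha _i},\theta _i\}\) for processing times, as well as all parameter values, are our own illustrative assumptions.

```python
def makespan(groups, t0=0.0):
    """Makespan of a fixed group/job sequence (illustrative model forms).

    Assumed forms (not necessarily those of the paper):
      - actual setup time of a group started at time t:  a + b * t
      - actual processing time of the job in position r: p * max(r**alpha, theta),
        with alpha <= 0 (learning index) and 0 < theta < 1 (truncation parameter).

    Each group is a dict with keys 'a', 'b', 'alpha', 'theta', and
    'jobs' (normal processing times in their scheduled order).
    """
    t = t0
    for g in groups:
        # Linear, start-time-dependent setup of the group.
        t += g['a'] + g['b'] * t
        # Jobs of the group under the truncated learning effect.
        for r, p in enumerate(g['jobs'], start=1):
            t += p * max(r ** g['alpha'], g['theta'])
    return t

# Hypothetical two-group instance.
groups = [
    {'a': 2.0, 'b': 0.1, 'alpha': -0.3, 'theta': 0.6, 'jobs': [4.0, 3.0]},
    {'a': 1.0, 'b': 0.2, 'alpha': -0.5, 'theta': 0.5, 'jobs': [5.0]},
]
print(makespan(groups))
```

The truncation via `max(..., theta)` is what bounds the learning effect from below, so a job's actual processing time never drops beneath a fixed fraction \(\theta _i\) of its normal processing time.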
References
Alidaee B, Womer NK (1999) Scheduling with time dependent processing times: review and extensions. J Oper Res Soc 50:711–720
Azzouz A, Ennigrou M, Said LB (2018) Scheduling problems under learning effects: classification and cartography. Int J Prod Res 56:1642–1661
Biskup D (2008) A state-of-the-art review on scheduling with learning effects. Eur J Oper Res 188:315–329
Browne S, Yechiali U (1990) Scheduling deteriorating jobs on a single processor. Oper Res 38:495–498
Cheng TCE, Ding Q, Lin BMT (2004) A concise survey of scheduling with time-dependent processing times. Eur J Oper Res 152:1–13
Cheng TCE, Lai P-J, Wu C-C, Lee W-C (2009) Single-machine scheduling with sum-of-logarithm-processing-times-based learning considerations. Inf Sci 179:3127–3135
Gawiejnowicz S (2020a) Models and algorithms of time-dependent scheduling. Springer, Berlin
Gawiejnowicz S (2020b) A review of four decades of time-dependent scheduling: main results, new topics, and open problems. J Sched 23:3–47
Geng X-N, Sun X, Wang J-Y, Pan L (2023) Scheduling on proportionate flow shop with job rejection and common due date assignment. Comput Ind Eng 181:109317
He Y, Sun L (2015) One-machine scheduling problems with deteriorating jobs and position-dependent learning effects under group technology considerations. Int J Syst Sci 46(7):1319–1326
He H, Zhao Y, Ma X, Lv Z, Wang J-B (2023) Branch-and-bound and heuristic algorithms for group scheduling with due-date assignment and resource allocation. Mathematics 11(23):4745
Huang X (2019) Bicriterion scheduling with group technology and deterioration effect. J Appl Math Comput 60:455–464
Kuo W-H, Yang D-L (2006) Single-machine group scheduling with a time-dependent learning effect. Comput Oper Res 33:2099–2112
Lee W-C, Wu C-C (2009) A note on single-machine group scheduling problems with position-based learning effect. Appl Math Model 33:2159–2163
Li M-H, Lv D-Y, Lu Y-Y, Wang J-B (2024a) Scheduling with group technology, resource allocation, and learning effect simultaneously. Mathematics 12(7):1029
Li M-H, Lv D-Y, Lv Z-G, Zhang L-H, Wang J-B (2024b) A two-agent resource allocation scheduling problem with slack due-date assignment and general deterioration function. Comput Appl Math 43(4):229
Li M-H, Lv D-Y, Zhang L-H, Wang J-B (2024c) Permutation flow shop scheduling with makespan objective and truncated learning effects. J Appl Math Comput. https://doi.org/10.1007/s12190-024-02080-w
Liu WG, Wang XY (2023) Group technology scheduling with due-date assignment and controllable processing times. Processes 11:1271
Liu F, Yang J, Lu Y-Y (2019) Solution algorithms for single-machine group scheduling with ready times and deteriorating jobs. Eng Optim 51(5):862–874
Lu Y-Y, Zhang S, Tao J-Y (2024) Earliness-tardiness scheduling with delivery times and deteriorating jobs. Asia Pac J Oper Res. https://doi.org/10.1142/S021759592450009X
Lv D-Y, Wang J-B (2024a) No-idle flow shop scheduling with deteriorating jobs and common due date under dominating machines. Asia Pac J Oper Res. https://doi.org/10.1142/S0217595924500039
Lv D-Y, Wang J-B (2024b) Research on two-machine flow shop scheduling problem with release dates and truncated learning effects. Eng Optim. https://doi.org/10.1080/0305215X.2024.2372633
Lv Z-G, Zhang L-H, Wang X-Y, Wang J-B (2024) Single machine scheduling proportionally deteriorating jobs with ready times subject to the total weighted completion time minimization. Mathematics 12:610
Ma R, Guo SA, Zhang XY (2023) An optimal online algorithm for single-processor scheduling problem with learning effect. Theor Comput Sci 928:1–12
Mao R-R, Wang Y-C, Lv D-Y, Wang J-B, Lu Y-Y (2023) Delivery times scheduling with deterioration effects in due window assignment environments. Mathematics 11:3983
Mao R-R, Lv D-Y, Ren N, Wang J-B (2024) Supply chain scheduling with deteriorating jobs and delivery times. J Appl Math Comput 70(3):2285–2312
Miao CX (2019) Parallel-batch scheduling with deterioration and group technology. IEEE Access 7:119082–119086
Miao CX, Song J, Zhang Y (2023) Single-machine time-dependent scheduling with proportional and delivery times. Asia Pac J Oper Res 40(4):2240015
Mosheiov G (1991) V-shaped policies for scheduling deteriorating jobs. Oper Res 39:979–991
Nawaz M, Enscore EE, Ham I (1983) A heuristic algorithm for the \(m\)-machine, \(n\)-job flow-shop sequencing problem. Omega 11:91–95
Niu Y-P, Wan L, Wang J-B (2015) A note on scheduling jobs with extended sum-of-processing-times-based and position-based learning effect. Asia Pac J Oper Res 32(2):1550001
Paredes-Astudillo YA, Botta-Genoulaz V, Montoya-Torres JR (2024) Impact of learning effect modelling in flowshop scheduling with makespan minimisation based on the Nawaz–Enscore–Ham algorithm. Int J Prod Res 62(6):1999–2014
Pei J, Wang X, Fan W, Pardalos PM, Liu X (2019) Scheduling step-deteriorating jobs on bounded parallel-batching machines to maximise the total net revenue. J Oper Res Soc 70(10):1830–1847
Potts CN, Van Wassenhove LN (1992) Integrating scheduling with batching and lot-sizing: a review of algorithms and complexity. J Oper Res Soc 43:395–406
Qian J (2023) Note on single machine common flow allowance group scheduling with learning effect and resource allocation. Comput Ind Eng 186:109750
Qian J, Han HY (2022) The due date assignment scheduling problem with the deteriorating jobs and delivery time. J Appl Math Comput 68:2173–2186
Qian J, Zhan Y (2021) The due date assignment scheduling problem with delivery times and truncated sum-of-processing-times-based learning effect. Mathematics 9(23):3085
Qian J, Chang G, Zhang X (2024) Single-machine common due-window assignment and scheduling with position-dependent weights, delivery time, learning effect and resource allocations. J Appl Math Comput 70(3):1965–1994
Sun L, Ma W (2020) Research into group scheduling problems with deterioration and general learning effect. Oper Res Manag Sci 29(3):125–127 (in Chinese)
Sun X, Geng X-N, Liu F (2021) Flow shop scheduling with general position weighted learning effects to minimise total weighted completion time. J Oper Res Soc 72(12):2674–2689
Sun X, Liu T, Geng X-N, Hu Y, Xu J-X (2023) Optimization of scheduling problems with deterioration effects and an optional maintenance activity. J Sched 26:251–266
Sun X, Geng X-N, Wang J-Y, Pan L (2024) Slack due window assignment scheduling in the single-machine with controllable processing times. J Ind Manag Optim 20(1):15–35
Wang XY, Liu WG (2024) Optimal different due-dates assignment scheduling with group technology and resource allocation. Mathematics 12(3):436
Wang Y-C, Wang J-B (2023) Study on convex resource allocation scheduling with a time-dependent learning effect. Mathematics 11:3179
Wang J-B, Xia Z-Q (2005) Flow shop scheduling with a learning effect. J Oper Res Soc 56:1325–1330
Wang J-B, Lin L, Shan F (2008) Single machine group scheduling problems with deteriorating jobs. Int J Adv Manuf Technol 39:808–812
Wang J-B, Gao W-J, Wang L-Y, Wang D (2009) Single machine group scheduling with general linear deterioration to minimize the makespan. Int J Adv Manuf Technol 43:146–150
Wang J-B, Lv D-Y, Xu J, Ji P, Li F (2021) Bicriterion scheduling with truncated learning effects and convex controllable processing times. Int Trans Oper Res 28(3):1573–1593
Wang S-H, Lv D-Y, Wang J-B (2023a) Research on position-dependent weights scheduling with delivery times and truncated sum-of-processing-times-based learning effect. J Ind Manag Optim 19(4):2824–2837
Wang J-B, Lv D-Y, Wang S-Y, Jiang C (2023b) Resource allocation scheduling with deteriorating jobs and position-dependent workloads. J Ind Manag Optim 19(3):1658–1669
Wang J-B, Bao H, Wan C (2024a) Research on multiple slack due-date assignments scheduling with position-dependent weights. Asia Pac J Oper Res. https://doi.org/10.1142/S0217595923500392
Wang X-Y, Lv D-Y, Ji P, Yin N, Wang J-B, Jin Q (2024b) Single machine scheduling problems with truncated learning effects and exponential past-sequence-dependent delivery times. Comput Appl Math 43(4):194
Wang J-B, Lv D-Y, Wan C (2024c) Proportionate flow shop scheduling with job-dependent due windows and position-dependent weights. Asia Pac J Oper Res. https://doi.org/10.1142/S0217595924500118
Wang J-B, Wang Y-C, Wan C, Lv D-Y, Zhang L (2024d) Controllable processing time scheduling with total weighted completion time objective and deteriorating jobs. Asia Pac J Oper Res 41(3):2350026
Wu C-C, Lee W-C (2008) Single-machine group-scheduling problems with deteriorating setup times and job-processing times. Int J Prod Econ 115:128–133
Wu C-C, Shiau Y-R, Lee W-C (2008) Single-machine group scheduling problems with deterioration consideration. Comput Oper Res 35:1652–1659
Xu H, Li X, Ruiz R, Zhu H (2021) Group scheduling with nonperiodical maintenance and deteriorating effects. IEEE Trans Syst Man Cybern Syst 51(5):2860–2872
Yan Y, Wang D-Z, Wang D-W, Wang H-F (2008) Single-machine group scheduling problems with deterioration and learning effects. In: Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, pp 4933–4936. https://doi.org/10.1109/WCICA.2008.4593725
Yan J-X, Ren N, Bei H-B, Bao H, Wang J-B (2023) Study on resource allocation scheduling problem with learning factors and group technology. J Ind Manag Optim 19(5):3419–3435
Yang S-J, Yang D-L (2010) Single-machine group scheduling problems under the effects of deterioration and learning. Comput Ind Eng 58:754–758
Yin Y, Wu W-H, Cheng TCE, Wu C-C (2015) Single-machine scheduling with time-dependent and position-dependent deteriorating jobs. Int J Comput Integr Manuf 28(7):781–790
Yu Y, Pan Q, Pang X, Tang X (2023) An attribution feature-based memetic algorithm for hybrid flowshop scheduling problem with operation skipping. IEEE Trans Autom Sci Eng. https://doi.org/10.1109/TASE.2023.3346446
Zhang L-H, Lv D-Y, Wang J-B (2023) Two-agent slack due-date assignment scheduling with resource allocations and deteriorating jobs. Mathematics 11(12):2737
Zhang L-H, Geng X-N, Xue J, Wang J-B (2024) Single machine slack due window assignment and deteriorating jobs. J Ind Manag Optim 20:1593–1614
Zhao S (2022) Scheduling jobs with general truncated learning effects including proportional setup times. Comput Appl Math 41(4):146
Funding
This work was supported by the Natural Science Foundation of Liaoning Province, China (2024-MS-175) and the Basic Scientific Research Project of Liaoning Provincial Department of Education (JYTZD2023050).
Author information
Authors and Affiliations
Corresponding authors
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yin, N., Gao, M. Single-machine group scheduling with general linear deterioration and truncated learning effects. Comp. Appl. Math. 43, 386 (2024). https://doi.org/10.1007/s40314-024-02881-6