1 Introduction

Scheduling problems with resource allocation (also called scheduling with controllable processing times) have attracted considerable attention (Shabtay and Steiner [1], Yedidsiona and Shabtay [2], Sun et al. [3], and Kovalev et al. [4]). In 2021, Zhao [5] considered no-wait flow shop scheduling with resource allocation and learning effects. For the slack due-window, Zhao [5] proved that several versions of the scheduling cost (i.e., the weighted sum of earliness-tardiness and due-window assignment costs) plus the resource cost can be minimized in polynomial time. Lu et al. [6] considered a due-date assignment problem with resource allocation and learning effects. Mor et al. [7] addressed single-machine scheduling with resource allocation and proposed heuristic algorithms for some NP-hard problems. Tian [8] studied scheduling with resource allocation and a common/slack due-window and showed that four versions of the scheduling cost (i.e., the weighted sum of earliness-tardiness, the number of early and tardy jobs, and due-window assignment costs) are polynomially solvable. Wang and Wang [9] considered single-machine resource allocation scheduling with a time-dependent learning effect. Zhang et al. [10] and Li et al. [11] studied two-agent single-machine resource allocation scheduling with deteriorating jobs. Wang et al. [12] investigated single-machine scheduling with deteriorating jobs and convex resource allocation, and provided a bicriteria analysis of the total weighted completion time and the resource consumption cost. Qian et al. [13] addressed single-machine due-window assignment scheduling with resource allocation and a learning effect; under delivery times, they proved that some problems are polynomially solvable. Sun et al. [14] studied single-machine resource allocation scheduling with slack due-window assignment. Zhang et al. [15] considered single-machine resource allocation scheduling with exponential time-dependent learning effects.

In addition, some researchers have examined models with group technology (see Potts and Van Wassenhove [16], Webster and Baker [17], Wu and Lee [18], Li et al. [19], Ji et al. [20], Ji et al. [21], and Zhang et al. [22]). In 2019, Huang [23] and Liu et al. [24] considered single-machine group scheduling with deterioration effects. Bajwa et al. [25] studied single-machine group scheduling with sequence-independent setup times; to minimize the number of tardy jobs, they proposed a hybrid heuristic and particle swarm optimization meta-heuristics. Xu et al. [26] examined group scheduling with deteriorating effects and, under nonperiodical maintenance, proposed several heuristic algorithms. Chen et al. [27] addressed single-machine group scheduling with due date assignment; under three due date assignment methods, the goal is to minimize a cost function comprising earliness-tardiness, due date assignment and flow time costs, and they proved that the problem can be solved in polynomial time. He et al. [28] considered flow shop group scheduling with sequence-dependent setup times and proposed heuristic algorithms for makespan minimization. Wang and Ye [29] delved into group scheduling with random learning effects and proved that some problems are polynomially solvable.

In many modern industrial processes, increasing attention has been paid to scheduling problems involving both group technology and resource allocation (Shabtay et al. [30], Zhu et al. [31], Wang et al. [32], and Lv et al. [33]). In 2023, Yan et al. [34] examined a single-machine group scheduling problem with learning effects and resource allocation; for total completion time minimization subject to limited resource availability, they proposed several algorithms. Liu and Wang [35] and He et al. [36] examined single-machine group scheduling with resource allocation and position-dependent weights. Under common and slack due-date assignments, Liu and Wang [35] proved that some special cases can be solved in polynomial time; for a general case of the problem, He et al. [36] proposed heuristic algorithms and a branch-and-bound algorithm. Li et al. [37] considered single-machine group scheduling with convex resource allocation and a learning effect; under the common due date (denoted by \(\widetilde{con}\)) assignment and a non-regular objective, they proposed heuristic and branch-and-bound algorithms. Recently, Chen et al. [38] studied single-machine group scheduling with resource allocation. Under the different due dates (denoted by \(\widetilde{dif}\)) assignment, they proved that a special case of two scheduling problems (i.e., with linear and convex resource consumption functions) can be solved in polynomial time. In light of the significance of group scheduling with resource allocation in real manufacturing environments, this paper continues the study of Chen et al. [38] and considers the general case of their problem. The contributions of this study are as follows: (i) the general group scheduling problem with resource allocation and \(\widetilde{dif}\) assignment is modeled and studied; (ii) to solve the general problem of Chen et al. [38], structural properties are derived, and solution algorithms (a branch-and-bound algorithm and a heuristic algorithm) are proposed; (iii) numerical tests are presented to evaluate the efficiency of the solution algorithms.

The rest of this paper is organized as follows: In Sect. 2, we describe the problem. In Sect. 3, we present some preliminary properties. In Sect. 4, we propose solution algorithms for the general problem. In Sect. 5, we present a computational study of the algorithms. In Sect. 6, we present the conclusions.

2 Problem assumptions

In this paper, the problem formulation can be described as follows: There are n jobs \(J_1,J_2,\ldots ,J_n\) grouped into z groups \(\widehat{G}_1,\widehat{G}_2,\ldots ,\widehat{G}_z\), and these jobs are to be processed on a single machine, where group \(\widehat{G}_h\) contains \(n_h\) jobs, i.e., \(\widehat{G}_h=\{J_{h,1},J_{h,2},\ldots ,J_{h,n_h}\}\) \((h=1,2,\ldots ,z)\), \(\sum _{h=1}^{z}n_h=n\). Let \(s_h\) denote the setup time of \(\widehat{G}_h\) and \(C_{h,j}\) the completion time of \(J_{h,j}\) in \(\widehat{G}_h\). Under the \(\widetilde{dif}\) assignment, the due date of \(J_{h,j}\) is \(d_{h,j}\). As in Shabtay et al. [30] and Chen et al. [38], the actual processing time of \(J_{h,j}\) is

$$\begin{aligned} p_{h,j}^{Act}=\left( \frac{\varpi _{h,j}}{u_{h,j}}\right) ^\eta ,h=1,2,\ldots ,z;j=1,2,\ldots ,n_h, \end{aligned}$$
(1)

where \(\varpi _{h,j}\) is the workload of \(J_{h,j}\), \(\eta >0\) is a constant, and \(u_{h,j}\) is the amount of resource allocated to \(J_{h,j}\). The goal is to find a schedule \(\delta \), due dates and resource allocations that minimize the following cost function:

$$\begin{aligned} F(\delta ,d_{h,j},u_{h,j}|_{h=1}^{z},_{j=1}^{n_h})=\sum _{h=1}^{z}\sum _{j=1}^{n_h}(\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j}),\nonumber \\ \end{aligned}$$
(2)

where \(E_{h,j}=\max \{d_{h,j}-C_{h,j},0\}\) (resp. \(T_{h,j}=\max \{C_{h,j}-d_{h,j},0\}\)) is the earliness (resp. tardiness) of \(J_{h,j}\) (Yang et al. [39], Geng et al. [40], Lv and Wang [41], and Wang et al. [42]), \(\alpha _{h,j}\) (resp. \(\beta _{h,j}\), \(\gamma _{h,j}\)) is the job-dependent unit earliness (resp. tardiness, due date) cost of \(J_{h,j}\), and \(v_{{h,j}}\) is the unit resource consumption cost of \(J_{h,j}\) (i.e., the cost per unit of resource allocated to \(J_{h,j}\)). Using the three-field notation, the problem is denoted by

$$\begin{aligned} 1\left| \widetilde{gt}, \widetilde{dif},p_{h,j}^{Act}=\left( \frac{\varpi _{h,j}}{u_{h,j}}\right) ^\eta \right| \sum _{h=1}^{z}\sum _{j=1}^{n_h}(\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j}),\nonumber \\ \end{aligned}$$
(3)

where 1 denotes a single-machine setting, \(\widetilde{gt}\) represents group technology, the second field (i.e., \(\widetilde{gt}, \widetilde{dif},p_{h,j}^{Act}=\left( \frac{\varpi _{h,j}}{u_{h,j}}\right) ^\eta \)) describes the job characteristics, and the third field \(\sum _{h=1}^{z}\sum _{j=1}^{n_h}(\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j})\) is the optimality criterion.
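
To make the model concrete, the following sketch (not the authors' implementation) evaluates the actual processing times of Eq. (1) and the cost of Eq. (3) for given group and job sequences, due dates and resource amounts; the function names and the nested-list data layout are illustrative assumptions.

```python
# Illustrative sketch: evaluate the cost in Eq. (3) for given sequences,
# resource allocations u and due dates d (all containers indexed [h][j]).

def actual_time(workload, u, eta):
    """p^{Act}_{h,j} = (workload / u)^eta, Eq. (1)."""
    return (workload / u) ** eta

def total_cost(group_seq, job_seqs, s, w, u, d, alpha, beta, gamma, v, eta):
    """group_seq: order of group indices; job_seqs[h]: order of job indices in group h;
    s[h]: setup time of group h; w[h][j]: workload of job (h, j)."""
    t = 0.0      # current time on the single machine
    cost = 0.0
    for h in group_seq:
        t += s[h]                          # setup of group h
        for j in job_seqs[h]:
            t += actual_time(w[h][j], u[h][j], eta)
            C = t                          # completion time C_{h,j}
            E = max(d[h][j] - C, 0.0)      # earliness E_{h,j}
            T = max(C - d[h][j], 0.0)      # tardiness T_{h,j}
            cost += (alpha[h][j] * E + beta[h][j] * T
                     + gamma[h][j] * d[h][j] + v[h][j] * u[h][j])
    return cost
```

Equation (4) in Sect. 3 (taken from Chen et al. [38]) rewrites this cost in a form that depends only on the sequences; the sketch above simply evaluates the raw cost for arbitrary choices of the due dates and resource amounts.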

3 Preliminary properties

Let [r] denote the job (group) scheduled in the rth position of a sequence. From Chen et al. [38], we have

$$\begin{aligned}{} & {} \sum _{h=1}^{z}\sum _{j=1}^{n_h}(\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j})\nonumber \\{} & {} \quad =\sum _{h=1}^z\Psi _{[h]}\left( \sum _{k=1}^hs_{[k]}\right) +(\eta ^\frac{1}{\eta +1}+\eta ^\frac{-\eta }{\eta +1})\nonumber \\{} & {} \quad \sum _{h=1}^z\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}, \end{aligned}$$
(4)

where \(\theta _{[h],[j]}=\left( \varpi _{[h],[j]}v_{[h],[j]}\right) ^\frac{\eta }{\eta +1}\), and \(\Psi _{[h]}\) and \(\psi _{[h],[j]}\) are the group- and job-level quantities given in Chen et al. [38].

From Chen et al. [38] and Eq. (4), the problem \( 1\left| \widetilde{gt}, \widetilde{dif},p_{h,j}^{Act}=\left( \frac{\varpi _{h,j}}{u_{h,j}}\right) ^\eta \right| \sum _{h=1}^{z}\sum _{j=1}^{n_h}\) \((\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j}) \) reduces to the purely combinatorial problem of minimizing Eq. (4).
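
As a sketch of how Eq. (4) can be evaluated for a candidate solution (something the algorithms in Sect. 4 do repeatedly), the following Python function computes its right-hand side for a given group order and given within-group job orders. The quantities \(\Psi _h\) and \(\psi _{h,j}\) are taken from Chen et al. [38] and are passed in as data; the names and data layout are assumptions.

```python
# Sketch of the reduced objective in Eq. (4). Psi[h] and psi[h][j] are the
# group- and job-level quantities of Chen et al. [38] (not re-derived here);
# theta[h][j] = (w[h][j] * v[h][j]) ** (eta / (eta + 1)).

def objective_eq4(group_seq, job_seqs, s, Psi, psi, theta, eta):
    a = 1.0 / (eta + 1.0)
    c = eta ** a + eta ** (-eta * a)          # eta^{1/(eta+1)} + eta^{-eta/(eta+1)}
    # first term: sum_h Psi_[h] * (s_[1] + ... + s_[h])
    setup_part, cum_s = 0.0, 0.0
    for h in group_seq:
        cum_s += s[h]
        setup_part += Psi[h] * cum_s
    # second term: job-level contributions
    job_part = 0.0
    for pos, h in enumerate(group_seq):
        Psi_tail = sum(Psi[k] for k in group_seq[pos + 1:])   # sum_{k=h+1}^z Psi_[k]
        jobs = job_seqs[h]
        for r, j in enumerate(jobs):
            psi_tail = sum(psi[h][q] for q in jobs[r:])       # sum_{xi=j}^{n_[h]} psi
            job_part += theta[h][j] * (Psi_tail + psi_tail) ** a
    return setup_part + c * job_part
```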

Lemma 1

For each \(h = 1, 2,\ldots ,z\), if \(\psi _{h,o}\ge \psi _{h,\chi }\) implies \(\theta _{h,o} \le \theta _{h,\chi }\) for any two jobs \(J_{h,o}\) and \(J_{h,\chi }\) of \(\widehat{G}_h\), then an optimal job sequence in \(\widehat{G}_h\) is obtained by sequencing the jobs in non-decreasing order of \(\theta _{h,j}\) (equivalently, in non-increasing order of \(\psi _{h,j}\)), where \(\theta _{h,j}=\left( \varpi _{h,j}v_{h,j}\right) ^\frac{\eta }{\eta +1}\).

Proof

By Eq. (4), for group \(\widehat{G}_{[h]}\), we only need to minimize

$$\begin{aligned} F_{[h]}=\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}. \end{aligned}$$
(5)

By the adjacent pairwise interchange method, let \(\delta _{[h]}=[\pi _{1},J_{h,o},J_{h,\chi },\pi _{2}]\) and \(\delta _{[h]}'=[\pi _{1},J_{h,\chi },J_{h,o},\pi _{2}]\), where \(\pi _{1}\) and \(\pi _{2}\) are partial sequences, and \(J_{h,o}\) (resp. \(J_{h,\chi }\)) is scheduled in the \(\lambda \)th (resp. \((\lambda +1)\)th) position of \(\delta _{[h]}\). Let X (resp. Y) be the part of \(F_{[h]}\) contributed by \(\pi _{1}\) (resp. \(\pi _{2}\)); then we have

$$\begin{aligned} F_{[h]}(\delta _{[h]})= & {} X+\theta _{[h],o}\left( \psi _{[h],o}+ \psi _{[h],\chi }+\sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =\lambda +2}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}\nonumber \\{} & {} +\theta _{[h],\chi }\left( \psi _{[h],\chi }+\sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =\lambda +2}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}+Y, \end{aligned}$$
(6)

and

$$\begin{aligned} F_{[h]}(\delta _{[h]}')= & {} X+\theta _{[h],\chi }\left( \psi _{[h],\chi }+ \psi _{[h],o}+\sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =\lambda +2}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}\nonumber \\{} & {} +\theta _{[h],o}\left( \psi _{[h],o}+\sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =\lambda +2}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}+Y. \end{aligned}$$
(7)

Assume that \(\psi _{h,o}\ge \psi _{h,\chi }\) and \(\theta _{h,o} \le \theta _{h,\chi }\), and let \(Z=\sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =\lambda +2}^{n_{[h]}} \psi _{[h],[\xi ]}\). From Eqs. (6) and (7), we have

$$\begin{aligned} F_{[h]}(\delta _{[h]})-F_{[h]}(\delta _{[h]}')= & {} \theta _{[h],o}\left( \psi _{[h],o}+ \psi _{[h],\chi }+Z\right) ^\frac{1}{\eta +1}+\theta _{[h],\chi }\left( \psi _{[h],\chi }+Z\right) ^\frac{1}{\eta +1}\nonumber \\{} & {} -\theta _{[h],\chi }\left( \psi _{[h],\chi }+ \psi _{[h],o}+Z\right) ^\frac{1}{\eta +1}-\theta _{[h],o}\left( \psi _{[h],o}+Z\right) ^\frac{1}{\eta +1}\nonumber \\= & {} \theta _{[h],o}\left( \left( \psi _{[h],o}+ \psi _{[h],\chi }+Z\right) ^\frac{1}{\eta +1}-\left( \psi _{[h],o}+Z\right) ^\frac{1}{\eta +1}\right) \nonumber \\{} & {} +\theta _{[h],\chi }\left( \left( \psi _{[h],\chi }+Z\right) ^\frac{1}{\eta +1}-\left( \psi _{[h],\chi }+ \psi _{[h],o}+Z\right) ^\frac{1}{\eta +1}\right) \nonumber \\\le & {} \theta _{[h],\chi }\left( \left( \psi _{[h],o}+ \psi _{[h],\chi }+Z\right) ^\frac{1}{\eta +1}-\left( \psi _{[h],o}+Z\right) ^\frac{1}{\eta +1}\right) \nonumber \\{} & {} +\theta _{[h],\chi }\left( \left( \psi _{[h],\chi }+Z\right) ^\frac{1}{\eta +1}-\left( \psi _{[h],\chi }+ \psi _{[h],o}+Z\right) ^\frac{1}{\eta +1}\right) \nonumber \\= & {} \theta _{[h],\chi }\left[ \left( \psi _{[h],\chi }+Z\right) ^\frac{1}{\eta +1}-\left( \psi _{[h],o}+Z\right) ^\frac{1}{\eta +1}\right] \nonumber \\\le & {} 0. \end{aligned}$$
(8)

Hence, the optimal job sequence in \(\widehat{G}_{[h]}\) is in non-decreasing order of \(\theta _{[h],j}\) (or in non-increasing order of \(\psi _{[h],j}\)). \(\square \)

Corollary 1

For each \(h = 1, 2,\ldots , z\), if \(\psi _{h,\xi }= \psi _{h}\) for \(\xi =1,2,\ldots ,n_h\), the optimal job sequence in \(\widehat{G}_h\) is in non-decreasing order of \(\theta _{h,j}\).

Corollary 2

For each \(h = 1, 2,\ldots , z\), if \(\theta _{h,\xi }= \theta _{h}\) for \(\xi =1,2,\ldots ,n_h\), the optimal job sequence in \(\widehat{G}_h\) is in non-increasing order of \(\psi _{h,j}\).

Similarly, we have

Lemma 2

(He et al. [36]). \(\sum _{h=1}^z\Psi _{[h]}\left( \sum _{k=1}^hs_{[k]}\right) \) is minimized if \(\frac{\Psi _{[1]}}{s_{[1]}}\ge \frac{\Psi _{[2]}}{s_{[2]}}\ge \ldots \ge \frac{\Psi _{[z]}}{s_{[z]}}\).
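
A minimal sketch of the two ordering rules follows: Lemma 1 sorts the jobs of a group by non-decreasing \(\theta _{h,j}\) (valid under the agreeability condition of the lemma), and Lemma 2 sorts the groups by non-increasing \(\Psi _h/s_h\). All names are illustrative.

```python
# Ordering rules from Lemmas 1 and 2 (a sketch; the job rule is optimal only
# when psi and theta are "agreeable" as required by Lemma 1).

def order_jobs_in_group(job_ids, theta_h):
    """Non-decreasing theta_{h,j} (equivalently non-increasing psi_{h,j})."""
    return sorted(job_ids, key=lambda j: theta_h[j])

def order_groups(group_ids, Psi, s):
    """Non-increasing Psi_h / s_h (Lemma 2)."""
    return sorted(group_ids, key=lambda h: Psi[h] / s[h], reverse=True)
```

For instance, `order_groups(range(z), Psi, s)` returns the group order required by Lemma 2.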

4 Solution algorithms for the general case

For the special case in which \(n_h=\bar{n}\) and \(\psi _{h,\xi }= \bar{\psi }\), Chen et al. [38] proved that

$$\begin{aligned}{} & {} 1\left| \widetilde{gt}, \widetilde{dif},p_{h,j}^{Act}=\left( \frac{\varpi _{h,j}}{u_{h,j}}\right) ^\eta ,n_h=\bar{n},\psi _{h,\xi }= \bar{\psi }\right| \\{} & {} \quad \sum _{h=1}^{z}\sum _{j=1}^{n_h}(\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j}) \end{aligned}$$

can be solved in \(O(n^3)\) time. Below, we propose algorithms to solve the general case of

$$\begin{aligned} 1\left| \widetilde{gt}, \widetilde{dif},p_{h,j}^{Act}=\left( \frac{\varpi _{h,j}}{u_{h,j}}\right) ^\eta \right| \sum _{h=1}^{z}\sum _{j=1}^{n_h}(\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j}). \end{aligned}$$

4.1 Solution of job sequence within each group

In this subsection, we determine the optimal job sequence \(\delta _h\) within group \(\widehat{G}_h\). For group \(\widehat{G}_h\), from Eq. (5) and the proof of Lemma 1, we only need to minimize

$$\begin{aligned} F_{h}=\sum _{j=1}^{n_{h}}\theta _{h,j}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,\xi }\right) ^\frac{1}{\eta +1}. \end{aligned}$$
(9)

Let \(\delta _h =(\delta _h^{sp},\delta _h^{up} )\) be a sequence of jobs within group \(\widehat{G}_h\), where \(\delta _h^{sp}\) (resp. \(\delta _h^{up}\)) is the scheduled (resp. unscheduled) part, and suppose that there are g jobs in \(\delta _h^{sp}\). Then we have

$$\begin{aligned} F_{h}(\delta _h^{sp},\delta _h^{up} )=\sum _{j=1}^{g}\theta _{h,[j]}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,[\xi ]}\right) ^\frac{1}{\eta +1}+\sum _{j=g+1}^{n_{h}}\theta _{h,[j]}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,[\xi ]}\right) ^\frac{1}{\eta +1}. \end{aligned}$$
(10)

Observe that \(\sum _{j=1}^{g}\theta _{h,[j]}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,[\xi ]}\right) ^\frac{1}{\eta +1}\) in Eq. (10) is known and a lower bound for \(F_{h}(\delta _h^{sp},\delta _h^{up} )\) is obtained by minimizing \(\sum _{j=g+1}^{n_{h}}\theta _{h,[j]}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,[\xi ]}\right) ^\frac{1}{\eta +1}\). From Lemma 1, we obtain the first lower bound (\(\underline{\underline{LB}}\))

$$\begin{aligned} LB_1(F_{h})=\sum _{j=1}^{g}\theta _{h,[j]}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,[\xi ]}\right) ^\frac{1}{\eta +1}+\sum _{j=g+1}^{n_{h}}\theta _{h,(j)}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,<\xi >}\right) ^\frac{1}{\eta +1}, \end{aligned}$$
(11)

where \(\psi _{h,<g+1>}\ge \psi _{h,<g+2>}\ge \ldots \ge \psi _{h,<n_h>},\) \(\theta _{h,(g+1)}\le \theta _{h,(g+2)}\le \ldots \le \theta _{h,(n_h)}\) (note that \(\psi _{h,<j>}\) and \(\theta _{h,(j)}\) (\(j=g+1,g+2,\ldots ,n_h\)) do not necessarily correspond to the same job).

Similarly, let \(\psi _{h,\min } =\min \{\psi _{h,j}|j\in \delta _h^{up}\},\) we obtain the second \(\underline{\underline{LB}}\)

$$\begin{aligned} LB_2(F_{h})=\sum _{j=1}^{g}\theta _{h,[j]}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,[\xi ]}\right) ^\frac{1}{\eta +1}+\sum _{j=g+1}^{n_{h}}\theta _{h,(j)}\left[ ({n_{h}}-j+1) \psi _{h,\min }\right] ^\frac{1}{\eta +1},\nonumber \\ \end{aligned}$$
(12)

where \(\theta _{h,(g+1)}\le \theta _{h,(g+2)}\le \ldots \le \theta _{h,(n_h)}\).

Let \(\theta _{h,\min } =\min \{\theta _{h,j}|j\in \delta _h^{up}\},\) we obtain the third \(\underline{\underline{LB}}\)

$$\begin{aligned} LB_3(F_{h})=\sum _{j=1}^{g}\theta _{h,[j]}\left( \sum _{\xi =j}^{n_{h}} \psi _{h,[\xi ]}\right) ^\frac{1}{\eta +1}+\sum _{j=g+1}^{n_{h}}\theta _{h,\min }\left( \sum _{\xi =j}^{n_{h}} \psi _{h,<\xi >}\right) ^\frac{1}{\eta +1},\nonumber \\ \end{aligned}$$
(13)

where \(\psi _{h,<g+1>}\ge \psi _{h,<g+2>}\ge \ldots \ge \psi _{h,<n_h>}\).

To make the lower bound tighter, the maximum of expressions (11), (12) and (13) is chosen as the \(\underline{\underline{LB}}\) for \( F_{h}(\delta _h^{sp},\delta _h^{up} )\), i.e.,

$$\begin{aligned} \underline{\underline{LB}}(F_{h})=\max \{LB_1(F_{h}),LB_2(F_{h}), LB_3(F_{h})\}. \end{aligned}$$
(14)
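
The lower bound (14) can be computed directly from a partial sequence; the sketch below (illustrative names, assumed data layout) evaluates \(LB_1\), \(LB_2\) and \(LB_3\) of Eqs. (11)-(13) and returns their maximum.

```python
# Sketch of the lower bound (14) for a partial sequence within group G_h.
# `scheduled`: ordered list of already sequenced job indices; `unscheduled`:
# remaining job indices; psi_h, theta_h map job index -> psi_{h,j}, theta_{h,j}.

def lower_bound_eq14(scheduled, unscheduled, psi_h, theta_h, eta):
    a = 1.0 / (eta + 1.0)
    # common first term of (11)-(13): contribution of the scheduled prefix;
    # each scheduled job "sees" the psi of every job placed after it
    fixed = 0.0
    tail = sum(psi_h[j] for j in scheduled) + sum(psi_h[j] for j in unscheduled)
    for j in scheduled:
        fixed += theta_h[j] * tail ** a
        tail -= psi_h[j]
    if not unscheduled:
        return fixed
    theta_sorted = sorted(theta_h[j] for j in unscheduled)               # non-decreasing
    psi_sorted = sorted((psi_h[j] for j in unscheduled), reverse=True)   # non-increasing
    psi_min, theta_min = psi_sorted[-1], theta_sorted[0]
    lb1 = lb2 = lb3 = fixed
    m = len(psi_sorted)
    for r in range(m):                       # r = 0 corresponds to position g+1
        psi_tail = sum(psi_sorted[r:])       # sorted psi from this position onward
        lb1 += theta_sorted[r] * psi_tail ** a                 # Eq. (11)
        lb2 += theta_sorted[r] * ((m - r) * psi_min) ** a      # Eq. (12)
        lb3 += theta_min * psi_tail ** a                       # Eq. (13)
    return max(lb1, lb2, lb3)                                  # Eq. (14)
```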

From the above analysis and Framinan and Leisten [43], the following upper bound (\(\underline{\underline{UB}}\)) algorithm is proposed to generate a sequence \(\delta _h\) within \(\widehat{G}_h\):

Algorithm 1

(\(\underline{\underline{UB}}\) for sequence \(\delta _h\) within \(\widehat{G}_h\))

Phase 1

  1. Step 1

    Sequence jobs in non-decreasing order of \(\theta _{h,j}\).

  2. Step 2

    Sequence jobs in non-increasing order of \(\psi _{h,j}\).

  3. Step 3

    Sequence jobs in non-decreasing order of \(\frac{\theta _{h,j}}{\psi _{h,j}}\).

  4. Step 4

    Choose the best solution among Steps 1, 2 and 3.

Phase 2

Step i Let \(\delta _h^0\) be the job sequence obtained from Phase 1.

Step ii Set \(q=2\). Select the first two jobs from the sorted list and select the better of the two possible sequences.

Step iii Increment q, \(q=q+1\). Select the qth job from the sorted list and insert it into the q possible positions of the best partial sequence obtained so far. Among the resulting q sequences, select the best q-job partial sequence based on minimum \(F_h\) (see Eq. (9)). Next, generate all sequences obtained by interchanging the jobs in positions x and y of this partial sequence for all \(1\le x < y\le q\), and select the best of the resulting \(\frac{q(q-1)}{2}\) sequences with respect to \(F_h\) (see Eq. (9)).

Step iv If \(q=n_h\), then STOP; otherwise, go to Step iii.
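
A compact sketch of Algorithm 1 is given below, assuming the job data of group \(\widehat{G}_h\) are provided as mappings \(\psi _{h,j}\) and \(\theta _{h,j}\); Phase 2 follows the insertion-plus-interchange scheme described above, with \(F_h\) evaluated via Eq. (9). All names are illustrative.

```python
# Sketch of Algorithm 1 (upper bound for the job sequence within group G_h).
import itertools

def f_h(seq, psi_h, theta_h, eta):
    """Eq. (9) evaluated on a job sequence `seq` of group G_h."""
    a = 1.0 / (eta + 1.0)
    return sum(theta_h[j] * sum(psi_h[q] for q in seq[r:]) ** a
               for r, j in enumerate(seq))

def algorithm1(jobs, psi_h, theta_h, eta):
    # Phase 1: three dispatching rules, keep the best with respect to Eq. (9)
    cands = [sorted(jobs, key=lambda j: theta_h[j]),
             sorted(jobs, key=lambda j: psi_h[j], reverse=True),
             sorted(jobs, key=lambda j: theta_h[j] / psi_h[j])]
    order = min(cands, key=lambda seq: f_h(seq, psi_h, theta_h, eta))
    # Phase 2: NEH-style insertion followed by pairwise interchange
    best = order[:2]
    if f_h(best[::-1], psi_h, theta_h, eta) < f_h(best, psi_h, theta_h, eta):
        best = best[::-1]
    for q in range(2, len(order)):
        job = order[q]
        # insert the next job into every position of the current partial sequence
        ins = [best[:pos] + [job] + best[pos:] for pos in range(q + 1)]
        best = min(ins, key=lambda seq: f_h(seq, psi_h, theta_h, eta))
        # pairwise interchange of positions x < y
        swaps = []
        for x, y in itertools.combinations(range(q + 1), 2):
            t = best[:]
            t[x], t[y] = t[y], t[x]
            swaps.append(t)
        best = min([best] + swaps, key=lambda seq: f_h(seq, psi_h, theta_h, eta))
    return best
```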

From \(\underline{\underline{LB}}\) (14) and \(\underline{\underline{UB}}\) (Algorithm 1), the following branch-and-bound (BB) algorithm is proposed to obtain the sequence \(\delta _h\) within \(\widehat{G}_h\):

Algorithm 2

(BB for sequence \(\delta _h\) within \(\widehat{G}_h\), denoted by \(BB_{\widehat{G}_h}\))

Step 1 (Find \(\underline{\underline{UB}}\)) Use Phase 1 of Algorithm 1 to obtain an initial solution for the sub-problem of determining the optimal job sequence \(\delta _h\).

Step 2 The bounding and termination are the same as He et al. [36] (\(\underline{\underline{LB}}\) is Eq. (14) and objective cost is Eq. (9)).
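
The branching scheme of Algorithm 2 can be sketched as a depth-first search over partial job sequences that prunes a node whenever its lower bound is not below the incumbent value. The sketch below is generic: `objective` evaluates Eq. (9) on a complete sequence and `bound` returns a lower bound such as Eq. (14) (for instance, the earlier `f_h` and `lower_bound_eq14` sketches bound to the group data with `functools.partial`); the exact bounding and termination rules of He et al. [36] are not reproduced here.

```python
# Sketch of the branch-and-bound scheme underlying Algorithm 2 for one group G_h.

def branch_and_bound(jobs, objective, bound, initial_seq):
    best_seq = list(initial_seq)             # incumbent from Algorithm 1 (upper bound)
    best_val = objective(best_seq)

    def dfs(scheduled, unscheduled):
        nonlocal best_seq, best_val
        if not unscheduled:                  # complete sequence: update incumbent
            val = objective(scheduled)
            if val < best_val:
                best_val, best_seq = val, scheduled[:]
            return
        if bound(scheduled, unscheduled) >= best_val:
            return                           # prune: node cannot improve incumbent
        for j in list(unscheduled):
            unscheduled.remove(j)
            dfs(scheduled + [j], unscheduled)
            unscheduled.add(j)

    dfs([], set(jobs))
    return best_seq, best_val
```

The group-level procedure \(BB_{\varrho }\) (Algorithm 4 below) has the same structure, with Eq. (4) as the objective and the bound (16).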

4.2 Solution of group sequence

From Subsection 4.1, we assume that the optimal job sequence within each group is given. Let \(\varrho =(\varrho ^{sp},\varrho ^{up} )\) be a sequence of groups, where \(\varrho ^{sp}\) (resp. \(\varrho ^{up}\)) is the scheduled (resp. unscheduled) part, and suppose that there are \(\varsigma \) groups in \(\varrho ^{sp}\). From Eq. (4), we obtain

$$\begin{aligned}{} & {} F(\varrho ^{sp},\varrho ^{up} ) =\sum _{h=1}^{\varsigma }\Psi _{[h]}\left( \sum _{k=1}^hs_{[k]}\right) +\sum _{h=\varsigma +1}^z\Psi _{[h]}\left( \sum _{k=1}^{\varsigma }s_{[k]}+\sum _{k=\varsigma +1}^hs_{[k]}\right) \nonumber \\{} & {} \quad +(\eta ^\frac{1}{\eta +1}+\eta ^\frac{-\eta }{\eta +1})\times \sum _{h=1}^{\varsigma }\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}\nonumber \\{} & {} \quad +(\eta ^\frac{1}{\eta +1}+\eta ^\frac{-\eta }{\eta +1})\times \sum _{h=\varsigma +1}^z\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}. \end{aligned}$$
(15)

From Eq. (15), the terms \(\sum _{k=1}^{\varsigma }s_{[k]}\), \(\sum _{h=1}^{\varsigma }\Psi _{[h]}\left( \sum _{k=1}^hs_{[k]}\right) \) and \(\sum _{h=1}^{\varsigma }\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}\) are constants. The term \(\sum _{h=\varsigma +1}^z\Psi _{[h]}\left( \sum _{k=1}^{\varsigma }s_{[k]}+\sum _{k=\varsigma +1}^hs_{[k]}\right) \) can be minimized by Lemma 2, and

\(\sum _{h=\varsigma +1}^z\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}\ge \sum _{h=\varsigma +1}^z\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}.\) Hence, we have the following lower bound:

$$\begin{aligned} \underline{\underline{LB}}= & {} \sum _{h=1}^{\varsigma }\Psi _{[h]}\left( \sum _{k=1}^hs_{[k]}\right) +\sum _{h=\varsigma +1}^z\Psi _{<h>}\left( \sum _{k=1}^{\varsigma }s_{[k]}+\sum _{k=\varsigma +1}^hs_{<k>}\right) \nonumber \\{} & {} +(\eta ^\frac{1}{\eta +1}+\eta ^\frac{-\eta }{\eta +1})\times \sum _{h=1}^{\varsigma }\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{k=h+1}^z\Psi _{[k]}+ \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}\nonumber \\{} & {} +(\eta ^\frac{1}{\eta +1}+\eta ^\frac{-\eta }{\eta +1})\times \sum _{h=\varsigma +1}^z\sum _{j=1}^{n_{[h]}}\theta _{[h],[j]}\left( \sum _{\xi =j}^{n_{[h]}} \psi _{[h],[\xi ]}\right) ^\frac{1}{\eta +1}, \end{aligned}$$
(16)

where \(\frac{\Psi _{<\varsigma +1>}}{s_{<\varsigma +1>}}\ge \frac{\Psi _{<\varsigma +2>}}{s_{<\varsigma +2>}}\ge \ldots \ge \frac{\Psi _{<z>}}{s_{<z>}}\).
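
A sketch of how the lower bound (16) can be evaluated for a partial group sequence is given below, assuming the within-group job sequences from Subsection 4.1 are fixed; names and data layout are illustrative.

```python
# Sketch of the group-level lower bound (16). `scheduled` is the ordered list
# of scheduled group indices, `unscheduled` the remaining groups; job_seqs[h]
# is the (fixed) job order of group h from Subsection 4.1.

def group_lower_bound(scheduled, unscheduled, Psi, s, psi, theta, job_seqs, eta):
    a = 1.0 / (eta + 1.0)
    c = eta ** a + eta ** (-eta * a)
    # term 1: scheduled groups, exact setup contribution
    lb, cum_s = 0.0, 0.0
    for h in scheduled:
        cum_s += s[h]
        lb += Psi[h] * cum_s
    # term 2: unscheduled groups ordered by non-increasing Psi/s (Lemma 2)
    t = cum_s
    for h in sorted(unscheduled, key=lambda g: Psi[g] / s[g], reverse=True):
        t += s[h]
        lb += Psi[h] * t
    # term 3: scheduled groups, exact job-level contribution (as in Eq. (15))
    for pos, h in enumerate(scheduled):
        Psi_tail = (sum(Psi[k] for k in scheduled[pos + 1:])
                    + sum(Psi[k] for k in unscheduled))
        jobs = job_seqs[h]
        for r, j in enumerate(jobs):
            lb += c * theta[h][j] * (Psi_tail + sum(psi[h][q] for q in jobs[r:])) ** a
    # term 4: unscheduled groups, with the Psi tail dropped inside the power
    for h in unscheduled:
        jobs = job_seqs[h]
        for r, j in enumerate(jobs):
            lb += c * theta[h][j] * sum(psi[h][q] for q in jobs[r:]) ** a
    return lb
```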

Similarly, the following \(\underline{\underline{UB}}\) algorithm is proposed for the group sequence \(\varrho \):

Algorithm 3

(\(\underline{\underline{UB}}\) for group sequence \(\varrho \))

Phase 1

  1. Step 1

    Sequence groups in non-decreasing order of \(s_{h}\).

  2. Step 2

    Sequence groups in non-increasing order of \(\frac{\Psi _{h}}{s_{h}}\).

  3. Step 3

    Sequence groups in non-increasing order of \(\Psi _{h}\).

  4. Step 4

    Choose the best solution among Steps 1, 2 and 3.

Phase 2

Step i Let \(\varrho ^0\) be the group sequence obtained from Phase 1.

Step ii Set \(l=2\). Select the first two groups from the sorted list and select the better of the two possible sequences.

Step iii Increment l, \(l=l+1\). Select the lth group from the sorted list and insert it into the l possible positions of the best partial sequence obtained so far. Among the resulting l sequences, select the best l-group partial sequence based on minimum F (see Eq. (4)). Next, generate all sequences obtained by interchanging the groups in positions x and y of this partial sequence for all \(1\le x < y\le l\), and select the best of the resulting \(\frac{l(l-1)}{2}\) sequences with respect to F (see Eq. (4)).

Step iv If \(l=z\), then STOP; otherwise, go to Step iii.

From \(\underline{\underline{LB}}\) (16) and \(\underline{\underline{UB}}\) (Algorithm 3), the following BB algorithm is proposed to obtain the optimal group sequence \(\varrho \):

Algorithm 4

(BB for group sequence \(\varrho \), denoted by \(BB_{\varrho }\))

Step 1. (Find \(\underline{\underline{UB}}\)) Use Phase 1 of Algorithm 3 to obtain an initial solution for the sub-problem of determining the optimal group sequence \(\varrho \).

Step 2. The bounding and termination are the same as He et al. [36] (\(\underline{\underline{LB}}\) is Eq. (16) and objective cost is Eq. (4)).

4.3 Algorithms

Based on Subsections 4.1 and 4.2 and Li et al. [44], the general problem

$$\begin{aligned} 1\left| \widetilde{gt}, \widetilde{dif},p_{h,j}^{Act}=\left( \frac{\varpi _{h,j}}{u_{h,j}}\right) ^\eta \right| \sum _{h=1}^{z}\sum _{j=1}^{n_h}(\alpha _{h,j}E_{h,j}+\beta _{h,j}T_{h,j} +\gamma _{h,j}d_{h,j}+v_{h,j}u_{h,j}) \end{aligned}$$

is solved optimally by:

Algorithm 5

(Exact algorithm based on BB)

Step 1 For each group \(\widehat{G}_h\), calculate the optimal job sequence by using Algorithm 2, \(h=1,2,\ldots ,z\).

Step 2 Calculate the optimal group sequence by using Algorithm 4.

Since Algorithm 5 is based on BB, we also propose the following heuristic algorithm:

Algorithm 6

(Heuristic algorithm)

Step 1 For each group \(\widehat{G}_h\), calculate the local optimal job sequence by using Algorithm 1, \(h=1,2,\ldots ,z\).

Step 2 Calculate the local optimal group sequence by using Algorithm 3.

5 Numerical study

The heuristic (i.e., Algorithm 6) and the exact algorithm (i.e., the BB-based Algorithm 5) were programmed in C++ and run on an Intel Core i5-8250U 1.4 GHz PC with 8.00 GB RAM, with \(n=50, 60, 70, 80\), \(z=8, 9, 10, 11, 12\), and \(n_h\ge 1\). The parameter settings are as follows:

  1. (1)

    \(s_h\), \(\alpha _{h,j},\beta _{h,j}\), \(\gamma _{h,j}\) and \(v_{h,j}\) were drawn from a discrete uniform distribution in [1, 49];

  2. (2)

    \(\varpi _{h,j}\) were drawn from a discrete uniform distribution in [1, 49], [50, 99], and [1, 99], i.e., \(\varpi _{h,j}\in [1,49]\), \(\varpi _{h,j}\in [50,99]\), and \(\varpi _{h,j}\in [1,99]\);

  3. (3)

    \(\eta =1,1.5,2,2.5\).

Table 1 Results for \(\varpi _{h,j}\in [1,49]\) (CPU times in ms)
Table 2 Results for \(\varpi _{h,j}\in [50,99]\) (CPU times in ms)
Table 3 Results for \(\varpi _{h,j}\in [1,99]\) (CPU times in ms)
Table 4 Calculated t-values for the hypothesis tests

For simulation accuracy, 20 random instances were generated for each parameter combination, giving a total of \(4\times 5\times 3\times 4\times 20=4800\) instances. The error of Algorithm 6 is measured by the ratio

$$\begin{aligned} \frac{F(H)}{F^*}, \end{aligned}$$
(17)

where F(H) (resp. \(F^*\)) is the objective value (see Eq. (4)) generated by Algorithm 6 (resp. Algorithm 5).

The running times (in milliseconds, ms) of Algorithms 5 and 6 were also recorded. The minimum, maximum and average CPU times clearly show that Algorithm 6 is far more efficient than Algorithm 5. From Tables 1, 2 and 3, the maximum error ratio of Algorithm 6 is less than 1.0505 for \(n\times z \le 80\times 12\), and the results for \(\varpi _{h,j}\in [1,49]\) are more accurate than those for \(\varpi _{h,j}\in [50,99]\) and \(\varpi _{h,j}\in [1,99]\).

Since the results in Tables 1-3 suggest that Algorithm 6 is more accurate for \(\varpi _{h,j}\in [1,49]\) than for \(\varpi _{h,j}\in [50,99]\) and \(\varpi _{h,j}\in [1,99]\), statistical hypothesis tests are carried out in Table 4 to compare the effectiveness of Algorithm 6 for \(Case1:\varpi _{h,j}\in [1,49]\) and \(Case2:\varpi _{h,j}\in [50,99]\). For illustration, the instances with \(\eta =1,1.5,2,2.5\), \(n=50, 60, 70, 80\) and \(z=12\) are considered. The two-sample t-test statistic is \(t=\frac{\overline{X_{Case2}}-\overline{X_{Case1}}}{S_w\sqrt{1/m_{Case2}+1/m_{Case1}}}\), where \(S_w^2=\frac{(m_{Case2}-1)S_{Case2}^2+(m_{Case1}-1)S_{Case1}^2}{m_{Case2}+m_{Case1}-2}\) and \(\overline{X}\) denotes the mean error ratio. The hypotheses are \(H_0: \mu _{Case2}>\mu _{Case1}\) and \(H_1: \mu _{Case2}\le \mu _{Case1}\), with a type I error of \(1\%\) and \(t_{critical}=2.5\). The results in Table 4 show that \(H_0: \mu _{Case2}>\mu _{Case1}\) cannot be rejected at the \(1\%\) significance level.
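
For reference, a small sketch of the pooled two-sample t statistic used above is given below (illustrative names; the per-instance error ratios of the two cases are assumed to be collected in two lists).

```python
# Sketch of the pooled two-sample t statistic reported in Table 4.
from statistics import mean, variance

def pooled_t(errors_case2, errors_case1):
    m2, m1 = len(errors_case2), len(errors_case1)
    s2, s1 = variance(errors_case2), variance(errors_case1)          # sample variances S^2
    sw = (((m2 - 1) * s2 + (m1 - 1) * s1) / (m2 + m1 - 2)) ** 0.5    # pooled S_w
    return (mean(errors_case2) - mean(errors_case1)) / (sw * (1.0 / m2 + 1.0 / m1) ** 0.5)
```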

6 Conclusions

This paper studied single-machine group scheduling with resource allocation under the \(\widetilde{dif}\) due date assignment, where the goal is to minimize the weighted sum of the earliness-tardiness cost, the due date assignment cost and the resource consumption cost. For the general problem, heuristic and BB algorithms were proposed. The experimental results showed that the BB algorithm obtains an optimal solution for instances with \(n\times z\le 80\times 12\) in a reasonable time (the maximum CPU time is 17646047 ms), and that the error of the heuristic algorithm remains within a reasonable range (the maximum error ratio is 1.047). Further research could extend this model to the flow shop setting (see Wang and Wang [45], Liu et al. [46], Sun et al. [47], and Lv and Wang [48]), study \(\widetilde{gt}\) scheduling with non-regular objective functions (e.g., due-window assignment; Lin [49], Mao et al. [50], Lv et al. [51], and Zhang et al. [52]), or consider other \(\widetilde{gt}\) scheduling problems with deteriorating jobs (see Gawiejnowicz [53], Lv et al. [54], Mao et al. [55], and Ma et al. [56]).