1 Introduction

This work considers the following combinatorial optimization problem. A finite set of jobs must be processed without preemption. Each job requires two resources: (1) a common resource, shared by all jobs, is required during a certain part of the job’s processing period, while (2) a secondary resource, which is shared with only a subset of the other jobs, is required during the job’s whole processing period. This is the case, for example, in the production of certain products where some raw material is put into specific fixtures or molds (the secondary resources), which are then sequentially processed on a single machine (the common resource). Finally, some further postprocessing (e.g., cooling) might be required before the fixtures/molds become available for further usage again. In order to perform this process as efficiently as possible, the aim is to minimize the makespan, i.e., the total time required to finish the processing of all jobs. In the following we refer to this problem as Job Sequencing with One Common and Multiple Secondary Resources (JSOCMSR).

The technical definition of the problem, which is provided later on, was inspired by a more specific application scenario: the scheduling of patients in radiotherapy [2, 7] and particle therapy [8] for cancer treatment. In modern particle therapy, carbon or proton particles are accelerated in cyclotrons or synchrotrons to almost the speed of light and from there directed into a treatment room where a patient is irradiated. A number of differently equipped treatment rooms are available (typically two to four) and the particle beam can only be directed into one of these rooms at a time. For each patient it is known in advance in which room he or she has to be treated, depending on his or her specific needs. Moreover, each patient requires a certain preparation (such as positioning, fixation, possibly sedation) in the room before the actual irradiation can start. Upon finishing the irradiation of a patient, some further time is usually needed for medical inspections before the patient can leave the room and the treatment of the next patient can start. Note that the available rooms correspond to the secondary resources mentioned above, while the particle beam is the common resource. We consider the scheduling of a set of patients at such a facility for, e.g., one day.

For further information on particle therapy patient scheduling, in which JSOCMSR appears as sub-problem, the interested reader is referred to [8]. The whole practical scenario has to consider a time horizon of several weeks, additional resources, their availability time windows, and a combination of more advanced objectives.

The JSOCMSR is rather easy to solve when (1) only the common resource usage is the bottleneck and enough secondary resources are available or (2) the pre- and postprocessing times in which only the secondary resources are required are negligible in comparison to the jobs’ total processing times. In such cases the jobs can essentially be performed in an almost arbitrary order. The problem, however, becomes challenging when pre- and postprocessing times are substantial and many jobs require the same secondary resources. In this work we consider such difficult scenarios.

1.1 Contribution of This Work

In addition to formally proving that the JSOCMSR is NP-hard, we provide a lower bound on the makespan objective, which is then exploited both in the context of a constructive heuristic and a novel A* algorithm. The latter works on a special graph structure that allows symmetries to be exploited efficiently and features a diving mechanism to also obtain heuristic solutions at regular intervals. In addition, we present a mixed-integer linear programming (MILP) model for the JSOCMSR. Our experiments show that the A* algorithm performs excellently. Even many large problem instances with up to 1000 jobs can be solved to proven optimality. There are, however, also difficult problem instances for which A* terminates early due to excessive memory requirements. In these cases, heuristic solutions together with lower bounds and typically small optimality gaps are returned. In comparison, solving the MILP model with the general purpose MILP solver CPLEX cannot compete with A*, as only solutions to rather small problem instances can be obtained in reasonable time.

2 Related Work

In the literature there are only a few publications dealing with scenarios similar to the JSOCMSR. Veen et al. [10] studied a related problem in which the common resource corresponds to a machine on which the jobs are processed, and secondary resources needed in a pre- and postprocessing are called templates. An important restriction in their problem is that the postprocessing times are assumed to be negligible compared to the total processing times of the jobs. This implies that the starting time of each job only depends on its immediate predecessor. More specifically, a job j requiring a different resource than its predecessor \(j'\) can always be started after a setup time only depending on job j, while a job requiring the same resource can always be started after a postprocessing time only depending on job \(j'\). Due to these characteristics, this problem can be interpreted as a traveling salesman problem (TSP) with a special cost structure. It is shown that this problem can be solved efficiently in time \(O(n\log n)\).

Somewhat related is the no-wait flowshop problem; see [1] for a survey on this problem and related ones. Here, each job needs to be processed on each of m machines in the same order and the processing of the job on a successive machine always has to take place immediately after its processing has finished on the preceding machine. This problem can be solved in time \(O(n\log n)\) for two machines via a transformation to a specially structured TSP [4]. In contrast, for three and more machines the problem is NP-hard, although it can still be transformed into a specially structured TSP. Röck [9] proved that the problem is strongly NP-hard for three machines by a reduction from the 3D-matching problem.

The JSOCMSR can also be modeled as a special case of the more general Resource-Constrained Project Scheduling Problem (RCPSP) with maximal time lags. We obtain a corresponding RCPSP instance from a JSOCMSR instance by splitting each job into three activities: the preprocessing, the main part also requiring the common resource, and the postprocessing. These activities must be performed for each job in this order with maximal time lags of zero, and all resource requirements must be respected. For a survey on RCPSPs with various extensions and respective solution methods see Hartmann and Briskorn [6]. For practically solving the JSOCMSR, however, such a mapping does not seem to be effective due to the increased number of required activities and because the specific structure of the problem is not exploited.

3 Problem Definition and Complexity

An instance of JSOCMSR consists of a set of n jobs \(J=\{1,\ldots ,n\}\), the common resource 0, and a set of m secondary resources \(R=\{1,\ldots ,m\}\). By \(R_0=\{0\}\cup R\) we denote the set of all resources. Each job \(j\in J\) has a total processing time \(p_j>0\) during which it fully requires a secondary resource \(q_j\in R\). Furthermore, each job j requires the common resource 0 from a time \(p^{\mathrm {pre}}_j\ge 0\) on, counted from the job’s start, for a duration \(p^0_j\) with \(0<p^0_j\le p_j-p^{\mathrm {pre}}_j\). A solution to the problem is described by the jobs’ starting times \(s=(s_j)_{j\in J}\) with \(s_j\ge 0\). Such a solution s is feasible if no two jobs require a resource at the same time.

The objective is to find a feasible schedule that minimizes the finishing time of the job that finishes latest. This optimization criterion is known as the makespan, and it can be calculated for a solution s by

$$\begin{aligned} \mathrm {MS}(s) = \max \,\{s_j+p_j \mid j \in J\}. \end{aligned}$$
(1)

As each job requires the common resource 0, and only one job can use this resource at a time, a solution implies a total ordering of the jobs. Vice versa, any ordering—i.e., permutation—\(\pi =(\pi _i)_{i=1,\ldots ,n}\), of the jobs in J can be decoded into a feasible solution in a straightforward greedy way by scheduling each job in the given order at the earliest feasible time. We call a schedule in which, for a certain job permutation \(\pi \), each job is scheduled at its earliest time, a normalized schedule. Obviously, any optimal solution is either a normalized schedule or there exists a corresponding normalized schedule with the same objective value. We therefore also use the notation \(\mathrm {MS}(\pi )\) for the makespan of the normalized solution induced by the job permutation \(\pi \).
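As an illustration, this greedy decoding of a permutation into a normalized schedule can be sketched as follows; the encoding of each job as a tuple \((p_j, p^{\mathrm {pre}}_j, p^0_j, q_j)\) is our own choice for this sketch and not part of the problem definition.

```python
def decode(perm, jobs):
    """Greedily decode a job permutation into a normalized schedule.

    perm: job permutation; jobs: dict j -> (p, p_pre, p0, q).
    Returns (makespan, dict of starting times s_j).
    """
    t0 = 0          # earliest time the common resource 0 is free
    t = {}          # earliest free time per secondary resource
    s = {}
    for j in perm:
        p, ppre, p0, q = jobs[j]
        # job j occupies resource q during [s_j, s_j + p) and
        # resource 0 during [s_j + ppre, s_j + ppre + p0)
        s[j] = max(t.get(q, 0), t0 - ppre)
        t0 = s[j] + ppre + p0
        t[q] = s[j] + p
    ms = max(s[j] + jobs[j][0] for j in perm)
    return ms, s
```

For two jobs on different secondary resources the second job can overlap the first, while two jobs on the same resource are forced into sequence.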

For convenience we further define the duration of the postprocessing time by \(p^{\mathrm {post}}_j = p_j- p^{\mathrm {pre}}_j - p^0_j,\ \forall j\in J\) and denote by \(J_r = \{j\in J \mid q_j = r\}\) the subset of jobs requiring resource \(r\in R\) as secondary resource. Note that \(J = \bigcup _{r \in R} J_r\). The minimal makespan over all feasible solutions, i.e., the optimal solution value, is denoted by \(\mathrm {MS}^*\).

3.1 Computational Complexity

Let the decision variant of JSOCMSR be the problem of deciding whether there exists a feasible solution with a makespan of at most a given value \(\mathrm {MS}^*\).

Theorem 1

The decision variant of JSOCMSR is NP-complete for \(m\ge 2\).

Proof

The decision variant of JSOCMSR is in class NP since a given solution can be checked in polynomial time. We show NP-completeness by a polynomial reduction from the well-known NP-complete Partition Problem (PP) [3], which is stated as follows: Given a finite set of positive integers \(A\subset \mathbb {N}\), decide whether it can be partitioned into two disjoint subsets \(A_1\) and \(A_2\) such that \(\sum _{a\in A_1} a = \sum _{a\in A_2} a\).

We transform an instance of the PP into an instance of the JSOCMSR as follows. Let \(m=2\) and J consist of the following jobs:

  • For each \(a\in A\) there is a corresponding job \(j\in \{1,\ldots ,|A|\}\subset J\) with processing time \(p_j=a\) requiring resource \(q_j=1\) and the common resource 0 the whole time, i.e., \(p^0_j=p_j\) and \(p^{\mathrm {pre}}_j=0\).

  • Furthermore, there are two jobs \(j\in \{|A|+1,|A|+2\}\subset J\) with processing times \(p_j=\frac{1}{2}\sum _{a\in A} a + 1\) requiring resource \(q_j=2\) the whole time but the common resource 0 just at the first time slot, i.e., \(p^0_j=1\) and \(p^{\mathrm {pre}}_j=0\).

Let \(\mathrm {MS}^* = p_{|A|+1}+p_{|A|+2} = \sum _{a\in A} a + 2\). A feasible solution to JSOCMSR with makespan \(\mathrm {MS}^*\) must have the jobs \(|A|+1\) and \(|A|+2\) scheduled sequentially without any gap and all other jobs in parallel to those two. A corresponding solution to the PP can immediately be derived by considering the integers associated with the jobs scheduled in parallel to job \(|A|+1\) as \(A_1\) and those scheduled in parallel to job \(|A|+2\) as \(A_2\). The obtained solution to the PP must be feasible since \(\sum _{a\in A_1} a = \sum _{a\in A_2} a = \frac{1}{2}\sum _{a\in A} a\) holds as the jobs corresponding to the integers do not overlap and there is exactly \(\frac{1}{2}\sum _{a\in A} a\) time left at the common resource 0 when processing jobs \(|A|+1\) and \(|A|+2\), respectively. It also follows that if there is no JSOCMSR solution with makespan \(\mathrm {MS}^*\), then there cannot exist a feasible solution to the PP.

Clearly, the described transformation of a PP instance into a JSOCMSR instance as well as the derivation of the PP solution from the obtained schedule can both be done in time O(|A|), i.e., polynomial time.

Consequently, the decision variant of the JSOCMSR is NP-complete.    \(\square \)

Corollary 1

The makespan minimization variant of JSOCMSR is NP-hard.

3.2 Lower and Upper Bounds

For an instance of JSOCMSR a lower bound for the makespan can be calculated on the basis of each resource \(r\in R\) by taking the total time \(\sum _{j\in J_r} p_j\) for which it is required. Similarly, a further lower bound can be obtained from the total time resource 0 is required, i.e., \(\sum _{j\in J} p^0_j\). The latter can further be improved by adding the minimal time for preprocessing and postprocessing of the first and last scheduled jobs, respectively. Taking the maximum of these \(m+1\) individual lower bounds yields

$$\begin{aligned} \mathrm {MS}^\mathrm {LB}= \max \left( \min _{j,j'\in J \, \mid \, j \not = j' \vee |J|=1} (p^{\mathrm {pre}}_j + p^{\mathrm {post}}_{j'}) + \sum _{j\in J} p^0_j,\ \max _{r\in R} \sum _{j\in J_r} p_j \right) . \end{aligned}$$
(2)

Figure 1 illustrates these relationships.

Fig. 1. Resource-specific individual lower bounds and the overall lower bound \(\mathrm {MS}^\mathrm {LB}\) for an example instance with \(n=5\) jobs and \(m=3\) secondary resources.

A trivial upper bound is obtained when scheduling all jobs strictly sequentially, yielding \(\mathrm {MS}^\mathrm {UB} = \sum _{j\in J} p_j\). It follows that taking any normalized solution has an approximation factor of no more than m, since \(\mathrm {MS}^\mathrm {UB}\le m\cdot \mathrm {MS}^\mathrm {LB}\).
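Both bounds are simple to compute; the following sketch evaluates (2) and the trivial sequential upper bound (again using our illustrative job tuples \((p_j, p^{\mathrm {pre}}_j, p^0_j, q_j)\)).

```python
def makespan_bounds(jobs):
    """Lower bound (2) and the trivial sequential upper bound.

    jobs: dict j -> (p, p_pre, p0, q); returns (MS_LB, MS_UB).
    """
    # postprocessing times p_post_j = p_j - p_pre_j - p0_j
    post = {j: p - ppre - p0 for j, (p, ppre, p0, q) in jobs.items()}
    # per-secondary-resource workload sum_{j in J_r} p_j
    load = {}
    for p, ppre, p0, q in jobs.values():
        load[q] = load.get(q, 0) + p
    ids = list(jobs)
    # cheapest pre/post "frame" around the total resource-0 usage
    frame = min(jobs[j][1] + post[k] for j in ids for k in ids
                if j != k or len(ids) == 1)
    lb = max(frame + sum(p0 for _, _, p0, _ in jobs.values()),
             max(load.values()))
    ub = sum(p for p, _, _, _ in jobs.values())
    return lb, ub
```

On small instances with little secondary-resource contention the two values often coincide or are close, reflecting the observation that \(\mathrm {MS}^\mathrm {LB}\) is frequently tight.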

4 Least Lower Bound Heuristic

We construct a heuristic solution by iteratively selecting a not yet scheduled job and always appending it at the end of the current partial schedule at the earliest possible time. The crucial aspect is the greedy selection of the job to be scheduled next, which is based on the lower bound calculation from Sect. 3.2. Therefore we call this heuristic Least Lower Bound Heuristic (LLBH).

Let \(\pi ^\mathrm {p}\) be the current partial job permutation representing the current normalized schedule and \(J'\subseteq J\) be the set of remaining unscheduled jobs. Given \(\pi ^\mathrm {p}\), the earliest availability time for each resource—that is, the time from which on the resource might be used by a next yet unscheduled job—can be calculated from the respective finishing time of the last job using this resource:

$$\begin{aligned} t_0 =&{\left\{ \begin{array}{ll}\mathop {\max }\nolimits _{j\in J\setminus J'} s_j + p^{\mathrm {pre}}_j + p^0_j &{} \text{ for } J'\not = J \\ 0 &{} \text{ else } \end{array}\right. } \end{aligned}$$
(3)
$$\begin{aligned} t_r =&{\left\{ \begin{array}{ll}\mathop {\max }\nolimits _{j\in J_r\setminus J'} s_j + p_j &{} \text{ for } J_r\setminus J'\not =\emptyset \\ 0 &{} \text{ else } \end{array}\right. } \qquad \forall r\in R \end{aligned}$$
(4)

These times, however, can possibly be further increased (trimmed) as the earliest usage time of resource \(r\in R\) also depends on the remaining unscheduled jobs and the earliest usage time of the common resource 0. We therefore apply the rule

$$\begin{aligned} t_r \leftarrow \max (t_r,\, t_0-\max _{j\in J_r\cap J'} p^{\mathrm {pre}}_j) \qquad \forall r\in R \mid J_r\cap J'\not =\emptyset . \end{aligned}$$
(5)

Moreover, also \(t_0\) might be increased as its earliest usage time also depends on the remaining unscheduled jobs and the earliest usage times of their secondary resources. These relations are considered by applying the rule

$$\begin{aligned} t_0 \leftarrow \max \left( t_0,\, \min \limits _{j\in J'}(t_{q_j} + p^{\mathrm {pre}}_j)\right) = \max \left( t_0, \min \limits _{r \in R \mid J_r \cap J'\not =\emptyset } (t_r + \min \limits _{j\in J_r\cap J'} p^{\mathrm {pre}}_j ) \right) . \end{aligned}$$
(6)

Further note that after a successful increase of \(t_0\) by rule (6), some resource \(r\in R\) might become available for a further increase of its \(t_r\) by the respective rule (5). We therefore apply all these trimming rules repeatedly until no further increase can be achieved.
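This repeated trimming can be sketched as a small fixpoint loop; the representation of the remaining jobs \(J'\) as a dict of tuples \((p_j, p^{\mathrm {pre}}_j, p^0_j, q_j)\) is our own illustration.

```python
def trim(t0, t, unsched):
    """Apply trimming rules (5) and (6) repeatedly until a fixpoint.

    t0: earliest usage time of the common resource 0; t: dict r -> earliest
    usage time of secondary resource r; unsched: dict j -> (p, ppre, p0, q)
    of the remaining jobs J'.  Returns the trimmed (t0, t).
    """
    by_res = {}
    for p, ppre, p0, q in unsched.values():
        by_res.setdefault(q, []).append(ppre)
    changed = True
    while changed:
        changed = False
        for r, pres in by_res.items():          # rule (5)
            nt = max(t.get(r, 0), t0 - max(pres))
            if nt > t.get(r, 0):
                t[r] = nt
                changed = True
        if by_res:                              # rule (6)
            nt0 = max(t0, min(t.get(r, 0) + min(pres)
                              for r, pres in by_res.items()))
            if nt0 > t0:
                t0 = nt0
                changed = True
    return t0, t
```

For example, a late common-resource time \(t_0\) pulls the secondary-resource times forward via rule (5), while early secondary-resource times can push \(t_0\) up via rule (6).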

Following our general lower bound calculation for the makespan in (2), it is now possible to derive a more specific lower bound for a given partial permutation \(\pi ^\mathrm {p}\) considering any possible extension to a complete solution on the basis of each resource \(r\in R \mid J_r\cap J'\not =\emptyset \) by

$$\begin{aligned} \mathrm {MS}^\mathrm {LB}_r(\pi ^\mathrm {p}) = {\left\{ \begin{array}{ll} t_r + \sum \nolimits _{j\in J_r\cap J'} p_j \qquad &{} \text{ for } J_r\cap J'\not =\emptyset \\ 0 &{} \text{ else } \end{array}\right. } \qquad \forall r \in R. \end{aligned}$$
(7)

Note that we define \(\mathrm {MS}^\mathrm {LB}_r(\pi ^\mathrm {p})=0\) for any resource r that is not required by any remaining job in \(J'\) since these bounds should not be relevant for our further considerations.

A lower bound w.r.t. the common resource 0 can be calculated similarly by

$$\begin{aligned} \mathrm {MS}^\mathrm {LB}_0(\pi ^\mathrm {p}) = \max \left( t_0 + \min \limits _{j\in J'} p^{\mathrm {post}}_j, \min \limits _{j,j'\in J'\,\mid \, j\not =j'\vee |J'|=1} (t_{q_j}+p^{\mathrm {pre}}_j+p^{\mathrm {post}}_{j'})\right) +\sum \limits _{j\in J'} p^0_j. \end{aligned}$$
(8)

Clearly, an overall lower bound for the partial solution \(\pi ^\mathrm {p}\) is obtained from the maximum of the individual bounds

$$\begin{aligned} \mathrm {MS}^\mathrm {LB}_{\max }(\pi ^\mathrm {p}) = \max _{r\in R_0} \mathrm {MS}^\mathrm {LB}_r(\pi ^\mathrm {p}). \end{aligned}$$
(9)

For selecting the next job in LLBH to be appended to \(\pi ^\mathrm {p}\), we always consider the impact of each job \(j\in J'\) on each individual bound \(\mathrm {MS}^\mathrm {LB}_r\), \(r\in R_0\), as this gives a more fine-grained discrimination than just considering the impact on the overall bound \(\mathrm {MS}^\mathrm {LB}_{\max }(\pi ^\mathrm {p})\), which would often lead to ties.

More specifically, let \(\mathbf {f}(\pi ^\mathrm {p})=(f_0(\pi ^\mathrm {p}),\ldots ,f_m(\pi ^\mathrm {p}))\) be the vector of the bounds \(\mathrm {MS}^\mathrm {LB}_r(\pi ^\mathrm {p})\) for \(r\in R_0\) sorted in non-increasing value order, i.e., \(f_0(\pi ^\mathrm {p}) = \mathrm {MS}^\mathrm {LB}_{\max }(\pi ^\mathrm {p}) \ge f_1(\pi ^\mathrm {p}) \ge \ldots \ge f_m(\pi ^\mathrm {p})\) holds.

Let \(\pi ^\mathrm {p}\oplus j\) denote the partial solution obtained by appending job \(j\in J'\) to \(\pi ^\mathrm {p}\). We consider \(\pi ^\mathrm {p}\oplus j\) better than \(\pi ^\mathrm {p}\oplus j'\) for \(j,j'\in J'\) iff

$$\begin{aligned} \exists i\in \{0,\ldots ,m\} \mid f_i(\pi ^\mathrm {p}\oplus j)< f_{i}(\pi ^\mathrm {p}\oplus j') \ \wedge \forall i'<i : f_{i'}(\pi ^\mathrm {p}\oplus j) = f_{i'}(\pi ^\mathrm {p}\oplus j'). \end{aligned}$$
(10)

In other words, the sorted vectors \(\mathbf {f}(\pi ^\mathrm {p}\oplus j)\) and \(\mathbf {f}(\pi ^\mathrm {p}\oplus j')\) are compared in a lexicographic order.

LLBH always selects in each iteration a job \(j\in J'\) yielding a (locally) best extension. In the case when multiple extensions have equal \(\mathbf {f}\)-vectors, one of them is chosen at random.
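The bound vector \(\mathbf {f}\) and the lexicographic comparison (10) map naturally onto Python’s native tuple ordering, as the following sketch shows for a nonempty set of remaining jobs \(J'\) (job tuples \((p_j, p^{\mathrm {pre}}_j, p^0_j, q_j)\) and the explicit resource list are our own illustration).

```python
def f_vector(t0, t, rem, resources):
    """Sorted bound vector f of a partial solution: bounds (7) per
    secondary resource and bound (8) for the common resource 0.

    t0, t: trimmed earliest usage times; rem: dict j -> (p, ppre, p0, q)
    of the remaining jobs J' (must be nonempty); resources: list of R.
    """
    post = {j: p - ppre - p0 for j, (p, ppre, p0, q) in rem.items()}
    by_res = {}
    for j, (p, ppre, p0, q) in rem.items():
        by_res.setdefault(q, []).append(j)
    # bound (7); defined as 0 for resources without remaining jobs
    b = [t.get(r, 0) + sum(rem[j][0] for j in by_res[r])
         if r in by_res else 0 for r in resources]
    ids = list(rem)
    frame = min(t.get(rem[j][3], 0) + rem[j][1] + post[k]
                for j in ids for k in ids if j != k or len(ids) == 1)
    # bound (8) for the common resource 0
    b0 = max(t0 + min(post.values()), frame) \
        + sum(p0 for _, _, p0, _ in rem.values())
    return tuple(sorted(b + [b0], reverse=True))
```

With this representation, LLBH’s selection rule (10) reduces to taking a job whose extension yields the minimal tuple under Python’s built-in lexicographic comparison.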

5 Mixed Integer Linear Programming Formulation

The position-based mixed integer linear program (MILP) described in the following models solutions to the JSOCMSR in terms of permutations of all jobs. Index \(i \in \{1,\ldots ,n\}\) refers hereby to position i in a permutation. Variables \(x_{j,i} \in \{0,1\}\), for all \(j \in J\) and \(i \in \{1,\ldots , n\}\), are set to one iff job j is assigned to position i in the permutation. Variables \(s_i\ge 0\) represent the starting time of the jobs scheduled at each position \(i=1,\ldots ,n\) in the permutation. Finally, \(\mathrm {MS}\ge 0\) is the makespan variable to be minimized.

$$\begin{aligned} \min&\mathrm {MS}\end{aligned}$$
(11)
$$\begin{aligned}&\sum _{j \in J} x_{j,i} = 1&i=1,\ldots ,n \end{aligned}$$
(12)
$$\begin{aligned}&\sum _{i=1}^n x_{j,i} = 1&j \in J \end{aligned}$$
(13)
$$\begin{aligned}&s_i + \sum _{j \in J} x_{j,i} \cdot p_j \le \mathrm {MS}&i=1,\ldots ,n \end{aligned}$$
(14)
$$\begin{aligned}&s_1 = 0&\end{aligned}$$
(15)
$$\begin{aligned}&s_i + \sum _{j \in J} x_{j,i} \cdot p_j^{\mathrm {pre}} \ge s_{i-1} + \sum _{j \in J} x_{j,i-1} \cdot (p_j^{\mathrm {pre}} + p_j^0)&i=2,\ldots ,n \end{aligned}$$
(16)
$$\begin{aligned}&s_{i'} - s_i + \sum _{j \in J_r} x_{j,i'} (M+p_j) + \sum _{j \in J_r} x_{j,i}M \le 2M&\nonumber \\&i=2,\ldots ,n,\ i'=1,\ldots ,i-1,\ r \in R \end{aligned}$$
(17)
$$\begin{aligned}&x_{j,i} \in \{0, 1\}& j \in J,\ i=1,\ldots ,n \end{aligned}$$
(18)
$$\begin{aligned}&s_i \ge 0&i=1,\ldots ,n \end{aligned}$$
(19)
$$\begin{aligned}&\mathrm {MS}\ge 0 \end{aligned}$$
(20)

Hereby, equations (12) ensure that exactly one job is assigned to the i-th position of the permutation, and (13) ensure that each job is assigned to exactly one position. The makespan is determined by inequalities (14). Equation (15) sets the starting time of the first job in the permutation to zero, and the remaining two sets of inequalities make sure that no resource is used by more than one job at a time. Hereby, inequalities (16) take care of the common resource 0, while (17) consider the secondary resources. The Big-M constant in these latter inequalities is set to the makespan obtained by LLBH.
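Since solver APIs differ, we illustrate the model here only by a plain feasibility check of constraints (12)–(17) on a candidate assignment; the dict-based encoding of variables and jobs is our own illustration and not tied to any particular MILP solver.

```python
def check_milp(jobs, x, s, ms, big_m):
    """Check a candidate assignment against constraints (12)-(17).

    jobs: dict j -> (p, ppre, p0, q); x: dict (j, i) -> 0/1 over positions
    i = 1..n; s: dict i -> start time of the job at position i.
    """
    n = len(jobs)
    J, pos = list(jobs), range(1, n + 1)
    ok = all(sum(x[j, i] for j in J) == 1 for i in pos)            # (12)
    ok &= all(sum(x[j, i] for i in pos) == 1 for j in J)           # (13)
    ok &= all(s[i] + sum(x[j, i] * jobs[j][0] for j in J) <= ms
              for i in pos)                                        # (14)
    ok &= s[1] == 0                                                # (15)
    ok &= all(s[i] + sum(x[j, i] * jobs[j][1] for j in J)
              >= s[i - 1] + sum(x[j, i - 1] * (jobs[j][1] + jobs[j][2])
                                for j in J)
              for i in pos if i >= 2)                              # (16)
    resources = {q for _, _, _, q in jobs.values()}
    jr = {r: [j for j in J if jobs[j][3] == r] for r in resources}
    ok &= all(s[i2] - s[i]
              + sum(x[j, i2] * (big_m + jobs[j][0]) for j in jr[r])
              + sum(x[j, i] * big_m for j in jr[r]) <= 2 * big_m
              for i in pos if i >= 2
              for i2 in range(1, i) for r in resources)            # (17)
    return bool(ok)
```

Such a checker is useful for validating a hand-built model against small instances before handing it to a solver.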

6 A* Algorithm

Based on the solution construction principle of LLBH it is also possible to perform a more systematic search for a proven optimal solution following the concept of A* search [5]. Our A* algorithm searches in a graph whose nodes correspond to partial solutions and whose arcs represent the extensions of partial solutions by appending not yet scheduled nodes. More precisely, each node in this graph maintains the following information:

  1. 1.

    The unordered set \(\hat{J}\subset J\) of already scheduled jobs, implemented by a bit-vector.

  2. 2.

    A set of Non-Dominated Times (NDT) records, where each NDT record corresponds to an individual, more specific partial solution with an indirectly given ordering for the scheduled jobs by storing:

    • the vector \(\mathbf {t}=(t_r)_{r\in R_0}\) of the trimmed earliest usage times \(t_r\) for all resources as defined by (3)–(6);

    • the last scheduled job \(j^\mathrm {last}\in \hat{J}\) after which \(\mathbf {t}\) was obtained;

    • an evaluation vector \(\mathbf {f}'\) similar to \(\mathbf {f}\) that will be defined below.

Thus, each node aggregates all partial solutions \(\pi ^\mathrm {p}\) having the same jobs \(\hat{J}\) scheduled, and each NDT record provides more specific information for each (non-dominated) partial solution. For a given node/NDT record, the corresponding ordering of the scheduled jobs \(\hat{J}\) can be derived in a reverse iterative manner by considering the fitting preceding node/NDT records, always continuing with the node \(\hat{J}\setminus \{j^\mathrm {last}\}\) and an NDT record with times \(t_r\) allowing to schedule job \(j^\mathrm {last}\) without exceeding the \(t_r\) values of the last node/NDT record.

Initially a starting node/NDT record corresponding to the empty schedule is generated with \(\hat{J}=\emptyset \), \(\mathbf {t}=\mathbf {0}\), \(j^\mathrm {last}=\mathrm {none}\), and \(\mathbf {f}'=(\mathrm {MS}^\mathrm {LB},\ldots ,\mathrm {MS}^\mathrm {LB})\). The goal node is a node with \(\hat{J}=J\), corresponding to all complete solutions.

The set of all so far considered nodes is implemented by a hash-table with \(\hat{J}\) as key. Furthermore, the A* algorithm maintains a priority queue containing references to all open node/NDT record pairs, i.e., the non-dominated partial solutions that have not yet been expanded. The order criterion in this priority queue extends the is-better relation (10) from the LLBH heuristic by considering the number of remaining unscheduled jobs \(|J\setminus \hat{J}|(\pi ^\mathrm {p})\) as secondary criterion after \(\mathrm {MS}^\mathrm {LB}_{\max }(\pi ^\mathrm {p})\), i.e., vectors

$$\begin{aligned} \mathbf {f}' := (\mathrm {MS}^\mathrm {LB}_{\max }(\pi ^\mathrm {p})=f_{0}(\pi ^\mathrm {p}),|J\setminus \hat{J}|(\pi ^\mathrm {p}), f_{1}(\pi ^\mathrm {p}),\ldots ,f_{m}(\pi ^\mathrm {p})) \end{aligned}$$
(21)

are lexicographically compared. This enhanced relation implies that partial solutions with more scheduled jobs are preferred over partial solutions with the same \(\mathrm {MS}^\mathrm {LB}_{\max }\) but fewer scheduled jobs, and thus the search adopts depth-first search characteristics when \(\mathrm {MS}^\mathrm {LB}_{\max }\) does not change. In this way, complete solutions are obtained earlier.

Algorithm 1 sketches our A* algorithm. In each major iteration, a best node/NDT record pair is taken from the priority queue and expanded by considering the addition of each job \(j\in J\setminus \hat{J}\). Hereby, the corresponding node is looked up or created when it does not yet exist and a respective NDT record is determined by calculating the earliest usage times \(\mathbf {t}\) and the evaluation vector \(\mathbf {f}\). The possibly multiple NDT records in the node are checked for dominance: Only non-identical and non-dominated entries are kept. An NDT record with time vector \(\mathbf {t}\) dominates (symbol \(\lhd \)) another NDT record with time vector \(\mathbf {t}'\) iff \(\forall r\in R_0 \; (t_r\le t'_r)\,\wedge \,\exists r\in R_0 \; ( t_r<t'_r)\). The A* algorithm stops with a proven optimal solution when the goal node representing a complete solution is selected for expansion.
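The dominance check at the heart of the NDT filtering can be sketched as follows; representing each NDT time vector \(\mathbf {t}\) as a plain tuple indexed by \(R_0\) is our own illustration.

```python
def dominates(t, u):
    """NDT dominance: t dominates u iff t is component-wise <= u and
    strictly smaller in at least one component (same resource order)."""
    return (all(a <= b for a, b in zip(t, u))
            and any(a < b for a, b in zip(t, u)))

def filter_ndt(records):
    """Keep only pairwise non-identical, non-dominated time vectors
    among the NDT records of a node (insertion order preserved)."""
    kept = []
    for t in records:
        if any(k == t or dominates(k, t) for k in kept):
            continue  # t is identical to or dominated by a kept record
        kept = [k for k in kept if not dominates(t, k)] + [t]
    return kept
```

Aggregating partial solutions per job set and keeping only such non-dominated records is what allows the search graph to exploit symmetries among permutations of the same job set.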


Diving: The A* algorithm described above aims at finding a proven optimal solution as quickly as possible. It usually does not yield intermediate complete solutions significantly earlier than when terminating with the proven optimum.

To also obtain intermediate heuristic solutions we extended our A* algorithm by diving for a complete solution at regular intervals: At the very beginning and after every \(\delta \) regular iterations, the algorithm temporarily switches from its classical best-first strategy to a greedy completion strategy which in essence follows LLBH. The currently selected node is expanded by considering all feasible extensions, and each extension is evaluated by calculating the respective evaluation vector \(\mathbf {f}'\). From all these extensions, only those that are new and non-dominated—i.e., for which no corresponding node/NDT entry exists yet—are kept. Should no extension remain in this way, diving terminates unsuccessfully. Otherwise, a best extension is selected from this set according to the lexicographic comparison of the \(\mathbf {f}'\) vectors, and the diving continues by expanding this node/NDT record pair next. This methodology guarantees that only not yet expanded nodes are expanded further and that diving, if successful, always yields a different solution.

7 Computational Results

To test our algorithms we created two non-trivial sets of random instances. Set B exhibits a balanced (B) workload over all resources R, whereas set S has a skewed (S) workload. Each set consists of 50 instances for each combination of \(n \in \{10, 20, 50, 100, 200, 500, 1000\}\) jobs and \(m \in \{2,3,5\}\) secondary resources. The required resource \(q_j\) for each job \(j \in J\) was randomly sampled from the discrete uniform distribution \(\mathcal {U}\{1,m\}\) for the balanced set B but in a skewed way for set S: There, resource m is chosen with twice the probability of each of the resources 1 to \(m-1\). The preprocessing times \(p^{\mathrm {pre}}_j\) and postprocessing times \(p^{\mathrm {post}}_j\) were sampled from \(\mathcal {U}\{0,1000\}\) for both instance sets, while times \(p^0_j\) were sampled from \(\mathcal {U}\{1,1000\}\) in case of set B and \(\mathcal {U}\{1,2500\}\) in case of set S.
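A generator in the spirit of sets B and S can be sketched as follows; the seeding scheme and the job-tuple encoding \((p_j, p^{\mathrm {pre}}_j, p^0_j, q_j)\) are our own illustration, not the exact generator used for the benchmark sets.

```python
import random

def make_instance(n, m, skewed=False, seed=0):
    """Random JSOCMSR instance in the style of sets B (balanced) and
    S (skewed); returns dict j -> (p_j, p_pre_j, p0_j, q_j)."""
    rng = random.Random(seed)
    # in set S, resource m gets twice the weight of each other resource
    weights = ([1] * (m - 1) + [2]) if skewed else [1] * m
    jobs = {}
    for j in range(1, n + 1):
        q = rng.choices(range(1, m + 1), weights=weights)[0]
        ppre = rng.randint(0, 1000)
        ppost = rng.randint(0, 1000)
        p0 = rng.randint(1, 2500 if skewed else 1000)
        # total processing time is p_pre + p0 + p_post by construction
        jobs[j] = (ppre + p0 + ppost, ppre, p0, q)
    return jobs
```

By construction every generated job satisfies \(0<p^0_j\le p_j-p^{\mathrm {pre}}_j\), as the problem definition requires.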

Table 1. Average results of LLBH, A*, and CPLEX for instances of sets B, S, and P.

A third set of instances was derived from the work on patient scheduling for particle therapy in [8]. This set, called P, comprises 699 instances that are expected in practical day-scenarios of this application. We partitioned the whole set into groups with up to 10, 11 to 20, 21 to 50, and 51 to 100 jobs with 51, 39, 207 and 402 instances, respectively. All these instances use \(m=3\) secondary resources. All three instance sets are available from https://www.ac.tuwien.ac.at/research/problem-instances#JSOCMSR.

The algorithms were implemented in C++ and compiled with G++ 5.4.1. All tests were performed on a single core of an Intel Xeon E5649 with 2.53 GHz, with a CPU-time limit of 900 s and a memory limit of 15 GB RAM. The MILP from Sect. 5 was solved with CPLEX 12.7. In A*, diving was performed at every \(\delta =1000\)-th iteration.

Table 1 lists aggregated results for each combination of instance type and the different numbers of jobs and secondary resources. Columns opt state the percentage of instances that could be solved to proven optimality. Columns %-gap list the average optimality gaps of the final solutions \(\pi \), calculated as \(100\cdot (\mathrm {MS}(\pi )-\mathrm {LB})/\mathrm {LB}\), where \(\mathrm {LB}\) is the lower bound returned by A* in case of LLBH and A*, and the lower bound returned by CPLEX in case of CPLEX. Further columns provide the corresponding standard deviations. Columns t show the median running times in seconds. For the MILP, optimality gaps are listed only if solutions for all 50 instances could be obtained.

These results give a rather clear picture: While A* performs very well on essentially all instance sets and sizes—its largest average optimality gaps are <5%—CPLEX applied to our MILP model cannot compete at all. CPLEX is not even able to solve all instances with 10 jobs to optimality, and generally does not yield any solution for instances with 200 and more jobs. With only few exceptions, instances of set B are generally rather easy to solve for A* to either optimality or with a small remaining gap of less than 0.2%. Median running times are here fractions of a second for \(n\le 500\) and under three seconds for \(n=1000\). Here we could observe that the general lower bound \(\mathrm {MS}^\mathrm {LB}\) is usually very tight and especially for \(m=2\) often already corresponds to the optimal solution value. Skewed instances of type S but also most instances of type P are more difficult to solve. Especially for set S and \(m\in \{2,3\}\), A* was only able to solve instances up to size 20 consistently to optimality. The reason when A* did not terminate with proven optimality was always that the memory limit had been reached. However, thanks to A*’s diving, heuristic solutions with small remaining optimality gaps could still be found. The LLBH is—as expected—always very fast, nevertheless providing excellent solutions, although without specific performance guarantees.

8 Conclusions

In this work we introduced the problem of scheduling a set of jobs, where each job requires two resources: a common resource shared by all jobs for part of its processing time, and a secondary resource for its whole processing time. Although we showed this problem to be NP-hard, we derived a strong lower bound for the makespan, which we exploited in the fast constructive heuristic LLBH and in the complete A* search. The A* algorithm in particular features a special graph structure in which each node corresponds to an unordered set of already scheduled jobs in combination with a set of NDT records representing individual non-dominated partial solutions. Hereby it is possible to exploit symmetries and reduce memory consumption. A diving mechanism is further used to obtain heuristic solutions at regular intervals. It turns out that A* works extremely well in most cases. However, some instances, especially those with skewed resource workloads and competing resources, are occasionally hard to solve. The focus of further research will be to better understand difficult instances, to consider extended variants of this problem, and to develop advanced heuristic methods.