
1 Introduction

The job shop scheduling problem (JSP) is considered to be one of the most relevant scheduling problems. It consists of allocating a set of resources to execute a set of jobs under a set of given constraints; the most popular objective in the literature is minimizing the project's execution time span, also known as the makespan. Solving this problem improves the efficiency of chain production processes, optimising the use of energy and materials [12] and having a positive impact on costs and environmental sustainability. However, in real-world applications, the available information is often imprecise. Interval uncertainty arises as soon as information is incomplete and, contrary to stochastic and fuzzy scheduling, it does not assume any further knowledge, thus representing a first step towards solving problems in other frameworks [1]. Moreover, intervals are a natural model whenever decision-makers prefer to provide only a minimal and a maximal duration and to obtain interval results that can be easily understood. Under such circumstances, interval scheduling makes it possible to concentrate on significant scheduling decisions and to produce robust solutions.

Contributions to interval scheduling in the literature are not abundant. In [10], a genetic algorithm is proposed for a JSP minimizing the total tardiness with respect to job due dates, with both processing times and due dates represented by intervals. In [5], a different genetic algorithm is applied to the same problem, including a study of different interval ranking methods based on the robustness of the resulting schedules. A population-based neighbourhood search for an interval JSP with makespan minimisation is presented in [9]. In [11], a hybrid between PSO and a genetic algorithm is used to solve a flexible JSP with interval processing times as part of a larger integrated planning and scheduling problem. More recently, in [6] a genetic algorithm is applied to the JSP with interval uncertainty minimizing the makespan, achieving results that constitute the current state of the art.

Metaheuristic search methods are especially well suited to the job shop problem due to its complexity. In particular, artificial bee colony (ABC) is a swarm intelligence optimiser inspired by the intelligent foraging behaviour of honeybees that has shown very competitive performance on the JSP with makespan minimisation. For instance, [13] propose an evolutionary computation algorithm based on ABC that includes a state transition rule to construct the schedules. Taking some principles from genetic algorithms, [14] present an Improved ABC (IABC) where a mutation operation is used for exploring the search space, enhancing the search performance of the algorithm. Later, [2] propose an effective ABC approach based on updating the population using the information of the best-so-far food source.

In the following, we consider the JSP with makespan minimisation and intervals modelling uncertain durations. The problem is presented in Sect. 2. In Sect. 3 we propose several variants of an ABC algorithm to address this problem. These variants are compared in Sect. 4, where the most successful one is also compared with the state of the art and a robustness analysis is included.

2 The Job Shop Problem with Interval Durations

The classical job shop scheduling problem consists of a set of resources \(M=\{M_1, \dots , M_m\}\) and a set of jobs \(J=\{J_1,\dots ,J_n\}\). Each job \(J_j\) is organised in tasks \((o(j,1),\dots ,o(j,m_j))\) that need to be sequentially scheduled. We assume w.l.o.g. that tasks are indexed from 1 to \(N=\sum _{j=1}^n m_j\), so we can refer to task \(o(j,l)\) by its index \(o=\sum _{i=1}^{j-1} m_i+l\) and denote the set of all tasks as \(O=\{1,\dots , N\}\). Each task \(o \in O\) requires the uninterrupted and exclusive use of a machine \(\nu _o \in M\) for its whole processing time \(p_o\).

A solution to this problem is a schedule \(\boldsymbol{s}\), i.e. an allocation of starting times for each task, which, besides being feasible (all constraints hold), is optimal according to some criterion, in our case, minimal makespan \(C_{max}\).

2.1 Interval Uncertainty

Following [9] and [5], uncertainty in the processing time of tasks is modelled using closed intervals. Therefore, the processing time of task \(o \in O\) is represented by an interval \(\mathbf {p_o}=[\underline{p}_o,\overline{p}_o]\), where \(\underline{p}_o\) and \(\overline{p}_o\) are the available lower and upper bounds for the exact but unknown processing time \(p_o\).

The interval JSP (IJSP) with makespan minimisation requires two arithmetic operations: addition and maximum. Given two intervals \(\mathbf {a}=[\underline{a},\overline{a}]\) and \(\mathbf {b}=[\underline{b},\overline{b}]\), the addition is expressed as \([\underline{a}+\underline{b}, \overline{a}+\overline{b}]\) and the maximum as \([\max (\underline{a}, \underline{b}), \max (\overline{a}, \overline{b})]\). Also, given the lack of a natural order in the set of closed intervals, to determine the schedule with the “minimal” makespan we need an interval ranking method. For the sake of fair comparisons with the literature, we shall use the midpoint method: \(\mathbf {a} \le _{MP} \mathbf {b} \Leftrightarrow m(\mathbf {a}) \le m(\mathbf {b}) \) with \( m(\mathbf {a}) =(\underline{a}+\overline{a})/2\). This method is used in [5] and is equivalent to the one used in [10]. Notice that \(m(\mathbf {a})\) coincides with the expected value \(E[\mathbf {a}]\) of the uniform distribution on the interval.
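To make the interval arithmetic concrete, the following C++ sketch implements the two operations and the midpoint ranking (the types and names are ours, for illustration only; they are not taken from the paper's implementation):

```cpp
#include <algorithm>

// Closed interval [lo, hi] modelling an uncertain processing time.
struct Interval {
    double lo, hi;
};

// Interval addition: [a.lo + b.lo, a.hi + b.hi].
Interval operator+(Interval a, Interval b) {
    return {a.lo + b.lo, a.hi + b.hi};
}

// Interval maximum, taken component-wise on both bounds.
Interval imax(Interval a, Interval b) {
    return {std::max(a.lo, b.lo), std::max(a.hi, b.hi)};
}

// Midpoint m(a); coincides with the expected value E[a] of the
// uniform distribution on [lo, hi].
double midpoint(Interval a) { return (a.lo + a.hi) / 2.0; }

// Midpoint ranking: a <=_MP b iff m(a) <= m(b).
bool lessEqMP(Interval a, Interval b) {
    return midpoint(a) <= midpoint(b);
}
```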

A schedule \(\boldsymbol{s}\) for the IJSP establishes a relative order \(\pi \) among tasks requiring the same machine. Conversely, given a task processing order \(\pi \), the schedule \(\boldsymbol{s}\) may be computed as follows. For every task \(o \in O\), let \(\mathbf {s_o(\pi )}\) and \(\mathbf {c_o(\pi )}\) denote respectively the starting and completion times of o, let \(PM_o(\pi )\) and \(SM_o(\pi )\) denote the predecessor and successor tasks of o on the machine \(\nu _o\) according to \(\pi \), and let \(PJ_o\) and \(SJ_o\) denote the tasks preceding and succeeding o in its job. Then the starting time of o is given by \(\mathbf {s_o(\pi )} = \max (\mathbf {s}_{PJ_o} + \mathbf {p}_{PJ_o},\mathbf {s}_{PM_o(\pi )} + \mathbf {p}_{PM_o(\pi )})\) and the completion time by \(\mathbf {c_o(\pi )} = \mathbf {s_o(\pi )} + \mathbf {p_o}\). The makespan is computed as the completion time of the last task to be processed according to \(\pi \), that is, \(\mathbf {C_{max}(\pi )}=\max _{o\in O}\{\mathbf {c}_o(\pi )\}\). If there is no possible confusion regarding the processing order, we may simplify notation by writing \(\mathbf {s_o}\), \(\mathbf {c_o}\) and \(\mathbf {C_{max}}\).
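These recurrences can be evaluated in a single pass over the processing order. Below is a minimal sketch, reusing the Interval helpers above (identifiers such as jobDone and machDone are our own); it computes the schedule induced by \(\pi \) directly, rather than the insertion-based decoding discussed later:

```cpp
#include <vector>

// Interval makespan of the schedule induced by the processing order pi.
// job[o] and mach[o] give the job and machine of task o; p[o] is its
// interval duration. Tasks of the same job must appear in pi in job order.
Interval makespan(const std::vector<int>& pi,
                  const std::vector<int>& job,
                  const std::vector<int>& mach,
                  const std::vector<Interval>& p,
                  int nJobs, int nMachines) {
    std::vector<Interval> jobDone(nJobs, {0.0, 0.0});      // c of job predecessor
    std::vector<Interval> machDone(nMachines, {0.0, 0.0}); // c of machine predecessor
    Interval cmax{0.0, 0.0};
    for (int o : pi) {
        Interval s = imax(jobDone[job[o]], machDone[mach[o]]); // s_o
        Interval c = s + p[o];                                 // c_o = s_o + p_o
        jobDone[job[o]] = c;
        machDone[mach[o]] = c;
        cmax = imax(cmax, c);                                  // C_max = max_o c_o
    }
    return cmax;
}
```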

2.2 Robustness on Interval JSP

When uncertainty is present, solution robustness may become a concern. In fact, makespan values obtained for the IJSP are not exact values, but intervals. It is only after the solution is executed in a real scenario that the actual processing times for tasks \(P^{ex}=\{p_o^{ex} \in [\underline{p}_o, \overline{p}_o], o \in O\}\) are known. Therefore, it is not until that moment that the actual makespan \(C_{max}^{ex} \in [\underline{C}_{max}, \overline{C}_{max}]\) can be found. It is desirable that this executed makespan \(C_{max}^{ex}\) does not differ much from the expected value of the makespan according to the interval \(\mathbf {C_{max}}\).

This is the idea behind the concept of \(\epsilon \)-robustness first proposed in [3] for stochastic scheduling, and later adapted to the IJSP in [5]. For a given \(\epsilon \ge 0\), a schedule with makespan \(\mathbf {C_{max}}\) is considered to be \(\epsilon \)-robust in a real scenario \(P^{ex}\) if the relative error made by the expected makespan \(E[\mathbf {C_{max}}]\) with respect to the makespan \(C_{max}^{ex}\) of the executed schedule is bounded by \(\epsilon \), that is:

$$\begin{aligned} \frac{|C_{max}^{ex}-E[\mathbf {C_{max}}]|}{E[\mathbf {C_{max}}]} \le \epsilon . \end{aligned}$$
(1)

Clearly, the smaller the bound \(\epsilon \), the more robust the interval schedule is.

This measure of robustness is dependent on a specific configuration \(P^{ex}\) of task processing times obtained upon execution of the predictive schedule \(\boldsymbol{s}\). In the absence of real data, as is the case with the usual synthetic benchmark instances for the job shop, we may resort to Monte-Carlo simulations. We simulate K possible configurations \(P^k = \{p_o^{k} \in [\underline{p}_o, \overline{p}_o], o \in O\}\), using uniform probability distributions to sample durations for every task, and compute for each configuration \(k=1,\dots ,K\) the exact makespan \(C_{max}^k\) that results from executing tasks according to the ordering provided by \(\boldsymbol{s}\). Then, the average \(\epsilon \)-robustness of the predictive schedule across the K possible configurations, denoted \(\overline{\epsilon }\), can be calculated as:

$$\begin{aligned} \overline{\epsilon }=\frac{1}{K}\sum _{k=1}^K \frac{|C_{max}^k-E[\mathbf {C_{max}}]|}{E[\mathbf {C_{max}}]}. \end{aligned}$$
(2)

This value provides an estimate of how robust the solution \(\boldsymbol{s}\) is across different processing time configurations.
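A minimal Monte-Carlo sketch of this estimate, reusing the helpers above (the sampling setup and names are ours): each sampled scenario is stored as a configuration of degenerate intervals, so the same makespan routine can be reused unchanged.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Estimates the average epsilon-robustness (Eq. 2) of the task order pi
// over K scenarios with durations sampled uniformly from their intervals.
double avgEpsilon(const std::vector<int>& pi,
                  const std::vector<int>& job,
                  const std::vector<int>& mach,
                  const std::vector<Interval>& p,
                  int nJobs, int nMachines, int K, std::mt19937& rng) {
    const double expected = midpoint(makespan(pi, job, mach, p, nJobs, nMachines));
    double sum = 0.0;
    for (int k = 0; k < K; ++k) {
        // Sample a deterministic configuration P^k as degenerate intervals [x, x].
        std::vector<Interval> pk(p.size());
        for (std::size_t o = 0; o < p.size(); ++o) {
            std::uniform_real_distribution<double> dur(p[o].lo, p[o].hi);
            double x = dur(rng);
            pk[o] = {x, x};
        }
        double cmaxK = midpoint(makespan(pi, job, mach, pk, nJobs, nMachines));
        sum += std::fabs(cmaxK - expected) / expected;  // |C^k - E| / E
    }
    return sum / K;  // average over the K configurations
}
```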

3 An Artificial Bee Colony Algorithm

The artificial bee colony algorithm is a bioinspired swarm metaheuristic for optimisation based on the foraging behaviour of honeybees. Since it was introduced in [7], it has been successfully adapted to a variety of problems [8]. In this paper, we adapt it to solve the interval job shop scheduling problem.

In ABC, a swarm of bees exploits a changing set of food sources with two leading models of behaviour: recruiting rich food sources and abandoning poor ones. In our case, each food source \( fs \) encodes an IJSP solution using permutations with repetition [4], and the decoding of a food source follows an insertion strategy, which consists of iterating along the food source and scheduling each task at its earliest feasible insertion position [5]. The richness or nectar amount of each food source is inversely related to the makespan of the schedule it represents, so lower makespan values translate into richer food sources.
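In a permutation with repetition, job index j occurs once per task of \(J_j\), and its k-th occurrence stands for the k-th task of that job. The sketch below shows only this reading of a chromosome as a task order (our illustrative code; the insertion-based decoding itself, which places each task at its earliest feasible gap, is omitted):

```cpp
#include <vector>

// Translates a permutation with repetition over job indices into a
// sequence of task indices, where the tasks of job j occupy the
// contiguous index block starting at firstTask[j].
std::vector<int> toTaskOrder(const std::vector<int>& chromosome,
                             const std::vector<int>& firstTask) {
    std::vector<int> nextOcc(firstTask.size(), 0); // occurrences of each job seen so far
    std::vector<int> order;
    order.reserve(chromosome.size());
    for (int j : chromosome)
        order.push_back(firstTask[j] + nextOcc[j]++); // k-th occurrence -> k-th task of job j
    return order;
}
```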

The ABC starts by generating and evaluating an initial pool \(P_0\) of random food sources, and the best food source in the pool is assigned to the hive queen. Then, ABC iterates over a number of cycles, each consisting of three phases mimicking the behaviour of three types of foraging bees: employed, onlooker and scout. In the employed bee phase, each food source is assigned to one employed bee, which explores a new candidate food source between its own food source and the queen's, evaluating the candidate and sharing this information with the rest of the hive. If the new food source is equivalent to the queen's (i.e. the best food source found so far), it is discarded for the sake of maintaining diversity in the pool. If it is not discarded and it improves the original food source (i.e. has a smaller makespan value), it replaces it; otherwise, the number of improvement trials \( fs.numTrials \) of the original food source is increased by one. In the next phase, each onlooker bee chooses a food source and tries to find a better neighbouring one. The new food source receives the same treatment as in the previous phase. In the scout bee phase, if the number of improvement trials of a food source reaches a maximum number \(NT_{max}\), a scout bee finds a new food source to replace the former one in the pool of solutions. The algorithm terminates after a number \( maxIter \) of consecutive iterations without finding a food source that improves the queen's. The following subsections provide more detail on each of the phases; the pseudo-code of the resulting ABC is given in Algorithm 1.
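Since Algorithm 1 is not reproduced here, the following skeleton summarises one ABC cycle as we read the three phases (a sketch under our own reading: all identifiers are illustrative, the phase operators are supplied by the caller, and the application probabilities \(p_{emp}\) and \(p_{on}\) are omitted for brevity):

```cpp
#include <functional>
#include <vector>

struct FoodSource {
    std::vector<int> chromosome; // permutation with repetition
    double fitness;              // midpoint of the interval makespan
    int numTrials = 0;           // consecutive failed improvement attempts
};

// One simplified ABC cycle over the pool of food sources.
void abcCycle(std::vector<FoodSource>& pool, FoodSource& queen, int NTmax,
              const std::function<FoodSource(const FoodSource&, const FoodSource&)>& recombine,
              const std::function<FoodSource(const FoodSource&)>& neighbour,
              const std::function<FoodSource()>& randomSource,
              const std::function<const FoodSource&(const std::vector<FoodSource>&)>& guide) {
    // Employed bee phase: explore between each source and a guiding source.
    for (FoodSource& fs : pool) {
        FoodSource cand = recombine(fs, guide(pool));
        if (cand.fitness == queen.fitness) continue; // equivalent to the queen: discard
        if (cand.fitness < fs.fitness) fs = cand;    // richer source replaces the old one
        else ++fs.numTrials;                         // failed improvement attempt
    }
    // Onlooker bee phase: local search around sources still under trial.
    for (FoodSource& fs : pool) {
        if (fs.numTrials >= NTmax) continue;
        FoodSource cand = neighbour(fs);
        if (cand.fitness < fs.fitness) fs = cand;
        else ++fs.numTrials;
    }
    // Scout bee phase: abandon exhausted sources (numTrials reset to 0).
    for (FoodSource& fs : pool)
        if (fs.numTrials >= NTmax) fs = randomSource();
    // Update the queen, i.e. the best food source found so far.
    for (const FoodSource& fs : pool)
        if (fs.fitness < queen.fitness) queen = fs;
}
```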

3.1 Employed Bee Phase

Originally, the employed bees' search is always guided by the queen's food source. However, this strategy may on occasion cause a lack of diversity in the swarm and lead to premature convergence [2]. To address this issue, we propose to modify the original algorithm and select the guiding food source from an elite group that contains the most suitable food sources according to one of the following strategies. In the first strategy, denoted \(\mathsf {Elite}_1\), the elite group contains only the best-found food source, so it is equivalent to the classical ABC. In the second strategy, denoted \(\mathsf {Elite}_2\), the elite group contains the food sources with the highest number of improvement trials at the beginning of the iteration, and the best food source in this set is selected to guide the employed bee. Finally, in the third strategy, denoted \(\mathsf {Elite}_3\), the elite group contains the best B food sources in the current swarm, and a solution from this group is chosen at random to guide the employed bee. B is a parameter of the algorithm that helps balance diversity: when \(B=1\), this strategy is equivalent to \(\mathsf {Elite}_1\), and the larger B is, the more diversity is introduced into the phase.
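A possible rendering of the three strategies, reusing the FoodSource type from the skeleton above (our sketch; tie-breaking and efficiency are glossed over):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Chooses the guiding food source for an employed bee under strategy
// Elite_1, Elite_2 or Elite_3 (parameter B is only used by Elite_3).
const FoodSource& eliteGuide(const std::vector<FoodSource>& pool,
                             const FoodSource& queen,
                             int strategy, int B, std::mt19937& rng) {
    if (strategy == 1) return queen; // Elite_1: classical ABC, queen only
    if (strategy == 2) {
        // Elite_2: among the sources with the highest numTrials,
        // select the one with the best (lowest) fitness.
        int maxTrials = 0;
        for (const FoodSource& fs : pool)
            maxTrials = std::max(maxTrials, fs.numTrials);
        const FoodSource* best = nullptr;
        for (const FoodSource& fs : pool)
            if (fs.numTrials == maxTrials && (!best || fs.fitness < best->fitness))
                best = &fs;
        return *best;
    }
    // Elite_3: pick uniformly at random among the B best sources.
    std::vector<const FoodSource*> byFitness;
    for (const FoodSource& fs : pool) byFitness.push_back(&fs);
    std::nth_element(byFitness.begin(), byFitness.begin() + B, byFitness.end(),
                     [](const FoodSource* a, const FoodSource* b) {
                         return a->fitness < b->fitness;
                     });
    std::uniform_int_distribution<int> pick(0, B - 1);
    return *byFitness[pick(rng)];
}
```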

Once two food sources are selected for each employed bee, a recombination operator is applied with probability \(p_{emp}\) to find a new food source to explore. Here, taking advantage of the solution encoding, we propose to use the following operators: Generalised Order Crossover (GOX), Job-Order Crossover (JOX) and Precedence Preservative Crossover (PPX).
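As an illustration of how such operators respect the encoding, here is a sketch of JOX (our code, written from the operator's usual description: the genes of a randomly chosen job subset keep their positions from the first parent, and the remaining positions are filled with the other jobs' genes in the order they appear in the second parent; GOX and PPX follow a similar spirit):

```cpp
#include <random>
#include <vector>

// Job-Order Crossover (JOX) on permutations with repetition.
std::vector<int> jox(const std::vector<int>& p1, const std::vector<int>& p2,
                     int nJobs, std::mt19937& rng) {
    std::bernoulli_distribution coin(0.5);
    std::vector<bool> keep(nJobs);
    for (int j = 0; j < nJobs; ++j) keep[j] = coin(rng); // random job subset

    std::vector<int> child(p1.size(), -1);
    for (std::size_t i = 0; i < p1.size(); ++i)
        if (keep[p1[i]]) child[i] = p1[i]; // fixed genes inherited from p1

    std::size_t k = 0; // next usable gene of p2
    for (std::size_t i = 0; i < child.size(); ++i) {
        if (child[i] != -1) continue;
        while (keep[p2[k]]) ++k;  // skip jobs already placed from p1
        child[i] = p2[k++];       // fill the gap in p2's order
    }
    return child; // still a valid permutation with repetition
}
```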

3.2 Onlooker Bee Phase

In this phase, food sources are selected from those that have not reached the maximum number of improvement trials. Each selected food source is assigned to an onlooker bee that, with probability \(p_{on}\), explores a neighbouring solution. Neighbours are obtained by performing a small change on the food source using one of the following operators for permutations: Swap, Inversion or Insertion.
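The three moves are standard permutation operators; a brief sketch (our code, with positions i \(\le \) j chosen at random by the caller; all three preserve the permutation-with-repetition property):

```cpp
#include <algorithm>
#include <vector>

// Swap: exchange the genes at positions i and j.
void swapMove(std::vector<int>& c, std::size_t i, std::size_t j) {
    std::swap(c[i], c[j]);
}

// Inversion: reverse the subsequence between positions i and j.
void inversionMove(std::vector<int>& c, std::size_t i, std::size_t j) {
    std::reverse(c.begin() + i, c.begin() + j + 1);
}

// Insertion: remove the gene at position i and reinsert it at position j.
void insertionMove(std::vector<int>& c, std::size_t i, std::size_t j) {
    int gene = c[i];
    c.erase(c.begin() + i);
    c.insert(c.begin() + j, gene);
}
```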

3.3 Scout Bee Phase

In this last phase, a scout bee is assigned to each food source that has reached the maximum number of improvement trials. Since such a food source has not been improved after the given number of attempts, it is discarded and the scout bee is in charge of finding a replacement. To implement this phase, every food source \( fs \) with \( fs.numTrials \ge NT_{max}\) is replaced by a random one \( fs' \) with \( fs'.numTrials = 0\).

4 Experimental Results

The objective of this section is to evaluate the performance of the three variants of the ABC algorithm in comparison with the state of the art for the interval JSP with makespan minimization which, to our knowledge, is the genetic algorithm from [5], referred to as GA hereafter.

We consider 12 well-known instances of the job shop problem (in brackets, the size \(n \times m\)): FT10 (\(10\times 10\)), FT20 (\(20\times 5\)), La21, La24, La25 (\(15\times 10\)), La27, La29 (\(20\times 10\)), La38, La40 (\(15\times 15\)), ABZ7, ABZ8, and ABZ9 (\(20\times 15\)). Processing times are modified to be intervals, so given the original deterministic processing time \(p_o\) of a task, the interval time is \(\mathbf {p_o}=[p_o-\delta , p_o+\delta ]\), where \(\delta \) is a random value in \([0, 0.15p_o]\). The resulting IJSP instances are available online. We use a PC with an Intel Xeon Gold 6132 processor at 2.6 GHz and 128 GB RAM running Linux (CentOS v6.10) and a C++ implementation. Every variant of the algorithm is run 30 times on each instance to obtain representative data.

A parameter-tuning process has been carried out for the three variants of ABC, namely \(ABC_{E1}\), \(ABC_{E2}\) and \(ABC_{E3}\), where \(ABC_{Ei}\) incorporates the strategy \(\mathsf {Elite}_i\), \(i=1,2,3\), in the employed bee phase. In all cases, the population size is 250 and the stopping criterion is \( maxIter =25\) consecutive iterations without improving the best solution found so far. For \(ABC_{E3}\), the size of the elite set is \(B=50\) food sources. The final configuration of the remaining parameters for each variant of ABC is shown in Table 1.

Table 1. Final parameter setup for each variant of ABC

Table 2 summarises the results obtained by the GA from [5] and the three ABC variants. For each algorithm and instance, it reports the expected makespan (or midpoint) of the best-found solution (\(m(\mathbf {Best})\)), the average expected makespan across all runs, the standard deviation, and the average CPU time in seconds. The best result for each instance is highlighted in bold. Additionally, ANOVA or Kruskal-Wallis statistical tests have been performed on the results, depending on the normality of the data, followed by a multi-variable analysis. Grey cells highlight those algorithms with no significant difference w.r.t. the best solution on that instance.

Table 2. Computational results and times of GA and ABC

In terms of \(m(\mathbf {Best})\), GA is outperformed by \(ABC_{E2}\) and \(ABC_{E3}\) on every instance. \(ABC_{E2}\) improves on GA by \(1.82\%\) on average, and by up to \(5.63\%\) on instance La29. Regarding average behaviour, \(ABC_{E3}\) obtains the best results on 10 out of 12 instances, while it is not significantly different from the best on the remaining 2 instances. On average, its results are \(3.02\%\) better than those of GA. However, it is not significantly different from \(ABC_{E2}\) on any instance. What is more interesting is that \(ABC_{E1}\) is never in the set of best methods, which reinforces the hypothesis that the standard ABC is not adequate for our problem and that the proposed alternatives offer a significant improvement both w.r.t. the standard ABC and w.r.t. the state-of-the-art GA.

We can also observe that all ABC variants take longer running times than GA. The reason is that one iteration of ABC takes longer than an iteration of GA, and the dynamic stopping criterion also translates into more iterations (hence, longer running times) for ABC. However, the efficiency of \(ABC_{Ei}\) per time unit is comparable to, if not better than, that of GA. Figure 1 depicts the evolution of the midpoint of the best makespan for representative instances La29 and La40. In both cases, \(ABC_{E1}\) (red line) and \(ABC_{E3}\) (green line) not only outperform GA (black line) in the final result, but also present a better makespan improvement rate per time unit. For \(ABC_{E2}\) (blue line) this improvement rate is very similar to that of GA, but \(ABC_{E2}\) achieves a better final result by taking longer to converge. This longer convergence time is also observed in \(ABC_{E3}\), the version that achieves the best results on average.

Fig. 1. Evolution along time of the makespan’s midpoint for the best schedules obtained with GA (in black) and the different variants of ABC on instances La40 and La29. (Color figure online)

Finally, we perform a robustness analysis on the solutions obtained by each method. To do so, for each variant of ABC, each instance and each one of the 30 runs, we take the expected makespan according to the obtained solution as well as the associated task processing order. This task order is then executed for \(K=1000\) deterministic realisations of each instance to calculate the \(\overline{\epsilon }\) value. Figure 2 shows the boxplots of the resulting \(\overline{\epsilon }\) values on three representative instances. Statistical tests on all instances allow us to conclude that there is no significant difference between the robustness of the three variants of the ABC. This homogeneity shows that the newly proposed variants \(ABC_{E2}\) and \(ABC_{E3}\) can obtain better results than a standard ABC (\(ABC_{E1}\)) and GA without deteriorating the robustness of the solutions.

Fig. 2. \(\overline{\epsilon }\)-robustness of schedules obtained with the different variants of ABC on instances La29, La38 and La40.

Algorithm 1. Pseudo-code of the proposed ABC.

5 Conclusions

We have considered the IJSP, a version of the JSP that uses intervals to model the uncertainty in task durations that appears in real-world problems. We have used an ABC approach as the solving method, adapting the general scheme to our problem, and we have tackled the lack of diversity in the swarm by redesigning certain aspects of the employed and onlooker bee phases. This has resulted in three variants of the ABC algorithm. An experimental analysis has shown the potential of these variants, especially those introducing more diversity, \(ABC_{E2}\) and \(ABC_{E3}\), which outperform both a more standard \(ABC_{E1}\) and the state of the art from the literature. This improvement is present not only when all methods are allowed to converge and stop after \( maxIter \) iterations without improvement, but also when the stopping criterion is changed so that all methods are given the same runtime as the state-of-the-art method. Finally, a robustness analysis has shown that the makespan improvement of the new methods is not obtained at the expense of deteriorating the solutions’ robustness.