
1 Introduction

The classical job shop problem (CJSP) deals with scheduling n jobs on m machines and is known to be NP-hard [1]. Each job consists of a set of operations with pre-specified sequences and processing times. In today’s competitive environment, however, companies often need more flexible and efficient production systems to satisfy their requirements. Not only must automation and flexible machines be used, but flexible scheduling must be designed as well.

The flexible job shop scheduling problem (FJSP) extends the CJSP by not restricting operations to pre-specified machines [2, 3]. This flexibility allows the problem to be modeled more realistically; however, exact methods are unable to solve the problem efficiently. The FJSP encompasses two sub-problems: assigning each operation to one of the capable machines and sequencing the operations on the machines. Brucker and Schlie [4] studied the FJSP for the first time and introduced a polynomial algorithm for the problem with two jobs. Although exact methods can in theory find optimal solutions in some cases, their computational time is so long that they are impractical. Researchers have therefore been trying to find ways to obtain optimal or near-optimal solutions in reasonable computational time. In recent years, several heuristic and meta-heuristic methods have shown promise in achieving this goal, including tabu search (TS), simulated annealing (SA), ant colony optimization (ACO) and the genetic algorithm (GA) [5–8].

A new evolutionary algorithm, the imperialist competitive algorithm (ICA), has recently been proposed by Atashpaz and Lucas [9]. This meta-heuristic has shown promising results on several engineering and industrial engineering problems [10–15]. Combining two or more meta-heuristic methods often achieves an efficiency that neither can reach alone; here, TS, SA and variable neighborhood search (VNS) play an important role. Tavakkoli-Moghaddam et al. [16] and Naderi et al. [17] presented hybridizations of the electromagnetism-like mechanism and SA. Other hybrid meta-heuristics for the problem are also available [18–22]. Furthermore, Shahsavari-pour and Ghasemishabankareh [23] presented a hybrid GA and SA algorithm for the FJSP. In the present paper, an efficient hybrid of ICA and SA is applied to the FJSP for the first time.

As mentioned earlier, since the FJSP is NP-hard, meta-heuristic algorithms have significant advantages over exact methods. Hybridizing meta-heuristic methods has attracted much attention from researchers. This paper proposes a new hybrid algorithm, the hybrid imperialist competitive algorithm with simulated annealing (HICASA), in which SA acts as a local search while ICA performs the global search of the solution space. The FJSP is treated as a single-objective problem and the proposed algorithm minimizes the makespan. A robust parameter-setting procedure is applied to tune all parameters of HICASA, GA and ICA. On the same benchmark, the results show that HICASA is superior to GA and ICA.

The remainder of the paper is organized as follows: Sect. 2 gives the problem representation. Section 3 describes the solution methodologies for the FJSP. The experimental design and computational results are provided in Sect. 4. Finally, conclusions are presented in Sect. 5.

2 Problem Representation

The FJSP consists of n jobs to be scheduled on m machines. The jobs are represented by the set J = {1, 2, …, n} and the machines by the set M = {1, 2, …, m}. The goal of the optimization is to generate a feasible schedule that minimizes the objective function while satisfying the problem constraints. The following assumptions are made: all machines are available at time zero, all jobs can be processed at time zero, each machine processes at most one operation at a time, each job is processed on at most one machine at a time, and transportation times are neglected. In the model below, K(j) denotes the number of operations of job j, C_lji the completion time of operation l of job j on machine i, and P_lji the corresponding processing time.

The mathematical model of the FJSP is given as follows:

$$ Z = \min \{ \max \{ C_{K(j)ji} \} \};\quad j \in J;\quad i \in M $$
(1)

s.t.:

$$ C_{lji} - P_{lji} \ge C_{(l-1)jp} \quad l = 1,2,\ldots,K(j)\quad j = 1,2,\ldots,n\quad i,p = 1,2,\ldots,m $$
(2)
$$ C_{lji} - P_{lji} \ge C_{qei} \quad q,l = 1,2,\ldots,K(j)\quad e,j = 1,2,\ldots,n\quad i = 1,2,\ldots,m $$
(3)
$$ C_{lji} \ge 0\quad l = 1,2,\ldots,K(j)\quad j = 1,2,\ldots,n\quad i = 1,2,\ldots,m, $$
(4)

Equation (1) is the objective function (makespan) to be minimized. The problem contains two basic restrictions. The first is the precedence constraint among the operations of a job: operation l of job j cannot start until all of its preceding operations (operations 1 to l − 1) have been completed (Eq. (2)). The second is the non-overlapping constraint of operations on a machine, given by Eq. (3): a machine does not start the next operation until the current one is completely finished. Equation (4) ensures non-negative completion times.
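As an illustration of constraints (2) and (3), the following Python sketch (the data structure and function name are illustrative, not part of the model) checks whether a candidate schedule respects both the precedence and the non-overlap restrictions.

# Minimal sketch (not from the paper): a schedule entry is assumed to be
# (job, op_index, machine, start, completion), with processing time = completion - start.
def is_feasible(schedule):
    # Precedence, Eq. (2): operation l of a job may start only after operation l-1 finishes.
    by_job = {}
    for job, op, mach, start, comp in schedule:
        by_job.setdefault(job, []).append((op, start, comp))
    for ops in by_job.values():
        ops.sort()
        for (_, _, comp_prev), (_, start_next, _) in zip(ops, ops[1:]):
            if start_next < comp_prev:
                return False
    # Non-overlap, Eq. (3): on each machine, operations must not overlap in time.
    by_mach = {}
    for job, op, mach, start, comp in schedule:
        by_mach.setdefault(mach, []).append((start, comp))
    for intervals in by_mach.values():
        intervals.sort()
        for (_, comp_a), (start_b, _) in zip(intervals, intervals[1:]):
            if start_b < comp_a:
                return False
    return True

# Example: job 1 = (O11 on M1, O12 on M2), job 2 = (O21 on M2); feasible, makespan 5.
print(is_feasible([(1, 1, 1, 0, 3), (1, 2, 2, 3, 5), (2, 1, 2, 0, 3)]))  # True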

3 Solution Procedure

3.1 Proposed HICASA Algorithm

The imperialist competitive algorithm is an efficient evolutionary algorithm for solving discrete optimization problems [9]. In this algorithm the population consists of countries, which are divided into two categories: imperialists and colonies. An imperialist together with its colonies is called an empire. Competition among empires continues, and the most powerful imperialist has the highest chance of taking the weakest colony of the weakest empire. This process continues until only one imperialist remains and all countries belong to a single empire.

HICASA is a novel meta-heuristic that integrates ICA with SA. SA functions as a neighborhood search and improves the convergence of the solutions. During this process, some colonies are chosen randomly from each empire and SA is used as a neighborhood search to alter them. The altered colonies replace the previous ones in each empire and the algorithm proceeds. The structure of HICASA is as follows (all ICA equations are taken from Atashpaz and Lucas [9]):

Establishing Initial Empires.

In ICA the initial population of colonies is generated randomly and stored as arrays. Each individual in the population is the counterpart of a chromosome in GA. For the FJSP, each array consists of two strings [24]: the first string holds the machine assignments and the second string holds the order of the job operations. Figure 1 shows an example of an array for the experiments in Sect. 4. The fifth element of the first string is 2 and its counterpart in the second string is 3, meaning that the second operation of job 3 is performed on machine 2 (O_ij denotes operation j of job i). The size of the initial population is denoted NC (number of countries).

Fig. 1. Array structure of a country (colony).
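As a concrete illustration of this two-string encoding, the sketch below builds a random country for a small hypothetical instance (the processing-time data are not those of Table 1) and decodes it into a schedule by placing each operation at the earliest time allowed by its machine and by the preceding operation of the same job. The decoding rule and the position-wise pairing of the two strings are our reading of Fig. 1, not a procedure stated in the paper.

import random

# Hypothetical data: proc[j][l][i] = processing time of operation l (0-based) of job j
# on machine i; a machine absent from the dict cannot process that operation.
proc = {
    1: [{1: 3, 2: 5}, {2: 4, 3: 2}],          # job 1: two operations
    2: [{1: 2, 3: 4}, {1: 6, 2: 3}, {3: 2}],  # job 2: three operations
}

def random_country(proc):
    """Build the two-string array of Fig. 1: machine string + operation string."""
    op_string = [j for j, ops in proc.items() for _ in ops]
    random.shuffle(op_string)                  # k-th occurrence of job j = operation k of j
    machine_string, counter = [], {j: 0 for j in proc}
    for j in op_string:
        l = counter[j]
        machine_string.append(random.choice(list(proc[j][l])))  # any capable machine
        counter[j] = l + 1
    return machine_string, op_string

def makespan(machine_string, op_string, proc):
    """Decode the array, respecting Eqs. (2) and (3), and return C_max (Eq. (5))."""
    machine_free, job_ready = {}, {}
    counter = {j: 0 for j in proc}
    for j, i in zip(op_string, machine_string):
        l = counter[j]
        start = max(machine_free.get(i, 0), job_ready.get(j, 0))
        machine_free[i] = job_ready[j] = start + proc[j][l][i]
        counter[j] = l + 1
    return max(job_ready.values())

country = random_country(proc)
print(country, makespan(*country, proc))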

Calculating Objective Function and Generating Colonies.

In order to evaluate the colonies’ power, a cost function must be calculated. In the FJSP the cost function is the makespan (C_max). For each colony the value of C_max is:

$$ C_{\max} = \max \{ C_{K(j)ji} \};\quad j = 1,2,\ldots,n;\quad i = 1,2,\ldots,m $$
(5)

Then the N_imp (number of empires) best members are chosen as imperialists. The remaining N_col countries are distributed among the imperialists as colonies. The number of colonies of each imperialist depends on its power. To calculate this power, the normalized cost (i.e., normalized makespan) is first computed according to Eq. (6):

$$ C_{O} = c_{O} - \mathop {\hbox{max} }\limits_{{i = 1,2,\ldots,N_{imp} }} \{ c_{i} \} ;\forall O = 1,2,\ldots,N_{imp} , $$
(6)

where c_O is the makespan of the Oth imperialist and C_O is its normalized value. The relative power of each imperialist (p_O) is then calculated as follows:

$$ p_{O} = \left| {\frac{{C_{O} }}{{\sum\limits_{i = 1}^{{N_{imp} }} {C_{i} } }}} \right|;\forall O = 1,2,\ldots,N_{imp} $$
(7)

Each imperialist’s relative power determines the portion of the colonies it should possess. Hence, the number of colonies of each imperialist is computed by Eq. (8):

$$ N.C._{O} = round\{ p_{O} \times N_{col} \} ;\forall O = 1,2,\ldots,N_{imp} , $$
(8)

where N.C._O is the initial number of colonies of the Oth imperialist and round is the function that rounds a decimal number to the nearest integer. Each imperialist thus receives N.C._O colonies, chosen randomly from the remaining countries.
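The following sketch puts Eqs. (6)–(8) together: given the makespans of all countries, it selects the imperialists, computes their normalized costs and powers, and hands out the colonies. The empire data structure and the handling of rounding leftovers are illustrative choices, not specified in the paper.

import random

def build_empires(costs, n_imp):
    """costs[k] = makespan of country k; the n_imp cheapest countries become imperialists."""
    order = sorted(range(len(costs)), key=lambda k: costs[k])
    imperialists, colonies = order[:n_imp], order[n_imp:]
    worst = max(costs[k] for k in imperialists)
    norm = [costs[k] - worst for k in imperialists]          # Eq. (6)
    total = sum(norm) or 1.0                                 # guard: all imperialists equal
    power = [abs(c / total) for c in norm]                   # Eq. (7)
    n_col_each = [round(p * len(colonies)) for p in power]   # Eq. (8)
    n_col_each[0] += len(colonies) - sum(n_col_each)         # absorb rounding leftovers
    random.shuffle(colonies)                                 # colonies are assigned randomly
    empires, start = [], 0
    for imp, k in zip(imperialists, n_col_each):
        empires.append({"imperialist": imp, "colonies": colonies[start:start + k]})
        start += k
    return empires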

Assimilating.

The empires attempt to increase their power by improving their colonies. In other words, they push their colonies to become similar to their imperialist by making constructive changes to their structures. This process is similar to crossover in GA. The assimilation procedure designed for the FJSP is shown in Fig. 2.

Fig. 2. Assimilating process.
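The exact assimilation steps are given in Fig. 2, which is not reproduced here; the sketch below shows one plausible crossover-like assimilation for the two-string encoding and is an assumption rather than the paper’s procedure. It adopts each of the imperialist’s machine choices with probability pc and mixes the operation orders with a job-based order crossover so that occurrence counts (and hence operation indices) remain valid.

import random

def positions_to_ops(mach, ops):
    """Map each (job, operation-index) pair to its assigned machine."""
    count, out = {}, {}
    for j, m in zip(ops, mach):
        l = count.get(j, 0)
        out[(j, l)] = m
        count[j] = l + 1
    return out

def assimilate(colony, imperialist, pc=0.5):
    """Hypothetical assimilation move; colony / imperialist = (machine_string, op_string)."""
    col_assign = positions_to_ops(*colony)
    imp_assign = positions_to_ops(*imperialist)
    # Machine part: per operation, adopt the imperialist's machine with probability pc.
    new_assign = {k: (imp_assign[k] if random.random() < pc else m)
                  for k, m in col_assign.items()}
    # Sequence part: jobs inherited from the imperialist keep its positions,
    # the remaining slots are filled in the colony's relative order.
    col_ops, imp_ops = colony[1], imperialist[1]
    inherited = {j for j in set(col_ops) if random.random() < pc}
    leftover = iter(j for j in col_ops if j not in inherited)
    new_ops = [j if j in inherited else next(leftover) for j in imp_ops]
    # Rebuild the position-wise machine string for the new operation order.
    count, new_mach = {}, []
    for j in new_ops:
        l = count.get(j, 0)
        new_mach.append(new_assign[(j, l)])
        count[j] = l + 1
    return new_mach, new_ops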

The cost function (makespan) of the colonies resulting from the assimilation process is then calculated by Eq. (1).

Neighborhood Search Through SA Algorithm.

As mentioned before, a neighborhood search is suitable for fine-tuning the solutions produced by a global optimization method. In this paper SA is integrated with ICA for this purpose. The procedure works as follows: NSA colonies are chosen randomly from each empire and SA is carried out for each chosen colony. The neighborhood search first operates on the second string (operation order) of the array; if no improvement is obtained after NI iterations, it switches to the first string (machine string). The resulting array replaces the current solution. The SA-based neighborhood search is shown in Fig. 3.

Fig. 3. Pseudo code of neighborhood search using SA.
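Figure 3 gives the authors’ pseudo-code; the sketch below is a generic SA neighborhood search built on the same idea, written as an assumption rather than a reproduction of Fig. 3. It works on the (job, operation) → machine mapping produced by positions_to_ops above so that both kinds of moves stay feasible: swaps on the operation string first, a switch to machine re-assignment after NI non-improving moves, Metropolis acceptance and geometric cooling with rate alpha.

import math
import random

def cmax(assign, ops, proc):
    """Makespan for assign[(job, op_index)] -> machine and operation sequence ops."""
    machine_free, job_ready, count = {}, {}, {}
    for j in ops:
        l = count.get(j, 0)
        i = assign[(j, l)]
        start = max(machine_free.get(i, 0), job_ready.get(j, 0))
        machine_free[i] = job_ready[j] = start + proc[j][l][i]
        count[j] = l + 1
    return max(job_ready.values())

def sa_neighborhood(assign, ops, proc, T0=10.0, alpha=0.9, S=20, R=10, NI=5):
    assign, ops = dict(assign), list(ops)
    cur = best = cmax(assign, ops, proc)
    best_sol, T, stall, target = (dict(assign), ops[:]), T0, 0, "ops"
    for _ in range(R):                                     # external (cooling) loop
        for _ in range(S):                                 # internal loop at temperature T
            cand_a, cand_o = dict(assign), ops[:]
            if target == "ops":                            # swap two positions of the op string
                a, b = random.sample(range(len(ops)), 2)
                cand_o[a], cand_o[b] = cand_o[b], cand_o[a]
            else:                                          # move one operation to a capable machine
                j, l = random.choice(list(cand_a))
                cand_a[(j, l)] = random.choice(list(proc[j][l]))
            cost = cmax(cand_a, cand_o, proc)
            if cost < cur or random.random() < math.exp((cur - cost) / T):
                assign, ops, cur = cand_a, cand_o, cost    # Metropolis acceptance
            if cost < best:
                best, best_sol, stall = cost, (dict(cand_a), cand_o[:]), 0
            else:
                stall += 1
                if stall >= NI:                            # switch to the machine string
                    target, stall = "mach", 0
        T *= alpha
    return best_sol, best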

The Replacement of Colony and Imperialist.

During the movement of colonies towards their imperialist and the neighborhood search, a colony may reach a better objective value than its imperialist. In that case, within each empire, the imperialist is replaced by the colony with the smallest cost.
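A minimal sketch of this replacement, assuming the empire dictionaries of the earlier sketch and a cost function mapping a country to its makespan:

def update_imperialist(empire, cost):
    """Promote the cheapest colony if it beats the imperialist."""
    if not empire["colonies"]:
        return
    best_colony = min(empire["colonies"], key=cost)
    if cost(best_colony) < cost(empire["imperialist"]):
        empire["colonies"].remove(best_colony)
        empire["colonies"].append(empire["imperialist"])
        empire["imperialist"] = best_colony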

The total power of an empire is the sum of the imperialist’s power and a percentage of the mean power of its colonies, calculated as follows:

$$ T.C._{n} = C_{n}(\mathrm{imperialist}_{n}) + \xi \cdot \mathrm{mean}\{ C_{n}(\mathrm{colonies\ of\ empire}_{n}) \}, $$
(9)

where T.C._n is the total cost of the nth empire and ξ is a positive coefficient that determines the influence of the colonies’ power on the total power of the empire.

During a run, an empire that cannot increase its power gradually loses its competitive strength. Competition among empires is continuous, and stronger empires have a higher chance of seizing a colony; this chance is expressed by the possession probability. The total normalized cost of each empire (N.T.C._O) is calculated by Eq. (10) and the possession probability of each empire by Eq. (11):

$$ N.T.C_{O} = T.C._{O} - \mathop {\hbox{max} }\limits_{{i = 1,2,\ldots,N_{imp} }} \{ T.C._{i} \} ;\forall O = 1,2,\ldots,N_{imp} $$
(10)
$$ p_{{p_{O} }} = \left| {\frac{{N.T.C._{O} }}{{\sum\limits_{i = 1}^{{N_{imp} }} {N.T.C._{i} } }}} \right|;\forall O = 1,2,\ldots,N_{imp} $$
(11)

The contested colonies are then divided among the empires randomly, but in proportion to each empire’s possession probability. To do this, the vector \( P = [p_{{p_{1} }} ,p_{{p_{2} }} ,p_{{p_{3} }} ,\ldots,p_{{p_{N_{imp}} }} ] \) is formed first. Then a vector R of the same size as P is generated with random entries in the closed interval [0, 1] (\( R = [r_{1} ,r_{2} ,\ldots,r_{{N_{imp} }} ] \)). Finally the vector \( D = P - R = [p_{{p_{1} }} - r_{1} ,p_{{p_{2} }} - r_{2} ,\ldots,p_{{p_{N_{imp}} }} - r_{{N_{imp} }} ] \) is computed, and the colony is given to the empire whose entry in D is the largest.
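The sketch below strings Eqs. (9)–(11) and the D-vector rule together for one round of competition, again assuming the empire dictionaries and cost function of the earlier sketches; the choice of the contested colony (the weakest colony of the weakest empire) and the collapse handling follow the description above.

import random

def competition(empires, cost, xi=0.1):
    # Eq. (9): total cost of each empire.
    tc = [cost(e["imperialist"]) +
          (xi * sum(map(cost, e["colonies"])) / len(e["colonies"]) if e["colonies"] else 0)
          for e in empires]
    ntc = [t - max(tc) for t in tc]                    # Eq. (10): normalized total cost
    total = sum(ntc) or 1.0
    pp = [abs(v / total) for v in ntc]                 # Eq. (11): possession probability
    d = [p - random.random() for p in pp]              # D = P - R
    winner = empires[d.index(max(d))]
    loser = empires[tc.index(max(tc))]                 # weakest empire (largest total cost)
    if loser is not winner and loser["colonies"]:
        colony = max(loser["colonies"], key=cost)      # its weakest colony changes hands
        loser["colonies"].remove(colony)
        winner["colonies"].append(colony)
    # An empire left without colonies collapses; its imperialist joins the winner.
    for e in [e for e in empires if not e["colonies"] and e is not winner]:
        winner["colonies"].append(e["imperialist"])
        empires.remove(e)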

In each iteration the algorithm eliminates any empire that has no colonies; iteration by iteration all empires collapse except the strongest one. The algorithm stops when only one empire remains. The HICASA parameters are: NC, the number of countries; N_imp, the number of empires; Pc, the assimilation rate; ξ, the colony-influence coefficient; S and R, the numbers of internal and external loops in SA, respectively; NSA, the number of local searches performed by SA; and α, the temperature decreasing (cooling) rate.

3.2 Genetic Algorithm

GA, proposed by Holland [25], is one of the most widely used population-based stochastic search algorithms. GA starts from an initial population and improves the solutions through an evolutionary process, using two important operators, crossover and mutation, to modify solutions and produce offspring. A selection procedure generates the next generation, in which better solutions have a higher probability of being chosen. This process continues until the termination condition is satisfied. The crossover and mutation operators are adopted from [23, 24].

4 Design of Experiments

To evaluate the proposed algorithm, an FJSP instance with 4 jobs and 5 machines is considered. The processing-time data are taken from [26] and shown in Table 1. The problem is treated as a single-objective problem by minimizing the makespan, and the results of HICASA, ICA and GA are compared.

Table 1. Processing time of 4 × 5 problem.

4.1 Taguchi Parameter Design

The three algorithms are population-based, and their parameter values significantly affect the quality of the final solutions. Different methods exist for calibrating algorithm parameters [14]; in this paper the Taguchi method is used. The Taguchi method has been utilized for optimization [27, 28], including the tuning of evolutionary algorithms [29, 30]. It comprises three phases: system design, parameter design and tolerance design. Here it is used as a robust parameter design, in which parameter design determines the factor levels that give the best performance of a process or product.

In the Taguchi method, instead of running full factorial trials, an orthogonal array is used to carry out fewer experiments while still examining the effect of noise. The orthogonal array prescribes a limited number of factor-level combinations that yield information comparable to full factorial trials. A robust parameter design tries to minimize the effect of noise factors by achieving a higher signal-to-noise (S/N) ratio; in other words, a higher S/N value means that uncontrollable noise factors have less influence on the algorithm’s performance. The S/N value is calculated as [28]:

$$ S/N = - 10 \times \log_{10} (\text{objective function})^{2} $$
(12)
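For reference, the mean S/N value of one factor-level combination can be computed over its repeated runs as sketched below; the makespan values are hypothetical, and note that the standard smaller-the-better Taguchi formulation places the mean of the squared responses inside the logarithm, whereas Eq. (12) is applied here per run and then averaged.

import math

def mean_signal_to_noise(makespans):
    """Average of Eq. (12) over the repeated runs of one orthogonal-array row."""
    return sum(-10 * math.log10(c ** 2) for c in makespans) / len(makespans)

print(mean_signal_to_noise([11, 12, 11]))   # hypothetical makespans of three replications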

In this study the crucial factors of GA and ICA are selected according to previous research. Three factors of HICASA are the same as those of ICA, and the NSA parameter is added to capture the effect of the neighborhood search in the proposed algorithm. Interested readers can refer to [14, 30, 31]. Using the Taguchi method, the best combination of factors and levels is obtained for each algorithm, and this setting is used to compare the algorithms’ performance. The factors and their levels are shown in Table 2. The GA notation is: GN, the number of generations; Pop_size, the population size; Pc and Pm, the crossover and mutation probabilities, respectively.

Table 2. Factors and their level in GA, HICASA and ICA.

As shown in Table 2, GA has four 3-level factors, ICA has three 3-level factors and HICASA has four 3-level factors. To reduce the number of experiments, orthogonal arrays are used; the appropriate orthogonal array for GA, HICASA and ICA is L9 [14]. Table 3 shows the orthogonal arrays.

Table 3. Orthogonal array L9 for GA and HICASA.

The mean value of S/N is calculated and shown in Figs. 4, 5 and 6 for all algorithms.

Fig. 4. Mean S/N ratio for each level of factors in GA.

Fig. 5. Mean S/N ratio for each level of factors in ICA.

Fig. 6. Mean S/N ratio for each level of factors in HICASA.

According to Figs. 4, 5 and 6, the optimal factor levels are {A3, B3, C1, D2} for GA, {A2, B1, C2} for ICA and {A2, B2, C1, D3} for HICASA.

4.2 Experimental Results

The algorithms are implemented in Visual Basic for Applications (VBA) and run on a 2 GHz PC with 512 MB of RAM. Using the parameters set in the previous section, the problem of Table 1 is solved by the proposed HICASA, ICA and GA. Each algorithm is run 50 times and the average value is recorded as the final result. As shown in Fig. 7, the makespan obtained by HICASA converged to the optimal value of 11 in all 50 runs, whereas for GA and ICA the makespan did not reach the optimum in most runs. The proposed HICASA therefore obtains better solutions on the same benchmark. Figure 8 illustrates the solution obtained by HICASA for the problem of Table 1; the numbers in the Gantt chart indicate job numbers.

Fig. 7. Results of GA, ICA and HICASA.

Fig. 8. Gantt chart of the obtained solution by HICASA for the 4 × 5 problem.

5 Conclusions

In this paper a novel hybrid meta-heuristic, HICASA, has been developed to solve the FJSP. The proposed algorithm hybridizes ICA and SA and minimizes the makespan as the objective function. HICASA is compared with ICA and GA, and the computational comparison demonstrates that HICASA performs better than both. Thanks to the neighborhood search embedded in its procedure, HICASA solves FJSP optimization problems effectively. HICASA can also be applied to other scheduling problems, and it is possible to integrate this algorithm with further meta-heuristics.