1 Introduction

This paper deals with the flexible job shop problem (FJSP), a generalisation of the job shop problem (JSP) in which an operation may be processed by more than one machine, possibly with variable performance across the machines of its pool.

In the FJSP, there are n independent jobs J = {1, …, j, …, n} and a set of K machines M = {mk, 1 ≤ k ≤ K} available at time t0. Each job j consists of a predefined sequence of nj operations Oij, where Oij denotes the ith operation of job j. Each operation Oij is associated with a pool of machines mij. Given a machine mk from mij, the execution time of Oij on mk is eijk. A schedule assigns to each operation Oij a machine mk from its pool, a starting time sij and a completion time cij on that machine. The most commonly considered objective is the minimization of the maximum completion time (makespan). The FJSP is strongly NP-hard since it combines two sub-problems: the operation assignment problem and the classical job shop scheduling problem.

A feasible schedule of the FJSP can be modelled by a disjunctive graph whose set of nodes is the set of operations, to which dummy start and end nodes are added. Two types of arcs join the nodes: conjunctive arcs connect two successive operations in the execution order of a job, while disjunctive arcs connect two adjacent operations processed on the same machine. The makespan is the length of the longest path in the disjunctive graph. This path is called the critical path, and a graph may admit more than one critical path. Critical operations are those belonging to a critical path.
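Once all disjunctions are oriented, the disjunctive graph becomes acyclic and the makespan can be computed as the longest path from the dummy start to the dummy end node. The following sketch illustrates this on a generic oriented graph (the input format and node names are ours, for illustration only):

```python
from collections import defaultdict

def makespan(durations, arcs):
    """Length of the longest path from the dummy start 'S' to the
    dummy end 'T' in an oriented disjunctive graph. `durations`
    maps each node to its processing time (0 for 'S' and 'T');
    `arcs` lists the conjunctive and oriented disjunctive arcs."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    # Topological order (Kahn's algorithm)
    order = []
    stack = [n for n in durations if indeg[n] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # head[v] = earliest start of v = longest path length from 'S' to v
    head = {n: 0 for n in durations}
    for u in order:
        for v in succ[u]:
            head[v] = max(head[v], head[u] + durations[u])
    return head['T']
```

For example, on a graph with one chain S → a → b → T (durations 3 and 2) and a parallel node c of duration 4, the makespan is the longer of the two paths, 5.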

Much research has addressed the FJSP, most of it based on metaheuristics and heuristics. The efficiency of population-based metaheuristics is often enhanced with local search algorithms (Gao et al. 2008; Gen et al. 2009; Zhang et al. 2011; Singh and Mahapatra 2016). Several neighbourhood structures, mainly based on the disjunctive graph model, have been designed for the FJSP. In particular, those based on the critical path have demonstrated their ability to converge to good optima (Hurink et al. 1994; Mastrolilli and Gambardella 2000). Across the FJSP literature, no approach has established its superiority over the others, and the work of Mastrolilli and Gambardella (2000) still yields some of the best results.

Much of the recent literature addresses evolutionary algorithms (EAs) for the FJSP because of their ability to perform global search and provide good solutions in a short computation time. However, evolutionary algorithms suffer from stagnation of the search after several generations due to the domination of a few, or even one, highly efficient individuals. To deal with this issue, we have developed a new leader guided optimization (LGO) search technique. The main idea is to run many evolutionary instances independently, each guided by a leader. A leader is a powerful solution discovered in a previous evolution process. We have combined the LGO with a newly designed co-evolutionary algorithm (CEA) to solve the FJSP. The CEA adaptively applies multiple crossover and mutation operators so as to combine the effects of multiple evolution schemes in the same algorithm. The hybrid algorithm has demonstrated its performance on the FJSP in comparison with literature results. Two new optimal solutions are found on the rdata instances of Hurink et al. (1994).

The remainder of the paper is organized as follows: Sect. 2 presents the leader guided optimization. Section 3 describes the co-evolutionary algorithm. Section 4 presents the hybridization scheme and the results.

2 Leader guided optimization

The main idea is to guide the search process by a leader. The leader has a good performance or contains some promising characteristics. A leader is declared as the winner of a race. In each race, a group of individuals is guided by the leader. These individuals evolve during the race: they can either exchange experience or adopt evolution and diversification strategies. The winner of a race is saved as a leader; later, it will initiate another race with a newly generated group of individuals. An individual has a limited peak-performance period during its life, and thus a limited number of races to run. The winners of the races are stored in memory; they form a bank of leaders.

When a leader reaches its performance limit, i.e. the number of races it may perform during its life, a new leader is selected from the bank of leaders to become the new race leader. The race's leader enters the race with the sole purpose of guiding the other individuals, not of winning the race. During the race, the individuals evolve through combination and diversification operators within a population-based search technique. A race may be an independent run of a population-based search technique guided by the leader.

The leader guided optimization uses the following mechanisms:

  • Initiation of the bank of leaders: these leaders may be generated in different ways. They can be randomly generated then locally optimized, or obtained using some rapid heuristics.

  • Leader selection: this mechanism selects from the bank of leaders the one that will guide the next races. This individual may be the best performing, or selected according to a priority rule such as LIFO or FIFO.

  • Population initialization: the initial population of a race is composed of a set of newly generated solutions plus the race's leader. These solutions are created independently of the leader, either randomly or by constructive heuristics.

  • A race: a race is a population-based metaheuristic which allows the combination of the leader's features with those of the other individuals.

The global functioning of the leader guided optimization is described in Algorithm 1.

(Algorithm 1)
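Algorithm 1 is not reproduced here; the sketch below captures its main loop under our own assumptions (solutions carry a `cost` attribute, and `run_race` stands for any population-based search seeded with a leader — both names are ours, not from the paper):

```python
def leader_guided_optimization(init_leaders, run_race,
                               races_per_leader, total_races):
    """Sketch of the LGO main loop (minimization). `init_leaders` is
    the initial bank (e.g. locally optimized random solutions);
    `run_race(leader)` returns the best solution found by one race."""
    bank = list(init_leaders)          # unlocked leaders
    best = min(bank, key=lambda s: s.cost)
    races = 0
    while bank and races < total_races:
        # Select the most performing unlocked leader
        leader = min(bank, key=lambda s: s.cost)
        for _ in range(races_per_leader):   # the leader's performance limit
            winner = run_race(leader)
            races += 1
            if winner.cost < leader.cost:   # only dominating winners join the bank
                bank.append(winner)
            if winner.cost < best.cost:
                best = winner
            if races >= total_races:
                break
        bank.remove(leader)                 # the leader is now locked
    return best
```

A race here is a black box; in the paper it is a genetic algorithm whose initial population contains the leader.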

The leaders are stored in a k-ary tree data structure, where k is a parameter of the algorithm. The root of the tree is a null node. Each node of the tree has at most k children. For a given node, its children are leaders that have won a race guided by that node. Indeed, a leader performs at most k races in its life. Only the winners of the races are stored as leaders, so a child's efficiency is necessarily higher than that of its parent. The children of a node are stored from right to left in decreasing order of performance. The children of the root node are generated independently by the initiation mechanism of the bank of leaders. The leaders that have accomplished all their races are locked, and the others are unlocked. The new leader is selected from the unlocked ones. This data structure enables both depth and breadth search: one can either start with a node and perform the races on its children first, or move to the next node at the same level of the tree.

Suppose we have a minimization problem and have generated three initial leaders (x1, 100), (x2, 107) and (x3, 108). The winners of three independent races guided by x1 are (x11, 94), (x12, 90) and (x13, 93). Only two races guided by x2 have discovered new leaders: (x21, 94) and (x22, 96). The races guided by x3 have generated the leader (x31, 85). The resulting tree is given in Fig. 1.
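The bank-of-leaders tree can be sketched as follows (class and function names are ours; children are kept best-first here, whereas the paper stores them from right to left):

```python
class LeaderNode:
    """Node of the k-ary bank-of-leaders tree (minimization)."""
    def __init__(self, solution, cost, k):
        self.solution, self.cost, self.k = solution, cost, k
        self.children = []
        self.races_run = 0              # locked once races_run == k

    @property
    def locked(self):
        return self.races_run >= self.k

    def add_winner(self, solution, cost):
        """Store the winner of a race guided by this node."""
        child = LeaderNode(solution, cost, self.k)
        self.children.append(child)
        self.children.sort(key=lambda n: n.cost)   # best child first
        return child

def unlocked_leaders(root):
    """Collect unlocked nodes: the candidates to guide the next race."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        if node is not root and not node.locked:
            out.append(node)
        stack.extend(node.children)
    return out
```

Replaying the example above: the root gets children x1 (100), x2 (107) and x3 (108); after x1 has run its k races it becomes locked and its best child (90) becomes eligible to guide the next races.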

Fig. 1 The bank of leaders

We have applied the leader guided optimization to mathematical optimization functions that are commonly used to evaluate optimization algorithms.

We have considered the following well-known minimization functions:

  • Rosenbrock function:

    $$ \sum_{i=1}^{d-1} \left[ \left( 1 - x_i \right)^{2} + 100\left( x_{i+1} - x_i^{2} \right)^{2} \right], \quad -2.048 \le x_i \le 2.048 $$

    The global optimum is fmin = 0 at the point (1, 1) in two dimensions. The parameter d fixes the number of dimensions.

  • Eggcrate function:

    $$ g(x, y) = x^{2} + y^{2} + 25\left( \sin^{2} x + \sin^{2} y \right), \quad (x, y) \in \left[ -2\pi, 2\pi \right] \times \left[ -2\pi, 2\pi \right] $$

    The eggcrate function is multimodal. The global minimum is gmin = 0 at point (0, 0).

  • Michalewicz function:

    $$ f(x) = -\sum_{i=1}^{d} \sin(x_i) \left[ \sin\left( \frac{i\, x_i^{2}}{\pi} \right) \right]^{2m}, \quad m = 10 $$

    This function admits d! local optima in the interval 0 ≤ xi ≤ π for i = 1, …, d. The global minimum in two dimensions is fmin = − 1.801.
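The three benchmarks can be written directly; the implementations below follow the standard definitions of these test functions:

```python
import math

def rosenbrock(x):
    # Global minimum 0 at (1, ..., 1); x is a list of d coordinates
    return sum((1 - x[i]) ** 2 + 100 * (x[i + 1] - x[i] ** 2) ** 2
               for i in range(len(x) - 1))

def eggcrate(x, y):
    # Multimodal; global minimum 0 at (0, 0)
    return x ** 2 + y ** 2 + 25 * (math.sin(x) ** 2 + math.sin(y) ** 2)

def michalewicz(x, m=10):
    # d! local optima on [0, pi]^d; minimum ≈ -1.801 for d = 2.
    # enumerate() is 0-based, hence the (i + 1) factor.
    return -sum(math.sin(xi) * math.sin((i + 1) * xi ** 2 / math.pi) ** (2 * m)
                for i, xi in enumerate(x))
```

For d = 2, the Michalewicz minimum is attained near (2.20, 1.57).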

We have considered a standard genetic algorithm (GA) with binary encoding, one-point crossover, one-point mutation and roulette wheel selection. A race in the leader guided optimization is a genetic algorithm of 100 individuals with a crossover probability of 0.65 and a mutation probability of 0.1.

We run 18,000 generations of the standard genetic algorithm. For the LGO, we run 6 races of 3000 generations of the GA each. Every leader runs two races in its life. The results are given in Table 1.

For the Rosenbrock function, the bank of leaders is initiated with the individuals (s1, 11.2) and (s2, 1.28). s2 is better than s1, so s2 guides two races that are won by (s21, 10.07) and (s22, 1.17). The solution s22 is the best child, so it guides the following two races, which were won by the solutions (s221, 0.6) and (s222, 7.1).

Compared with the standard genetic algorithm, the LGO performed better on the 3 problems (Table 1). For the eggcrate function, the LGO reaches an optimum equal to 0.03 (global optimum = 0) while the GA returns the value 4.29. For the Michalewicz function, the optimal value is − 1.801; the LGO (− 1.71) is nearer to the optimum than the GA (− 1.61). For the Rosenbrock function, both algorithms return the same value. The LGO has thus proved its efficiency on well-known mathematical optimization functions. In the next section, we demonstrate its efficiency on a combinatorial optimization problem, the flexible job shop problem.

Table 1 Results for the test functions

3 A co-evolutionary algorithm for the FJSP

The adaptation of the evolutionary algorithm to the FJSP is called co-evolutionary because multiple crossover and mutation operators are used to obtain a diversified set of combination and diversification schemes. These genetic operators are applied with an adaptive frequency that depends on their ability to generate new efficient solutions. Each newly generated solution is optimized by local search.

The overall structure of the co-evolutionary process is described in Algorithm 2.

(Algorithm 2)

Coding

The coding is similar to the parallel machine encoding proposed in Mesghouni et al. (2004). A solution is coded as a table of K lists, where the kth list is the sequence of machine mk. This encoding is direct since it directly gives the assignment and the sequencing of the operations. To obtain the final schedule, each operation receives the earliest possible starting time.
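As an illustration of this encoding, the sketch below derives the earliest-start schedule for a toy 2-job, 2-machine instance. The instance data and the helper name are invented for the example; operations are identified as (job, index) pairs:

```python
# e_ijk: execution time of each operation on each allowed machine
exec_time = {
    (1, 1): {0: 3, 1: 4},   # first operation of job 1
    (1, 2): {1: 5},         # second operation of job 1, only on m1
    (2, 1): {0: 2, 1: 2},
    (2, 2): {0: 4, 1: 3},
}

# One encoding: a list of K machine sequences
solution = [
    [(1, 1), (2, 2)],       # sequence of machine m0
    [(2, 1), (1, 2)],       # sequence of machine m1
]

def earliest_start_schedule(solution, exec_time):
    """Give each operation its earliest feasible start: after both its
    machine predecessor and its job predecessor. Returns completion
    times; the makespan is their maximum."""
    completion = {}
    machine_free = [0] * len(solution)
    pending = {k: list(seq) for k, seq in enumerate(solution)}
    while any(pending.values()):
        progressed = False
        for k, seq in pending.items():
            if not seq:
                continue
            job, idx = seq[0]
            pred = (job, idx - 1)
            if idx > 1 and pred not in completion:
                continue                 # wait for the job predecessor
            start = max(machine_free[k], completion.get(pred, 0))
            completion[(job, idx)] = start + exec_time[(job, idx)][k]
            machine_free[k] = completion[(job, idx)]
            seq.pop(0)
            progressed = True
        assert progressed, "encoding is infeasible (cycle)"
    return completion
```

On this instance the schedule completes O22 at time 7 and O12 at time 8, giving a makespan of 8.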

Scheduling heuristics

The solutions are constructed through the application of scheduling and assignment rules. Two priority rules are used for operation selection:

  • Select randomly one operation from the queue list.

  • Select the operation with the earliest available time.

The machine selection is based on three assignment rules:

  • Assign the operation to a random machine from its pool

  • Assign the operation to the machine that can start it earlier

  • Assign the operation to the least loaded machine from its pool

Six combined heuristics, each pairing an operation rule with an assignment rule, are then used to generate the initial solutions; two of them are deterministic and the others are stochastic. The obtained solutions are improved by local search before their insertion into the initial population.
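The six heuristics are exactly the pairings of the two operation-selection rules with the three machine-assignment rules. A sketch, with hypothetical rule signatures and an abstract `state` object (both are our assumptions):

```python
import random
from itertools import product

# Operation-selection rules (hypothetical signatures)
def random_op(queue, state):
    return random.choice(queue)

def least_available_time_op(queue, state):
    return min(queue, key=lambda op: state.available_time(op))

# Machine-assignment rules
def random_machine(op, pool, state):
    return random.choice(pool)

def earliest_start_machine(op, pool, state):
    return min(pool, key=lambda mk: state.earliest_start(op, mk))

def least_loaded_machine(op, pool, state):
    return min(pool, key=lambda mk: state.load(mk))

# The six combined constructive heuristics: 2 operation rules x 3
# assignment rules. Only the pairings that avoid any random choice
# are deterministic -- exactly two of them.
HEURISTICS = list(product(
    [random_op, least_available_time_op],
    [random_machine, earliest_start_machine, least_loaded_machine],
))
DETERMINISTIC = [(o, m) for o, m in HEURISTICS
                 if o is least_available_time_op and m is not random_machine]
```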

Selection

Tournament selection is applied with a variable tournament size. The selection incorporates elitism since the best solution is always transferred to the next generation. A high selection pressure is allowed at the start of the search to accelerate convergence; then binary tournament selection is performed to avoid search stagnation. The selection pressure is decreased by one every T generations (λ(t) = max(λ(t − 1) − 1, 1) if t modulo T = 0), where T is a parameter of the CEA.
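The decay of the tournament size λ(t) can be traced in a few lines (the initial size and T used here are illustrative values, not the paper's):

```python
def tournament_sizes(initial, T, generations):
    """Trace lambda(t) per the rule lambda(t) = max(lambda(t-1) - 1, 1)
    whenever t is a multiple of T. Returns the size used at each
    generation t = 1..generations."""
    sizes, lam = [], initial
    for t in range(1, generations + 1):
        if t % T == 0:
            lam = max(lam - 1, 1)
        sizes.append(lam)
    return sizes
```

For example, starting from λ = 5 with T = 10, the size drops to 4 at generation 10, 3 at 20, and so on, never falling below 1.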

Local search

Local search is performed on every newly generated solution to improve the solution generation process locally. The genetic evolution provides promising starting points for the local search, intensifying the search in many directions.

The local search is based on the disjunctive graph model of Roy and Sussmann (1964) and the neighbourhood of Nowicki and Smutnicki (1996). It starts by decomposing the critical path into maximal blocks of adjacent critical operations processed on the same machine. A neighbouring solution is obtained by swapping the first two or the last two operations of a block.
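This neighbourhood can be sketched as follows, assuming the critical path is already available as an ordered list of operations (the representation is ours for illustration):

```python
def critical_blocks(critical_path, machine_of):
    """Decompose a critical path (ordered list of operations) into
    maximal blocks of adjacent operations on the same machine."""
    blocks, current = [], []
    for op in critical_path:
        if current and machine_of[op] != machine_of[current[-1]]:
            blocks.append(current)
            current = []
        current.append(op)
    if current:
        blocks.append(current)
    return blocks

def neighbourhood_moves(critical_path, machine_of):
    """Candidate moves: swap the first two and the last two operations
    of each critical block containing at least two operations."""
    moves = set()
    for block in critical_blocks(critical_path, machine_of):
        if len(block) >= 2:
            moves.add((block[0], block[1]))      # first two
            moves.add((block[-2], block[-1]))    # last two
    return moves
```

For a critical path a–b–c on machine 0 followed by d–e on machine 1, the moves are (a, b), (b, c) and (d, e).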

Stop criterion

The algorithm stops when a maximum number of iterations is reached or when the best solution has not changed for a fixed number of iterations.

Multiple crossover evolution

The combination of the chromosomes is performed via distinct crossover operators. The crossovers exchange sequences between parents to form new encodings in different ways. The generated encodings are feasible but incomplete. To preserve feasibility, operations that violate the scheduling constraints are inserted into a waiting list and re-inserted later by the scheduling heuristic. Generating partial encodings by combining the parents' encodings avoids repair techniques, which may modify the assignment or the operation sequencing inherited from the parents; good features could otherwise be lost through repair. Four crossover operators are proposed.

One Point Crossover with Priority List (OPCPL)

Let P1 and P2 be a couple of mates and k a cut point on the set of machines (1 < k < K). The cut point divides the mate P1 into two blocks of machine sequences A and B (respectively, P2 into C and D). The child E1 inherits block A from parent P1 and a part D′ of block D from parent P2. Operations in D that would violate the scheduling constraints if copied into the encoding of child E1 are inserted into a waiting list, along with any omitted operations. If the waiting list L is not empty (L ≠ ∅), the completion of the encoding and the computation of the schedule are performed by the scheduling heuristic. The child E2 is formed in the same way as E1, with the roles of P1 and P2 inverted.
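A possible reading of the OPCPL, with the scheduling-constraint check abstracted behind a `violates` callback (our assumption; in the paper the scheduling heuristic then completes the partial child from the waiting list):

```python
def opcpl(p1, p2, k, violates):
    """One Point Crossover with Priority List (sketch). p1 and p2 are
    lists of K machine sequences, k the cut point (1 < k < K).
    Returns the partial child encoding (block A from p1, feasible part
    of block D from p2) and the waiting list of pending operations."""
    child = [list(seq) for seq in p1[:k]]        # block A from p1
    copied = {op for seq in child for op in seq}
    waiting = []
    for seq in p2[k:]:                           # block D from p2
        kept = []
        for op in seq:
            if op in copied:
                continue                         # already placed via block A
            if violates(op, child):
                waiting.append(op)               # constraint violation
            else:
                kept.append(op)
                copied.add(op)
        child.append(kept)
    # Omitted operations join the waiting list as well
    all_ops = {op for seq in p1 for op in seq}
    waiting.extend(all_ops - copied - set(waiting))
    return child, waiting
```

With p1 = [[a], [b], [c]], p2 = [[b], [c], [a]] and k = 1, the child keeps [a] from p1 and [c] from p2, while b ends up in the waiting list.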

Two Points Crossover with Priority List (TPCPL)

Let P1 and P2 be a couple of mates and k1, k2 two cut points (1 < k1 < k2 < K) delimiting the crossover zone. The parent P1 is cut into three blocks of machines: block A of machines mi with 1 ≤ i ≤ k1, block B of machines mi with k1 < i ≤ k2, and block C of machines mi with k2 < i ≤ K. In the same way, parent P2 is divided into three blocks D, E and F. The child E1 receives from P1 the sequences on the machines inside the crossover zone and from P2 those outside the crossover zone, without violating the scheduling constraints; in fact, only feasible operations are copied from parent P2 to child E1. The obtained child is an incomplete encoding finalized by the scheduling heuristic. In the same way, the encoding of child E2 is composed of block E from P2 and the feasible parts of blocks A and C from P1.

Selective Machine Sequences Crossover (SMSC)

Only one offspring is produced by the SMSC operator. For each machine, the child inherits from one of the two parents the sequence satisfying one of the following criteria: minimum completion time, least loaded machine, or a random choice between the two parents. The obtained encoding is partial: omitted and infeasible operations are stored in a waiting list and afterwards rescheduled by the scheduling heuristics.

Multi-Parent Crossover (MPC)

To guarantee a high level of diversification, the MPC tries to join sequences from three parents to construct a child. Once a crossover zone is created by randomly generating two positions k1 and k2 (1 ≤ k1 < k2 < K), the child E1 receives from P1 the sequences inside the crossover zone, from P2 the k1 first sequences and from P3 the last sequences. This step is achieved by setting up a list L of pending operations. The MPC ends when all pending operations are inserted into their appropriate machines by the scheduling heuristics.

Multiple mutation evolution

The proposed mutation operators are simple heuristics that can guide the search in new promising directions. We have designed multiple mutation operators to conjugate their effects on the evolutionary process. All the mutation operators are designed for multipurpose operations, except the swap mutation on one machine and the mutation by job rescheduling, which do not modify the assignment of operations to machines. Six mutation operators are designed.

Mutation of operation

Reschedule a multipurpose operation on another machine from its pool.

Mutation of the loaded machine

Reschedule a multipurpose operation from the sequence of the most loaded machine to another machine from its pool.

Mutation of the last finishing job

If the most loaded machine executes a multipurpose operation of the last finishing job, then reschedule that operation if possible.

Mutation by job rescheduling

Extract all the operations of the last finishing job and reschedule them without modifying their assignments.

Swap mutation on one machine

Swap two operations in a machine's sequence.

Swap mutation on two machines

Swap two multipurpose operations, one from each of two machines.

All the mutation operators proceed by removing one or several operations from a given schedule, then rescheduling the removed operations. A candidate operation is inserted in the first available position in the sequence of the machine. Let Oij be an operation to be inserted in the sequence of machine mk. We start by seeking the last job predecessor of Oij executed by machine mk. Let Oi′j be that job predecessor, if it exists. Oij cannot be inserted before Oi′j.

In the same way, the operation Oij cannot be processed after one of its job successors on machine mk. Let Oi″j be the first job successor of Oij on machine mk.

We now look at the operations comprised between Oi′j and Oi″j. Let O be one of these operations. If a job predecessor of O is placed after a job successor of Oij in the sequence of another machine mk′, inserting Oij after O generates a cycle, so this insertion is forbidden. The operation Oij can be inserted before the first operation outside its job that has no job predecessor executed after a job successor of Oij. Let O′ be that operation.

The operation Oij is inserted in the first feasible position: after both Oi′j and O′, and before Oi″j.

Adaptive operator selection

A dynamic application probability is assigned to each evolutionary operator (mutation, crossover). An operator that performs well during the previous iterations has a higher chance of being applied: when an operator generates an offspring better than its parents, its probability is increased. During the search, weaker operators are penalized and stronger ones are encouraged for the given problem instance. The probability of a given operator is reinforced if the latter performs well; otherwise it is decreased. No operator is allowed to die out, and no single operator directs the search alone: the probabilities are kept within a pre-fixed interval. To do so, we have adapted the principle of the Adaptive Pursuit Algorithm (APA) as described in Thierens (2007).

Given A operators i = 1, …, A, we associate with operator i a counter ti equal to the number of times operator i has been applied, and an application probability Pi such that 0 ≤ Pi(t) ≤ 1 and \( \sum\nolimits_{i = 1}^{A} P_{i}(t) = 1 \). Every time an operator i is applied, its quality estimator is updated as follows:

$$ Q_{i}(t_{i} + 1) = Q_{i}(t_{i}) + \alpha \left[ R_{i}(t_{i}) - Q_{i}(t_{i}) \right] $$
(1)

where
$$ R_{i}(t_{i}) := \begin{cases} R_{Fix} + \dfrac{C_{p}}{C_{c}} - 1, & \text{if operator } i \text{ improves the solution quality} \\ 0, & \text{otherwise} \end{cases} $$
The quantity Ri(ti) is a reward received by operator i when its application improves the result; it depends on the objective function. The terms Cp and Cc are respectively the makespans of the parent and of the descendant after the application of operator i. In the case of a mutation operator, the child is the mutated individual. The parameter α is an adaptation rate, 0 < α ≤ 1.

Initially, all the operators have equal probabilities to be applied (Pi(0) = 1/A, ∀ i ∈ [1, A]).

When a given operator has been applied a number of times that is a multiple of the parameter maxQ, the probability vector is updated:

$$ i^{*} = \arg \max_{i \in 1 \ldots A} Q_{i}(t_{i} + 1) $$
(2)
$$ P_{i^{*}}(t_{i^{*}} + 1) := P_{i^{*}}(t_{i^{*}}) + \beta \left( P_{\max} - P_{i^{*}}(t_{i^{*}}) \right) $$
(3)
$$ \text{For } i \ne i^{*}, \quad P_{i}(t_{i} + 1) := P_{i}(t_{i}) + \beta \left( P_{\min} - P_{i}(t_{i}) \right) $$
(4)

The parameters of the adaptive pursuit were fixed after fine-tuning: α = 0.85, β = 0.85, RFix = 0.9. The parameter Pmax is set as Pmax = 1 − (A − 1) · Pmin with 0 < Pmin < 1. This setting ensures that the best operator is applied with a probability near 0.5 and that the others are applied with equal probability, so that no operator is neglected and none dominates the evolution process.
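The update rules (1)–(4) can be collected in a small class. The reward formula and parameter values follow the paper; the class interface and the Pmin value chosen here are ours:

```python
class AdaptivePursuit:
    """Adaptive operator selection (after Thierens 2007), with the
    reward R = R_fix + Cp/Cc - 1 on improvement and 0 otherwise."""

    def __init__(self, n_ops, alpha=0.85, beta=0.85, r_fix=0.9, p_min=0.05):
        self.alpha, self.beta, self.r_fix = alpha, beta, r_fix
        self.p_min = p_min
        self.p_max = 1 - (n_ops - 1) * p_min     # so probabilities sum to 1
        self.p = [1.0 / n_ops] * n_ops           # uniform start
        self.q = [0.0] * n_ops                   # quality estimators

    def reward(self, i, parent_makespan, child_makespan):
        """Apply rule (1) after operator i produced a child."""
        improved = child_makespan < parent_makespan
        r = (self.r_fix + parent_makespan / child_makespan - 1
             if improved else 0.0)
        self.q[i] += self.alpha * (r - self.q[i])

    def update_probabilities(self):
        """Apply rules (2)-(4): pursue the best operator."""
        best = max(range(len(self.q)), key=self.q.__getitem__)
        for i in range(len(self.p)):
            target = self.p_max if i == best else self.p_min
            self.p[i] += self.beta * (target - self.p[i])
```

After a rewarded application, the pursued operator's probability moves toward Pmax while the others move toward Pmin, and the vector keeps summing to 1.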

4 Experimental results and discussion

The proposed method was implemented in C++ on an i5 processor running at 2.4 GHz. The parameters of the EA were fixed after multiple fine-tuning runs as follows: crossover probability = 0.71, mutation probability = 0.6, population size = 50, 100, 200 and 300, and generation number = 10,000. We varied the mutation and crossover probabilities from 0 to 1 in steps of 0.1; for each combination, we ran the algorithm 5 times with a population of 50 individuals. The algorithm performs well with high mutation and crossover rates, but it converges rapidly since each generated solution is improved by local search. A reproduction rate between 0.6 and 0.8 achieves a good compromise between exploration and exploitation in the evolutionary process.

Three different sets of FJSP instances are used: the instances “mt06”, “mt10” and “mt20” taken from Fisher and Thompson (1963), and the three data sets of 40 instances “la01” to “la40”, namely “edata” with the least amount of flexibility, “rdata” where the average size of mij is 2, and “vdata” where the average size reaches m/2 (Hurink et al. 1994). We compare five algorithms: the evolutionary algorithm (EA), the co-evolutionary algorithm (CEA), the hybridization of the leader guided optimization with the CEA (LGO_CEA), the GA of Pezzella et al. (2008) and the tabu search (TS) of Mastrolilli and Gambardella (2000). The LGO_CEA runs multiple races guided by dominant solutions discovered in the previous search; each race is a population of individuals that evolve according to the CEA.

To illustrate the behavior of the leader guided optimization when applied to the flexible job shop problem we give in Fig. 2 a branch of the leader tree for the problem la38 vdata from Hurink et al. (1994).

Fig. 2 Leader tree for the instance la38 vdata

The initial leader has a makespan of 997. It has led 7 independent races, and its leftmost child is the best among its 7 children. This latter also leads 7 races but was dominated in only 3 of them; thus, the node x1 has only three children, the best of which is x11. This individual will guide the next races. At this step of the algorithm, the bank of leaders contains 11 leaders, 2 of which are locked since they have already completed their races.

In Table 2, we report the mean relative error (MRE) to the optimum for the algorithms on each test instance set. In Table 3, we give the number of solutions found within a given deviation from the optimum, as well as the number of optima found.

Table 2 Comparison of the MRE of algorithms EA, CEA, LGO_CEA, GA and TS for solving FJS
Table 3 Performance comparison between EA, CEA, LGO_CEA and TS sets for solving FJSPs

The algorithm EA is a multiple-crossover and multiple-mutation genetic algorithm that randomly selects the reproduction operator to apply for crossover or mutation. It uses the same operators as those implemented in the CEA. The CEA implements the guided selection strategy that favours the application of the best-performing operators during the search.

The CEA discovers more optima than the EA on the three test sets, and the mean deviation to the lower bounds is also lower. Among 129 instances, the CEA outperforms the EA on 78 instances, achieves the same performance on 47, and is worse on only 3. This result confirms the efficiency of the adaptive selection strategy for the genetic operators.

The CEA performs better when the flexibility increases, which is expected: when flexibility is low, the heuristics and the operators that address the assignment sub-problem have little effect, and the resulting lack of diversity slows down the convergence. The CEA achieves results close to those of the TS on the vdata instances: the mean error deviation lies between 0 and 1% for 18 instances out of 43. The CEA is thus near the optimal performance on a large set of instances.

We combine the CEA with the LGO to escape local optima by restarting the search while guiding the population with a dominant individual discovered in previous searches. The hybrid algorithm LGO_CEA outperforms the CEA on all the test sets. LGO_CEA finds 65 optimal solutions, 31 of which were also found by the CEA; for the remaining instances, LGO_CEA is closer to the optimum than the CEA. Tables 2 and 3 show that the new leader guided optimization, when combined with the CEA, brings remarkable improvements due to the enhanced exploration of the search space. The leader tree has allowed a deep, oriented exploration of the search space and has consistently discovered more efficient solutions than the CEA applied alone.

As with the CEA, the LGO_CEA achieves better performance when the flexibility increases: the number of optimal solutions found is higher for the vdata instances than for edata and rdata, and the mean relative error decreases as the flexibility increases, being lowest for the vdata set. The scheduling heuristics and the genetic operators that consider both the scheduling and the assignment problems have a lower diversification effect as the flexibility decreases.

Compared with the TS results, the LGO_CEA discovers two new optimal solutions, for the problems la01 and la15 of the rdata set (Fig. 3). In terms of the number of optimal solutions, LGO_CEA is very close to the TS of Mastrolilli and Gambardella (2000) on the edata and vdata instances but exceeds it on rdata. Additionally, our method outperforms the GA of Pezzella et al. (2008) on all types of instances.

Fig. 3 LA01 and LA15 optimal solutions

In Table 4, we give the five-number summary of the makespan and the relative standard deviation (RSD). These statistics are obtained from 20 independent runs of the LGO_CEA on four test instances: la25 from edata and la22, la28 and la30 from vdata.

Table 4 Study of the robustness of the LGO_CEA

For the problem la25, the standard deviation is 2.03% of the mean, so the obtained performances are tightly clustered around the mean. For the problems la28 and la30, the RSD is less than 1%, and for la22 it is also low. The LGO_CEA thus achieves similar performance when launched independently many times on the same test instances; we conclude that the algorithm is robust.
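The reported statistics can be reproduced as follows (a simple linear-interpolation quantile is used here; the exact quantile convention of the paper is not specified):

```python
import statistics

def five_number_summary(values):
    """Minimum, lower quartile, median, upper quartile, maximum."""
    xs = sorted(values)
    n = len(xs)
    def quantile(q):
        pos = q * (n - 1)                 # linear interpolation
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
    return xs[0], quantile(0.25), quantile(0.5), quantile(0.75), xs[-1]

def relative_std_dev(values):
    """RSD = 100 * sample standard deviation / mean, in percent."""
    return 100 * statistics.stdev(values) / statistics.mean(values)
```

For instance, 20 makespans clustered within about 2% of their mean would yield an RSD near the 2.03% reported for la25.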

Table 5 shows the execution times of the LGO_CEA and the CEA. The LGO_CEA runs races guided by the leaders until the bank of leaders becomes empty; each race is an independent execution of the CEA in which a leader from the bank joins the initial population of the evolutionary algorithm. As seen in Table 5, the hybrid algorithm is 3 to 22 times slower than the CEA. Nevertheless, the LGO_CEA achieves better objective values and succeeds in resolving the stagnation of the search in population-based metaheuristics, at the cost of an increased execution time.

Table 5 Study of execution time LGO_CEA and CEA

The hybridization of the LGO with the CEA has shown its efficiency since the results are improved on almost all the test instances. The CEA alone converges to local optima, and the evolutionary search is blocked by the presence of a dominant solution. When a dominant solution is inserted into a newly generated population, the evolutionary search evolves toward better performance since it takes advantage both of the leader's characteristics and of the diversity of the population. The LGO_CEA is robust: when executed many times on the same problem, it achieves similar performance. The main drawback of the LGO_CEA is its execution time, since the algorithm launches the population-based metaheuristic many times. Compared with the best results found in the literature, the LGO_CEA succeeds in finding new optimal solutions and achieves either the same or very close performance.

5 Conclusion

In this paper, a new co-evolutionary approach based on the combination of a genetic algorithm with a local search algorithm for solving the FJSP with makespan minimization is presented. The CEA applies multiple crossovers and mutations, as well as several constructive scheduling heuristics. The algorithm adaptively selects the genetic operator to apply, dynamically favouring those achieving the best performance. The use of multiple genetic operators enhances the performance of the evolutionary process, and the CEA performs better than a random choice of operators. To escape local optima, the CEA is combined with the leader guided optimization metaheuristic: a leader is injected at the beginning of each run to guide the search process toward better solutions. The performance of the new approach is evaluated and compared with results from other works in the literature, and its effectiveness is confirmed by reaching two new optimal solutions. The LGO is a generic algorithm; as future work, it can be combined with other population-based metaheuristics and evaluated on other optimization problems. Another direction is to distribute the algorithm to address the execution time issue.