1 Introduction

Scheduling is one of the most important issues in many applications, including manufacturing and various services, such as airport and train scheduling. Efficient scheduling assigns a number of jobs to a limited number of resources according to the operational restrictions. Therefore, efficient scheduling has a significant impact on such applications and fields (Lin and Ying 2014; Santos et al. 2019). Parallel machine scheduling (PMS) is widely employed in different systems, including manufacturing and other services. PMS problems have received wide attention in the past decades, and they include three well-known categories, called identical, uniform, and unrelated PMS problems (Ying et al. 2012). In the identical category, the processing time of a job is the same on all machines, whereas in the uniform category, each machine has a different speed and works at a constant rate. In the unrelated category, each machine works at a different rate, and each machine processes its assigned jobs differently (Afzalirad and Rezaeian 2016).

Moreover, the setup times in scheduling problems are of two types, called sequence-dependent and sequence-independent (Hamzadayi and Yildiz 2017). In the sequence-dependent type, the setup time depends on both the job being scheduled and the previously scheduled job, whereas in the sequence-independent type, the setup time is simply added to the processing time of the job (Hamzadayi and Yildiz 2017; Kim and Lee 2012).

In general, existing studies have addressed unrelated PMS problems (UPMSPs) with job sequence-dependent and machine-dependent setup times (JSDMDSTs) to minimize the makespan. A UPMSP consists of N jobs that must be assigned to a set of (\(R_M\)) unrelated parallel machines so that the makespan (\(C_{max}\)) is minimized. Each nth job is a single task that requires a processing time. Moreover, the JSDMDST (\(S_{ijk}\)) is addressed because it is a critical issue in this field. The setup time \(S_{ijk}\) needed between two consecutive jobs i and j on machine \(k, k=1,...,M\), generally differs from the setup time of the reversed pair (the setup on machine k between jobs j and i). Moreover, the setup time between jobs i and j on machine k differs from the setup time of the same jobs on another machine k1. Therefore, this is an NP-hard problem and can be denoted by \(R_M/S_{ijk}/C_{max}\) (Lin et al. 2011).

In previous decades, many studies have investigated unrelated parallel machine scheduling problems (UPMSPs); the first study was proposed by McNaughton (1959). Many efforts have been presented during the past decades; however, only a few solution models have been developed to solve UPMSPs without considering the JSDMDSTs, for example, a branch-and-bound algorithm (Rocha et al. 2008). Also, in (Helal et al. 2006), the authors introduced a mixed integer programming (MIP) model to address UPMSPs. Moreover, in (Rocha et al. 2008), two MIP models and an efficient branch-and-bound (B&B) algorithm were proposed to solve UPMSPs. In (Fanjul-Peyro et al. 2019), the authors presented a MIP model and a mathematical programming approach to solve UPMSPs with JSDMDSTs.

Recently, meta-heuristic (MH) methods have been widely employed in various optimization problems, including UPMSPs and cloud scheduling (Attiya et al. 2020). For example, variable neighborhood search (VNS) was employed by Pacheco et al. (2018) to deal with UPMSPs with sequence-dependent setup times, but with only one machine. The VNS was also applied to large instances of UPMSPs with setup times in (De Paula et al. 2007). In (Logendran et al. 2007), the Tabu search (TS) algorithm was applied to enhance six algorithms for solving UPMSPs. The genetic algorithm (GA) was also utilized for UPMSPs, as proposed by Vallada and Ruiz (2011). Yilmaz Eroglu et al. (2014) proposed a modified GA to address UPMSPs with setup times; in their study, the GA is enhanced by a local search algorithm that improves its search power. In (Bektur and Saraç 2019), two MH methods, TS and simulated annealing (SA), together with a MIP model, were proposed to address UPMSPs by minimizing the tardiness. The SA was also applied by Hamzadayi and Yildiz (2016a, b) with several dispatching rules to address identical parallel machine problems. In (Pakzad-Moghaddam 2016), particle swarm optimization (PSO) was implemented to address job scheduling problems of uniform parallel machines. Lin and Ying (2014) proposed an enhanced artificial bee colony (ABC) algorithm to address UPMSPs by minimizing the makespan; the improved ABC was evaluated against several existing methods and showed better performance. Jouhari et al. (2019) proposed a combination of the SA and the sine–cosine algorithm (SCA) for solving UPMSPs with JSDMDSTs, where the SCA is applied to improve the search performance of the SA; the combined model was evaluated against several MH methods, such as the original SA and SCA, and showed better performance.

In (Arroyo et al. 2019), the authors presented an iterated greedy (IG) algorithm and a MIP model to solve UPMSPs. As the authors reported, the IG algorithm showed better performance than several MH methods, including ant colony optimization (ACO), discrete differential evolution, and SA. Mir and Rezaeian (2016) proposed a hybrid of PSO and GA to solve UPMSPs by minimizing the total machine load.

Despite the performance achieved by the previous MH-based UPMSP methods, they still suffer from some limitations. For example, some of these methods give the exploration phase more attention than the exploitation phase, whereas others make improving the exploitation ability the main target; thus, solutions can be attracted to a local optimum. In addition, the no-free-lunch theorem states that no single algorithm can solve all optimization problems with the same performance. This motivated us to present an alternative method for tackling UPMSPs with JSDMDSTs. The method depends on an improved whale optimization algorithm (WOA) that uses the firefly algorithm (FA) as a local search; the proposed method is called WOAFA. The WOA is an MH algorithm presented by Mirjalili and Lewis (2016). It simulates the behaviors of humpback whales and has been applied in various applications, such as feature selection (Mafarja and Mirjalili 2017), image segmentation (El Aziz et al. 2017), global optimization problems (Trivedi et al. 2016), the flow shop scheduling problem (Abdel-Basset et al. 2018), sentiment analysis (Akyol and Alatas 2020), gas consumption prediction (Qiao et al. 2020), and others (Mirjalili et al. 2020). The FA is also a nature-inspired MH that mimics the flashing behavior of fireflies, proposed by Yang (2009); Yang and He (2013). It has been utilized in numerous applications, such as image processing (Yang 2020), cloud computing scheduling (Rajagopalan et al. 2020), forecasting models (Zhou et al. 2019), opinion leader classification (Jain and Katarya 2019), and other applications (Nayak et al. 2020).

The proposed WOAFA is a new hybrid approach to solve UPMSPs by exploiting the power of the WOA, which is improved using the operators of the FA; the FA is utilized as the local search method of the WOA. The proposed WOAFA starts by generating random individuals to represent UPMSP solutions. The dimension of each individual equals the number of jobs, and each value of an individual represents the index of the machine that executes the corresponding job. To determine the best individual, the fitness function of each individual is computed. Thereafter, the probability of each individual is computed from its fitness value, and the individuals are updated according to this probability using either the WOA or the FA operators. These steps are repeated until the termination criteria are met.

In short, the contributions of this study are described as follows:

  • Propose an alternative solution for UPMSPs with JSDMDSTs based on a modified WOA.

  • The FA is applied as a local search for WOA to enhance its search ability.

  • The proposed method was evaluated using a set of UPMSP benchmark problems and compared to several state-of-the-art methods. The evaluation outcomes confirmed the high performance of the WOAFA.

The rest of this study is organized as follows. The problem definition is presented in Sect. 2, and the proposed WOAFA is presented in Sect. 3. The experimental evaluation and comparison outcomes are given in Sect. 4. Section 5 presents the conclusion and future directions.

2 Preliminaries

This section describes the problem definition, the main concept of the whale optimization algorithm (WOA), and the firefly algorithm (FA).

2.1 Problem definition

The problem of UPMSP can be mathematically modeled as a mixed integer programming (MIP). The following equations define this model (Ezugwu and Akutsah 2018):

$$\begin{aligned} Min\ C_{max} \end{aligned}$$
(1)

Subject to

$$\begin{aligned}&\sum _{i=0, i\ne j}^{N_J}\sum _{k=1}^{N_m} x_{ijk}=1; \forall j=1,...,N_J \end{aligned}$$
(2)
$$\begin{aligned}&\sum _{i=0, i\ne h}^{N_J} x_{ihk}\nonumber \\&\quad -\sum _{j=0, j\ne h}^{N_J} x_{hjk}=0;\, \forall h=1,...,N_J, k=1,...,N_m \end{aligned}$$
(3)
$$\begin{aligned}&C_j\ge C_i+\sum _{k=1}^{N_m} x_{ijk}(S_{ijk}+p_{jk})\nonumber \\&\quad +V\left( \sum _{k=1}^{N_m}x_{ijk}-1\right) ,\, \forall i=0,...,N_J,\ j=1,...,N_J \end{aligned}$$
(4)
$$\begin{aligned}&\sum _{j=0}^{N_J}x_{0jk}=1, \forall k=1,...,N_m \end{aligned}$$
(5)
$$\begin{aligned}&C_j\le C_{max}, \forall j=1,...,N_J, \end{aligned}$$
(6)
$$\begin{aligned}&x_{ijk}\in \{0,1\}, \forall i=0,...,N_J,\nonumber \\&\forall j=1,...,N_J,\ \forall k=1,...,N_m,\ \end{aligned}$$
(7)
$$\begin{aligned}&C_0=0 \end{aligned}$$
(8)
$$\begin{aligned}&C_j\ge 0,\ \forall j=0,...,N_J \end{aligned}$$
(9)

where Eq. 1 minimizes \(C_{max}\), which denotes the total time required to complete all jobs (the makespan).

Equation 2 ensures that each job is performed on only one machine. When the jth job is executed after the ith job on the kth machine, \(x_{i,j,k}\) equals 1; otherwise, it equals 0. Also, \(x_{i,j,k}\) equals 1 if the jth job is the last job on machine k. \(N_J\) and \(N_m\) are the numbers of jobs and machines, respectively.

Equation 3 is applied to ensure that each job has only one succeeding job and one preceding job. \(C_j\) in Eq. 4 is the completion time of the jth job, whereas \(S_{i,j,k}\) denotes the sequence-dependent setup time required to execute jobs i and j in sequence on the kth machine. The jth job can be the first job on machine k if i equals 0 (with setup time \(S_{0,j,k}\)), and \(p_{j,k}\) denotes the processing time of the jth job on machine k. This constraint is used to ensure the order of the jobs, which is achieved using a sufficiently large number V: if the jth job is ordered after the ith job, then \(\sum _{k=1}^{N_m} x_{ijk}=1\), so \(V(\sum _{k=1}^{N_m} x_{ijk}-1)=0\) and \(C_j=C_i+p_{jk}+S_{ijk}\). Otherwise, when the jth job is not ordered after the ith job, \(\sum _{k=1}^{N_m} x_{ijk}=0\), and thus \(V(\sum _{k=1}^{N_m} x_{ijk}-1)=-V\), which makes the constraint non-binding.

In Eq. 5, \(x_{0,j,k}=1\) when the jth job is the first job on machine k; otherwise \(x_{0,j,k}=0\). This equation ensures that only one job is listed first on each machine.

Equation 6 ensures that the completion time \(C_j\) of any job does not exceed \(C_{max}\), whereas Eq. 7 restricts the decision variables \(x_{ijk}\) to binary values. Equation 8 sets the completion time of the dummy job 0 to zero, while Eq. 9 ensures that the completion times of all jobs are non-negative.
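
To make the objective in Eq. 1 concrete, the following Python sketch computes \(C_{max}\) for a given job-to-machine assignment under the above model. It is a minimal illustration only, assuming that jobs mapped to the same machine are processed in index order; the names proc_time and setup_time are ours, not notation from the model.

```python
import numpy as np

def makespan(assignment, proc_time, setup_time):
    """Compute C_max for a job-to-machine assignment.

    assignment : iterable of machine indices (0-based), one entry per job
    proc_time  : proc_time[j, k]     = processing time of job j on machine k
    setup_time : setup_time[i, j, k] = setup time between consecutive jobs i -> j on machine k
    Jobs mapped to the same machine are assumed to be processed in index order.
    """
    n_machines = proc_time.shape[1]
    completion = np.zeros(n_machines)      # current completion time of each machine
    last_job = [None] * n_machines         # last job scheduled on each machine
    for j, k in enumerate(assignment):
        setup = 0 if last_job[k] is None else setup_time[last_job[k], j, k]
        completion[k] += setup + proc_time[j, k]
        last_job[k] = j
    return completion.max()                # C_max = completion time of the busiest machine

# Small example: 4 jobs on 2 machines with times drawn from U[50, 100]
rng = np.random.default_rng(0)
p = rng.integers(50, 101, size=(4, 2))
s = rng.integers(50, 101, size=(4, 4, 2))
print(makespan([0, 1, 0, 1], p, s))
```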

2.2 Whale optimization algorithm (WOA)

The WOA is an optimization method proposed by Mirjalili and Lewis (2016). It emulates the hunting behavior of humpback whales. In the WOA, the position of a whale represents a solution of the problem and is updated based on the behavior of whales attacking the prey \(Z_b\). There are two attacking methods; the first one is the encircling method, which updates the whale position using Eqs. 10 and 11:

$$\begin{aligned}&D_i =|B\times Z_b(t) -Z_i(t)| \end{aligned}$$
(10)
$$\begin{aligned}&Z_i({t+1})=Z_b(t) -A \times D_i \end{aligned}$$
(11)

where \(D_i\) denotes the distance between \(Z_b(t)\) and \(Z_i(t)\), and t is the current iteration. A and B are coefficients calculated by:

$$\begin{aligned}&A =2a \times r-a \end{aligned}$$
(12)
$$\begin{aligned}&B =2r \end{aligned}$$
(13)
$$\begin{aligned}&a=a-t\frac{a}{t_{m}} \end{aligned}$$
(14)

where r is a random value \(\in [0,1]\), a is linearly decreased from 2 to 0 during the updating phase, and \(t_{m}\) is the maximum number of iterations.

The second one is the bubble-net method. It applies either the shrinking encircling mechanism (obtained by decreasing a in Eq. 12) or the spiral position update, which uses a helix-shaped path to simulate the whale movement, as in the following equation:

$$\begin{aligned} Z(t+1)=D' \times e^{bl} \times cos (2\pi l)+Z_b(t) \end{aligned}$$
(15)

where \(D'=|Z_b(t)-Z(t)|\) denotes the distance between the whale and the prey (the best solution found so far), b is a constant that defines the shape of the logarithmic spiral, and l is a random number \(\in [-1,1]\).

In addition, the whales can swim around \(Z_b\) using the shrinking circle and the spiral-shaped path simultaneously. Therefore, Eq. 16 is applied to update the position:

$$\begin{aligned} Z(t+1)={\left\{ \begin{array}{ll} Z_b(t)-A\times D &{} if \;\; p\ge 0.5 \\ D'\times e^{bl} \times cos (2\pi l)+Z_b (t) &{} if \;\; p< 0 .5 \end{array}\right. }\nonumber \\ \end{aligned}$$
(16)

The WOA can also update the positions by using a randomly selected whale \(Z_{r}\) instead of the best whale \(Z_b\), as in the following equations:

$$\begin{aligned}&Z(t+1)=Z_{r}-A\times D \end{aligned}$$
(17)
$$\begin{aligned}&D =|B \times Z_{r}-Z(t)| \end{aligned}$$
(18)

The steps of the WOA are described in Algorithm 1.

Algorithm 1 Pseudocode of the WOA
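
As a complement to Algorithm 1, the following Python/NumPy sketch shows one WOA iteration over a continuous population, following Eqs. 10-18 and the switching rule of Eq. 16. It is a sketch under our own assumptions: the spiral constant b and the |A| < 1 exploitation threshold follow the original WOA paper, and for the discrete UPMSP encoding the updated positions would still have to be mapped back to integer machine indices.

```python
import numpy as np

def woa_step(Z, Z_best, t, t_max, b=1.0, rng=np.random.default_rng()):
    """One WOA iteration over the population Z (rows are whales), per Eqs. 10-18."""
    a = 2.0 * (1.0 - t / t_max)                  # a decreases linearly from 2 to 0 (Eq. 14)
    Z_new = Z.copy()
    for i in range(Z.shape[0]):
        A = 2.0 * a * rng.random() - a           # Eq. 12
        B = 2.0 * rng.random()                   # Eq. 13
        p = rng.random()
        l = rng.uniform(-1.0, 1.0)
        if p >= 0.5:                             # encircling branch of Eq. 16
            if abs(A) < 1:                       # exploit: move toward the best whale (Eqs. 10-11)
                D = np.abs(B * Z_best - Z[i])
                Z_new[i] = Z_best - A * D
            else:                                # explore: move toward a random whale (Eqs. 17-18)
                Z_r = Z[rng.integers(Z.shape[0])]
                D = np.abs(B * Z_r - Z[i])
                Z_new[i] = Z_r - A * D
        else:                                    # spiral (bubble-net) update (Eq. 15)
            D = np.abs(Z_best - Z[i])
            Z_new[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + Z_best
    return Z_new
```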

2.3 Firefly algorithm (FA)

The FA is an optimization algorithm proposed by Yang (2010). It is based on a simulation of the flashing behavior of fireflies in nature. The FA follows these rules: all fireflies are unisex, so each firefly is attracted to the others; the brighter firefly attracts the less bright one, and the attractiveness and brightness increase as the distance between fireflies decreases; if no firefly is brighter than a given one, it moves randomly. In addition, a firefly's brightness depends on the landscape of the objective function.

Moreover, the attractiveness (\(\beta \)) between two fireflies can be calculated by Eq. 19:

$$\begin{aligned} \beta = \beta _0 \times e^{(-\gamma m^2 )} \end{aligned}$$
(19)

where \(\gamma \) denotes the light absorption coefficient and \(\beta _0=1\) is the attractiveness when the distance m between the ith and jth fireflies (\(Z_i\) and \(Z_j\)) equals 0. The distance m is calculated by the following equation:

$$\begin{aligned} m_{ij}= ||Z_i- Z_j|| =\sqrt{ \sum _{k=1}^d(Z_{ik}- Z_{jk} )^2 } \end{aligned}$$
(20)

The ith firefly moves toward a brighter jth firefly according to the following equation:

$$\begin{aligned} Z_i = Z_i+ \beta \times (Z_j- Z_i )+ r_4 \times \varepsilon _i \end{aligned}$$
(21)

where \(r_4\) is a random value \(\in [0,1]\), and \(\varepsilon _i\) denotes a random vector drawn from a normal distribution \(N(\mu ,\sigma )\).

Algorithm 2 Pseudocode of the FA
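
Similarly, a compact Python sketch of the FA movement rule (Eqs. 19-21) is given below. The values of \(\beta_0\), \(\gamma\), and the step scale \(\alpha\) are illustrative defaults rather than the settings used in the experiments, and brightness is taken as the (to-be-minimized) makespan.

```python
import numpy as np

def fa_step(Z, fitness, beta0=1.0, gamma=1.0, alpha=0.2, rng=np.random.default_rng()):
    """One FA iteration: each firefly moves toward every brighter firefly (Eqs. 19-21)."""
    Z_new = np.asarray(Z, dtype=float).copy()
    n, d = Z_new.shape
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:                    # firefly j is brighter (smaller makespan)
                m = np.linalg.norm(Z_new[i] - Z_new[j])    # distance m_ij (Eq. 20)
                beta = beta0 * np.exp(-gamma * m ** 2)     # attractiveness (Eq. 19)
                eps = rng.normal(0.0, 1.0, d)              # Gaussian random vector
                Z_new[i] += beta * (Z_new[j] - Z_new[i]) + alpha * eps  # movement (Eq. 21)
    return Z_new
```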

3 The proposed WOAFA

This section introduces the proposed WOAFA to solve UPMSP with setup times. It utilizes both WOA and FA algorithms. In more detail, the operators of FA are utilized to enhance the exploitation ability of the WOA by serving as a local search. Figure 1 illustrates the flowchart of the proposed WOAFA.

In general, the WOAFA begins by creating a random integer population that represents the initial solutions of the UPMSP. Then, the WOAFA starts optimizing the solutions X. The solutions are evaluated using the objective function (minimizing the makespan); therefore, the smallest makespan indicates the best solution. After that, each solution is updated using either the operators of the WOA or the operators of the FA. This switching is controlled by a probability-based condition calculated by Eq. 22. All optimization steps are repeated until the stop condition is reached. In the following subsections, the proposed WOAFA is described in detail.

Fig. 1 Phases of the proposed method

3.1 Initial solution

The WOAFA begins by initializing the values of all parameters. Then it creates a random population X. The dimension of each solution equals the number of jobs \(N_J\), and each element takes an integer value in the interval \([1, N_m]\). For instance, assume there are 10 jobs and 4 machines; then a solution can be represented as \([x_{1},x_{2},x_{3},...,x_{N_J}]=[3\ 1 \ 4 \ 3 \ 2 \ 3 \ 1 \ 1 \ 3 \ 2] \). In this example, jobs (1, 4, 6, 9) will be executed on machine three, jobs (2, 7, 8) on machine one, job (3) on machine four, and jobs (5, 10) on machine two. After that, the fitness function of each solution in X is computed by Eq. 1 to determine the best solution.
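
A minimal Python sketch of this encoding and its decoding follows; the job and machine counts match the example above, but the generated vector will differ because the population is random.

```python
import numpy as np

rng = np.random.default_rng(42)
n_jobs, n_machines = 10, 4

# Each element X[j] is the machine (1..n_machines) that executes job j+1
X = rng.integers(1, n_machines + 1, size=n_jobs)
print("solution:", X)

# Decode the assignment: list the jobs executed on each machine
for k in range(1, n_machines + 1):
    jobs_on_k = [j + 1 for j in range(n_jobs) if X[j] == k]
    print(f"machine {k}: jobs {jobs_on_k}")
```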

3.2 Updating solution

To update the population, the proposed WOAFA evaluates each solution (\(Z_i, i=1,2,...,N\)) to check its quality by applying the fitness function as in Eq. 1, and the solution with the smallest \(C_{max}\) is selected as the best solution \(Z_b\). Then, the WOAFA calculates the probability (\(Pr_i\)) of each individual as in Eq. 22:

$$\begin{aligned} Pr_i=\frac{Fit_i}{\sum _{k=1}^{N}Fit_k}, \end{aligned}$$
(22)

where \(Fit_i\) is the fitness value of the current solution and the denominator is the sum of the fitness values over all N solutions. Thence, if \(Pr_i>0.5\), the FA is applied to update the solution \(Z_i\); otherwise, the WOA is applied.

The stop condition is checked after each loop; if it is met, the WOAFA terminates the process and outputs the best solution; otherwise, the WOAFA repeats its steps. The steps of the WOAFA are given in Algorithm 3.

Algorithm 3 Pseudocode of the proposed WOAFA
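
To connect the pieces, the following Python skeleton outlines the WOAFA loop described above. Here evaluate, woa_update, and fa_update are assumed callables (e.g., the makespan function of Sect. 2.1 and discretized versions of the WOA/FA update sketches), so this is an outline under those assumptions, not the exact implementation used in the experiments.

```python
import time
import numpy as np

def woafa(evaluate, woa_update, fa_update, n_pop, n_jobs, n_machines,
          max_time_ms, rng=np.random.default_rng()):
    """Skeleton of the WOAFA: probability-based switching between WOA and FA (Eq. 22)."""
    # Random integer population: X[i, j] is the machine assigned to job j in solution i
    X = rng.integers(1, n_machines + 1, size=(n_pop, n_jobs))
    fit = np.array([evaluate(x) for x in X], dtype=float)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    start = time.time()
    while (time.time() - start) * 1000.0 < max_time_ms:       # stop by maximum computation time
        prob = fit / fit.sum()                                # Eq. 22
        for i in range(n_pop):
            if prob[i] > 0.5:
                cand = fa_update(X[i], X, fit)                # FA operators act as a local search
            else:
                cand = woa_update(X[i], best)                 # WOA operators
            X[i] = np.clip(np.rint(cand), 1, n_machines)      # keep valid integer machine indices
        fit = np.array([evaluate(x) for x in X], dtype=float)
        if fit.min() < best_fit:
            best, best_fit = X[fit.argmin()].copy(), fit.min()
    return best, best_fit
```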

The complexity of the developed WOAFA depends on the number of iterations, size of the population, and the dimension of the tested problem. In addition, it depends on the complexity of the quick sort algorithm. Therefore, the complexity of WOAFA can be formulated as:

$$\begin{aligned} O(WOAFA)= & {} N_{Prob}*O(WOA)\nonumber \\&+(N-N_{Prob})*O(FA) \end{aligned}$$
(23)

So, in the best case,

$$\begin{aligned} O(WOAFA)= & {} O(t\times (N_{Prob}\times N\times D+(N-N_{Prob})\nonumber \\&\times (N\times D+N log N ))) \end{aligned}$$
(24)
$$\begin{aligned} O(WOAFA)= & {} O(t\times N(N\times D+(N-N_{Prob}) \nonumber \\&\times log N )), \end{aligned}$$
(25)

while, in the worst case,

$$\begin{aligned} O(WOAFA)=O(t\times N^2( D+N-N_{Prob})) \end{aligned}$$
(26)

where \(N_{Prob}\) is the number of solutions updated using operators of WOA.

4 Evaluation experiments and discussions

In this section, the WOAFA is evaluated for solving UPMSP using a benchmark dataset. Different numbers of jobs and machines are used. The results of the WOAFA are evaluated and compared with well-known meta-heuristic optimization methods, namely SA, PSO, GA, FA, ABC, ACO, WOA, and SCA.

The dataset used in this study is a well-known benchmark dataset (WebSite 2019, accessed Oct. 1, 2019).

It contains six numbers of jobs \(N_J\) (i.e., 20, 40, 60, 80, 100, and 120) and six numbers of machines \(N_m\) (i.e., 2, 4, 6, 8, 10, and 12), with times generated from a discrete uniform distribution U[50, 100]. We evaluate 36 problems; each problem (\(N_J \times N_m\)) has 15 replication instances; hence, the total number of runs is 540.

The results in the experiments are presented by the relative percentage deviation (RPD). This measure is used to compare the results and determine the performance of the proposed method against other methods. RPD is calculated by Eq. 27.

$$\begin{aligned} RPD= \frac{C_{max}(method) - C_{max}(WOAFA)}{C_{max}(WOAFA)}\times 100, \end{aligned}$$
(27)

where \(C_{max}(method)\) denotes the mean of the makespan values of each method.
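For instance, if a compared method yields an average makespan of 1100 on a problem for which the WOAFA obtains 1000, the RPD of that method is \((1100-1000)/1000\times 100=10\%\) (illustrative numbers only).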

The experiments were performed on a Core i5 CPU with 8 GB RAM under MS-Windows 10 (64 bit) using MATLAB R2014b. The average results of 15 different instances were computed for each problem. For a fair comparison, all methods were run in the same environment; besides, the stop condition in this study was set to a maximum computation time in milliseconds, so that the optimization process runs for a time proportional to the numbers of jobs and machines, as given in Eq. 28.

$$\begin{aligned} maxTime= n(\frac{m}{2}) \times tc \end{aligned}$$
(28)

where n and m are the numbers of jobs and machines of the current problem, respectively, and tc is set to 50 as recommended by Diana et al. (2015). The parameters of all methods are set as in their original papers. Besides, several previous papers were reviewed to determine the best parameters for the proposed method, such as (Mirjalili and Lewis 2016; Yang 2009; AbdElaziz et al. 2021; Alameer et al. 2019; Abd ElAziz et al. 2018; Mohamed et al. 2016).
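
For example, applying Eq. 28 to a problem with n = 60 jobs and m = 6 machines gives a maximum computation time of \(60 \times (6/2) \times 50 = 9000\) ms.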

Table 1 Results of the relative percentage deviation (RPD) for all methods
Table 2 Results of the STD for all methods

4.1 Computational results

The comparison results between the WOAFA and the other algorithms are illustrated in Tables 1 and 2, which show the average RPD and the standard deviation, respectively. From the RPD results, it can be noticed that the WOAFA outperforms the other algorithms over all the tested instances, except for machine 2 with jobs from 80 to 120, where the FA is better. By analyzing the RPD results for each job size, it can be observed that the RPD values fall into two categories: some are under 25%, including SA, FA, ACO, WOA, and SCA, whereas the second category has RPD higher than 25% and contains three algorithms, namely PSO, GA, and ABC, as shown in Fig. 2.

Moreover, Fig. 3 shows the average RPD at each machine, and it can be seen that the WOAFA method provides a better makespan over all the tested machines except machine 2, where the FA provides better results than the WOAFA. Moreover, the average over all the tested machines and jobs given in Fig. 4 illustrates that the largest improvements are achieved over PSO, GA, and ABC, while the smallest improvement is obtained when comparing the proposed WOAFA with the FA.

Fig. 2 Average of RPD at each job

Fig. 3 Average of RPD at each machine

Fig. 4 Average of RPD over all the tested machines and jobs

Furthermore, the results of the STD are shown in Table 2 and Fig. 5. We can notice that the proposed WOAFA is more stable than the others, followed by the FA and SA, which occupy the second and third ranks, respectively. Also, PSO and GA are the least stable methods. This high performance of the developed WOAFA results from combining the advantages of the WOA and FA. This combination makes the WOA and FA update the solutions in a competitive manner according to the quality of each solution.

In addition, the computational times of all methods are recorded in Table 3. From the table, we can see that the WOAFA method shows competitive results compared to the other methods, especially SA, PSO, and WOA. The WOAFA improves the computational time of the original FA because it can effectively balance between the two algorithms to obtain good results in a short time. The average CPU time over all machines and jobs is illustrated in Fig. 6.

Fig. 5 Average of STD over all the tested machines and jobs

Table 3 Results of the computation time for all methods
Fig. 6 Average of the computation time (in seconds) over all the tested machines and jobs

4.2 Statistical analysis

This subsection presents the results of the Wilcoxon rank-sum test, a non-parametric test employed to indicate whether there are significant differences between the WOAFA and the compared methods at a significance level of 0.05. Table 4 lists the test results, and we can notice significant differences between the proposed method (WOAFA) and PSO, GA, and ABC. The other methods obtained p-values greater than 0.05; this can be because the differences in the results of these methods are small for the same computation time; however, the WOAFA obtained the best results in most of the problems.

Furthermore, the Friedman test is used to statistically rank the algorithms, where the lowest Friedman’s value refers to the best algorithm; Table 5 lists the results for each machine. From the table, we can see that the WOAFA method obtained the best rank in all machines, and therefore, it achieved the first rank. The FA obtained the second rank in all machines except for machine 2, followed by the SCA, WOA, SA, ACO, ABC, GA, and PSO, respectively.

Table 4 Results of Wilcoxon rank-sum test
Table 5 Results of Friedman rank

To sum up the above discussion, the WOAFA achieves the best results because it benefits from the advantages of both the WOA and FA: the WOA has several stages and strategies to update the solutions and explore the search domain, whereas the FA helps the proposed method to strengthen its exploitation and exploration phases. In contrast, the WOAFA has some limitations; for example, it falls into local optima on machine 2 at jobs 80, 100, and 120; therefore, it needs to be improved for problems with a small number of machines. The stability of the proposed method is good, but it needs more enhancement, especially for machines 2 and 10.

5 Conclusions

This study proposed an alternative meta-heuristic-based solution for unrelated parallel machine scheduling problems (UPMSPs) with job sequence-dependent and machine-dependent setup times (JSDMDSTs). The proposed WOAFA method is a hybrid of the whale optimization algorithm (WOA) and the firefly algorithm (FA). The FA is employed as a local search method to enhance the exploitation ability of the WOA. We used a well-known benchmark dataset to test the performance of the proposed WOAFA, covering six machine counts (i.e., 2, 4, 6, 8, 10, and 12 machines) and six job counts (i.e., 20, 40, 60, 80, 100, and 120 jobs). Moreover, we compared the proposed WOAFA with several meta-heuristics, namely FA, WOA, SA, PSO, GA, ACO, ABC, and SCA. Furthermore, the Wilcoxon rank-sum test was employed to evaluate the proposed method against the other methods. The overall results showed that the proposed WOAFA outperforms several previous methods.

According to the good performance of the proposed WOAFA, in the future, it may be applied in other optimization problems, such as feature selection, scheduling issues of cloud computing, image processing, or time series forecasting.