
1 Introduction

In most real-world applications, there is a need to minimize (cost, time, or waste) or maximize (performance, benefit, or profit). Minimization and maximization are the two standard forms of optimization problems. In many cases, traditional deterministic algorithms fail to solve such problems in practice: real problems are usually nonlinear, complex, and multimodal with many local optima, and they may also involve complex constraints and a high number of dimensions. This has motivated researchers and scientists to design non-deterministic optimization algorithms for such problems.

Almost all new optimization algorithms are inspired by nature, hence they are referred to as nature-inspired algorithms. Some algorithms were inspired by chemical and physical processes, like simulated annealing [9] and harmony search [6]; others were inspired by the success of biological systems in solving problems, like the genetic algorithm [7, 8] and differential evolution (DE) [12]; the most popular algorithms were inspired by swarm intelligence, like particle swarm optimization [5], ant colony optimization [3, 4], cuckoo search [16], the bat algorithm [15], the grey wolf optimizer [10], and the firefly algorithm [14].

In this paper, we propose a new hybrid Grey Wolf/Bat optimization algorithm (GWOBA). The proposed algorithm is designed to combine the best of the GWO and BA strategies in order to achieve better overall exploration and exploitation behavior. During the first half of the iterations, GWO explores the search space effectively using its strong exploration behavior. Once GWO is done, BA, which has better exploitation capabilities, is initialized with the best solutions found by GWO and continues the search for the remaining iterations.

The performance of GWOBA is evaluated on the 30 benchmark problems of CEC2017 [1] and compared to that of GWO, BA, and the whale optimization algorithm (WOA). The results are evaluated following the guidelines provided with CEC2017 and clearly indicate that GWOBA provides better results on most benchmark problems.

2 Preliminaries

In this section, we review the two standard metaheuristics: the grey wolf optimizer (GWO) and the bat algorithm (BA). GWO, proposed by Mirjalili et al. in 2014 [10], simulates the leadership hierarchy and hunting procedure of grey wolves in nature, while the bat algorithm, proposed by Xin-She Yang in 2010 [15], is inspired by the echolocation of microbats. Both algorithms have shown superiority over many other metaheuristics on a wide range of applications.

2.1 Grey Wolf Optimizer

The GWO algorithm is inspired by the hunting mechanism of grey wolves and their social hierarchy. The closest wolf (solution) to the prey (optimum) is the \(\alpha \) wolf. The \(\beta \) and \(\delta \) wolves are the second and third best solutions, respectively. Their locations in the search space are denoted \(X_\alpha \), \(X_\beta \), and \(X_\delta \). The rest of the wolves follow these three wolves as shown in the following equations:

$$\begin{aligned} X(t+1)=X_p(t) - A.|C.X_p(t) - X(t)| \end{aligned}$$
(1)

where t is the current iteration, A and C are coefficient vectors, \(X_p\) is the position vector of the prey, and X indicates the position vector of a grey wolf. The vectors A and C are calculated as follows:

$$\begin{aligned} A=2a.r_1 - a, \,C=2.r_2 \end{aligned}$$
(2)

where \(r_1\), \(r_2\) are random vectors in [0, 1] and the exploration rate (a) is linearly decreased from 2 to 0 over the course of iterations as shown below:

$$\begin{aligned} a = 2-t.\frac{2}{Max_{Iter}} \end{aligned}$$
(3)

where t is the current iteration and \(Max_{Iter}\) is the total number of iterations allowed for the optimization. From Eqs. (2) and (3), the random vector A lies in the interval \([-a, a]\). Exploration and exploitation are balanced by the adaptive value of a [10], allowing GWO to transition smoothly between them.

Supposing that the \(\alpha \), \(\beta \) and \(\delta \) wolves have better knowledge about the potential location of the prey, the other agents are obliged to update their positions to follow them:

$$\begin{aligned} D_{\alpha } =|C_1.X_{\alpha }-X|,\, D_{\beta } =|C_2.X_{\beta }-X|,\, D_{\delta } =|C_3.X_{\delta }-X| \end{aligned}$$
(4)

where \(X_{\alpha }, X_{\beta },\) and \(X_{\delta }\) are the best three solutions at a given iteration.

$$\begin{aligned} X_1=X_{\alpha }-A_1.(D_{\alpha }),\, X_2=X_{\beta }-A_2.(D_{\beta }),\, X_3=X_{\delta }-A_3.(D_{\delta }) \end{aligned}$$
(5)
$$\begin{aligned} X(t+1)=\frac{X_1+X_2+X_3}{3} \end{aligned}$$
(6)
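To make the update concrete, below is a minimal NumPy sketch of one GWO iteration following Eqs. (2)-(6); the function and variable names are our own illustration, not the authors' implementation.

```python
import numpy as np

def gwo_step(wolves, x_alpha, x_beta, x_delta, t, max_iter, rng):
    """One GWO position update for the whole pack, per Eqs. (2)-(6)."""
    a = 2 - t * (2 / max_iter)                          # Eq. (3): a decays linearly from 2 to 0
    new_positions = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        candidates = []
        for leader in (x_alpha, x_beta, x_delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2 * a * r1 - a                          # Eq. (2): each entry of A lies in [-a, a]
            C = 2 * r2                                  # Eq. (2)
            D = np.abs(C * leader - x)                  # Eq. (4)
            candidates.append(leader - A * D)           # Eq. (5): X1, X2, X3
        new_positions[i] = np.mean(candidates, axis=0)  # Eq. (6): average of the three candidates
    return new_positions
```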

2.2 Bat Algorithm

Bats use echolocation to sense distance and to hunt: they emit sound pulses and process the returning echo. They can adjust the wavelength, the emission rate, and the loudness of the emitted pulses. In the bat algorithm, bats move with velocity \( V_i \) at position \(X_i\), with varying wavelength \(\lambda \) and loudness \(A_0\), searching for prey. Virtual bats adjust their positions according to the following equations:

$$\begin{aligned} F_i=F_{min}+(F_{max}-F_{min})\beta \end{aligned}$$
(7)
$$\begin{aligned} V_i^t = V_i^{t-1}+(X_i^t - X_*)F_i \end{aligned}$$
(8)
$$\begin{aligned} X_i^t=X_i^{t-1} + V_i^t \end{aligned}$$
(9)

where \(\beta \) is a random vector drawn uniformly from the range [0, 1], \(X_*\) is the current global best location, \(F_{min}\) and \(F_{max}\) are the minimum and maximum frequencies, chosen depending on the problem, and \(V_i\) is the velocity vector.

With some probability, a local search is performed using a random walk around the current best solution, as in the following equation:

$$\begin{aligned} X_{new}=X_{*}+ {\epsilon }A^t \end{aligned}$$
(10)

where \(A^t\) is the average loudness of all bats at time step t and \({\epsilon }\) is a random number drawn uniformly from \([-1,1]\).

The loudness is updated using the following equation:

$$\begin{aligned} A_i^{t+1} ={\delta } A_i^t \end{aligned}$$
(11)

where \({\delta }\) is a constant, \(0<\delta <1\), selected experimentally.

The emission rate \(r_i\) controls the application of the local search and is updated using the equation:

$$\begin{aligned} r_i^{t+1}=r_i^0[1-\exp (-{\gamma }t)] \end{aligned}$$
(12)

where \( r_i^0 \) is the initial pulse emission rate and \({\gamma }\) is a constant greater than 0.
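The following is a minimal NumPy sketch of one BA iteration following Eqs. (7)-(12), with the common greedy acceptance rule; the parameter names and values (f_min, f_max, delta, gamma, etc.) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bat_step(X, V, x_best, loud, r, t, f_min, f_max,
             r0, delta, gamma, objective, rng):
    """One BA iteration following Eqs. (7)-(12); all names are illustrative."""
    n, dim = X.shape
    beta = rng.random((n, 1))
    F = f_min + (f_max - f_min) * beta             # Eq. (7): random frequencies
    V = V + (X - x_best) * F                       # Eq. (8): velocity update
    X_new = X + V                                  # Eq. (9): position update
    for i in range(n):
        if rng.random() > r[i]:                    # pulse rate gates the local search
            eps = rng.uniform(-1, 1, dim)
            X_new[i] = x_best + eps * loud.mean()  # Eq. (10): random walk around the best
        # accept the move probabilistically, then tighten loudness and raise pulse rate
        if rng.random() < loud[i] and objective(X_new[i]) < objective(X[i]):
            X[i] = X_new[i]
            loud[i] *= delta                       # Eq. (11): loudness decreases
            r[i] = r0 * (1 - np.exp(-gamma * t))   # Eq. (12): emission rate increases
    return X, V, loud, r
```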

3 Proposed Hybrid Grey Wolf-Bat Optimization Algorithm

The hybrid GWOBA algorithm combines the strong exploration skills of the GWO algorithm with the strong exploitation properties of BA. First, the GWO algorithm is initialized with a random set of solutions and iterates to improve it. After \(Max_{Iter}/2\) iterations, only the best two solutions \((X_\alpha , X_\beta )\) are passed to the bat algorithm (BA) as an initial guess, so that BA focuses its search around them. BA then runs for the remaining \(Max_{Iter}/2\) iterations and returns the best solution found, which is taken as the final solution of the hybrid algorithm.

In order to hybridize the GWO and BA algorithms, we made some modifications to both. The first modification concerns GWO, and the question was: how many leading wolves do we need? By default, the GWO algorithm uses three leading wolves (alpha, beta, and delta), which lead the rest of the pack to the prey location. In some problems, the three wolves settle in three different areas of the search space, which slows down the convergence of the algorithm. After many experiments, we decided to use only the best two solutions (alpha and beta) to lead the rest of the solutions to the prey location.

The reason behind this decision is that passing three good solutions to BA might cause divergence: bats follow the current best solution and keep following it, with a very low chance of switching to another, distant one. Consider a function with three basins, where one basin contains the optimal solution and the other two are only local optima. If GWO keeps three leaders and each leader settles in a different basin, BA will start searching around all of them; it might find its best solution around the worst leader and then keep searching around this false best, with a very low chance of escaping that basin. This is a typical scenario in which GWOBA would fail to jump out of a local optimum. Using two leaders instead raised the ability of the algorithm to locate where to search more precisely and gave it a significant advantage over the original GWO. Equations (4)-(6) are modified to be:

$$\begin{aligned} D_{\alpha }=|C_1.X_{\alpha }-X|,\, D_{\beta }=|C_2.X_{\beta }-X| \end{aligned}$$
(13)
$$\begin{aligned} X_1=X_{\alpha }-A_1.(D_{\alpha }),\, X_2=X_{\beta }-A_2.(D_{\beta }) \end{aligned}$$
(14)
$$\begin{aligned} X(t+1)=\frac{X_1+X_2}{2} \end{aligned}$$
(15)
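A sketch of the modified two-leader update follows, mirroring the GWO sketch in Sect. 2.1 but using only \(X_\alpha \) and \(X_\beta \); again, the names are our own illustration.

```python
import numpy as np

def gwoba_wolf_step(x, x_alpha, x_beta, a, rng):
    """Two-leader position update of GWOBA, Eqs. (13)-(15)."""
    candidates = []
    for leader in (x_alpha, x_beta):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A, C = 2 * a * r1 - a, 2 * r2       # Eq. (2), unchanged
        D = np.abs(C * leader - x)          # Eq. (13)
        candidates.append(leader - A * D)   # Eq. (14): X1 and X2
    return np.mean(candidates, axis=0)      # Eq. (15): average of the two candidates
```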

The second modification affects Eq. (3). To enhance the exploration capabilities of the wolves, we decrease the exploration rate a from two down to one instead of down to zero, which encourages \(|A|>1\) throughout the search, so Eq. (3) becomes:

$$\begin{aligned} a = 2-\frac{t}{Max_{Iter}} \end{aligned}$$
(16)

where t is the iteration number and \(Max_{Iter}\) is the total number of iterations allowed for the optimization. The algorithm describing the hybrid GWOBA algorithm is outlined in Algorithm 1.

Algorithm 1. The hybrid GWOBA algorithm
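Since the original pseudocode figure did not survive extraction, the sketch below reconstructs the two-phase control flow as described in the text, reusing gwoba_wolf_step and bat_step from the earlier sketches; the parameter values and the re-initialization of BA's remaining agents are our assumptions.

```python
import numpy as np

def gwoba(objective, dim, lo, hi, n_agents=20, max_iter=1000, seed=0):
    """Sketch of the hybrid GWOBA control flow described above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_agents, dim))

    # Phase 1: modified two-leader GWO for the first half of the budget.
    for t in range(max_iter // 2):
        a = 2 - t / max_iter            # Eq. (16); over the half budget a stays in (1.5, 2]
        fit = np.apply_along_axis(objective, 1, X)
        order = np.argsort(fit)
        x_alpha, x_beta = X[order[0]], X[order[1]]
        X = np.array([gwoba_wolf_step(x, x_alpha, x_beta, a, rng) for x in X])
        X = np.clip(X, lo, hi)

    # Phase 2: BA, seeded with the two best GWO solutions (X_alpha, X_beta).
    fit = np.apply_along_axis(objective, 1, X)
    seeds = X[np.argsort(fit)[:2]].copy()
    X = rng.uniform(lo, hi, (n_agents, dim))  # assumption: remaining bats start at random
    X[:2] = seeds
    V = np.zeros_like(X)
    loud = np.ones(n_agents)                  # assumed initial loudness
    r = np.full(n_agents, 0.5)                # assumed initial pulse rate
    x_best = min(X, key=objective).copy()
    for t in range(max_iter // 2):
        X, V, loud, r = bat_step(X, V, x_best, loud, r, t,
                                 f_min=0.0, f_max=2.0, r0=0.5,
                                 delta=0.9, gamma=0.9,
                                 objective=objective, rng=rng)
        cand = min(X, key=objective)
        if objective(cand) < objective(x_best):
            x_best = cand.copy()
    return x_best
```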

4 Experimental Results and Discussion

This section summarizes the results of applying the proposed GWOBA to the 30 benchmark functions of the new CEC2017 test suite. The next two subsections contain the results and the analysis for the whole set of functions. We use a set of qualitative measures to analyze the results obtained by the compared methods. The first four metrics (mean fitness, best fitness, worst fitness, and median fitness) measure the mean, best, worst, and median expected performance of the algorithms. The fifth measure (standard deviation) shows the precision of each optimizer. The sixth metric (root-mean-square error, RMSE) shows how accurate each optimizer is. The last two metrics (t-test [2, 11] and Wilcoxon rank-sum test [13]) compare algorithms directly in pairs and show whether the difference between them is significant.
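As an illustration of the pairwise tests (and assuming SciPy is available), the snippet below compares two optimizers' per-run errors on a single function; the error arrays are synthetic placeholders, not data from the paper.

```python
import numpy as np
from scipy import stats

# Per-run errors of two optimizers on one function (synthetic placeholder data).
rng = np.random.default_rng(1)
err_gwoba = rng.normal(1.0, 0.1, 51)
err_gwo = rng.normal(1.5, 0.2, 51)

t_stat, t_p = stats.ttest_ind(err_gwoba, err_gwo, equal_var=False)  # Welch's t-test
w_stat, w_p = stats.ranksums(err_gwoba, err_gwo)                    # Wilcoxon rank-sum test
rmse = np.sqrt(np.mean(err_gwoba ** 2))                             # RMSE against the known optimum

alpha = 0.05
print(f"t-test:   p = {t_p:.3g}  (significant: {t_p < alpha})")
print(f"rank-sum: p = {w_p:.3g}  (significant: {w_p < alpha})")
print(f"RMSE of GWOBA: {rmse:.3f}")
```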

4.1 CEC2017 Benchmark Suite

In this suite [1], the benchmark problems were developed with several novel features, such as new basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkage, rotated trap problems, and so on. All test functions are minimization problems with a shifted global optimum randomly distributed in the range \([-80, 80]\). The suite consists of 30 functions divided into four categories, unimodal, simple multimodal, hybrid, and composition functions, to simulate different types of real-life problems.

Fig. 1. Mean convergence curves of the compared algorithms (D = 100)

4.2 Results and Parameter Settings

In order to benchmark our hybrid optimizer, we compared it to its constituent algorithms, GWO and BA, as well as to the whale optimization algorithm (WOA). All experiments were run with 20 agents and the standard parameter settings of each algorithm.

In our experiments, we used the 30 minimization problems of the CEC2017 benchmark suite at four different dimensions: \(D=10, 30, 50, 100\). Each optimizer was run 51 times per problem with a maximum number of function evaluations \(MaxFES= 1000*D\) (i.e., 10,000 function evaluations for a 10-dimensional problem). All problems have their global optimum within the search range \([-100,100]\). All four optimizers used uniform random initialization within the search space, with a random seed based on the current time.

Table 1. Mean error and std. for dimension (\(D=100\))
Table 2. Best and worst error for dimension (\(D=100\))

In Fig. 1, the four algorithms GWOBA, WOA, GWO, and BA are compared on the same benchmark functions and dimensions. We plot the mean convergence curve of each algorithm over 51 runs for dimension \(D=100\) and functions F5, F8, F9, and F16. As can be seen, the mean convergence curve of our hybrid algorithm drops sharply once it passes half the maximum number of iterations, i.e., once BA takes over, yielding results far better than those of the other algorithms.

In Table 1, the mean error and standard deviation are tabulated for all 30 benchmark functions (F1 through F30) over 51 runs for dimension \(D=100\). The results show that the mean error of GWOBA is better than that of the other algorithms on 25 of the 30 functions; even when it fails to achieve the best result, it trails by less than an order of magnitude. Regarding the standard deviation, GWOBA loses the lead on 6 of the 30 functions, but again by less than an order of magnitude.

In Table 2, the best and worst error values are tabulated for all 30 benchmark functions (F1 through F30) over 51 runs for dimension \(D=100\). The results show that GWOBA found the best solution on 20 of the 30 functions. On the other hand, GWOBA has the lowest worst-case error on 27 of the 30 functions, which means that even when it does not find the best solution, it avoids the worst.

Wilcoxon rank-sum test and t-test results with \(\alpha = 0.05\) are tabulated in Table 3; the tests were computed on all tested dimensions for all four optimizers. Since we are interested in the performance of GWOBA against the other algorithms, we report only the comparisons involving GWOBA. The t-test results show that GWOBA performs significantly better than GWO, BA, and WOA at significance level \(\alpha =0.05\) for all dimensions.

Table 3. Wilcoxon rank-sum test and t-test for all dimensions

The last measure of our hybrid algorithm's performance sums all wins of the proposed algorithm against GWO, BA, and WOA across dimensions and across all qualitative measures. Table 4 summarizes the overall performance of GWOBA, showing excellent success rates on the mean, std., worst, and RMSE measures. The results also show that the algorithm remains robust at higher dimensions, where the other algorithms fail to produce competitive results.

Table 4. Overall success rate of GWOBA for all set of qualitative measures

The mean error and standard deviation results for all dimensions confirm that GWOBA keeps outperforming the other algorithms even at the highest dimensions. Having both a low mean error and a low standard deviation for all dimensions means that GWOBA has the best precision and accuracy among the compared algorithms. In summary, GWOBA shows excellent performance against its constituent algorithms GWO and BA as well as against WOA.

5 Conclusion and Future Work

In this paper, a hybrid GWO/BA algorithm is proposed. The proposed GWOBA algorithm exploits the respective strengths of GWO in exploration and of BA in exploitation. GWOBA was compared to GWO, BA, and WOA on the CEC2017 benchmark suite, and the results were assessed using a set of performance indicators. A significant improvement was observed in the performance of GWOBA over the other methods. GWOBA achieved excellent precision and accuracy in different search-space terrains, with the overall best mean, standard deviation, median, best, worst, and RMSE values. The different tests show that GWOBA is robust at both higher and lower dimensions, and the algorithm passed two significance tests on four different dimensions. In the near future, we will apply GWOBA to other applications to measure its performance on real-life problems.