1 Introduction

Numerous real-life problems involve a set of possible solutions, from which the one with the best quality is termed the optimal solution; the process of searching for such a solution is known as (mathematical) optimization. The quality of a solution is measured by its ability to maximize or minimize a certain function, called the objective function, while the pool of possible solutions that can satisfy the required objective is called the search space. One can traverse all possible solutions, examine the value of the objective function in each case, and select the best solution. This process can be cleverly formulated in so-called exact algorithms, which always guarantee the discovery of the optimal solution over all alternatives. However, many real problems are intractable under this exhaustive search strategy within affordable computational resources: the search space expands exponentially with the input size, and exact optimization algorithms are no longer practical. The historical alternative in such situations is to resort to heuristics, similar to the simple rules of thumb that humans would utilize in a search process. Heuristic algorithms implement such heuristics to explore the otherwise prohibitively large search space, but they do not guarantee finding the actual optimal solution, since not all areas of the space are examined. However, they return a solution close to the optimal one, which is “good enough” for the problem at hand.

The next step is to generalize those heuristics into higher-level, problem-independent algorithmic frameworks that provide strategies for developing heuristic optimization algorithms. The latter are known as metaheuristics [1]. Early metaheuristics were based on the concept of evolution, where the best solutions among a set of candidate solutions are selected in successive iterations, and new solutions are generated by applying genetic operators such as crossover and mutation to the parent solutions. An eminent example of evolutionary algorithms is the genetic algorithm [2]. Other examples include genetic programming [3], evolution strategies [4], differential evolution [5], and stud genetic algorithms [6]. Another notable metaheuristic strategy is Tabu Search [7,8,9], where a tabu list of previously visited solutions is maintained and those solutions are forbidden from being revisited as long as they remain on the list. This class of metaheuristics utilizes a form of memory, similar to what humans would use in their own heuristic strategies.

Similar to, and including, evolutionary algorithms, many metaheuristics are based on a metaphor inspired by some physical or biological process. Many recent metaheuristics mimic biological swarms performing their activities, in particular important tasks such as foraging, preying, and migration. Popular examples in this category include particle swarm optimization (PSO) [10], inspired by the movement of flocks of birds or schools of fish; ant colony optimization (ACO) [11, 12], inspired by the foraging behavior of ants, where ants searching for food sources in parallel employ pheromone trails to indicate the quality of the found solutions; and the artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees [13, 14].

In fact, the idea of deriving metaheuristics from nature-based metaphors proved so appealing that many more such algorithms have been and continue to be developed. A few more examples include Cuckoo Search (CS) [15, 16], Biogeography-Based Optimization (BBO) [17], Animal Migration Optimization (AMO) [18], Chicken Swarm Optimization (CSO) [19], Grey Wolf Optimization (GWO) [20], Krill Herd (KH) [21], the Ant Lion Optimizer [22], the Dragonfly algorithm [23], Elephant Herding Optimization [24], the Earthworm optimization algorithm [25], the Moth-flame optimization algorithm [26], the Sine Cosine Algorithm [27], the Whale Optimization Algorithm [28], and Monarch Butterfly Optimization (MBO) [29]. The Bat algorithm (BA) [30] also belongs to the metaheuristics based on animal behavior, inspired in this case by the echolocation behavior of bats in nature. On the other hand, several metaphor-based metaheuristics are derived from physical phenomena, such as simulated annealing (SA) [31], inspired by the annealing process of a crystalline solid, and harmony search (HS) [32], inspired by the process of improvising musical harmonies by musicians in an orchestra.

The aforementioned metaheuristics are classified as stochastic optimization techniques. To avoid searching the whole solution space, they invariably include a randomization component to explore new areas, generate new solutions, or adopt a best solution. Although these random operators are essential, they can introduce two types of problems. On the one hand, if the randomization is too strong, the metaheuristic algorithm might keep moving between candidate solutions, loosely examining each localized region and failing to exploit promising solutions enough to actually find the best one. This can increase the time needed to locate an acceptable solution, which is referred to as slow convergence. On the other hand, if the search process is too localized, exploiting the first good solutions found very well but failing to explore more regions, the algorithm might miss the real optimal solution (the global optimum) and become trapped in a local optimum. This exploitation-versus-exploration trade-off is also known as intensification versus diversification.

A proper balance between exploitation and exploration is a sought-after property in all metaheuristics. It is essential that the algorithm explore solution areas as diverse as possible while exploiting the promising solutions enough to settle on a good optimum. In fact, it is whether and how this balance is achieved that distinguishes most metaheuristics from each other and motivates new attempts to improve existing algorithms, possibly by augmenting one metaheuristic strategy with ideas from another. The work in this paper follows this path, improving on one of the known metaheuristics in the literature with positive and promising performance: the BA.

The Bat algorithm is a recent metaheuristic, introduced in 2010, that has proved useful in a multitude of applications [30]. BA is based on the echolocation, or bio-sonar, characteristic of microbats and employs a frequency-tuning concept to increase the diversity of the search process. An advantageous feature of BA over similar metaheuristic algorithms is its technique of automatically zooming into promising solutions during the search process by mimicking the variations in pulse emission rate and loudness that bats exhibit when hunting prey. This zooming is accompanied by an automatic switch from exploration mode to local intensive exploitation. As a result, BA has a quick convergence rate, at least in the early iterations, compared with other algorithms [33]. The problem with this feature, however, is that a quick shift to exploitation might lead to premature trapping in a local optimum. It is this shortcoming that motivated attempts by a few works to increase the diversity of the algorithm's search in order to enhance performance, producing a few variants as well as hybrids of BA.

A common technique in metaheuristics is to use the concept of mutation to introduce diversity into the search process. The basic idea is to modify the current solution being examined using some random-based operator, moving the search process into new solution areas. This concept is very explicit (at least with respect to terminology) in the GA. Motivated by the mutation operator, we introduce in this work a new enhancement that improves the diversity of BA by adding such an operator. During numerous experiments with various algorithms in our research, we consistently found the proposed operator superior to other mutations. We name the resulting algorithm the EBat algorithm, which we present in this paper and evaluate against the original BA as well as other popular metaheuristics in the literature.

The rest of this article is organized as follows. Section 2 includes the necessary background on the formulation of numerical optimization problems, explains the working principles of the standard BA, and introduces the proposed EBat method. Section 3 presents the experimental setup and discusses the obtained results. Finally, Sect. 4 concludes the paper.

2 Materials and methods

This section provides basic background on the formulation of numerical optimization problems, which is the target application of the proposed metaheuristic, and on its base algorithm: the BA metaheuristic.

2.1 Numerical optimization

Metaheuristics have proved successful in practice for solving global optimization problems. Global optimization seeks the best solution (which could be the one that maximizes or minimizes a desired quantity) among a set of feasible solutions [34]. In other words, optimization is the process of searching for a vector in a given domain that yields the best solution among a large number of feasible ones. Optimization problems are also called mathematical programming or mathematical optimization in fields such as control theory, mathematics, management science, and computer science. The general idea is to take an objective function restricted to a specific domain and seek to maximize or minimize it by systematically examining a set of input values (of allowed form and domain) and calculating the output of the function for each input, in order to find the optimal values. The standard formulation of an optimization problem can be described as follows:

Minimize/maximize f(x), where f : X → R and x is a vector of real variables drawn from the feasible set X. A point x0 is optimal if:

$$ \begin{array}{l} f\left({x}_0\right)\le f(x)\ \text{for all}\ x\ \text{in}\ X\kern1em \left(\text{minimization}\right)\hfill \\ {} f\left({x}_0\right)\ge f(x)\ \text{for all}\ x\ \text{in}\ X\kern1em \left(\text{maximization}\right)\hfill \end{array} $$
(1)

where

$$ f(x):\kern0.5em {R}^n\to R\kern0.5em \text{is the}\ \boldsymbol{objective}\ \boldsymbol{function}\ \text{to be minimized or maximized over the variable}\ x $$

The previous formulation is for numerical optimization problems, and many practical problems in science and engineering can be represented in this general framework. The domain X of the function f is called the search space, and the elements of X are called candidate (feasible) solutions. The function f is called the objective function and is used to evaluate candidate solutions, for either minimization or maximization according to the specific application. Traditionally, optimization problems are formulated in terms of minimization. Besides the actual extreme value (minimum or maximum), there may be more than one local minimum in a search space. A local minimum x* is defined as a point for which the following expression holds:

$$ f\left({x}^{\ast}\right)\le f(x)\kern0.5em \text{for all}\ x\ \text{in some neighborhood of}\ {x}^{\ast } $$
(2)

In general, optimization techniques can be categorized into two major classes: deterministic and stochastic. A deterministic algorithm performs effectively when searching for local optima, and it produces the same final solution whenever it starts from the same initial set of solutions. A stochastic algorithm, on the other hand, is more practical for global optimization, where the search space is often prohibitively large. It starts with an initial set of solutions and explores the search space through random operators until it reaches a (hopefully global) optimum, which may differ from the solution found in another run of the same algorithm, even with the same initialization.
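This distinction can be illustrated with a small sketch (illustrative code, not part of the paper's experiments): a deterministic grid search over a one-dimensional multimodal function always returns the same point, while a random search depends on its seed.

```python
import math
import random

def f(x):
    # 1-D Rastrigin function: multimodal, global minimum f(0) = 0 (illustrative objective)
    return x * x - 10 * math.cos(2 * math.pi * x) + 10

def grid_search(lo, hi, steps):
    # Deterministic: the same grid always yields the same final solution
    return min((lo + i * (hi - lo) / steps for i in range(steps + 1)), key=f)

def random_search(lo, hi, evals, seed):
    # Stochastic: the result can differ from run to run (here, with the seed)
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(evals)), key=f)
```

Repeated calls to `grid_search` over the same interval are identical, while `random_search` with different seeds may settle in different basins of the multimodal landscape.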

2.2 Bat algorithm

BA is classified as a swarm intelligence algorithm, inspired by swarms of bats using echolocation to sense obstacles and prey. To simplify the formulation of the algorithm, the characteristics of bat echolocation are idealized as follows [30]:

  1. All bats use echolocation to sense distance, and they also ‘know’ the difference between prey and obstacles in some way;

  2. Bats fly randomly with velocity vi at position xi with a fixed frequency fmin, varying the wavelength λ and loudness A0 to search for prey. They can automatically adjust the wavelength (or frequency) of their emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of their target;

  3. The loudness is assumed to vary from a large (positive) A0 to a minimum constant value Amin.

This algorithm is based on two main stages. The first stage initializes all the variables: each bat is defined by a position \( {\boldsymbol{x}}_i^t \), velocity \( {\boldsymbol{v}}_i^t \), emission pulse rate \( {\boldsymbol{r}}_i^t \), loudness \( {\boldsymbol{A}}_i^t \), and frequency \( {\boldsymbol{f}}_i^t \) in the search space at time step t. The population of bats is initialized randomly, where each bat represents a potential solution to the optimization problem. The second stage comprises the generation of a new population by applying the modifications described by the following equations:

$$ {f}_i={f}_{min}+\left({f}_{max}-{f}_{min}\right)\beta, $$
(3)

where β ∈ [0, 1] is a random vector drawn from a uniform distribution.

$$ {v}_i^t={v}_i^{t-1}+\left({x}_i^t-{x}^{\ast}\right){f}_i, $$
(4)

where x* is the current global best location (solution), found after comparing the solutions of all the bats.

$$ {x}_i^t={x}_i^{t-1}+{v}_i^t, $$
(5)

In practical implementations, depending on the domain size of the problem of interest, the frequency bounds are typically assigned as fmin = 0 and fmax = 100. Initially, each bat is given a frequency drawn uniformly from [fmin, fmax]. For the local search part, once a solution is selected among the current best solutions, a new solution for each bat is generated locally using a random walk, where ε ∈ [−1, 1] is a random scaling factor and \( {A}^t=<{A}_i^t> \) is the average loudness of all the bats at time step t:

$$ {x}_{new}={x}_{old}+\varepsilon {A}^t $$
(6)

Furthermore, the loudness Ai and the rate ri of pulse emission are updated according to the equations:

$$ {A}_i^{t+1}=\alpha {A}_i^t,\kern2em {r}_i^{t+1}={r}_i^0\left[1-\mathit{\exp}\left(-\gamma t\right)\right], $$
(7)

where α and γ are constants.

Based on these approximations and idealizations, the Bat algorithm can be described as shown in Algorithm 1.

[Algorithm 1: pseudocode of the standard Bat algorithm]
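A minimal Python sketch of these update rules (Eqs. (3)–(7)) is given below. The greedy acceptance rule and the bounds handling follow the common description of BA; the function names and parameter defaults are illustrative, not the paper's settings:

```python
import math
import random

def bat_algorithm(objective, dim, lb, ub, n_bats=20, n_gen=100,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Minimal sketch of the standard BA for minimization."""
    rng = random.Random(seed)
    x = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats                                   # loudness A_i
    r0 = [rng.uniform(0.0, 1.0) for _ in range(n_bats)]
    r = r0[:]                                               # pulse emission rate r_i
    fit = [objective(xi) for xi in x]
    b = min(range(n_bats), key=fit.__getitem__)
    x_star, f_star = x[b][:], fit[b]                        # global best so far

    for t in range(1, n_gen + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()            # Eq. (3)
            v[i] = [v[i][d] + (x[i][d] - x_star[d]) * freq
                    for d in range(dim)]                             # Eq. (4)
            cand = [x[i][d] + v[i][d] for d in range(dim)]           # Eq. (5)
            if rng.random() > r[i]:
                # local random walk around the current best, Eq. (6)
                a_mean = sum(loud) / n_bats
                cand = [x_star[d] + rng.uniform(-1.0, 1.0) * a_mean
                        for d in range(dim)]
            cand = [min(max(c, lb), ub) for c in cand]               # keep in bounds
            f_cand = objective(cand)
            if f_cand <= fit[i] and rng.random() < loud[i]:
                x[i], fit[i] = cand, f_cand
                loud[i] *= alpha                                     # Eq. (7)
                r[i] = r0[i] * (1.0 - math.exp(-gamma * t))
            if fit[i] < f_star:
                x_star, f_star = x[i][:], fit[i]
    return x_star, f_star
```

For example, minimizing a 2-D sphere function with this sketch steadily drives the best-so-far value toward 0, since the best solution is only ever replaced by an equal or better one.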

2.3 The enhanced Bat algorithm

This section introduces the enhanced Bat (EBat) algorithm, which is based on the standard BA presented in the previous section. The standard Bat algorithm has a strong capacity to exploit the search space; however, it at times falls into the trap of local optima, which hurts its global search performance. To avoid trapping into local optima in BA, the diversity of the search needs to be increased. A considerable body of research has offered solutions to this problem, including the works discussed in Sect. 3.4.

The main idea behind the algorithm introduced in this paper is to augment BA with a very effective operator. This operator is a set of random-based modifications that aim to increase the diversity of BA and allow for more mutations in the inspected solutions within the BA search, and hence to jump out of potential local optima. In other words, BA's ability to exploit solutions in the local neighborhood is backed by the ability to explore new areas of the search space. The difference between EBat and BA is that the added mutation operator is used to improve how the original BA generates a new solution for each bat. In this light, exploitation and exploration are two crucial characteristics in the design of an effective optimization algorithm [35,36,37].

A minor change in the proposed algorithm is that we use a fixed loudness A instead of the varying loudness \( {\boldsymbol{A}}_i^t \). As in BA, each bat in EBat is defined by its position \( {\boldsymbol{x}}_i^t \), velocity \( {\boldsymbol{v}}_i^t \), emission pulse rate \( {\boldsymbol{r}}_i^t \), and the fixed loudness in a d-dimensional search space. The new solutions \( {\boldsymbol{x}}_i^t \) and velocities \( {\boldsymbol{v}}_i^t \) at time step t are given by Eqs. (3), (4), and (5). The main improvement of the proposed algorithm is the added mutation operator, which increases the diversity of the population to improve search efficiency and speed up convergence to the optimal value.

The proposed algorithm is similar to the standard BA on the local search side: a new solution is first obtained by a local random walk from the best available solution (Eq. (6)). The generation of this first solution is subject to the condition that a random real number drawn from a uniform distribution is larger than the pulse rate parameter.

The new mutation operator in the EBat algorithm introduces a new pair of tuning parameters, Limit1 and Limit2, based on previous research [38, 44]. The work in [44] was based on hybridizing the harmony search algorithm with the standard Bat algorithm. In the mutation operator stage, if a random value is less than Limit1, then a solution \( {x}_v^t \) is randomly chosen from the population of size NP, as shown in Eq. (9).

$$ {v}_r=\mathit{\operatorname{rand}}\kern0.5em \ast \kern0.5em NP; $$
(8)
$$ {x}_v^t={x}_{v_r}^t; $$
(9)

where \( {v}_r\in \left\{1,2,\dots, NP\right\} \) is the resulting random index;

Further, if a random value is less than Limit2, more mutation is introduced into the elements of the current solution, drawing the search back toward a better position with respect to the best and worst solutions recorded so far. This mutation proves very useful when the BA component is trapped in a local optimum far from the actual global one. The mutation operator's modification is formulated in Eqs. (10) and (11).

$$ {x}_v^t=0.5\times \left(\ {x}_{worst}^t-{x}_v^t\right)\times \mathit{\operatorname{rand}}\left[0,1\right); $$
(10)
$$ {x}_v^t=0.5\times \left(\ {x}_v^t-{x}_{best}^t\right)\kern0.5em \times \kern0.5em \mathit{\operatorname{rand}}\left[0,1\right); $$
(11)

In the equations above, \( {x}_v^t \) is the random solution selected by Eq. (9), updated in place at the tth iteration, and \( {x}_{worst}^t \) and \( {x}_{best}^t \) represent the worst and best solutions found so far, respectively. Otherwise, a randomization rule is applied to add population diversity; it helps the mutation operator explore the search space efficiently, increasing the probability of finding the global optimal solution. The randomization rule generates a new value for the ith element of the individual \( {x}_v^t \), as illustrated in Eq. (12).

$$ {x}_v^t= Lb+\mathit{\operatorname{rand}}\ \left( Ub- Lb\right) $$
(12)

The introduced mutation maintains the attractive features of the original Bat algorithm, especially its fast convergence, while allowing the algorithm to apply more mutation toward better diversity. Based on the aforementioned analysis, the pseudocode of the EBat algorithm is shown in Algorithm 2.

[Algorithm 2: pseudocode of the EBat algorithm]
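As a rough illustration, the mutation stage described by Eqs. (8)–(12) might be sketched as follows. The nesting of the Limit1/Limit2 branches reflects our reading of the text, and the function and variable names are ours, not the paper's:

```python
import random

def ebat_mutation(pop, x_best, x_worst, lb, ub,
                  limit1=0.8, limit2=0.5, rng=random):
    """One application of the EBat mutation operator (Eqs. (8)-(12));
    pop is a list of candidate solutions (lists of floats)."""
    NP, dim = len(pop), len(pop[0])
    if rng.random() >= limit1:
        return None                       # no mutation this time
    v_r = int(rng.random() * NP)          # Eq. (8): random index (0-based here)
    x_v = pop[v_r][:]                     # Eq. (9): copy the chosen solution
    if rng.random() < limit2:
        # Eqs. (10)-(11): pull the solution relative to the worst/best so far
        x_v = [0.5 * (x_worst[d] - x_v[d]) * rng.random() for d in range(dim)]
        x_v = [0.5 * (x_v[d] - x_best[d]) * rng.random() for d in range(dim)]
    else:
        # Eq. (12): randomization rule, resample each element within the bounds
        x_v = [lb + rng.random() * (ub - lb) for _ in range(dim)]
    return x_v
```

In a full EBat run, the returned solution would replace the selected individual when it improves the objective; that acceptance step is left out of this sketch.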

2.4 Benchmark functions

To provide a common reference for measuring the performance of optimization techniques, a set of standard functions is extensively used to benchmark optimization methods, including metaheuristic algorithms. Each function has one or more optima, with a global optimum to be achieved, a dimension for its input vector, and a definite range for its domain. This paper uses a set of 24 such test functions for global numerical optimization [39]. The functions are classified into four categories according to their characteristics: (1) unimodal functions (U) have only one local optimum, which is also the global optimum, and are used to examine the exploitation ability of the algorithms; (2) multimodal functions (M) have more than one local minimum; (3) separable functions (S) in n variables can be written as the sum of n functions of one variable each; and (4) non-separable functions (N) cannot, which makes their optimization more difficult than that of separable ones. These functions are listed in Table 1 alongside their respective equations and properties. Further information on each function, with graphical plots as well as implementation codes in MATLAB and R, can be found at http://www.sfu.ca/~ssurjano/optimization.html.
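Table 1 contains the actual suite; as an illustration of these categories, here are three functions that commonly appear in such benchmarks (these implementations are our own, not reproduced from Table 1):

```python
import math

def sphere(x):
    # Unimodal, separable; global minimum 0 at the origin
    return sum(v * v for v in x)

def rastrigin(x):
    # Multimodal, separable; global minimum 0 at the origin
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def rosenbrock(x):
    # Non-separable; global minimum 0 at (1, 1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Each function accepts a vector of any dimension, matching how the experiments below scale the same benchmarks to 20, 50, and 100 dimensions.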

Table 1 Benchmark global numerical functions used for evaluating optimization methods

3 Experimental results and discussion

In this section, we lay out the experimental setup through which we have evaluated the proposed EBat algorithm. Following the usual methodology in the metaheuristic literature, we expose the algorithm to a set of standard numerical optimization benchmark functions and compare its performance with other known, published metaheuristic techniques.

In this work, we have implemented four sets of experiments, comparing the performance of the proposed algorithm with the standard Bat algorithm, with old-fashioned optimization algorithms, with a neoteric optimization algorithm, and with a few hybrid algorithms carefully selected from conferences and high-quality journals.

All the experiments were conducted on a laptop with an Intel Core i5 processor running at 2.4 GHz and equipped with 8 GB of RAM. The software implementation of the proposed EBat algorithm was based on the implementation of BA in reference [30]. In all experiments, the population size NP was set to 50, and the function dimension was set to 20, 50, and 100 in a set of three experiments. The maximum number of generations was 50. To mitigate the impact of randomness in individual runs, we report results over 100 independent runs for each algorithm on each benchmark function.

The proposed EBat metaheuristic inherits a set of control parameters from its underpinning algorithm, BA, and introduces two new control parameters. EBat's parameters include the loudness, set to 0.95; the pulse rate, set to 0.1; and the new Limit1 and Limit2 parameters, set to 0.8 and 0.5, respectively. These are the default parameter values in all our experiments. The choice of control parameters is very important for the performance of an optimization metaheuristic and can vary across applications. For example, it is shown in reference [30] that BA can outperform GA, HS, and PSO if its parameters are adjusted properly.

3.1 Performance evaluation of the proposed algorithm against the standard Bat algorithm

In this experiment, we aim to evaluate the performance of the proposed EBat metaheuristic against the standard Bat algorithm on the 24 optimization test functions presented in Table 1. The algorithms were run 100 times, with 50 generations per run and random seeds.

Before commencing the experiment, note that there is a fundamental difference in how the parameters are tuned in the standard Bat algorithm and in EBat. The parameters in the proposed algorithm are neither increased nor decreased during execution; rather, they take constant values tuned before running the algorithm. In contrast, in the standard Bat algorithm, these parameters are amended with each generation. In this experiment, the parameters of EBat were tuned as follows: loudness 0.95, pulse rate 0.1, Limit1 0.8, and Limit2 0.5. The parameters of the standard Bat algorithm start with an initial value and change with each generation: the loudness was set to 0.9 and the pulse rate to 0.8.

Table 2 shows the results of evaluating EBat against BA. The assessment is based on the best or optimal value obtained by each algorithm, through the following statistics: minimum (best), mean, and standard deviation. The mean optimum (along with the standard deviation) of the test function values found by BA and EBat is averaged over 100 experimental runs. The best mean and Std. dev for each function are marked in bold font. The min value is the best optimization result found by each algorithm (the value closest to the global optimum over all runs) and is shaded in gray. In this experiment, all objective functions are tested with 20, 50, and 100 dimensions.

Table 2 The best, mean, and standard deviation results obtained by BA and EBat on the benchmark functions over 100 runs
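The statistics reported in Table 2 can be reproduced from the raw per-run results in the obvious way; a small helper might look like this (the choice of the sample standard deviation is our assumption):

```python
import statistics

def summarize(run_results):
    """Best (min), mean, and sample standard deviation of the optimum
    values found over repeated runs of one algorithm on one function."""
    return {
        "best": min(run_results),
        "mean": statistics.mean(run_results),
        "std": statistics.stdev(run_results),
    }
```

Applying `summarize` to the 100 per-run optima of each algorithm/function pair yields the three columns reported per function.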

As is clear from Table 2, the proposed EBat algorithm shows high accuracy on all functions (Dim = 20, 50, 100), with the exception of two: F19 and F22. The standard Bat algorithm is better than EBat on F19 (Dim = 100) and F22 (Dim = 20, 50, and 100). From these results, we note the superiority of the EBat algorithm over the standard Bat algorithm on almost all benchmark functions.

From another perspective, we also graphed the optimization process of each algorithm (for selected benchmark functions) as the value of the best-so-far solution versus the current iteration, i.e., the search path in terms of the best solution selected per iteration. A curve of this kind is expected to decline overall, at a slope that reflects the convergence speed of the algorithm (there is no degradation during the process of any included metaheuristic, as the best solution is either improved or kept unchanged at every iteration). These graphs can therefore be called the convergence plots of the algorithms. Because of the large number of plots, we include representative samples of the convergence plots, divided into sets. The first set of figures (Fig. 1a–d) compares the convergence of EBat with that of the standard BA.

Fig. 1
figure 1

Performance of the BA and EBat algorithms on the a F5, b F1, c F24, and d F22 benchmark functions with 20 and 100 dimensions

Figure 1a–d shows that the EBat algorithm enjoys not only superior overall performance in terms of the quality of the found optimum, but also faster convergence in most cases. As shown in Fig. 1b, c, EBat converged very quickly on functions F1 and F24 and found the global optimum 0 with Dim = 20 and 100. According to the results in Table 2, EBat can reach the global optimum 0 for functions F1, F2, F3, F4, F9, F11, F14, F18, F20, F21, F23, and F24 at dimensions 20, 50, and 100.
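The quantity plotted on the y-axis of such convergence plots is simply the running minimum of the best objective value found per iteration, which can be computed as:

```python
def best_so_far(per_iteration_best):
    """Running minimum of the best objective value seen up to each
    iteration; the resulting curve is monotone non-increasing."""
    curve, current = [], float("inf")
    for value in per_iteration_best:
        current = min(current, value)
        curve.append(current)
    return curve
```

Because the curve only records improvements, it never rises, which is why the convergence plots decline or stay flat at every iteration.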

3.2 Performance evaluation of the proposed algorithm against old-fashioned optimization algorithms

The experiments in this section involve three sets of results, similar to the previous experiments (Sect. 3.1): the number of generations is set to 50, and the dimensions of the benchmark functions are varied in order to test the scalability of the proposed algorithm on higher-dimensional optimization functions. The set of benchmark functions is also kept the same as before. However, the contested algorithms are 11 metaheuristic methods selected from both swarm intelligence and evolutionary computing.

To put the performance of EBat in perspective and illustrate its merits among similar metaheuristic methods, we compare its performance on global numerical optimization problems with 11 other algorithms. The comparisons are benchmarked on the same group of the aforementioned 24 global optimization functions. The algorithms included are ABC, ACO, BBO, CS, DE, ES, GA, GSA, HS, PBIL, and PSO.

The implementations of the ACO, BBO, DE, ES, GA, and PSO algorithms are based on the implementation by Simon (http://academic.csuohio.edu/simond/bbo/), and the remaining algorithms were obtained from their original sources. All software was compiled using MATLAB R2009b (V7.9.0.529) running under Windows 7. Because this experiment involves many algorithms, we have laid out all the important parameters for all algorithms in Table 3.

Table 3 Set of parameters used in the experiment for performance analysis of the proposed algorithm against the old-fashioned algorithms

In all cases, the population size NP was set to 50, function dimension was set to 20, 50, and 100 in three sets of experiments, and the maximum number of generations was 50.

To mitigate the impact of randomness in individual runs, we report results over 100 independent runs for each algorithm on each benchmark function (Tables 4, 5, and 6).

Table 4 The best, mean, and standard deviation of test function values found by the EBat, ABC, ACO, BBO, CS, DE, ES, GA, GSA, HS, PBIL, and PSO algorithms, averaged over 100 experimental runs
Table 5 The best, mean, and standard deviation of test function values found by the EBat, ABC, ACO, BBO, CS, DE, ES, GA, GSA, HS, PBIL, and PSO algorithms, averaged over 100 experimental runs
Table 6 The best, mean, and standard deviation of test function values found by the EBat, ABC, ACO, BBO, CS, DE, ES, GA, GSA, HS, PBIL, and PSO algorithms, averaged over 100 experimental runs

Table 4 lists the results of the first set of experiments, running the 12 optimization methods, including our proposed EBat algorithm, against the 24 benchmark functions from Table 1. The dimensions of each function are set to 20, and each algorithm is allowed 50 generations to converge. The values in the table include the standard deviations and the average optimum value found by each algorithm over the 100 runs, as well as the minimum (best) optimal value found by the algorithms over the same runs.

It is clear from the table that the EBat algorithm achieved the best overall performance compared to the other 11 algorithms. EBat attained the best standard deviation (Std. dev) on 20 out of 24 test functions, the best average (mean) on 21 out of 24, and the best minimum result (best) on 18 out of 24, followed by the BBO, ACO, GSA, CS, GA, and PBIL algorithms, as shown in Table 4.

The EBat algorithm also shows the best performance in terms of the quality of the reached optimum in most cases. Moreover, it was the fastest among the algorithms to reach the global optimum 0 on functions F1, F2, F3, F4, F9, F11, F14, F18, F20, F21, F23, and F24. The exceptions are the functions on which the proposed algorithm did not win in all statistical measures (best, mean, and standard deviation), particularly F5, F19, and F22.

Table 5 lists the results of the second set of experiments, running the 12 optimization methods, including our proposed EBat algorithm, against the 24 benchmark functions from Table 1. The dimensions of each function are set to 50, and each algorithm is allowed 50 generations to converge. The values in the table include the standard deviations and the average optimum value found by each algorithm over the 100 runs, as well as the minimum (best) optimal value found by the algorithms over the same runs.

It is evident from the table that the EBat algorithm has the best performance overall compared to the other 11 algorithms. EBat achieved the best standard deviation (Std. dev) on 21 out of 24 test functions, the best average (mean) on 21 out of 24, and the best minimum result (best) on 20 out of 24, followed by the PSO, GSA, ACO, BBO, CS, GA, and PBIL algorithms, as shown in Table 5. With 50 dimensions, EBat again shows the best performance, especially in terms of the quality of the reached optimum in most cases, and it was the fastest among the algorithms to reach the global optimum 0 on functions F1, F2, F3, F4, F9, F11, F14, F18, F20, F21, F23, and F24. The exceptions, on which the proposed algorithm did not win in all statistical measures (best, mean, and standard deviation), are functions F5, F19, and F22.

Table 6 lists the results of the last set of experiments, running the 12 optimization methods, including our proposed EBat algorithm, against the 24 benchmark functions from Table 1. The dimensions of each function are set to 100 in this final set, and each algorithm is allowed 50 generations to converge. The values in the table include the standard deviation and the average optimum value found by each algorithm over the 100 runs, as well as the minimum (best) optimal value found by the algorithms over the same runs.

It is obvious from the table that the EBat algorithm has again attained the best overall performance compared to the other 11 algorithms. EBat achieved the best standard deviation (Std. dev) in 21 out of 24 test functions, the best average (mean) in 21 out of 24 test functions, and the best minimum result (best) in 21 out of 24 test functions, followed by the GSA, ES, PBIL, ACO, BBO, CS, and GA algorithms, as shown in Table 6. With 100 dimensions, the EBat algorithm again shows the best performance, especially in terms of the quality of the reached optimum; in most of the cases, it was the fastest among the algorithms to reach the global optimum of 0 on functions F1, F2, F3, F4, F9, F11, F14, F18, F20, F21, F23, and F24. The exceptions are functions F5, F19, and F22, on which the proposed algorithm did not win in all statistical measures (best, mean, and standard deviation).

We can conclude that the EBat algorithm scales well and maintains outstanding performance compared to similar metaheuristic optimization algorithms. Remarkably, across the three previous experiments, which use the same settings except for the dimensions of the test functions, the proposed algorithm obtained a striking performance in terms of convergence speed on nearly all the test functions utilized. The exceptions are functions F5, F19, and F22, on which it did not win in all statistical measures (best, mean, and standard deviation).

This is because functions F5, F19, and F22 are relatively hard test problems. Function F5 has many local optima, and its second-best local optimum is far from the global optimum; functions F19 and F22 have a very narrow valley leading from the local optima to the global optimum. According to the results in Tables 4, 5, and 6, we can observe that the proposed algorithm performs better than the other algorithms on the large majority of the benchmark problems.
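A classic instance of such a narrow-valley landscape is the Rosenbrock function, shown below purely for illustration (whether F19 or F22 corresponds to it is determined by Table 1, not assumed here):

```python
def rosenbrock(x):
    """Rosenbrock function: the global optimum f = 0 lies at (1, ..., 1)
    at the end of a long, narrow, parabola-shaped valley.  Reaching the
    valley is easy; making progress along it towards the optimum is hard
    for many population-based methods."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```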

Figure 2a–f shows that the EBat algorithm enjoys not only a superior overall performance in terms of the quality of the optimal solution found, but also faster convergence in most of the cases.

Fig. 2

Performance of the EBat algorithm against the classical optimization algorithms for the a F1, b F5, c F6, d F24, e F3, and f F5 benchmark functions with 20, 50, and 100 dimensions

From Fig. 2a–f, it is apparent that the EBat method is likewise superior to the other metaheuristic algorithms in terms of the final result achieved (within 50 generations, according to our experiments). In terms of convergence, EBat converges very fast for most functions, compared to the other algorithms. Overall, the performance of EBat is significantly better than that of the other selected algorithms, except on a few optimization functions (e.g., F5, as shown in Fig. 2b, f).

Figure 2a, b shows the results for functions F1 and F5 with 20 dimensions; Fig. 2c, d shows the results for functions F6 and F24 with 50 dimensions; and Fig. 2e, f shows the results for functions F3 and F5 with 100 dimensions. We can see from Fig. 2a–e that the EBat algorithm enjoys not only a superior overall performance in terms of the quality of the solution found, but also faster convergence in most of the cases. The exceptions are F5 with 20 dimensions and F5 with 100 dimensions (Fig. 2b, f): in Fig. 2b, the proposed algorithm placed sixth after the GSA, BBO, HS, ACO, and ABC algorithms, and in Fig. 2f it placed third after the BBO and HS algorithms in terms of the best result.
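The convergence curves plotted in Fig. 2 track the best objective value found so far at each generation. A minimal sketch of that bookkeeping, assuming a recorded list of per-generation best values (the function name is our own), is:

```python
def convergence_curve(per_generation_best):
    """Turn the raw best value of each generation into the monotone
    'best so far' sequence that convergence plots display."""
    curve, best_so_far = [], float("inf")
    for value in per_generation_best:
        best_so_far = min(best_so_far, value)
        curve.append(best_so_far)
    return curve
```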

3.3 Performance evaluation of the proposed algorithm against new optimization algorithms

In recent years, many optimization algorithms have been proposed that show significant superiority over many of the classical optimization algorithms. Consequently, we carried out a third experiment to compare the proposed EBat metaheuristic with eight of these new optimization algorithms: ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA. In this experiment, we used the same group of 24 benchmark test functions as in the previous experiments.

The implementations of the ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA algorithms are those provided by their original sources. All software is compiled using MATLAB R2009b (V7.9.0.529) running under Windows 7. In this set, we ran three experiments varying the dimensions (20, 50, and 100) for all algorithms. For each benchmark function, we set the population size to 50, the elitism parameter to 2, and the maximum number of generations to 50 in each of the experiments. We ran each algorithm on each benchmark function 100 times in order to obtain representative performances.
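The protocol above amounts to a nested loop over algorithms, benchmarks, and repetitions. The sketch below illustrates it; `run_experiment`, the algorithm signature, and the dummy stand-ins are our own illustrative names, not the actual MATLAB harness used in the experiments:

```python
import random

def run_experiment(algorithms, benchmarks, runs=100, dim=20,
                   pop_size=50, max_gen=50, seed=0):
    """Run every algorithm on every benchmark `runs` times and collect
    the best objective value reached in each run, as done for the tables."""
    rng = random.Random(seed)
    results = {}
    for alg_name, algorithm in algorithms.items():
        for f_name, objective in benchmarks.items():
            results[(alg_name, f_name)] = [
                algorithm(objective, dim, pop_size, max_gen, rng)
                for _ in range(runs)
            ]
    return results

# hypothetical stand-ins for a real optimizer and benchmark function
algorithms = {"dummy": lambda f, dim, pop, gen, rng: f([0.0] * dim)}
benchmarks = {"sphere": lambda x: sum(v * v for v in x)}
results = run_experiment(algorithms, benchmarks, runs=5)
```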

In this experiment, we used the same parameters for EBat algorithm that were already used in previous experiments. For ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA, we set the parameters based on the original sources as follows. ALO, DA, GWO, MFO, and SCA algorithms are based on the implementation by Mirjalili (http://www.alimirjalili.com/Projects.html). For KH, foraging speed = 0.02, the maximum diffusion speed = 0.005, and the maximum induced speed and the inertia weight are in the range [0, 1]. For EWA, crossover probability = 1, initial mutation probability = 0.01, elitism parameter = 2, similarity factor = 0.98, and initial proportional factor = 1. For MBO, butterfly adjusting rate BAR = 5/12, migration Peri = 1.2, max step Smax = 1.0, keep = 2, and migration ratio ρ = 5/12.

Tables 7, 8, and 9 list the optimization results when applying the 24 optimization test functions to ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA, as well as our EBat algorithm. The listed values are the optimal values of the objective function achieved by each algorithm after iterating 50 times (the number of generations). The mean values in the tables are averaged over 100 runs (each run constitutes 50 iterations) and listed alongside the standard deviation. The Min values are the best results achieved by each algorithm; by the “best result,” we mean the result closest to the actual optimal value of the function, as per Table 1.

Table 7 The best, mean and standard deviation of test function values found by EBat, ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA algorithms, averaged over 100 experimental runs
Table 8 The best, mean and standard deviation of test function values found by EBat, ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA algorithms, averaged over 100 experimental runs
Table 9 The best, mean and standard deviation of test function values found by EBat, ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA algorithms, averaged over 100 experimental runs

It is evident from Table 7 that the EBat algorithm can reach a better optimum on average, at least with respect to the set of benchmark functions used in the experiments (EBat has better Min, mean, and standard deviation results in the case of 13 out of 24 test functions). For ease of recognition, the best Min value (best), mean, and Std. dev results are marked with bold font and shaded in gray to identify the absolute best minimum achieved for each function.

The superiority of the proposed algorithm in Table 7 is also evident in terms of convergence speed: the EBat algorithm reaches the global optimum of 0 on 13 of the benchmark functions, compared to ALO, DA, EWA, GWO, KH, MBO, MFO, and SCA. The EBat algorithm scored the best minimum value, mean, and standard deviation in most of the optimization functions (13 best, 14 mean, and 16 Std. dev out of 24 test functions) on the 20-dimension test functions.

In order to further assess the performance of EBat on problems with higher dimensions (>20), we ran the same set of experiments against the same benchmark functions, setting the dimensions to 50 and 100 for each function; Tables 8 and 9 list the results for these two dimensions, respectively. Again, the EBat algorithm recorded the best average and minimum optima on most of the functions (18 best Std. dev, 17 best mean, and 14 best minimum out of 24 test functions with 50 dimensions; and 18 best Std. dev, 18 best mean, and 13 best minimum out of 24 test functions with 100 dimensions).

Figure 3a–f shows sample convergence plots from the same set of experiments described previously, with 50 generations and optimization functions of 20, 50, and 100 dimensions. The plots illustrate sample runs for each function and show the comparative performance of all nine optimization algorithms, including the proposed EBat algorithm. The outcome is again consistent with the results in Tables 7, 8, and 9.

Fig. 3

Performance of the EBat algorithm against the new optimization algorithms for the a F16, b F21, c F1, d F10, e F21, and f F24 benchmark functions with 20, 50, and 100 dimensions

As demonstrated above, the EBat algorithm exhibits the best performance, especially in terms of the quality of the reached optimum in most of the cases, except for a few functions. In terms of convergence, we can notice that the EBat algorithm converges relatively fast towards its final optimal value. Other algorithms might converge faster on some functions; however, they converge to an inferior result and are often trapped in local optima. For instance, Fig. 3a shows that most of the other algorithms converge quickly to optima that are significantly worse than the optimal value found by the EBat algorithm.

In order to analyze the convergence speed, the convergence curves for some functions (F1, F10, F21, and F24) in a particular run of ALO, DA, EWA, GWO, KH, MBO, MFO, SCA, and EBat are drawn in Fig. 3. As shown in Fig. 3b–f, the convergence of EBat on these functions (F21 with Dim = 20; F1 and F10 with Dim = 50; F21 and F24 with Dim = 100) was very fast in finding the global optimum of 0. However, in the case of function F16 with Dim = 20, it could not converge to the global optimum as fast. Overall, EBat converges faster than ALO, DA, EWA, MBO, MFO, and SCA, the exceptions being KH and GWO, which are superior to the proposed algorithm in terms of reaching the minimum cost, as shown in Fig. 3a.

3.4 Performance evaluation of the proposed algorithm (EBat) against other hybrid and improved Bat algorithms

This section presents the final set of performance evaluation experiments in the article, which comprises three sets of results and differs from the previous experiments (Sects. 3.1, 3.2, and 3.3) in one major way: the compared algorithms are seven algorithms that are either hybridizations of or improvements to the standard Bat algorithm. The three sets of results use 50 generations and vary the dimensions of the benchmark functions in order to test the scalability of the proposed algorithm to higher-dimensional optimization functions. The set of benchmark functions, however, is kept the same as before.

The algorithms selected for this experiment vary between hybridized and improved Bat algorithms. The purpose is to diversify the selection of algorithms in order to obtain a fair and more accurate assessment of EBat's performance. Here, we briefly introduce all the algorithms applied in this experiment; for more details, please refer to the original references.

The selected algorithms are as follows. The ABA algorithm was proposed by Xiaowei Wang et al. [40]; its improvement is based on the dynamic and adaptive capacity of bats to adjust their flight direction and speed during the search for prey. Selim Yılmaz and Ecir improved the standard Bat algorithm by enhancing its local and global search characteristics through three different methods [41]. In EvBA, the authors presented an enhanced version of the Bat algorithm based on a different movement of the bat and a different random walk process [42]. In another proposal, Iztok, Dusan, and the original author of the Bat algorithm hybridized it with differential evolution strategies in order to modify the local solution [43]. Wang and Lihong proposed HS/BA, which tries to reduce the tendency of BA to become trapped in local optima by adding a pitch adjustment operation from HS, serving as a mutation operator during the bats' exploration, in an attempt to increase diversity [44]. A. Kaveh and P. Zakian proposed the IBA algorithm [45], which improves the exploration ability of the original Bat algorithm by defining a new dynamic scale factor; this factor limits the step sizes of the random walks used for local search. The last algorithm utilized in this experiment, MBDE, was presented by Adis and Milan; it is a hybrid of the Bat algorithm and differential evolution, based on mutation and crossover operators that improve elitism during selection of the best solution. They also developed new loudness and pulse rate functions in order to establish a better balance between exploration and exploitation [46].

We note that the proposals above improve the original algorithm by adding a certain mutation operator in order to establish a better balance between exploration and exploitation; this goal is evident in the proposals of EBA, HS/BA, and MBDE. Our proposal pursues the same idea, but with a distinct adjustment of the mutation operators. It is this adjustment that empowers EBat, as shown by its superiority over the other algorithms in the results presented later.
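To make the shared idea concrete, the sketch below shows a generic Bat algorithm iteration with an extra uniform-mutation step bolted on. This is an illustration of the general concept only; EBat's actual operator, and those of EBA, HS/BA, and MBDE, differ in their details, and all parameter values here are arbitrary illustrative choices:

```python
import random

def bat_with_mutation(f, dim, lb, ub, pop_size=20, max_gen=50,
                      f_min=0.0, f_max=2.0, loudness=0.9, pulse_rate=0.5,
                      mutation_rate=0.3, seed=0):
    """Generic Bat algorithm with an added uniform-mutation step.
    Illustrative only: not the actual EBat operator."""
    rng = random.Random(seed)

    def clamp(v):
        return max(lb, min(ub, v))

    pos = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    fit = [f(x) for x in pos]
    g = min(range(pop_size), key=fit.__getitem__)
    best, best_fit = pos[g][:], fit[g]

    for _ in range(max_gen):
        for i in range(pop_size):
            # standard BA: frequency-tuned velocity update towards the best bat
            freq = f_min + (f_max - f_min) * rng.random()
            vel[i] = [v + (x - b) * freq
                      for v, x, b in zip(vel[i], pos[i], best)]
            cand = [clamp(x + v) for x, v in zip(pos[i], vel[i])]
            if rng.random() > pulse_rate:   # local random walk around the best
                cand = [clamp(b + 0.01 * rng.gauss(0.0, 1.0)) for b in best]
            # the diversity-increasing idea: mutate one coordinate uniformly
            if rng.random() < mutation_rate:
                cand[rng.randrange(dim)] = rng.uniform(lb, ub)
            cand_fit = f(cand)
            if cand_fit <= fit[i] and rng.random() < loudness:
                pos[i], fit[i] = cand, cand_fit
            if cand_fit <= best_fit:
                best, best_fit = cand[:], cand_fit
    return best, best_fit
```

On a simple sphere function, a sketch of this kind typically approaches the optimum within a few dozen generations, while the mutation step keeps injecting fresh coordinates that guard against premature convergence.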

For the evaluation, we list all the control variables of the algorithms used in Table 10. In all cases, the population size NP was set to 50, the function dimension was set to 20, 50, and 100 in the three sets of experiments, and the maximum number of generations was 50. To mitigate the impact of randomness in individual runs, we report the results over 100 runs for each algorithm on each benchmark function in Table 1. Table 10 lists the set of parameters used in all experiments, except those described in previous experiments in which we vary the control parameters to study their effect on performance. The listed parameters include those of the compared methods for the sake of reproducibility.

Table 10 Set of parameters used in the experiments for performance analysis of the proposed algorithm against other hybrid and improved algorithms

Table 11 lists the results of the first set of experiments, running the eight optimization methods, including our proposed EBat algorithm, against the 24 benchmark functions from Table 1. The dimensions of each function are set to 20, and each algorithm is allowed 50 generations to converge. The values in the table include the minimum (best) optimal value found by the algorithms over the 100 runs, as well as the average and the standard deviation of the optimum values found by each algorithm over the same runs. It is evident from the table that the EBat algorithm has the best overall performance compared with the other seven techniques. EBat achieved the best mean result in 20 out of the 24 test functions, the best minimum result in 19 out of the 24 test functions, and the best standard deviation in 18 out of the 24 test functions. As is evident from Table 11, the proposed algorithm won on the majority of the functions listed in Table 1; the exceptions, on which it did not win in all statistical measures (best, average, and standard deviation), are the three functions F5, F17, and F22.

Table 11 The best, mean and standard deviation of test function values found by EBat, ABA, EBA, EvBA, HBA, HS/BA, IBA, and MBDE algorithms, averaged over 100 experimental runs

Table 12 lists the results of the second set of experiments, running the eight optimization methods, including our proposed EBat algorithm, against the 24 benchmark functions from Table 1. The dimensions of each function are set to 50, and each algorithm is allowed 50 generations to converge. The values in the table include the minimum (best) optimal value found by the algorithms over the 100 runs, as well as the average and the standard deviation of the optimum values found by each algorithm over the same runs. It is evident from the table that the EBat algorithm has the best overall performance compared with the seven other techniques. EBat achieved the best mean result in 20 out of the 24 test functions, the best minimum result in 16 out of the 24 test functions, and the best standard deviation in 20 out of the 24 test functions. As presented in Table 12, the proposed algorithm won on the majority of the functions listed in Table 1; the exceptions, on which it did not win in all statistical measures (best, average, and standard deviation), are the four functions F5, F17, F19, and F22.

Table 12 The best, mean and standard deviation of test function values found by EBat, ABA, EBA, EvBA, HBA, HS/BA, IBA, and MBDE algorithms, averaged over 100 experimental runs

Table 13 lists the results of the third set of experiments, running the eight optimization methods, including our proposed EBat algorithm, against the 24 benchmark functions from Table 1. The dimensions of each function are set to 100, and each algorithm is allowed 50 generations to converge. The values in the table include the minimum (best) optimal value found by the algorithms over the 100 runs, as well as the average and the standard deviation of the optimum values found by each algorithm over the same runs. It is evident from the table that the EBat algorithm has the best overall performance compared with the seven other techniques. EBat achieved the best mean result in 19 out of the 24 test functions, the best minimum result in 16 out of the 24 test functions, and the best standard deviation in 20 out of the 24 test functions. In Table 13, it is clear that the proposed algorithm won on the majority of the functions listed in Table 1; the exceptions, on which it did not win in all statistical measures (best, average, and standard deviation), are the four functions F5, F17, F19, and F22.

Table 13 The best, mean and standard deviation of test function values found by EBat, ABA, EBA, EvBA, HBA, HS/BA, IBA, and MBDE algorithms, averaged over 100 experimental runs

Figure 4a–f shows sample convergence plots from the same set of experiments described previously, with 50 generations and 20-dimension optimization functions. The plots illustrate sample runs for each function and show the comparative performance of all eight optimization algorithms, including the proposed EBat algorithm. Consistent with the results in Table 11, the EBat algorithm shows the best performance, especially in terms of the quality of the reached optimum in most of the cases, except for a few functions (e.g., F19 and F22 in Fig. 4d, e). As is clear from Fig. 4d, e, only the HS/BA and EBA algorithms were able to achieve a better result than EBat. In terms of convergence, we can notice that the EBat algorithm converges relatively fast towards its final optimal value; other algorithms might converge faster on some functions, but they converge to an inferior result and are often trapped in local optima. For example, Fig. 4c shows that most of the compared algorithms converge quickly to optima that are significantly worse than the optimal value found by the EBat algorithm.

Fig. 4

Performance of the EBat algorithm against the other hybrid and improved algorithms for the a F2, b F7, c F10, d F19, e F22, and f F23 benchmark functions with 20 dimensions

In order to analyze the convergence speed, the convergence curves for some functions in a particular run of ABA, EBA, EVBA, HBA, HSBA, IBA, MBDE, and EBat are drawn in Fig. 4. As shown in Fig. 4a, f, the convergence of EBat on functions F2 and F23 with Dim = 20 was very fast in finding the global optimum of 0; EBat reached the global optimum of 0 on function F2 by generation 15 and on function F23 by generation 12. On function F7 with Dim = 20, EBat could not converge to the global optimum, but it was still able to outperform all the other algorithms in terms of reaching the minimum cost, as shown in Fig. 4b.

Figure 5a–f shows sample convergence plots from the same set of experiments described previously, with 50 generations and 100-dimension optimization functions. The plots illustrate sample runs for each function and show the comparative performance of all eight optimization algorithms, including the proposed EBat algorithm. Consistent with the results in Table 13, the EBat algorithm shows the best performance, especially in terms of the quality of the reached optimum in most of the cases, except for a few functions (e.g., F22 in Fig. 5d). As is clear from Fig. 5d, the EBat algorithm is superior to all the algorithms except HS/BA and EBA.

Fig. 5

Performance of the EBat algorithm against the other hybrid and improved algorithms for the a F1, b F4, c F21, d F22, e F23, and f F24 benchmark functions with 100 dimensions

As shown in Fig. 5a, b, c, e, f, the convergence of EBat on functions F1, F4, F21, F23, and F24 with Dim = 100 was very fast in finding the global optimum of 0. Moreover, the EBat algorithm reached the global optimum of 0 on functions F1 and F21 by generation 10, on functions F4 and F23 by generation 12, and on function F24 by generation 25. On function F22 with Dim = 100, however, EBat could not converge to the global optimum; it nonetheless converges faster than ABA, EVBA, HBA, IBA, and MBDE, the exceptions being HSBA and EBA, which are superior to the proposed algorithm in terms of reaching the minimum cost, as shown in Fig. 5d.

As a last note, we observe that despite the important role of benchmark evaluation in verifying the performance of metaheuristic algorithms, it also has limitations. The optimization algorithms are used in their standard form, without special tuning of their parameters, which is often an important factor in the successful application of metaheuristics to real-world problems. Moreover, actual applications can differ in important ways from benchmark optimization functions and can employ different setups according to the available resources; for example, allowing the algorithms to iterate over more generations can significantly change the final results. Nevertheless, the benchmark results shown here are beneficial in validating the concept of the proposed method, at least with respect to similar and proven metaheuristics; they are indeed promising for EBat and indicate that this algorithm can have its place in the arsenal of techniques for numerical optimization problems.

4 Conclusion

This paper presented a new metaheuristic variation of the Bat algorithm, enhancing the original Bat algorithm with a mutation operator to increase its diversity. The introduced operator is designed to allow a larger mutation rate while preserving the strength of BA in swift and efficient exploitation. Experimental evaluations against a set of 24 benchmark numerical optimization functions showed that the proposed EBat algorithm converges faster than other metaheuristics and achieves better, or at least competitive, performance in most cases. We believe these results are very promising and hope to further explore the specific applications that will benefit most from the merits of EBat, which is left for future work. We also intend to subject the algorithm to in-depth analysis to gain more insight into the underlying cause of the mutation operator's success, and thereby further improve the operator, possibly for use in other well-known metaheuristics.