1 Introduction

Swarm intelligence has emerged over the past decade as an interesting area in the field of nature inspired techniques for solving optimization problems. It is based on the collective behavior of social creatures. Swarm based optimization algorithms find solutions by collaborative trial and error. Social creatures utilize their ability of social learning to solve complex tasks. The peer to peer learning behavior of social colonies is the main driving force behind the development of many efficient swarm based optimization algorithms. Researchers have analyzed such behaviors and designed algorithms that can be used to solve nonlinear, nonconvex or discrete optimization problems. Previous research [8, 16, 22, 32] has shown that algorithms based on swarm intelligence have great potential to find solutions of real world optimization problems. The algorithms that have emerged in recent years include ant colony optimization (ACO) [8], particle swarm optimization (PSO) [16], bacterial foraging optimization (BFO) [20] etc.

The artificial bee colony (ABC) optimization algorithm introduced by Karaboga [13] is a recent addition to this category. The algorithm is inspired by the behavior of honey bees seeking a quality food source. Like any other population based optimization algorithm, ABC consists of a population of potential solutions, which are the food sources of the honey bees. The fitness is determined in terms of the quality (nectar amount) of the food source. ABC is a relatively simple, fast and population based stochastic search technique in the field of nature inspired algorithms.

There are two fundamental processes which drive the swarm to update in ABC: the variation process, which enables exploring different areas of the search space, and the selection process, which ensures the exploitation of previous experience. However, it has been shown that ABC may occasionally stop proceeding toward the global optimum even though the population has not converged to a local optimum [14]. It can be observed that the solution search equation of the ABC algorithm is good at exploration but poor at exploitation [38]. Therefore, to maintain a proper balance between the exploration and exploitation behavior of ABC, it is highly desirable to develop a local search approach within the basic ABC to exploit the search region. In the past, few efforts have been made in this direction. Kang et al. [12] proposed the Hooke-Jeeves Artificial Bee Colony algorithm (HJABC) for numerical optimization. In HJABC, the authors incorporated a local search technique based on the Hooke-Jeeves method (HJ) [11] into the basic ABC. Further, Mezura-Montes and Velez-Koeppel [18] introduced a variant of the basic ABC named Elitist Artificial Bee Colony. In this work, the authors integrated two local search strategies. The first local search strategy is used when 30, 40, 50, 60, 70, 80, 90, 95 and 97 % of the function evaluations have been completed; its purpose is to improve the best solution achieved so far by generating a set of 1000 new food sources in its neighborhood. The other local search works when 45, 50, 55, 80, 82, 84, 86, 88, 90, 91, 92, 93, 94, 95, 96, 97, 98, and 99 % of the function evaluations have been reached.

In this paper, a Lévy Flight random walk based local search strategy is proposed. The proposed local search strategy is used for finding the global optimum of unimodal and/or multimodal functions by iteratively reducing the step size used to update the candidate solution within the search space in which the optimum is known to exist. Further, to balance the diversity and convergence capability of ABC, the concept of opposition based learning (OBL) [31] is taken into consideration. The same concept has been applied to the Differential Evolution (DE) [22] optimization algorithm to improve its convergence speed [24]. The main idea behind OBL is the simultaneous consideration of a solution and its corresponding opposite counterpart in order to find a better approximation for the current candidate solution. The quality of a solution is measured on the basis of its distance from the global optimum. On the basis of probability theory, it can easily be stated that there is a 50 % chance for the opposite counterpart of a solution to be nearer to the global optimum. Therefore, in this paper, the concept of OBL and the proposed local search strategy are integrated with the basic ABC. Further, the proposed algorithm is compared, by experimenting on 14 un-biased test problems (i.e. problems whose solutions do not lie at the origin, on the axes or on the diagonal) and five popular engineering optimization problems, with the basic ABC and its recent variants named Gbest guided ABC (\(GABC\)) [38], Best-So-Far ABC (\(BSFABC\)) [3] and Modified ABC (\(MABC\)) [1].

The rest of the paper is organized as follows: Sect. 2 gives a brief overview of the basic ABC. The Lévy Flight local search strategy is proposed in Sect. 3. In Sect. 4, the concept of opposition based learning is described. Opposition Based Lévy Flight ABC (OBLFABC) is proposed and tested in Sect. 5. In Sect. 6, a comprehensive set of experimental results is provided. Finally, in Sect. 7, the paper is concluded.

2 Artificial bee colony (ABC) algorithm

The ABC algorithm is a relatively recent swarm intelligence based algorithm, inspired by the intelligent food foraging behavior of honey bees. In ABC, each solution of the problem is called a food source of the honey bees. The fitness is determined in terms of the quality of the food source. In ABC, honey bees are classified into three groups, namely employed bees, onlooker bees and scout bees. The number of employed bees is equal to the number of onlooker bees. The employed bees search the food sources and gather information about their quality. Onlooker bees stay in the hive and search the food sources on the basis of the information gathered by the employed bees. The scout bees search for new food sources randomly in place of the abandoned food sources. Similar to other population-based algorithms, the ABC solution search process is iterative. After initialization of the ABC parameters and the swarm, it requires repetitive iterations of three phases, namely the employed bee phase, onlooker bee phase and scout bee phase. Each of the phases is described as follows:

2.1 Initialization of the swarm

The parameters for ABC are the number of food sources, the number of trials after which a food source is considered to be abandoned and the termination criteria. In the basic ABC, the number of food sources is equal to the number of employed bees or onlooker bees. Initially, a uniformly distributed swarm of \(SN\) food sources, where each food source \(x_i (i= 1, 2, \ldots , SN)\) is a \(D\)-dimensional vector, is generated. Here \(D\) is the number of variables in the optimization problem and \(x_i\) represents the \(i\)th food source in the swarm. Each food source is generated as follows:

$$\begin{aligned} x_{ij}=x_{min j}+rand[0, 1](x_{max j}-x_{min j}) \end{aligned}$$
(1)

where \(x_{min j}\) and \(x_{max j}\) are the lower and upper bounds of \(x_i\) in the \(j\)th direction and \(rand[0,1]\) is a uniformly distributed random number in the range [0, 1].
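For concreteness, the following is a minimal NumPy sketch of the initialization step of Eq. (1); the function name and the array layout (one row per food source) are illustrative choices, not part of the original formulation.

```python
import numpy as np

def initialize_swarm(SN, D, x_min, x_max, rng=None):
    """Generate SN food sources uniformly at random in the box
    [x_min, x_max]^D, per Eq. (1); one row per food source."""
    rng = rng or np.random.default_rng()
    # x_ij = x_min_j + rand[0,1] * (x_max_j - x_min_j)
    return x_min + rng.random((SN, D)) * (x_max - x_min)
```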

2.2 Employed bee phase

In the employed bee phase, each employed bee modifies its current solution (food source) based on its individual experience and the fitness value of the new solution. If the fitness value of the new solution is higher than that of the old solution, the bee updates her position with the new one and discards the old one. The position update equation for the \(i\)th candidate in this phase is

$$\begin{aligned} v_{ij}=x_{ij}+\phi _{ij}(x_{ij}-x_{kj}) \end{aligned}$$
(2)

where \(k \in \{1, 2, \ldots , SN\}\) and \(j \in \{1, 2, \ldots ,D\}\) are randomly chosen indices, \(k\) must be different from \(i\), and \(\phi _{ij}\) is a random number in the range \([-1, 1]\).
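A hedged sketch of this phase, assuming a minimization objective \(f\) and the usual greedy selection between old and new positions; the helper name and the per-solution loop are illustrative.

```python
import numpy as np

def employed_bee_phase(X, f, rng=None):
    """One employed-bee pass over the swarm X (SN x D), minimizing f.

    Implements Eq. (2) with greedy selection: a new position replaces
    the old one only if it is better.
    """
    rng = rng or np.random.default_rng()
    SN, D = X.shape
    for i in range(SN):
        j = rng.integers(D)                                # random dimension j
        k = rng.choice([s for s in range(SN) if s != i])   # partner k != i
        phi = rng.uniform(-1.0, 1.0)                       # phi_ij in [-1, 1]
        v = X[i].copy()
        v[j] = X[i, j] + phi * (X[i, j] - X[k, j])         # Eq. (2)
        if f(v) < f(X[i]):                                 # greedy selection
            X[i] = v
    return X
```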

2.3 Onlooker bees phase

After completion of the employed bee phase, the onlooker bee phase starts. In the onlooker bee phase, all the employed bees share the fitness information (nectar) of the new solutions (food sources) and their position information with the onlooker bees in the hive. Onlooker bees analyze the available information and select a solution with a probability \(prob_i\) related to its fitness. The probability \(prob_i\) may be calculated using the following expression (other formulations are possible, but it must be a function of the fitness):

$$\begin{aligned} prob_i=\frac{fitness_i}{\sum _{i=1}^{SN} fitness_{i}} \end{aligned}$$
(3)

where \(fitness_i\) is the fitness value of solution \(i\). As in the case of the employed bee, the onlooker bee produces a modification of the position in its memory and checks the fitness of the candidate source. If the fitness is higher than that of the previous one, the bee memorizes the new position and forgets the old one.
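Eq. (3) together with roulette-wheel sampling can be sketched as follows; positive fitness values are assumed in this illustration.

```python
import numpy as np

def selection_probabilities(fitness):
    """prob_i = fitness_i / sum(fitness), Eq. (3); positive fitness assumed."""
    fitness = np.asarray(fitness, dtype=float)
    return fitness / fitness.sum()

# An onlooker bee then selects a food source by roulette-wheel sampling:
#   i = np.random.default_rng().choice(len(prob), p=prob)
```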

2.4 Scout bees phase

If the position of a food source is not updated for a predetermined number of cycles, then the food source is assumed to be abandoned and the scout bee phase starts. In this phase, the bee associated with the abandoned food source becomes a scout bee and the food source is replaced by a randomly chosen food source within the search space. In ABC, the predetermined number of cycles is a crucial control parameter called the \(limit\) for abandonment.

Assume that the abandoned source is \(x_i\). The scout bee replaces this food source by a randomly chosen food source which is generated as follows:

$$\begin{aligned}&x_{ij}=x_{min j}+rand[0, 1](x_{max j}-x_{min j}), \nonumber \\&\quad \text{for } j \in \{1, 2,\ldots ,D\} \end{aligned}$$
(4)

where \(x_{min j}\) and \(x_{max j}\) are the bounds of \(x_i\) in the \(j\)th direction.

2.5 Main steps of the ABC algorithm

Based on the above explanation, it is clear that there are three control parameters in the ABC search process: the number of food sources \(SN\) (equal to the number of onlooker or employed bees), the value of \(limit\) and the maximum number of iterations. The pseudo-code of ABC is shown in Algorithm 1 [14]:

figure a1

3 Lévy flight local search

Local search algorithms can be seen as population based stochastic algorithms whose main task is to exploit the available knowledge about a problem. Generally, in local search algorithms, some or all individuals in the population are improved by some local search method. Local search algorithms are basically designed to incorporate a local search strategy between the iterations of a population based search algorithm. In this way, population based global search algorithms are hybridized with local search algorithms, and the hybridized algorithms are known as memetic algorithms. In memetic algorithms, the global search capability of the main algorithm explores the search space, trying to identify the most promising regions, while the local search part scrutinizes the surroundings of some initial solutions, thereby exploiting them.

In this paper, we propose a local search strategy inspired by the Lévy Flight random walk, named Lévy Flight local search (LFLS). In the past, the flight behavior of many animals and insects has been analyzed in various studies which exhibit the important properties of Lévy flights [4, 21, 25, 35]. Further, this flight behavior has been applied to optimization and search algorithms, and the reported results show its importance in the field of solution search algorithms [21, 25, 27, 28]. Recently, Xin-She Yang proposed a new metaheuristic algorithm by combining Lévy flights with the search strategy of the Firefly Algorithm [36].

Lévy Flight is a random walk in which the steps are defined in terms of step-lengths having a certain probability distribution. The random step lengths are drawn from a Lévy distribution, defined in Eq. (5):

$$\begin{aligned} L(s)\sim |s|^{-1-\beta },\quad \text{where } \beta\ (0< \beta \le 2) \text{ is an index and } s \text{ is the step length}. \end{aligned}$$
(5)

In this paper, the Mantegna algorithm [37] for a symmetric Lévy stable distribution is used for generating random step sizes. Here ‘symmetric’ means that the step size may be positive or negative.

In Mantegna’s algorithm, the step length \(s\) can be calculated by

$$\begin{aligned} s=\frac{u}{|v|^{1/\beta }}, \end{aligned}$$
(6)

where \(u\) and \(v\) are drawn from normal distributions, that is,

$$\begin{aligned} u\sim N(0,{\sigma _u}^2),\quad \quad v\sim N(0,{\sigma _v}^2) \end{aligned}$$
(7)

where

$$\begin{aligned} \sigma _u=\left\{ \frac{\Gamma (1+\beta )\sin (\pi \beta /2)}{\beta \Gamma [(1+\beta )/2]\, 2^{(\beta -1)/2}}\right\} ^{1/\beta },\quad \sigma _v=1. \end{aligned}$$
(8)

This distribution (for \(s\)) obeys the expected Lévy distribution for \(|s|\ge |s_0|\), where \(s_0\) is the smallest step length [37]. Here \(\Gamma (.)\) is the Gamma function, calculated as follows:

$$\begin{aligned} \Gamma (1+\beta )=\int \limits _0^\infty t^{\beta }e^{-t}dt. \end{aligned}$$
(9)

In the special case when \(\beta \) is an integer, we have \(\Gamma (1+\beta )=\beta !\).
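A direct transcription of Eqs. (6)–(8) into Python might look as follows; the default \(\beta = 1.5\) here is purely illustrative (see Sect. 6.2 for the setting used in the experiments).

```python
import numpy as np
from math import gamma, sin, pi

def mantegna_step(beta=1.5, rng=None):
    """One symmetric Lévy-stable step length s = u / |v|^(1/beta), Eqs. (6)-(8)."""
    rng = rng or np.random.default_rng()
    sigma_u = ((gamma(1 + beta) * sin(pi * beta / 2))
               / (beta * gamma((1 + beta) / 2) * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u)   # u ~ N(0, sigma_u^2), Eq. (7)
    v = rng.normal(0.0, 1.0)       # v ~ N(0, 1), i.e. sigma_v = 1
    return u / abs(v) ** (1 / beta)
```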

In the proposed strategy, the step sizes used to exploit the search area are generated using the Lévy distribution as follows:

$$\begin{aligned} step\_size(t)=0.001\times s(t)\times SLC, \end{aligned}$$
(10)

here \(t\) is the iteration counter of the local search strategy, \(s(t)\) is calculated using the Lévy distribution as shown in Eq. (6) and \(SLC\) is the social learning component of the global search algorithm.

In Lévy flights, the step sizes are aggressive, that is, they may often generate new solutions outside the domain or on its boundary. Since local search algorithms can be seen as population based stochastic algorithms whose main task is to exploit the available knowledge about a problem, the step sizes play an important role in exploiting the identified region. Therefore, a multiplier of \(0.001\) is used in Eq. (10) to reduce the step size. The solution update equation of the \(i\)th individual, based on the proposed local search strategy, is given in Eq. (11):

$$\begin{aligned} x^{\prime }_{ij}(t+1)=x_{ij}(t)+ step\_size(t)\times U(0,1), \end{aligned}$$
(11)

here \(x_{ij}\) is the individual which is going to modify its position, \(U(0,1)\) is a uniformly distributed random number between 0 and 1 and \(step\_size(t)\times U(0,1)\) is the actual random walk (flight) drawn from the Lévy distribution. The pseudo-code of the proposed LFLS is shown in Algorithm 2, in which \(\epsilon \) determines the termination of the local search.

figure a2
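Since Algorithm 2 is given as a figure, the following is only a hedged reconstruction of LFLS from Eqs. (10) and (11): it assumes greedy acceptance of improved solutions, interprets \(\epsilon \) as an iteration budget (the default of 15 follows Sect. 6.2), and reuses mantegna_step from the previous sketch.

```python
import numpy as np

def levy_flight_local_search(x, f, SLC, eps=15, beta=1.5, rng=None):
    """Hedged reconstruction of LFLS (Algorithm 2) for a minimization problem f.

    x   : solution to refine (1-D array)
    SLC : social learning component supplied by the global search
    eps : interpreted here as the number of local iterations (an assumption)
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float).copy()
    for t in range(eps):
        s = mantegna_step(beta, rng)                  # Lévy step length, Eq. (6)
        step_size = 0.001 * s * SLC                   # Eq. (10)
        x_new = x + step_size * rng.random(x.shape)   # Eq. (11)
        if f(x_new) < f(x):                           # keep only improvements
            x = x_new
    return x
```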

4 Opposition based learning (OBL)

Nature inspired algorithms (NIAs) start the search for the global optimum randomly: all the individuals are randomly initialized in the given search space. Then the individuals update their positions based on self intelligence and social learning and move towards the optimum or required solution. NIAs are iterative and stochastic in nature, and the solution search process terminates when some predefined criterion is satisfied. The performance of such algorithms is measured in terms of computational complexity, which is directly proportional to the number of evaluations of the objective function to be optimized. Researchers are continuously working to improve the performance of NIAs in terms of computational complexity. Rahnamayan et al. [24] proposed opposition based differential evolution (OBDE) to improve the convergence rate of DE, based on the theory of opposition based learning (OBL). The concept of OBL was introduced by Tizhoosh [31]. The same concept can also be applied to ABC to improve its convergence speed. If the randomly initialized solutions are near the global optimum, then the required solution can be found with little computational effort; but if the solutions are very far from the optimum, then tracking the required solution takes a high computational cost or may sometimes even be infeasible. As suggested by Tizhoosh [31] and further by Rahnamayan et al. [24], the computational cost can be reduced by applying the concept of OBL. In OBL, we consider the better solutions as well as their opposite counterpart solutions, combine them, and then apply a greedy approach to find the fittest solutions among them. A solution is judged on the basis of its fitness with respect to the global optimum. Let \(x\) be a solution and \(\tilde{x}\) its opposite solution in the search space. On the basis of probability theory, we can say that there is a 50 % chance that the opposite solution is fitter. The concept of OBL can be defined as follows [31]:

Definition 1

(Opposite Number) Let \(x\in \mathfrak R \) be a real number defined on a certain interval: \(x \in [a,b]\). The opposite number \(\tilde{x}\) is defined as follows:

$$\begin{aligned} \tilde{x} = a + b - x \end{aligned}$$

For higher dimensions, the above definition can be extended as follows [24, 31]:

Definition 2

(Opposite Point) Let \(P(x_1, x_2,\ldots , x_D )\) be a point in a \(D\)-dimensional coordinate system with \(x_1,\ldots , x_D\in \mathfrak R \) and \(x_i \in [a_i ,b_i ]\). The opposite point \(\tilde{P}\) is completely defined by its coordinates \(\tilde{x}_1,\ldots , \tilde{x}_D\) where

$$\begin{aligned} \tilde{x_i} = a_i + b_i - x_i,\quad i = 1,\ldots ,D. \end{aligned}$$

On the basis of Definition 2, an Opposition Based Optimization Method (\(OBOM\)) has been developed [24]. The pseudo code of the \(OBOM\) is shown in Algorithm 3:

figure a3
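Definition 2 and the greedy selection described above can be sketched as follows; since Algorithm 3 is given as a figure, the pool-and-select structure (keep the \(SN\) best of the \(2\,SN\) candidates, minimization assumed) is a reconstruction, not the paper's exact pseudo-code.

```python
import numpy as np

def opposite_point(x, a, b):
    """Opposite point per Definition 2: x~_i = a_i + b_i - x_i."""
    return a + b - x

def obom(X, f, a, b):
    """Hedged reconstruction of OBOM (Algorithm 3), minimization assumed.

    Evaluates the swarm X (SN x D) and its opposite swarm over the fixed
    bounds [a, b], then greedily keeps the SN fittest of the 2*SN candidates.
    """
    pool = np.vstack([X, opposite_point(X, a, b)])
    vals = np.array([f(p) for p in pool])
    return pool[np.argsort(vals)[: X.shape[0]]]
```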

5 Opposition based lévy flight ABC (OBLFABC)

Exploration and exploitation are two important characteristics of population-based optimization algorithms such as GA [10], PSO [16], DE [29], BFO [20] and so on. In these optimization algorithms, exploration refers to the ability to investigate various unknown regions of the solution space to discover the global optimum, while exploitation refers to the ability to apply the knowledge of previous good solutions to find better ones. In practice, exploration and exploitation conflict with each other, and in order to achieve better optimization performance, the two abilities should be well balanced. Dervis Karaboga and Bahriye Akay [14] tested different variants of ABC for global optimization and found that ABC shows poor performance and remains inefficient in exploring the search space. In ABC, a potential solution updates itself using the information provided by a randomly selected potential solution within the current swarm. In this process, a step size is used which is a linear combination of a random number \(\phi _{ij}\in [-1, 1]\), the current solution and a randomly selected solution. The quality of the updated solution highly depends on this step size. If the step size is too large, which may occur when the difference between the current solution and the randomly selected solution is large and the absolute value of \(\phi _{ij}\) is high, then the updated solution may overshoot the true solution; and if the step size is too small, then the convergence rate of ABC may significantly decrease. A proper choice of this step size can balance the exploration and exploitation capability of ABC simultaneously. But, since this step size contains a random component, the balance cannot be tuned manually. Therefore, in this paper, to balance the diversity and convergence ability of ABC, three modifications are proposed:

  1.

    To enhance the exploitation capability of ABC, Lévy Flight Local Search (LFLS) is incorporated into the basic ABC. In this way, the situation of skipping the true solution can be avoided while maintaining the speed of convergence. The LFLS strategy, in the case of large step sizes, can search within the area that is jumped over by the basic ABC.

  2.

    Inspired by \(OBDE\) [24], to balance the diversity and convergence capability of ABC, the \(OBL\) strategy is incorporated into the basic ABC. This modification avoids stagnation of the algorithm and speeds up convergence to the global optimum.

  3.

    In the basic ABC, the food sources are updated, as shown in Eq. (2), based on a random step size. Inspired by the PSO [16] and Gbest-guided ABC (GABC) [38] algorithms, which, in order to improve exploitation, take advantage of the information of the global best solution to guide the search of candidate solutions, the solution search equation described by Eq. (2) is modified as follows [38]:

    $$\begin{aligned} v_{ij}=x_{ij}+\phi _{ij}(x_{ij}-x_{kj})+\psi _{ij}(x_{bestj}-x_{ij}), \end{aligned}$$

    here, \(\psi _{ij}\) is a uniform random number in \([0,C]\), where \(C\) is a nonnegative constant; for a detailed description, refer to [38]. A sketch of this modified update follows the list.
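Since this modified equation replaces Eq. (2) inside the employed and onlooker bee phases, a minimal sketch follows, assuming the same random-dimension convention as Eq. (2); the function name is illustrative, and the default \(C = 1.5\) follows Sect. 6.2.

```python
import numpy as np

def gbest_guided_update(X, i, x_best, C=1.5, rng=None):
    """Modified position update (GABC-style): Eq. (2) plus a gbest-guided term.

    Returns a trial vector v for solution i of the swarm X (SN x D).
    """
    rng = rng or np.random.default_rng()
    SN, D = X.shape
    j = rng.integers(D)                                # random dimension j
    k = rng.choice([s for s in range(SN) if s != i])   # partner k != i
    phi = rng.uniform(-1.0, 1.0)                       # phi_ij in [-1, 1]
    psi = rng.uniform(0.0, C)                          # psi_ij in [0, C]
    v = X[i].copy()
    v[j] = X[i, j] + phi * (X[i, j] - X[k, j]) + psi * (x_best[j] - X[i, j])
    return v
```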

As described in the first proposed modification, a local search inspired by the Lévy Flight random walk is incorporated into the basic ABC to improve its exploitation capability. In this local search strategy, the step size is calculated as shown in Eq. (12):

$$\begin{aligned} step\_size(t)=0.001\times s(t)\times (x_{bestj}(t)-x_{kj}(t)), \end{aligned}$$
(12)

here the symbols have their usual meanings, and \(SLC= (x_{bestj}-x_{kj})\) is the social learning component of the ABC algorithm, in which \(x_{best}\) is the best solution in the current swarm and \(x_k\) is a randomly selected solution within the swarm with \(x_k \ne x_{best}\). The solution update equation of the best individual within the current swarm, based on the proposed local search strategy, is given in Eq. (13):

$$\begin{aligned} x^{\prime }_{bestj}(t+1)=x_{bestj}(t)+ step\_size(t)\times U(0,1). \end{aligned}$$
(13)

In LFLS, only the best particle of the current swarm updates itself in its neighborhood. The pseudo-code of the proposed Lévy Flight search strategy with ABC is shown in Algorithm 4.

figure a4

In Algorithm 4, \(\epsilon \) is the termination criterion of the proposed local search, \(p_r\) is a perturbation rate (a number between 0 and 1) which controls the amount of perturbation in the best solution, \(U(0,1)\) is a uniformly distributed random number between 0 and 1, \(D\) is the dimension of the problem and \(x_{k}\) is a randomly selected solution within the swarm. See Sect. 6.2 for details of these parameter settings.
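Since Algorithm 4 is given as a figure, the following sketch reconstructs it from Eqs. (12) and (13) and the parameter description above; the greedy acceptance step and the per-dimension application of \(p_r\) are assumptions, the defaults follow Sect. 6.2, and mantegna_step from Sect. 3 is reused.

```python
import numpy as np

def lfls_on_best(X, f, best_idx, eps=15, p_r=0.2, beta=1.5, rng=None):
    """Hedged reconstruction of Algorithm 4: exploit around the best solution.

    Each dimension j of the best solution is perturbed with probability p_r,
    using the Lévy step of Eq. (12) with SLC = (x_best_j - x_k_j); the trial
    is accepted greedily (Eq. 13). Minimization of f is assumed.
    """
    rng = rng or np.random.default_rng()
    SN, D = X.shape
    x_best = X[best_idx].copy()
    for t in range(eps):
        k = rng.choice([s for s in range(SN) if s != best_idx])  # x_k != x_best
        trial = x_best.copy()
        for j in range(D):
            if rng.random() < p_r:                          # perturbation rate
                s = mantegna_step(beta, rng)
                step = 0.001 * s * (x_best[j] - X[k, j])    # Eq. (12)
                trial[j] = x_best[j] + step * rng.random()  # Eq. (13)
        if f(trial) < f(x_best):                            # greedy acceptance
            x_best = trial
    X[best_idx] = x_best
    return X
```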

Further, the opposition based learning (OBL) strategy, explained in Sect. 4, is incorporated into ABC to balance the diversity and the convergence capability. The OBL is applied to the current swarm of ABC after the scout bee phase of the solution search process, on the basis of a probability named the jumping rate \((j_r)\) [24]. Unlike the opposition based optimization method \((OBOM)\) explained in Algorithm 3, the opposition based solutions are generated dynamically. In the \(OBOM\), opposition based solutions are generated within the static range (the initial solution search space). There is therefore a chance to jump outside of the already shrunken search space, and the knowledge of the current reduced space (converged swarm) would be lost. Hence, instead of using the predefined search range \(([a_j, b_j])\), the solutions are generated using the current interval in the swarm, which, as the search progresses, becomes increasingly smaller than the corresponding initial range. Therefore, in OBLFABC, the opposition based solutions are generated using the following equation:

$$\begin{aligned} OP_{ij}={MIN_j}^P+{MAX_j}^P-P_{ij}, \end{aligned}$$

here, \([{MIN_j}^P, {MAX_j}^P]\) is the current interval in the swarm.
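A hedged sketch of this dynamic OBL step: the swarm-wide per-dimension min/max stand in for \([{MIN_j}^P, {MAX_j}^P]\), applying the whole step with probability \(j_r\) follows the description above, and minimization with pool-based greedy selection is assumed.

```python
import numpy as np

def dynamic_obl(X, f, j_r=0.1, rng=None):
    """Hedged sketch of the dynamic OBL step of OBLFABC, minimization assumed.

    With probability j_r (jumping rate), opposite solutions are generated in
    the swarm's *current* per-dimension interval [MIN_j, MAX_j] rather than
    the initial bounds; the SN fittest of the combined pool are retained.
    """
    rng = rng or np.random.default_rng()
    if rng.random() >= j_r:
        return X                            # no opposition jump this cycle
    lo, hi = X.min(axis=0), X.max(axis=0)   # current interval in the swarm
    OX = lo + hi - X                        # OP_ij = MIN_j^P + MAX_j^P - P_ij
    pool = np.vstack([X, OX])
    vals = np.array([f(p) for p in pool])
    return pool[np.argsort(vals)[: X.shape[0]]]
```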

Thus, \(LFLS\) and the dynamic \(OBOM\) are incorporated into the basic ABC to speed up the evolutionary (search) process. The proposed algorithm is named Opposition Based Lévy Flight ABC (OBLFABC). The pseudo-code of OBLFABC is shown in Algorithm 5:

figure a5

6 Experimental results and discussion

6.1 Test problems under consideration

In order to analyze the performance of OBLFABC, 14 different un-biased global optimization problems (\(f_1\) to \(f_{14}\)) are selected (listed in Table 1). These are continuous optimization problems with different degrees of complexity and multimodality. Test problems \(f_{1}\) to \(f_{5}\) and \(f_{11}\) to \(f_{14}\) are taken from [2], while test problems \(f_{6}\) to \(f_{10}\) are taken from [30] with the associated offset values. Further, to assess the robustness of the proposed strategy, five well known engineering optimization problems (\(f_{15}\) to \(f_{19}\)), described below, have been solved.

Table 1 Test problems

Compression Spring (\(f_{15}\)): The first engineering optimization problem considered is the compression spring problem [19, 26]. This problem minimizes the weight of a compression spring, subject to constraints of minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. There are three design variables: the wire diameter \(x_1\), the mean coil diameter \(x_2\), and the number of active coils \(x_3\). This is a simplified version of a more difficult problem. The mathematical formulation of this problem is:

$$\begin{aligned} x_1&\in \{1,\ldots ,70\}\quad \text{granularity } 1\\ x_2&\in [0.6, 3] \\ x_3&\in [0.207, 0.5]\quad \text{granularity } 0.001 \end{aligned}$$

and four constraints

$$\begin{aligned} g_1&:= \frac{8C_f F_{max}x_2}{\pi x^3_3} -S \le 0 \\ g_2&:= l_f-l_{max}\le 0 \\ g_3&:= \sigma _p-\sigma _{pm}\le 0 \\ g_4&:= \sigma _w-\frac{F_{max}-F_p}{K} \le 0 \end{aligned}$$

with

$$\begin{aligned}&C_f = 1+0.75\frac{x_3}{x_2 -x_3}+0.615\frac{x_3}{x_2} \\&F_{max} = 1{,}000 \\&S = 189{,}000 \\&l_f = \frac{F_{max}}{K} + 1.05(x_1 +2)x_3 \\&l_{max} = 14 \\&\sigma _p = \frac{F_p}{K} \\&\sigma _{pm} = 6 \\&F_p = 300 \\&K = 11.5 \times 10^6\frac{x_3^4}{8x_1x_2^3} \\&\sigma _w = 1.25 \end{aligned}$$

and the function to be minimized is

$$\begin{aligned} f_{15}(\varvec{x}) = \pi ^2\frac{x_2x_3^2(x_1+2)}{4} \end{aligned}$$

The best known solution is \((7,1.386599591,0.292)\), which gives the fitness value \(f^*= 2.6254214578\). Here, the minimum error is fixed at \(10^{-10}\), i.e. a run is said to be successful if it finds a fitness \(f\) such that \(|f-f^*|\le 10^{-10}\) within the maximum number of function evaluations.
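The formulation above can be transcribed directly; the following sketch returns the \(f_{15}\) objective together with the constraint values (a point is feasible iff all \(g \le 0\)). The function name and return convention are illustrative.

```python
import numpy as np

def compression_spring(x):
    """f_15 with its four constraints; x = (x1, x2, x3).

    Returns (objective, [g1, g2, g3, g4]); feasible iff all g <= 0.
    """
    x1, x2, x3 = x
    F_max, F_p = 1000.0, 300.0
    S, l_max, sigma_pm, sigma_w = 189000.0, 14.0, 6.0, 1.25
    C_f = 1 + 0.75 * x3 / (x2 - x3) + 0.615 * x3 / x2
    K = 11.5e6 * x3**4 / (8 * x1 * x2**3)
    l_f = F_max / K + 1.05 * (x1 + 2) * x3
    sigma_p = F_p / K
    g = [8 * C_f * F_max * x2 / (np.pi * x3**3) - S,   # g1
         l_f - l_max,                                  # g2
         sigma_p - sigma_pm,                           # g3
         sigma_w - (F_max - F_p) / K]                  # g4
    obj = np.pi**2 * x2 * x3**2 * (x1 + 2) / 4
    return obj, g

# The best known solution quoted above, (7, 1.386599591, 0.292),
# gives obj ~ 2.6254214578.
```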

Pressure Vessel design (\(f_{16}\)): The pressure vessel design problem is to minimize the total cost of the material, forming, and welding of a cylindrical vessel [33]. There are four design variables involved: \(x_1\) (\(T_s\), shell thickness), \(x_2\) (\(T_h\), spherical head thickness), \(x_3\) (\(R\), radius of cylindrical shell), and \(x_4\) (\(L\), shell length). The mathematical formulation of this typical constrained optimization problem is as follows:

$$\begin{aligned}&f_{16}(\varvec{x})=0.6224x_1x_3x_4 \!+\! 1.7781x_2x_{3}^{2} \\&\quad \qquad \qquad + 3.1611x_{1}^{2}x_4 + 19.84x_{1}^{2}x_3 \end{aligned}$$

subject to

$$\begin{aligned} g_1(\varvec{x})&= 0.0193x_3-x_1 \le 0 \\ g_2(\varvec{x})&= 0.00954x_3-x_2 \le 0 \\ g_3(\varvec{x})&= 750\times 1728-\pi x_{3}^2\left(x_4+\frac{4}{3}x_3\right) \le 0 \end{aligned}$$

The search boundaries for the variables are

$$\begin{aligned}&1.125 \le x_1 \le 12.5,\\&0.625 \le x_2 \le 12.5,\\&1.0\times {10}^{-8} \le x_3 \le 240 \end{aligned}$$

and

$$\begin{aligned} 1.0\times {10}^{-8} \le x_4 \le 240. \end{aligned}$$

The best known global optimum solution is \(f(1.125, 0.625, 55.8592, 57.7315) = 7197.729\) [33]. For a successful run, the minimum error criterion is fixed at 1.0E\(-\)05, i.e. an algorithm is considered successful if it finds an error less than the acceptable error within the specified maximum number of function evaluations.
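A direct transcription of \(f_{16}\) and its constraints; as above, the return convention (objective plus list of constraint values, feasible iff all \(g \le 0\)) is an illustrative choice.

```python
import math

def pressure_vessel(x):
    """f_16 with its constraints; x = (T_s, T_h, R, L).

    Returns (objective, [g1, g2, g3]); feasible iff all g <= 0.
    """
    x1, x2, x3, x4 = x
    obj = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
           + 3.1611 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [0.0193 * x3 - x1,
         0.00954 * x3 - x2,
         750 * 1728 - math.pi * x3**2 * (x4 + 4.0 / 3.0 * x3)]
    return obj, g
```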

Lennard-Jones (\(f_{17}\)): The function to minimize is a kind of potential energy of a set of \(N\) atoms. The position \(x_i\) of atom \(i\) has three coordinates, and therefore the dimension of the search space is \(3N\). In practice, the coordinates of a point \(x\) are the concatenation of those of the \(x_i\). In short, we can write \(x = (x_1,x_2,\ldots ,x_N)\), and we then have

$$\begin{aligned} f_{17}(\varvec{x})=\sum _{i=1}^{N-1}{\sum _{j=i+1}^{N}{\left(\frac{1}{{\Vert x_i-x_j\Vert }^{2\alpha }}-\frac{1}{{\Vert x_i-x_j\Vert }^{\alpha }}\right)}} \end{aligned}$$

In this study, \(N = 5\), \(\alpha = 6\), and the search space is \([-2, 2]\) [5].
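A direct transcription of \(f_{17}\); the reshaping of the flat \(3N\)-vector into \(N\) rows of three coordinates is an illustrative convention.

```python
import numpy as np

def lennard_jones(x, N=5, alpha=6):
    """f_17: potential energy of N atoms; x is the flat 3N-vector of coordinates."""
    pos = np.asarray(x, dtype=float).reshape(N, 3)
    total = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            d = np.linalg.norm(pos[i] - pos[j])        # ||x_i - x_j||
            total += d ** (-2 * alpha) - d ** (-alpha)
    return total
```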

Parameter Estimation for Frequency-Modulated (FM) sound waves [5] (\(f_{18}\)): Frequency-Modulated (FM) sound wave synthesis has an important role in several modern music systems. The parameter optimization of an FM synthesizer is a six dimensional optimization problem where the vector to be optimized is \(\varvec{x} = \{a_1, w_1, a_2, w_2, a_3, w_3 \}\) of the sound wave given in Eq. (14). The problem is to generate a sound, Eq. (14), similar to the target sound, Eq. (15). This problem is a highly complex multimodal one having strong epistasis, with minimum value \(f(\varvec{x})=0\). This problem has been tackled using genetic algorithms (GAs) in [1], [2]. The expressions for the estimated sound and the target sound waves are given as:

$$\begin{aligned}&y(t)=a_1\sin (w_1t\theta +a_2\sin (w_2t\theta +a_3\sin (w_3t\theta )))\end{aligned}$$
(14)
$$\begin{aligned}&y_{0}(t)=(1.0)\sin ((5.0)t\theta -(1.5)\sin ((4.8)t\theta \nonumber \\&\qquad \qquad +(2.0)\sin ((4.9)t\theta ))) \end{aligned}$$
(15)

respectively, where \(\theta =2\pi /100\) and the parameters are defined in the range \([-6.4, 6.35]\). The fitness function is the summation of squared errors between the estimated wave, Eq. (14), and the target wave, Eq. (15), as follows:

$$\begin{aligned} f_{18}(\varvec{x})= \sum _{t=0}^{100} {(y(t)-y_0(t))^2} \end{aligned}$$

The acceptable error for this problem is 1.0E\(-\)05, i.e. an algorithm is considered successful if it finds an error less than the acceptable error within a given number of generations.
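A direct transcription of \(f_{18}\) from Eqs. (14) and (15); the sampling of \(t = 0,\ldots ,100\) follows the summation limits above.

```python
import numpy as np

def fm_fitness(x):
    """f_18: squared error between the estimated wave (Eq. 14) and the
    target wave (Eq. 15), summed over t = 0, ..., 100."""
    a1, w1, a2, w2, a3, w3 = x
    theta = 2 * np.pi / 100
    t = np.arange(101)
    y = a1 * np.sin(w1 * t * theta
                    + a2 * np.sin(w2 * t * theta
                                  + a3 * np.sin(w3 * t * theta)))       # Eq. (14)
    y0 = 1.0 * np.sin(5.0 * t * theta
                      - 1.5 * np.sin(4.8 * t * theta
                                     + 2.0 * np.sin(4.9 * t * theta)))  # Eq. (15)
    return float(np.sum((y - y0) ** 2))

# fm_fitness([1.0, 5.0, -1.5, 4.8, 2.0, 4.9]) == 0.0 at the global optimum.
```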

Welded beam design optimization problem (\(f_{19}\)): The problem is to design a welded beam for minimum cost, subject to some constraints [17, 23]. The objective is to find the minimum fabrication cost of the welded beam subject to constraints on shear stress \(\tau \), bending stress \(\sigma \), buckling load \(P_c\), end deflection \(\delta \), and a side constraint. There are four design variables: \(x_1, x_2, x_3\) and \(x_4\). The mathematical formulation of the objective function is described as follows:

$$\begin{aligned} f_{19}(\varvec{x}) = 1.10471x_{1}^2x_2 + 0.04811x_3x_4(14.0 + x_2) \end{aligned}$$

Subject to:

$$\begin{aligned}&g_1(\varvec{x}) =\tau (\varvec{x})-\tau _{max} \le 0 \\&g_2(\varvec{x}) = \sigma (\varvec{x})-\sigma _{max} \le 0 \\&g_3(\varvec{x}) = x_1-x_4 \le 0 \\&g_4(\varvec{x}) = \delta (\varvec{x})-\delta _{max} \le 0 \\&g_5(\varvec{x}) = P-P_c(\varvec{x}) \le 0 \\&0.125 \le x_1 \le 5,\quad 0.1\le x_2,x_3 \le 10 \quad \text{and}\quad 0.1\le x_4 \le 5 \end{aligned}$$

where

$$\begin{aligned}&\tau (\varvec{x})= \sqrt{{\tau ^{\prime }}^2-\tau ^{\prime }\tau ^{\prime \prime }\frac{x_2}{R}+{\tau ^{\prime \prime }}^2},\\&\tau ^{\prime }=\frac{P}{\sqrt{2}x_1x_2},\quad \tau ^{\prime \prime }=\frac{MR}{J},\quad M=P\left(L+\frac{x_2}{2}\right),\\&R=\sqrt{\frac{{x_2}^2}{4}+{\left(\frac{x_1+x_3}{2}\right)}^2},\\&J=2/\left(\sqrt{2}x_1x_2\left[\frac{{x_2}^2}{4}+{\left(\frac{x_1+x_3}{2}\right)}^2\right]\right),\\&\sigma (\varvec{x})=\frac{6PL}{x_4{x_3}^2},\quad \delta (\varvec{x})=\frac{6PL^3}{Ex_4{x_3}^2},\\&P_c(\varvec{x})=\frac{4.013Ex_3{x_4}^3}{6L^2}\left(1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),\\&P=6{,}000\,\,lb,\quad L=14\,\,in.,\quad \delta _{max}=0.25 \,\,in.,\\&\sigma _{max}=30{,}000\, psi,\quad \tau _{max}=13{,}600\, psi,\\&E=30\times {10}^6 \, psi, \quad G=12\times {10}^6 \, psi. \end{aligned}$$

The best known solution is \((0.205730, 3.470489, 9.036624, 0.205729)\), which gives the function value 1.724852. The acceptable error for this problem is \(1.0\text{ E}{-}01\).
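A sketch of \(f_{19}\), transcribed directly from the equations above (including the forms of \(\tau \) and \(J\) exactly as stated); the return convention is the same illustrative one used for the other design problems.

```python
import math

def welded_beam(x):
    """f_19 with constraints g1..g5; x = (x1, x2, x3, x4).

    Returns (objective, [g1, ..., g5]); feasible iff all g <= 0.
    """
    x1, x2, x3, x4 = x
    P, L = 6000.0, 14.0
    E, G = 30e6, 12e6
    tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25
    M = P * (L + x2 / 2)
    R = math.sqrt(x2**2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 / (math.sqrt(2) * x1 * x2 * (x2**2 / 4 + ((x1 + x3) / 2) ** 2))
    tau_p = P / (math.sqrt(2) * x1 * x2)          # tau'
    tau_pp = M * R / J                            # tau''
    tau = math.sqrt(tau_p**2 - tau_p * tau_pp * x2 / R + tau_pp**2)
    sigma = 6 * P * L / (x4 * x3**2)
    delta = 6 * P * L**3 / (E * x4 * x3**2)
    P_c = (4.013 * E * x3 * x4**3 / (6 * L**2)
           * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G))))
    obj = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    g = [tau - tau_max, sigma - sigma_max, x1 - x4, delta - delta_max, P - P_c]
    return obj, g
```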

6.2 Experimental setting

To demonstrate the efficiency of \(OBLFABC\), it is compared with \(ABC\) and recent variants of ABC named Gbest-guided ABC \((GABC)\) [38], Best-So-Far ABC (\(BSFABC\)) [3] and Modified ABC (\(MABC\)) [1]. To test \(OBLFABC\), \(ABC\), \(GABC\), \(BSFABC\) and \(MABC\) over the considered problems, the following experimental setting is adopted:

  • Colony size \(NP = 50\) [7, 9],

  • \(\phi _{ij} = rand[-1, 1]\),

  • Number of food sources \(SN = NP/2\),

  • \(limit=1{,}500\) [1, 15],

  • The stopping criterion is either that the maximum number of function evaluations (set to 200{,}000) is reached or that the acceptable error (mentioned in Table 1) has been achieved,

  • The number of simulations/runs = 100,

  • \(C=1.5\) [38],

  • The value of \(\beta \) is set to 2 based on empirical experiments.

  • To set the termination criterion of LFLS, the performance of OBLFABC is measured on the considered test problems for different values of \(\epsilon \), and the results are analyzed in Fig. 1a. It is clear from Fig. 1a that \(\epsilon =15\) gives better results. Therefore, the termination criterion is set to \(\epsilon =15\).

  • In order to investigate the effect of the parameter \(p_r\), described in Algorithm 4, on the performance of \(OBLFABC\), its sensitivity with respect to different values of \(p_r\) in the range \([0.1, 1]\) is examined in Fig. 1b. It can be observed from Fig. 1b that the test problems are very sensitive towards \(p_r\) and that the value 0.2 gives comparatively better results. Therefore, \(p_r=0.2\) is selected for the experiments.

  • In order to find the optimal value of the jumping rate constant \(j_r\) for all the considered test problems, \(OBLFABC\) is executed for different values of \(j_r\) in the range \([0.1, 0.9]\). The sensitivity analysis of \(j_r\) is shown in Fig. 1c. It is clear from Fig. 1c that \(j_r=0.1\) gives comparatively better results. Hence, \(j_r=0.1\) is selected for the experiments.

  • Parameter settings for the algorithms GABC, BSFABC and MABC are kept the same as in their original research papers.

Fig. 1
figure 1

a Effect of LFLS termination criteria \(\epsilon \) on success rate, b effect of parameter \(p_r\) on success rate and c effect of parameter \(j_r\) on success rate

6.3 Results comparison

Numerical results with the experimental setting of Sect. 6.2 are given in Table 2. In Table 2, the standard deviation \((SD)\), mean error \((ME)\), average number of function evaluations \((AFE)\) and success rate \((SR)\) are reported. Table 2 shows that most of the time OBLFABC outperforms the basic ABC, GABC, BSFABC and MABC in terms of reliability, efficiency and accuracy. Some more intensive analyses based on the acceleration rate (AR), performance indices and boxplots have been carried out on the results of ABC and its variants.

Table 2 Comparison of the results of test problems

\(OBLFABC\), \(ABC\), \(GABC\), \(BSFABC\), and \(MABC\) are compared through \(SR\), \(ME\) and \(AFE\) in Table 2. First, \(SR\) is compared for all these algorithms; if it is not possible to distinguish the algorithms on the basis of \(SR\), then the comparison is made on the basis of \(AFE\). \(ME\) is used for comparison if it is not possible on the basis of both \(SR\) and \(AFE\). The outcome of this comparison is summarized in Table 3. In Table 3, ‘+’ indicates that \(OBLFABC\) is better than the considered algorithm and ‘\(-\)’ indicates that it is not better or that the difference is very small. The last row of Table 3 establishes the superiority of \(OBLFABC\) over \(ABC\), \(BSFABC\) and \(MABC\).

Table 3 Summary of Table 2 outcome

Further, we compare the convergence speed of the considered algorithms by measuring the average number of function evaluations (AFE). A smaller AFE means a higher convergence speed. In order to minimize the effect of the stochastic nature of the algorithms, the reported function evaluations for each test problem are averaged over 100 runs. In order to compare convergence speeds, we use the acceleration rate (AR), which is defined as follows, based on the AFEs of the two algorithms \(ALGO\) and OBLFABC:

$$\begin{aligned} AR=\frac{AFE_{ALGO}}{AFE_{OBLFABC}}, \end{aligned}$$
(16)

where \(ALGO\in \) { ABC, GABC, BSFABC, MABC} and \(AR>1\) means OBLFABC is faster. In order to investigate the AR of the proposed algorithm compared to the basic ABC and its variants, the results of Table 2 are analyzed and the value of \(AR\) is calculated using Eq. (16). Table 4 shows a clear comparison between \(OBLFABC\) and ABC, OBLFABC and GABC, OBLFABC and BSFABC, and OBLFABC and MABC in terms of AR. It is clear from Table 4 that the convergence speed of OBLFABC is the fastest among all the considered algorithms.

Table 4 Acceleration rate (AR) of \(OBLFABC\) compared to the basic \(ABC\), \(GABC\), \(BSFABC\) and \(MABC\)

For the purpose of comparison in terms of consolidated performance, boxplot analyses have been carried out for all the considered algorithms. The empirical distribution of data is efficiently represented graphically by the boxplot analysis tool [34]. The boxplots for ABC, OBLFABC, GABC, BSFABC and MABC are shown in Fig. 2. It is clear from this figure that OBLFABC is better than the considered algorithms, as its interquartile range and median are comparatively low.

Fig. 2
figure 2

Boxplot graphs for the average number of function evaluations

Further, to compare the considered algorithms by giving weighted importance to the success rate, the mean error and the average number of function evaluations, performance indices (\(PI\)) are calculated [6]. The values of \(PI\) for ABC, OBLFABC, GABC, BSFABC, and MABC are calculated using the following equations:

$$\begin{aligned} PI=\frac{1}{N_p}\sum _{i=1}^{N_p}(k_1\alpha _{1}^{i}+k_2\alpha _{2}^{i}+k_3\alpha _{3}^{i}) \end{aligned}$$

where \(\alpha _{1}^{i}=\frac{Sr^i}{Tr^i}\); \(\alpha _{2}^{i}={\left\{ \begin{array}{ll} \frac{Mf^i}{Af^i},&\text{if } Sr^i > 0,\\ 0,&\text{if } Sr^i = 0; \end{array}\right.}\) and \(\alpha _{3}^{i}=\frac{Mo^i}{Ao^i}\)

\(i = 1,2,\ldots , N_p\)

  • \(Sr^i =\) Successful simulations/runs of ith problem.

  • \(Tr^i =\) Total simulations of ith problem.

  • \(Mf^i=\) Minimum of average number of function evaluations used for obtaining the required solution of ith problem.

  • \(Af^i=\) Average number of function evaluations used for obtaining the required solution of ith problem.

  • \(Mo^i=\) Minimum of mean error obtained for the \(i\)th problem.

  • \(Ao^i=\) Mean error obtained by an algorithm for the \(i\)th problem.

  • \(N_p=\) Total number of optimization problems evaluated.

The weights assigned to the success rate, the average number of function evaluations and the mean error are represented by \(k_1, k_2\) and \(k_3\) respectively, where \(k_1 + k_2 + k_3=1\) and \(0\le k_1, k_2, k_3 \le 1\). To calculate the \(PI\)s, equal weights are assigned to two variables while the weight of the remaining variable varies from 0 to 1, as given in [6]. The resultant cases are as follows (a sketch of the computation is given after the list):

  1.

    \(k_1=W, k_2=k_3=\frac{1-W}{2}, 0\le W\le 1\);

  2.

    \(k_2=W, k_1=k_3=\frac{1-W}{2}, 0\le W\le 1\);

  3.

    \(k_3=W, k_1=k_2=\frac{1-W}{2}, 0\le W\le 1\)
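A sketch of the \(PI\) computation, assuming the per-problem quantities defined above are supplied as arrays; the vectorized form is an illustrative choice.

```python
import numpy as np

def performance_index(Sr, Tr, Mf, Af, Mo, Ao, k1, k2, k3):
    """PI over Np problems; k1 + k2 + k3 = 1 weight the success rate, the AFE
    and the mean error respectively. Per-problem quantities are length-Np arrays."""
    Sr, Mf, Af, Mo, Ao = (np.asarray(v, dtype=float) for v in (Sr, Mf, Af, Mo, Ao))
    alpha1 = Sr / Tr
    alpha2 = np.where(Sr > 0, Mf / Af, 0.0)   # 0 when no run succeeded
    alpha3 = Mo / Ao
    return float(np.mean(k1 * alpha1 + k2 * alpha2 + k3 * alpha3))

# Case (1): k1 = W, k2 = k3 = (1 - W) / 2, sweeping W over [0, 1]; cases (2)
# and (3) permute the roles of k1, k2 and k3 accordingly.
```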

The graphs corresponding to each of the cases (1), (2) and (3) for ABC, OBLFABC, GABC, BSFABC, and MABC are shown in Fig. 3a–c respectively. In these figures the weights \(k_1, k_2\) and \(k_3\) are represented by the horizontal axis while the \(PI\) is represented by the vertical axis.

Fig. 3
figure 3

Performance index for test problems; a for case (1), b for case (2) and c for case (3)

In case (1), the average number of function evaluations and the mean error are given equal weights. The \(PI\)s of the considered algorithms are superimposed in Fig. 3a for comparison of the performance. It is observed that the \(PI\) of OBLFABC is higher than that of the considered algorithms. In case (2), equal weights are assigned to the success rate and the mean error, and in case (3), equal weights are assigned to the success rate and the average number of function evaluations. It is clear from Fig. 3b, c that the algorithms perform the same as in case (1).

7 Conclusion

In this paper, a new local search strategy based on the Lévy Flight random walk is proposed for finding new solutions around the best solution. In this search strategy, new solutions are generated in the neighborhood of the best solution depending upon a perturbation rate. The proposed local search strategy, along with opposition based learning (OBL), has been employed to improve the convergence of ABC. The proposed Lévy flight search strategy is used to exploit the search space, whereas OBL is used to introduce opposition-based swarm generation to speed up convergence. Further, inspired by PSO and GABC, a modified position update equation is used to generate the solutions in the search process. By embedding these three modifications within ABC, a new variant of ABC is proposed, named OBLFABC. The proposed algorithm is compared with the basic ABC, GABC, BSFABC, and MABC through experiments over test problems, and it is shown that OBLFABC outperforms the considered algorithms in terms of reliability, efficiency and accuracy.