1 Introduction

Many engineering problems can be reduced to optimization problems, so optimization has become an indispensable computational tool in various engineering areas. Traditional optimization techniques include gradient descent, Newton's method, the conjugate gradient method, Lagrange multipliers, and the least squares method. These methods have a solid mathematical foundation and fast convergence, but they often lack generality and depend on the characteristics of the problem. As science and technology develop, optimization problems become increasingly complex: multimodal, discrete, strongly constrained, large scale, and nonlinear. Such complex problems are difficult to solve with traditional mathematical methods.

In the past decades, several new optimization algorithms based on swarm intelligence have been proposed, such as particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], the firefly algorithm (FA) [3], artificial bee colony (ABC) [4], cuckoo search (CS) [5], and the bat algorithm (BA) [6]. These algorithms are modeled on social behaviors observed in nature. For example, PSO simulates fish schooling and bird flocking, and FA is inspired by the flashing-based mating attraction of fireflies. Among these swarm intelligence algorithms, ABC shows some advantages: (1) it has only one control parameter besides the population size; and (2) it has good global search ability. Previous studies show that ABC performs better than PSO and some other population-based algorithms [7]. Owing to these advantages, ABC has been widely applied to many optimization problems.

Although ABC has shown good search performance, it suffers from slow convergence and low solution accuracy. To tackle these issues, this paper proposes a hybrid ABC (called HABC) that employs two improvement strategies. In the first strategy, a new search model is constructed based on the best-of-random mutation scheme [8]. In the standard ABC, a new solution is generated by randomly updating a single dimension of its parent solution, so the difference between a new solution and its parent is very small, which easily results in slow convergence. Therefore, the second strategy updates multiple dimensions when generating offspring. An experimental study is carried out on twelve numerical optimization problems, in which HABC is compared with ABC and two other ABC variants. The results show that HABC achieves better solutions on most test problems.

The rest of the paper is organized as follows. ABC and a review of related work are presented in Sect. 2. The proposed approach is described in Sect. 3. Results and discussions are given in Sect. 4. Finally, conclusions are summarized in Sect. 5.

2 Related works

2.1 Artificial bee colony

ABC is an effective optimization algorithm developed by Karaboga [4]. In ABC, there are three kinds of bees: employed bees, onlooker bees, and scouts. The task of the bees is to find good food sources with abundant nectar. Each food source is regarded as a candidate solution, and the nectar amount of a food source corresponds to the fitness of that solution.

Let \(X_i =\{x_{i,1} ,x_{i,2} ,\ldots ,x_{i,D} \}\) be the ith food source (solution) in the population, where \(i=1,2,\ldots ,N\). Each employed bee chooses a solution \((X_{i})\) and searches its neighborhood to find a new solution (\(V_{i})\). This process can be defined as follows [4].

$$\begin{aligned} v_{i,j} =x_{i,j} +\phi _{i,j} \cdot \left( {x_{i,j} -x_{k,j} } \right) \end{aligned}$$
(1)

where \(j\in [1,D]\) is a randomly selected dimension index, D is the problem dimension, \(X_{k}\) is a randomly chosen solution with \(k\in [1,N]\) and \(k\ne i\), and the weight \(\phi _{i,j}\) is a random value between \(-1.0\) and \(1.0\).
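
As an illustration, the following minimal Python sketch implements the employed bee search of Eq. (1). It assumes the population is stored as an \(N\times D\) NumPy array X; the function name and the use of NumPy's random Generator are illustrative choices rather than part of the original description.

```python
import numpy as np

def employed_bee_search(X, i, rng):
    """Generate a candidate V_i from X_i by perturbing one randomly chosen
    dimension toward/away from a randomly chosen solution X_k (Eq. (1))."""
    N, D = X.shape
    j = rng.integers(D)                              # random dimension index
    k = rng.choice([m for m in range(N) if m != i])  # random solution, k != i
    phi = rng.uniform(-1.0, 1.0)                     # weight in [-1, 1]
    V = X[i].copy()
    V[j] = X[i, j] + phi * (X[i, j] - X[k, j])
    return V
```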

The employed bees pass their search information to the onlooker bees, which select some good food sources and perform a second round of search. This helps to find better solutions. The selection is based on the fitness values of the solutions. In [4], roulette-wheel selection is used, and the selection probability is calculated as follows.

$$\begin{aligned} p_i =\frac{fit_i }{\sum _{n=1}^N {fit_n } } \end{aligned}$$
(2)

where \(fit_i\) is the fitness value of \(X_{i}\).
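
The probabilities of Eq. (2) and the roulette selection can be sketched as follows. The fitness mapping from raw objective values is not defined by Eq. (2); a common choice in ABC implementations is \(fit_i=1/(1+f_i)\) for minimization, but that is an assumption of this sketch, not part of the equation.

```python
import numpy as np

def selection_probabilities(fit):
    """Eq. (2): probability of each food source, proportional to its fitness."""
    fit = np.asarray(fit, dtype=float)
    return fit / fit.sum()

def roulette_pick(p, rng):
    """Pick one food-source index with probability p_i (roulette selection)."""
    return int(rng.choice(len(p), p=p))
```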

If the fitness value of a solution is not improved within limit generations, where limit is a predefined value, a scout generates a new solution to replace the original one [4].

$$\begin{aligned} x_{i,j} =low_j +rand_j \cdot (up_j -low_j ) \end{aligned}$$
(3)

where \(\left[ {low_j ,up_j } \right] \) is the boundary constraint of the jth dimension, and \({\textit{rand}}_j\) is a random value between 0 and 1.0.
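
A minimal sketch of the scout reinitialization in Eq. (3), assuming the bounds are given as length-D vectors low and up:

```python
import numpy as np

def scout_reinitialize(low, up, rng):
    """Eq. (3): replace an abandoned food source with a random point,
    generated dimension-wise within [low_j, up_j]."""
    low = np.asarray(low, dtype=float)
    up = np.asarray(up, dtype=float)
    return low + rng.uniform(0.0, 1.0, size=low.shape) * (up - low)
```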

2.2 Literature review

In recent years, research on ABC has attracted much attention, and different versions of ABC have been proposed to solve benchmark or practical problems. In the following, a brief literature review of these works is presented.

Zhu and Kwong [9] designed a gbest-guided ABC (GABC), which shares the search experience of the global best solution (gbest) with other solutions. Simulation results show that GABC improves the convergence rate on several benchmark functions. Akay and Karaboga [10] presented a modified ABC that introduces two strategies; a new parameter MR is used to control the number of updated dimensions. The study [10] pointed out that changing only one dimension of the parent solution in the standard ABC can slow down the convergence rate, and that the algorithm can be accelerated by adjusting MR. In [11], Gao and Liu introduced another modified ABC (MABC), which uses a new search equation based on a DE mutation scheme. Since different search models may result in different search abilities, some researchers have used multiple search models in ABC. Wang et al. [12] presented a multi-strategy ensemble ABC (MEABC), which constructs a strategy pool consisting of three search models. Kiran et al. [13] designed a variable search strategy for ABC (ABCVSS). Gao et al. [14] developed an ABC with three search strategies (MuABC), which also uses a strategy pool.

In [15], Zhou et al. combined a Gaussian distribution model with ABC to improve search performance. In [16], Zhou et al. proposed a neighborhood search operator to balance exploration and exploitation. Cui et al. [17] developed a ranking-based adaptive ABC (ARABC), in which a highly ranked food source is chosen with high probability; results show that ARABC outperforms many state-of-the-art algorithms. Liang et al. [18] used adaptive differential evolution (DE) operators to enhance the performance of ABC. In [19], Wang et al. used an external archive to accelerate the convergence of ABC: during the search, some good solutions are stored in the archive, and the search model is modified by introducing these solutions to guide the search. Similar to [19], Li and Yang [20] proposed an ABC with memory (ABCM), which memorizes previous successful search information to guide the behavior of the bees; simulation results show that ABCM is superior to ABC and quick ABC (qABC) [21]. Sharma and Pant [22] introduced a shuffled ABC for constrained optimization problems, in which ABC is combined with the shuffled frog-leaping algorithm.

3 Proposed approach

As mentioned above, the standard ABC suffers from a slow convergence rate and low solution accuracy. To address these issues, a hybrid ABC (HABC) is proposed as follows.

The search equation used in ABC is similar to the mutation scheme of DE. Therefore, some researchers have modified DE mutation schemes to construct new search equations for ABC. In [11], two new models, ABC/best/1 and ABC/best/2, were constructed based on DE/best/1 and DE/best/2, respectively. In [17], Cui et al. designed another search equation inspired by DE/rand/1. In this paper, a new search model based on the best-of-random mutation scheme is proposed. In [8], Lin et al. presented a novel mutation-base strategy, called best-of-random (BoR), for DE, in which the best of several randomly selected individuals is chosen as the mutation base. For example, let \(X_{r1}\), \(X_{r2}\), and \(X_{r3}\) be three randomly selected individuals; the best of the three is chosen as the mutation base [8].

$$\begin{aligned} v_{i,j} =x_{rb,j} +F\cdot \left( {x_{r1,j} -x_{r2,j} } \right) \end{aligned}$$
(4)

where \(X_{r1}\), \(X_{r2}\), and \(X_{rb}\) are three mutually distinct solutions selected randomly from the population (\(i\ne r1 \ne r2 \ne rb\)), and \(X_{rb}\) is the best of the three. The parameter \(F\in [0,1]\) is the scale factor. Based on Eqs. (1) and (4), a new search model is proposed as follows.

$$\begin{aligned} v_{i,j} =x_{rb,j} +\phi _{i,j} \cdot \left( {x_{r1,j} -x_{r2,j} } \right) \end{aligned}$$
(5)

where \(j\in [1,D]\) is a random dimension index, and \(\phi _{i,j}\) is a random value within \([-1,1]\). First, we randomly select three solutions \(X_{r1}\), \(X_{r2}\), and \(X_{rb}\) from the population. Second, we compare \(X_{rb}\) with \(X_{r1}\) and \(X_{r2}\) in turn. If \(X_{r1}\) is better than \(X_{rb}\), we swap \(X_{rb}\) and \(X_{r1}\), so that \(X_{rb}\) is better than \(X_{r1}\). Similarly, we swap \(X_{rb}\) and \(X_{r2}\) when \(X_{r2}\) is better than \(X_{rb}\). After these two comparisons, \(X_{rb}\) is the best of the three solutions. The base vector \(X_{rb}\) acts as the local search center, and the weighted vector difference determines the step size around this center. Therefore, new solutions lie in the neighborhood of \(X_{rb}\).
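
The following Python sketch shows one way to implement the best-of-random search model of Eq. (5), including the two swap comparisons described above. It assumes minimization, with the objective values of the current population stored in f_values; the names are illustrative.

```python
import numpy as np

def best_of_random_search(X, f_values, i, rng):
    """Eq. (5): perturb one random dimension around the best of three
    distinct, randomly selected solutions (the base X_rb)."""
    N, D = X.shape
    r1, r2, rb = rng.choice([m for m in range(N) if m != i], size=3, replace=False)
    # two comparisons so that rb indexes the best (smallest objective) of the three
    if f_values[r1] < f_values[rb]:
        r1, rb = rb, r1
    if f_values[r2] < f_values[rb]:
        r2, rb = rb, r2
    j = rng.integers(D)
    phi = rng.uniform(-1.0, 1.0)
    V = X[i].copy()
    V[j] = X[rb, j] + phi * (X[r1, j] - X[r2, j])
    return V
```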

The study [10] showed that changing only one dimension of the parent solution in ABC may result in slow convergence. To tackle this problem, Akay and Karaboga [10] introduced a parameter MR to control how many dimensions are updated. In this paper, this method is used as the second strategy to enhance the performance of ABC. For each food source (solution), we check every dimension \(j=1,2,\ldots ,D\); if \(\textit{rand}_{j}<\textit{MR}\), the component \(v_{i,j}\) is updated according to Eq. (5). The new search equation is therefore rewritten as follows.

$$\begin{aligned} v_{i,j} =\left\{ {\begin{array}{ll} x_{rb,j} +\phi _{i,j} \cdot \left( {x_{r1,j} -x_{r2,j} } \right) , &{}\quad \hbox {if } \textit{rand}_j < \textit{MR} \\ x_{i,j} , &{}\quad \hbox {otherwise} \\ \end{array}} \right. \end{aligned}$$
(6)

where MR is a predefined probability parameter and \({{\textit{rand}}}_{j}\) is a random value drawn uniformly from [0, 1] for the jth dimension.
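
Combining both strategies, a sketch of the Eq. (6) candidate generation is given below: every dimension is updated with probability MR using the best-of-random model, and the remaining dimensions keep the parent's values. As before, minimization is assumed and the helper name is illustrative.

```python
import numpy as np

def habc_candidate(X, f_values, i, MR, rng):
    """Eq. (6): multi-dimensional best-of-random update controlled by MR."""
    N, D = X.shape
    r1, r2, rb = rng.choice([m for m in range(N) if m != i], size=3, replace=False)
    if f_values[r1] < f_values[rb]:   # make rb the best (smallest f) of the three
        r1, rb = rb, r1
    if f_values[r2] < f_values[rb]:
        r2, rb = rb, r2
    V = X[i].copy()                   # unchanged dimensions keep x_{i,j}
    for j in range(D):
        if rng.uniform() < MR:
            phi = rng.uniform(-1.0, 1.0)
            V[j] = X[rb, j] + phi * (X[r1, j] - X[r2, j])
    return V
```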

The steps of the proposed HABC are given as follows.

Step 1. Randomly initialize N solutions and compute their fitness values. Set \(t=0\).

Step 2. For each solution \(X_{i}\), a new solution \(V_{i}\) is generated by Eq. (6). The better of \(X_{i}\) and \(V_{i}\) is kept as the new \(X_{i}\): if \(f(V_{i})<f(X_{i})\), then \({{\textit{trial}}}_{i}=0\); otherwise \({{\textit{trial}}}_{i}={{\textit{trial}}}_{i} +1\) (this paper only considers minimization problems).

Step 3. Calculate the selection probability \(p_{i}\) according to Eq. (2). If \(X_{i}\) is selected, a new solution \(V_{i}\) is created by Eq. (6). The better of \(X_{i}\) and \(V_{i}\) is kept as the new \(X_{i}\): if \(f(V_{i})<f(X_{i})\), then \({{\textit{trial}}}_{i}=0\); otherwise \({{\textit{trial}}}_{i}={{\textit{trial}}}_{i} +1\).

Step 4. If \(\max _i \{{\textit{trial}}_i \}>{\textit{limit}}\), the corresponding solution \(X_{i}\) is reinitialized by Eq. (3).

Step 5. Set \(t=t+1\). If \(t<{{\textit{MaxG}}}\), go to Step 2; otherwise, stop the algorithm.
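
Putting Steps 1-5 together, a minimal Python sketch of the HABC main loop is given below. It reuses the habc_candidate function from the Eq. (6) sketch above, assumes minimization, and uses the common fitness mapping \(1/(1+f)\) for the onlooker probabilities of Eq. (2); that mapping and the helper names are assumptions made for illustration, not details fixed by the paper.

```python
import numpy as np

def habc(objective, low, up, N=50, limit=100, max_gen=1500, MR=0.3, seed=None):
    """HABC main loop (Steps 1-5); habc_candidate() is the Eq. (6) sketch above."""
    rng = np.random.default_rng(seed)
    low, up = np.asarray(low, float), np.asarray(up, float)
    D = low.size
    X = low + rng.uniform(size=(N, D)) * (up - low)     # Step 1: random initialization
    f = np.array([objective(x) for x in X])
    trial = np.zeros(N, dtype=int)

    def greedy(i, V):
        """Keep the better of X_i and V_i; reset or increment trial_i."""
        fv = objective(V)
        if fv < f[i]:
            X[i], f[i] = V, fv
            trial[i] = 0
        else:
            trial[i] += 1

    for t in range(max_gen):
        for i in range(N):                              # Step 2: employed bee phase
            greedy(i, habc_candidate(X, f, i, MR, rng))
        fit = 1.0 / (1.0 + f)                           # assumed fitness mapping
        p = fit / fit.sum()                             # Step 3: onlooker bee phase
        for _ in range(N):
            i = int(rng.choice(N, p=p))
            greedy(i, habc_candidate(X, f, i, MR, rng))
        i = int(np.argmax(trial))                       # Step 4: scout phase
        if trial[i] > limit:
            X[i] = low + rng.uniform(size=D) * (up - low)
            f[i] = objective(X[i])
            trial[i] = 0
    best = int(np.argmin(f))                            # Step 5 terminates the loop
    return X[best], f[best]
```

For example, habc(lambda x: float(np.sum(x**2)), [-100.0]*30, [100.0]*30) would minimize a 30-dimensional sphere-like function under the default settings used in Sect. 4.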

Table 1 Twelve test problems
Table 2 Results of HABC with different MR

4 Experimental study

4.1 Test problems

The experiments use twelve numerical optimization problems [23, 24]. These test problems are described in Table 1, where D is the problem dimension and Opt is the global optimum. All problems are to be minimized.

4.2 Parameter analysis of MR

In the standard ABC, only one dimension is updated during the search, which may result in slow convergence. In HABC, the parameter MR controls the number of updated dimensions. Therefore, the value of MR may seriously affect the search performance of HABC. To investigate the effect of MR, different values are evaluated. In the following experiment, MR is set to 0.1, 0.2, 0.3, 0.5, and 1.0, respectively.

In HABC, the population size (N) and the maximum number of generations (MaxG) are set to 50 and 1500, respectively, and limit is equal to 100. For each test problem, HABC with each MR value is run 30 times. Results of HABC with different MR are given in Table 2, where “Mean” represents the mean function value. The performance of HABC is not affected by MR on problems \(f_{6}\), \(f_{11}\), and \(f_{12}\). For \(f_{1}\), \(f_{2}\), and \(f_{4}\), \({{\textit{MR}}}=0.3\) is the best choice. A larger value of MR is more suitable for problems \(f_{3}\) and \(f_{5}\). \({{\textit{MR}}}=0.2\) performs best on \(f_{7}\). \({\textit{MR}}=0.1\) and \({{\textit{MR}}}=0.2\) can find the global optimum on \(f_{9}\). For \(f_{8}\), values of MR between 0.1 and 0.3 are good settings. These results show that the performance of HABC is affected by MR on most test problems, so a fixed MR may not be a good choice. Overall, \({{\textit{MR}}}=0.3\) is a balanced setting.
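
For completeness, a sweep over the MR values under these settings could be scripted as follows, reusing the habc() sketch from Sect. 3; the sphere-like function below is a hypothetical stand-in, and the statistics reported in Table 2 come from the paper's own experiments, not from this snippet.

```python
import numpy as np

def sphere(x):
    # hypothetical stand-in test problem for the sweep
    return float(np.sum(x ** 2))

for MR in (0.1, 0.2, 0.3, 0.5, 1.0):
    results = [habc(sphere, [-100.0] * 30, [100.0] * 30,
                    N=50, limit=100, max_gen=1500, MR=MR, seed=run)[1]
               for run in range(30)]                   # 30 independent runs
    print(f"MR={MR}: mean={np.mean(results):.3e}")
```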

To further compare the MR settings, Figs. 1, 2 and 3 present the convergence characteristics of HABC with different values of MR. In Figs. 1 and 2, all MR values help HABC converge quickly, and \({{\textit{MR}}}=0.3\) converges faster than the other values, with \({{\textit{MR}}}=0.2\) and \({{\textit{MR}}}=0.5\) taking second and third place, respectively. In Fig. 3, \({{\textit{MR}}}=0.3\) shows the fastest convergence, while \({{\textit{MR}}}=0.1\) is the worst.

Fig. 1 The convergence characteristics of HABC with different values of MR on problem \(f_{1}\)

Fig. 2 The convergence characteristics of HABC with different values of MR on problem \(f_{2}\)

Fig. 3 The convergence characteristics of HABC with different values of MR on problem \(f_{4}\)

Table 3 Results of ABC, GABC, MABC, and HABC
Fig. 4 The convergence curves of ABC, GABC, MABC, and HABC on problem \(f_{1}\)

4.3 Comparison of HABC with other ABC algorithms

To evaluate the performance of HABC, it is compared with ABC, GABC [9], and MABC [11] on the twelve test problems. For ABC, GABC, MABC, and HABC, the parameters N, limit, and MaxG are set to 50, 100, and 1500, respectively. The parameter p in MABC is equal to 0.7 [11], and the parameter C in GABC is set to 1.5 [9]. The MR used in HABC is set to 0.3.

Computational results of HABC, ABC, GABC, and MABC are presented in Table 3, where “Mean” indicates the mean function value. From the results, all four ABC algorithms converge to the global optimum on \(f_{6}\). HABC is superior to ABC on eight problems, while ABC outperforms HABC on three problems. GABC, MABC, and HABC achieve the same result on \(f_{8}\). HABC obtains better solutions than GABC on seven problems, but is worse than GABC on three problems. MABC is better than HABC on three problems, while HABC achieves better solutions on five problems. For problems \(f_{11}\) and \(f_{12}\), MABC and HABC obtain the same results.

Fig. 5 The convergence curves of ABC, GABC, MABC, and HABC on problem \(f_{2}\)

Fig. 6 The convergence curves of ABC, GABC, MABC, and HABC on problem \(f_{4}\)

Fig. 7 The convergence curves of ABC, GABC, MABC, and HABC on problem \(f_{10}\)

Figures 4, 5, 6, 7, 8 and 9 present the convergence curves of ABC, GABC, MABC, and HABC on six selected problems. HABC converges much faster than ABC, GABC, and MABC. For \(f_{11}\) and \(f_{12}\), GABC and ABC are faster than HABC in the initial search stage, but HABC shows a faster convergence rate in the middle and final stages. These convergence curves confirm that the designed strategies can significantly improve the convergence rate of ABC.

Fig. 8 The convergence curves of ABC, GABC, MABC, and HABC on problem \(f_{11}\)

Fig. 9 The convergence curves of ABC, GABC, MABC, and HABC on problem \(f_{12}\)

5 Conclusions

To address the issues of low solution accuracy and slow convergence, a hybrid ABC (HABC) is presented. In HABC, a modified search model is designed based on the best-of-random DE mutation scheme. Moreover, a parameter MR is introduced to control the number of updated dimensions. To verify the performance of HABC, twelve test problems are used in the experiments, and HABC is compared with ABC, GABC, and MABC. Results show that HABC is superior to the other three ABC algorithms on the majority of test problems.

The parameter MR strongly affects the performance of HABC on most test problems, which demonstrates that a fixed value of MR may not be a good setting for all problems. In future work, we will investigate dynamic strategies for adjusting MR. It is hoped that a dynamic MR can help HABC solve different kinds of problems.