Abstract
Artificial bee colony (ABC) is an effective optimization algorithm that has been used in various practical applications. However, the standard ABC suffers from low solution accuracy and a slow convergence rate. To address these issues, a hybrid ABC (called HABC) is proposed in this paper. In HABC, two improved strategies are utilized. First, a new search model is designed based on the best-of-random mutation scheme. Second, new solutions are generated by updating multiple dimensions. To verify the performance of HABC, twelve numerical optimization problems are tested in the experiments. Results of HABC are compared with the standard ABC and two other improved ABC versions. The comparisons show that our approach can effectively improve the optimization performance.
1 Introduction
Many engineering problems can be reduced to optimization problems. Therefore, optimization has become an indispensable computational tool in various engineering areas. Traditional optimization techniques include gradient descent, Newton's method, conjugate gradient, Lagrange multipliers, the least squares method, and so on. These methods have a complete mathematical theory and fast convergence, but they often lack generality and depend on the characteristics of the problem. With the development of science and technology, optimization problems have become more and more complex: multimodal, discrete, strongly constrained, large scale, and nonlinear. Such complex problems are difficult to solve by traditional mathematical methods.
In the past decades, new optimization algorithms based on swarm intelligence have been proposed, such as particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], the firefly algorithm (FA) [3], artificial bee colony (ABC) [4], cuckoo search (CS) [5], and the bat algorithm (BA) [6]. These algorithms are modeled on social behaviors observed in nature. For example, PSO simulates fish schooling and bird flocking, while FA is inspired by the mating attraction of fireflies. Among these swarm intelligence algorithms, ABC shows some advantages: (1) it has only one control parameter besides the population size; and (2) it has good global search ability. Previous studies show that ABC performs better than PSO and some other population-based algorithms [7]. Due to these advantages, ABC has been widely applied to many optimization problems.
Although ABC has shown good search performance, it has some shortcomings, such as slow convergence and low solution accuracy. To tackle these issues, this paper proposes a hybrid ABC (called HABC), which employs two improved strategies. In the first strategy, a new search model is constructed based on the best-of-random mutation scheme [8]. In the standard ABC, a new solution is generated by randomly updating a single dimension of its parent solution, so the difference between a new solution and its parent is very small. This easily results in slow convergence. The second strategy therefore updates multiple dimensions to generate offspring. An experimental study is carried out on twelve numerical optimization problems, and HABC is compared with ABC and two other ABC variants. Results show that HABC achieves better solutions on most test problems.
The rest of the paper is organized as follows. ABC and its literature review are presented in Sect. 2. The proposed approach is described in Sect. 3. Results and discussions are given in Sect. 4. Finally, we summarize the conclusions in Sect. 5.
2 Related works
2.1 Artificial bee colony
ABC is an effective optimization algorithm developed by Karaboga [4]. In ABC, there are three different kinds of bees, which are called employed bees, onlooker bees, and scouts, respectively. The task of bees is to find good food sources with more nectar. Therefore, each food source is regarded as a solution. The nectar of a food source determines the fitness of the corresponding solution.
Let \(X_i =\{x_{i,1} ,x_{i,2} ,\ldots ,x_{i,D} \}\) be the ith food source (solution) in the population, where \(i=1,2,\ldots ,N\). Each employed bee chooses a solution \((X_{i})\) and searches its neighborhood to find a new solution (\(V_{i})\). This process can be defined as follows [4].

$$ v_{i,j} = x_{i,j} + \phi_{i,j} \left( x_{i,j} - x_{k,j} \right) \quad (1) $$

where \(j\in [1,D]\) is a randomly selected dimension index, and D is the problem size. \(X_{k}\) is a randomly chosen solution with \(k\in [1,N]\) and \(k\ne i\). The weight \(\phi _{i,j}\) is randomly generated between −1.0 and 1.0.
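As an illustration, the employed bee phase of Eq. (1) can be sketched in Python as follows; the function and variable names are illustrative, and each solution is assumed to be a list of floats.

```python
import random

def employed_bee_search(X, i):
    """Sketch of Eq. (1): perturb one randomly chosen dimension of the
    parent X_i using a randomly chosen neighbour X_k (k != i)."""
    N, D = len(X), len(X[0])
    j = random.randrange(D)                              # random dimension index
    k = random.choice([m for m in range(N) if m != i])   # random solution, k != i
    phi = random.uniform(-1.0, 1.0)                      # random weight in [-1, 1]
    V = list(X[i])                                       # copy the parent
    V[j] = X[i][j] + phi * (X[i][j] - X[k][j])           # update a single dimension
    return V
```

Note that only one dimension of \(V_{i}\) can differ from the parent, which is exactly the behavior the second strategy of HABC later relaxes.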
The employed bees pass their search information to the onlooker bees, which select some good food sources and conduct a second search. This helps to find better solutions. The selection is decided by the fitness values of the solutions. In [4], roulette selection is used and the selection probability is calculated as follows.

$$ p_i = \frac{fit_i}{\sum_{n=1}^{N} fit_n} \quad (2) $$

where \(fit_i\) is the fitness value of \(X_{i}\).
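A minimal sketch of the roulette selection of Eq. (2), assuming non-negative fitness values; `roulette_select` is an illustrative helper, not part of the original description.

```python
import random

def selection_probabilities(fitness):
    """Eq. (2): p_i = fit_i / sum of all fitness values."""
    total = sum(fitness)
    return [f / total for f in fitness]

def roulette_select(fitness):
    """Return an index i chosen with probability p_i."""
    r = random.uniform(0.0, sum(fitness))
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1   # guard against floating-point round-off
```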
If the fitness value of a solution cannot be improved within limit generations, where limit is a predefined value, a scout generates a new solution to replace the original one [4].

$$ x_{i,j} = low_j + {\textit{rand}}_j \left( up_j - low_j \right) \quad (3) $$

where \(\left[ {low_j ,up_j } \right] \) is the search range of the jth dimension, and \({\textit{rand}}_j\) is a random value between 0 and 1.0.
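The scout phase of Eq. (3) amounts to a uniform re-initialization within the search bounds, as in the following sketch; the bounds are assumed to be given as per-dimension lists.

```python
import random

def scout_reinitialize(low, up):
    """Eq. (3): generate a fresh random solution inside [low_j, up_j]."""
    return [low[j] + random.random() * (up[j] - low[j]) for j in range(len(low))]
```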
2.2 Literature review
In recent years, the research of ABC has attracted much attention. Different versions of ABC have been proposed to solve benchmark or practical problems. In the following, a brief literature review of these works is presented.
Zhu and Kwong [9] designed a gbest (global best solution) guided ABC (called GABC), which shares the search experience of the gbest with other solutions. Simulation results show that GABC can improve the convergence rate on several benchmark functions. Akay and Karaboga [10] presented a modified ABC, which introduced two strategies. A new parameter MR is used to control the number of updated dimensions. The literature [10] pointed out that changing only one dimension of parent solutions in the standard ABC can slow down the convergence rate; by adjusting MR, the ABC algorithm can be accelerated. In [11], Gao and Liu introduced another modified ABC (called MABC), which used a new search equation based on a DE mutation scheme. It is known that different search models may result in different search abilities, so some researchers have used multiple search models in ABC. Wang et al. [12] presented a multi-strategy ensemble ABC (called MEABC), which constructed a strategy pool consisting of three search models. Kiran et al. [13] designed a variable search strategy for ABC (ABCVSS). Gao et al. [14] developed a novel ABC with three search strategies (MuABC), which also used a strategy pool.
In [15], Zhou et al. combined a Gaussian distribution model with ABC to improve the search performance. In [16], Zhou et al. proposed a neighborhood search operator to balance explorative and exploitative abilities. Cui et al. [17] developed a ranking-based adaptive ABC (called ARABC), in which a food source with a high rank is chosen with a high probability. Results show that ARABC outperforms many state-of-the-art algorithms. Liang et al. [18] used adaptive differential evolution (DE) operators to enhance the performance of ABC. In [19], Wang et al. used an external archive to accelerate the convergence of ABC: during the search, some good solutions are stored in an external archive, and the search model is modified by introducing these solutions to guide the search. Similar to [19], Li and Yang [20] proposed a new ABC with memory (called ABCM), which memorizes previous successful search information to guide the behavior of the bees. Simulation results show that ABCM is superior to ABC and quick ABC (qABC) [21]. Sharma and Pant [22] introduced a shuffled ABC to solve constrained optimization problems, in which ABC is combined with the shuffled frog-leaping algorithm.
3 Proposed approach
As mentioned before, the standard ABC suffers from slow convergence rate and low accuracy of solutions. To address these issues, a hybrid ABC (HABC) is proposed as follows.
The search equation used in ABC is similar to the mutation scheme of DE. Hence, some researchers have modified DE mutation schemes to construct new search equations for ABC. In [11], two new models, ABC/best/1 and ABC/best/2, were constructed based on DE/best/1 and DE/best/2, respectively. In [17], Cui et al. designed another search equation inspired by DE/rand/1. In this paper, a new search model based on the best-of-random mutation scheme is proposed. In [8], Lin et al. presented a novel mutation base strategy for DE, called best-of-random (BoR), in which the best one among several randomly selected individuals is chosen as the mutation base. For example, let \(X_{r1}\), \(X_{r2}\), and \(X_{r3}\) be three randomly selected individuals; then the best one among them is chosen as the mutation base [8].

$$ V_i = X_{rb} + F \left( X_{r1} - X_{r2} \right) \quad (4) $$

where \(X_{r1}\), \(X_{r2}\), and \(X_{rb}\) are three solutions randomly selected from the population, \(i\ne r1 \ne r2 \ne rb\), and \(X_{rb}\) is the best one among the three. The parameter \(F\in [0,1]\) is called the scale factor. Based on Eqs. (1) and (4), a new search model is proposed as follows.

$$ v_{i,j} = x_{rb,j} + \phi_{i,j} \left( x_{r1,j} - x_{r2,j} \right) \quad (5) $$
where \(j\in [1,D]\) is a random dimension index, and \(\phi _{i,j}\) is a random value within [−1, 1]. First, we randomly select three solutions \(X_{r1}\), \(X_{r2}\), and \(X_{rb}\) from the population. Second, we compare \(X_{rb}\) with \(X_{r1}\) and \(X_{r2}\) in turn. If \(X_{r1}\) is better than \(X_{rb}\), we swap \(X_{rb}\) with \(X_{r1}\), so that \(X_{rb}\) is better than \(X_{r1}\). Similarly, we swap \(X_{rb}\) with \(X_{r2}\) when \(X_{r2}\) is better than \(X_{rb}\). After these two comparisons, \(X_{rb}\) is the best one among \(X_{r1}\), \(X_{r2}\), and \(X_{rb}\). The base vector \(X_{rb}\) acts as the local search center, and the weighted vector difference determines the step size around this center. Therefore, new solutions lie in the neighborhood of \(X_{rb}\).
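The two-swap base selection and the single-dimension update of Eq. (5) can be sketched as follows; `f` denotes the objective function (minimization is assumed), and the non-updated dimensions are copied from the parent \(X_{i}\), consistent with the MR rule introduced below.

```python
import random

def best_of_random_search(X, f, i):
    """Sketch of Eq. (5): choose three distinct random solutions,
    reorder them so X_rb is the best (smallest f), then update one
    dimension of the parent using X_rb as the base vector."""
    N, D = len(X), len(X[0])
    r1, r2, rb = random.sample([m for m in range(N) if m != i], 3)
    if f(X[r1]) < f(X[rb]):      # first comparison/swap
        r1, rb = rb, r1
    if f(X[r2]) < f(X[rb]):      # second comparison/swap
        r2, rb = rb, r2
    j = random.randrange(D)
    phi = random.uniform(-1.0, 1.0)
    V = list(X[i])
    V[j] = X[rb][j] + phi * (X[r1][j] - X[r2][j])
    return V
```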
The literature [10] illustrated that changing only one dimension of a parent solution in ABC may result in slow convergence. To tackle this problem, Akay and Karaboga [10] introduced a parameter MR to control the frequency of dimension updates. In this paper, this method is used as the second strategy to enhance the performance of ABC. For each food source (solution), we check all dimensions \(j=1,2,\ldots ,D\). If \(\textit{rand}_{j}<\textit{MR}\), then \(v_{i,j}\) is updated according to Eq. (5). The new search equation is thus rewritten as follows.

$$ v_{i,j} = \begin{cases} x_{rb,j} + \phi_{i,j} \left( x_{r1,j} - x_{r2,j} \right), & \text{if } {\textit{rand}}_j < {\textit{MR}} \\ x_{i,j}, & \text{otherwise} \end{cases} \quad (6) $$

where MR is a predefined probability parameter and \({{\textit{rand}}}_{j}\) is a random value between 0 and 1 generated independently for the jth dimension.
The steps of the proposed HABC are given as follows.
-
Step 1.
Randomly initialize N solutions, and compute their fitness values. Set \(t=0\).
-
Step 2.
For each solution \(X_{i}\), a new solution \(V_{i}\) is generated by Eq. (6). The better one between \(X_{i}\) and \(V_{i}\) is kept as the new \(X_{i}\). If \(f(V_{i})<f(X_{i})\), then \({{\textit{trial}}}_{i}=0\); otherwise \({{\textit{trial}}}_{i}={{\textit{trial}}}_{i} +1\) (this paper only considers minimization problems).
-
Step 3.
Calculate the probability \(p_{i}\) according to Eq. (2). If \(X_{i}\) is selected, then a new solution \(V_{i}\) is created by Eq. (6). The better one between \(X_{i}\) and \(V_{i}\) is kept as the new \(X_{i}\). If \(f(V_{i})<f(X_{i})\), then \({{\textit{trial}}}_{i}=0\); otherwise \({{\textit{trial}}}_{i}={{\textit{trial}}}_{i} +1\).
-
Step 4.
If \(\max \{{\textit{trial}}_i \}>{\textit{limit}}\), the corresponding solution \(X_{i}\) is replaced by a new one generated by Eq. (3).
-
Step 5.
Set \(t=t+1\). If \(t<{{\textit{MaxG}}}\), then go to Step 2; otherwise stop the algorithm.
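Assuming box constraints [low, up] shared by all dimensions and the standard ABC fitness transform \(fit_i = 1/(1+f_i)\) for non-negative objective values, Steps 1–5 can be condensed into the following Python sketch. All names and default parameter values are illustrative, and the per-solution probabilistic onlooker selection is a simple stand-in for roulette selection.

```python
import random

def habc(f, low, up, D=10, N=20, limit=100, max_gen=500, MR=0.3):
    """Condensed sketch of HABC (Steps 1-5) for minimization."""
    # Step 1: random initialization
    X = [[low + random.random() * (up - low) for _ in range(D)] for _ in range(N)]
    fx = [f(x) for x in X]
    trial = [0] * N

    def candidate(i):
        # Eq. (6): best-of-random base with MR-controlled dimension updates
        r1, r2, rb = random.sample([m for m in range(N) if m != i], 3)
        if fx[r1] < fx[rb]:
            r1, rb = rb, r1
        if fx[r2] < fx[rb]:
            r2, rb = rb, r2
        return [X[rb][j] + random.uniform(-1.0, 1.0) * (X[r1][j] - X[r2][j])
                if random.random() < MR else X[i][j] for j in range(D)]

    def greedy(i, V):
        # keep the better of X_i and V_i, update the trial counter
        fv = f(V)
        if fv < fx[i]:
            X[i], fx[i] = V, fv
            trial[i] = 0
        else:
            trial[i] += 1

    for _ in range(max_gen):
        for i in range(N):                        # Step 2: employed bees
            greedy(i, candidate(i))
        fit = [1.0 / (1.0 + v) for v in fx]       # fitness transform (f >= 0 assumed)
        total = sum(fit)
        for i in range(N):                        # Step 3: onlooker bees
            if random.random() < fit[i] / total:  # stand-in for roulette selection
                greedy(i, candidate(i))
        worst = max(range(N), key=lambda i: trial[i])
        if trial[worst] > limit:                  # Step 4: scout phase, Eq. (3)
            X[worst] = [low + random.random() * (up - low) for _ in range(D)]
            fx[worst], trial[worst] = f(X[worst]), 0
    best = min(range(N), key=lambda i: fx[i])     # Step 5 loops until max_gen
    return X[best], fx[best]
```

For example, `habc(lambda x: sum(v * v for v in x), -5.0, 5.0)` minimizes the sphere function; with the defaults above, the returned objective value typically approaches zero.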
4 Experimental study
4.1 Test problems
In the experiments, there are twelve numerical optimization problems [23, 24]. The descriptions of these test problems are displayed in Table 1, where D is the dimensional size and Opt is the global optimum. All these problems are to be minimized.
4.2 Parameter analysis of MR
In the standard ABC, only one dimension is updated during each search step, which may result in slow convergence. In HABC, the parameter MR controls the number of updated dimensions. Therefore, the value of MR may seriously affect the search performance of HABC. To investigate this effect, different values of MR are evaluated. In the following experiment, MR is set to 0.1, 0.2, 0.3, 0.5, and 1.0, respectively.
In HABC, the population size (N) and the maximum number of generations (MaxG) are set to 50 and 1500, respectively. The limit is equal to 100. For each test problem, HABC with each MR value is run 30 times. Results of HABC with different MR are given in Table 2, where "Mean" represents the mean function value. The performance of HABC is not affected on problems \(f_{6}\), \(f_{11}\), and \(f_{12}\). For \(f_{1}\), \(f_{2}\), and \(f_{4}\), \({{\textit{MR}}}=0.3\) is the best choice. A larger value of MR is more suitable for problems \(f_{3}\) and \(f_{5}\). \({{\textit{MR}}}=0.2\) performs the best on \(f_{7}\). \({\textit{MR}}=0.1\) and \({{\textit{MR}}}=0.2\) can find the global optimum on \(f_{9}\). For \(f_{8}\), values of MR between 0.1 and 0.3 are good settings. Overall, the performance of HABC is affected by the parameter MR on most test problems, so a fixed MR may not be a good choice. According to the results, \({{\textit{MR}}}=0.3\) is a balanced setting.
To further compare the settings, Figs. 1, 2, and 3 present the convergence characteristics of HABC with different values of MR. From Figs. 1 and 2, all MR values help HABC converge quickly, and \({{\textit{MR}}}=0.3\) is faster than the other values, with \({{\textit{MR}}}=0.2\) and \({{\textit{MR}}}=0.5\) taking second and third place, respectively. In Fig. 3, \({{\textit{MR}}}=0.3\) shows the fastest convergence, while \({{\textit{MR}}}=0.1\) is the worst.
4.3 Comparison of HABC with other ABC algorithms
To evaluate the performance of HABC, it is compared with ABC, GABC [9], and MABC [11] on the twelve test problems. For ABC, GABC, MABC, and HABC, the parameters N, limit, and MaxG are set to 50, 100, and 1500, respectively. The parameter p in MABC is equal to 0.7 [11], and the parameter C in GABC is set to 1.5 [9]. The MR used in HABC is set to 0.3.
Computational results of HABC, ABC, GABC, and MABC are presented in Table 3, where “Mean” indicates the mean function value. From the results, all four ABCs can converge to the global optimum on \(f_{6}\). HABC is superior to ABC on eight problems, while ABC outperforms HABC on three problems. GABC, MABC, and HABC achieve the same result on \(f_{8}\). HABC obtains better solutions than GABC on seven problems, but HABC is worse than GABC on three problems. MABC is better than HABC on three problems, while HABC achieves better solutions on five problems. For problems \(f_{11}\) and \(f_{12}\), MABC and HABC obtain the same results.
Figures 4, 5, 6, 7, 8 and 9 present the convergence curves of ABC, GABC, MABC, and HABC on six selected problems. HABC converges much faster than ABC, GABC, and MABC. For \(f_{11}\) and \(f_{12}\), GABC and ABC are faster than HABC at the initial search stage, but HABC shows a faster convergence rate in the middle and final stages. The convergence processes confirm that the designed strategies can significantly improve the convergence rate of ABC.
5 Conclusions
In order to address the issues of low solution accuracy and slow convergence, a hybrid ABC (HABC) is presented. In HABC, a modified search model is designed based on the best-of-random DE mutation scheme. Moreover, a new parameter MR is introduced to control the number of updated dimensions. To verify the performance of HABC, twelve test problems are used in the experiments. HABC is compared with ABC, GABC, and MABC. Results show that HABC is superior to the other three ABC algorithms on the majority of test problems.
The parameter MR can seriously affect the performance of HABC on most test problems, which demonstrates that a fixed value of MR may not be a good setting for all problems. In future work, we will try some dynamic strategies to adjust MR. We hope that a dynamic MR can help HABC solve different kinds of problems.
References
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of IEEE International Conference on Neural Networks, pp. 1942–1948 (1995)
Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. B (Cybern.) 26(1), 29–41 (1996)
Yang, X.S.: Firefly algorithm, stochastic test functions and design optimization. Int. J. Bio-Inspired Comput. 2(2), 78–84 (2010)
Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical Report-TR06, Erciyes University, engineering Faculty, Computer Engineering Department (2005)
Yang, X.S., Deb, S.: Engineering optimisation by cuckoo search. Int. J. Math. Model. Numer. Optim. 1(4), 330–343 (2010)
Yang, X.S.: A new metaheuristic bat-inspired algorithm. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65–74 (2010)
Karaboga, D., Akay, B.: A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 214, 108–132 (2009)
Lin, C., Qing, A., Feng, Q.: A new differential mutation base generator for differential evolution. J. Glob. Optim. 49(1), 69–90 (2011)
Zhu, G., Kwong, S.: Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 217, 3166–3173 (2010)
Akay, B., Karaboga, D.: A modified artificial bee colony algorithm for real-parameter optimization. Inf. Sci. 192, 120–142 (2012)
Gao, W.F., Liu, S.Y., Huang, L.L.: A global best artificial bee colony algorithm for global optimization. J. Comput. Appl. Math. 236(11), 2741–2753 (2012)
Wang, H., Wu, Z.J., Rahnamayan, S., Sun, H., Liu, Y., Pan, J.S.: Multi-strategy ensemble artificial bee colony algorithm. Inf. Sci. 279, 587–603 (2014)
Kiran, M.S., Hakli, H., Gunduz, M., Uguz, H.: Artificial bee colony algorithm with variable search strategy for continuous optimization. Inf. Sci. 300, 140–157 (2015)
Gao, W.F., Huang, L.L., Liu, S.Y., Chan, F.T.S., Dai, C.: Artificial bee colony algorithm with multiple search strategies. Appl. Math. Comput. 271, 269–287 (2015)
Zhou, X.Y., Wu, Z.J., Wang, H., Rahnamayan, S.: Gaussian bare-bones artificial bee colony algorithm. Soft. Comput. 20(3), 907–924 (2016)
Zhou, X.Y., Wang, H., Wang, M.W., Wan, J.Y.: Enhancing the modified artificial bee colony algorithm with neighborhood search. Soft. Comput. 21(10), 2733–2743 (2017)
Cui, L.Z., Li, G.H., Wang, X.Z., Lin, Q.Z., Chen, J.Y., Lu, N., Lu, J.: A ranking-based adaptive artificial bee colony algorithm for global numerical optimization. Inf. Sci. 417, 169–185 (2017)
Liang, Z.P., Hu, K.F., Zhu, Q.X., Zhu, Z.X.: An enhanced artificial bee colony algorithm with adaptive differential operators. Appl. Soft Comput. 58, 480–494 (2017)
Wang, H., Wu, Z., Zhou, X., Rahnamayan, S.: Accelerating artificial bee colony algorithm by using an external archive. In: IEEE Congress on Evolutionary Computation (CEC 2013), pp. 517–521 (2013)
Li, X.N., Yang, G.F.: Artificial bee colony algorithm with memory. Appl. Soft Comput. 41, 362–372 (2016)
Karaboga, D., Gorkemli, B.: A quick artificial bee colony (qABC) algorithm and its performance on optimization problems. Appl. Soft Comput. 23, 227–238 (2014)
Sharma, T.K., Pant, M.: Shuffled artificial bee colony algorithm. Soft. Comput. 21(20), 6085–6104 (2017)
Wang, H., Sun, H., Li, C., Rahnamayan, S., Pan, J.S.: Diversity enhanced particle swarm optimization with neighborhood search. Inf. Sci. 223, 119–135 (2013)
Wang, H., Rahnamayan, S., Sun, H., Omran, M.G.H.: Gaussian bare-bones differential evolution. IEEE Trans. Cybern. 43(2), 634–647 (2013)
Acknowledgements
This work is supported by the project of the First-Class University and the First-Class Discipline (10301-017004011501), and the National Natural Science Foundation of China.
Pan, X., Lu, Y., Sun, N. et al. A hybrid artificial bee colony algorithm with modified search model for numerical optimization. Cluster Comput 22 (Suppl 2), 2581–2588 (2019). https://doi.org/10.1007/s10586-017-1343-0