Abstract
Although the Artificial Bee Colony (ABC) algorithm is simple and efficient, it also has some disadvantages. For example, ABC is good at exploration but poor at exploitation and easily falls into local optima. To overcome these shortcomings and improve the efficiency of the algorithm, the Uniform Local Search Gbest Artificial Bee Colony (UGABC) algorithm is proposed in this paper. The algorithm greatly improves the exploitation ability. For comparison, we conducted experiments with four algorithms. The experimental results show that UGABC has the best accuracy and the fastest convergence rate among the four algorithms.
1 Foreword
There are a large number of nonlinear, non-differentiable, multi-peak complex optimization problems in the engineering and optimization fields, which traditional optimization methods find difficult to solve. In recent years, intelligent algorithms have been proposed that can solve these complex problems effectively. For example, Kennedy et al. [1] proposed the Particle Swarm Optimization (PSO) algorithm, which simulates the predation behavior of bird flocks. Yang et al. [2] proposed the Cuckoo Search (CS) algorithm, which simulates the parasitic brooding of cuckoos. In 2005, the Turkish scientist Karaboga [3] proposed the Artificial Bee Colony (ABC) algorithm by simulating the honey-gathering behavior of bees. Compared with some traditional evolutionary algorithms, the ABC algorithm has the advantages of simplicity, high speed, strong performance and good robustness. It is very effective on continuous function optimization [4, 5], and has been widely applied and developed.
It is crucial for an optimization algorithm to strike a balance between local search and global search. Emphasizing local search improves the convergence speed of the algorithm, but makes it easy to fall into local optima. Emphasizing global search helps to find new optimal solutions and avoid premature convergence, but reduces the convergence speed. The ABC algorithm is strong in global search but slightly weak in local search, which makes it easy to fall into local optima. Peng et al. [6] proposed a uniform local search method that randomly selects two individuals in the population and generates a new, better individual through uniform design. This can significantly enhance the local search ability of an algorithm and thereby improve its overall optimization performance.
The ABC algorithm is sensitive to the search strategy, and different search strategies significantly affect its optimization performance. Therefore, scholars have made many improvements to the ABC algorithm's search strategy. Inspired by the PSO algorithm, Zhu and Kwong [7] proposed the GABC algorithm, which introduces the global best solution (Gbest) into the search strategy and thereby improves the performance of the algorithm. Because the GABC algorithm changes the search strategy only slightly yet works well, it has received extensive attention.
Inspired by the GABC algorithm and the uniform local search method, we propose a new algorithm named Uniform Local Search Gbest Artificial Bee Colony (UGABC). The UGABC algorithm combines the strong local search ability of ULS with the search strategy of GABC in order to improve optimization performance.
2 ABC Algorithm and GABC Algorithm
2.1 ABC Algorithm
In the ABC algorithm, a food source represents a feasible solution of the problem to be solved, and its quality is measured by a fitness value. All bees are divided into employed bees, onlooker bees and scout bees. The different kinds of bees guide the entire colony to high-quality food sources by sharing information and converting roles. At the beginning of the algorithm, all food sources are found by scout bees; the food sources are then exploited by employed bees and onlooker bees. Continued exploitation eventually exhausts a food source, and the employed bee of a depleted food source is converted into a scout bee to find new food sources. For convenience, we take the minimization problem as an example. The steps of the algorithm are as follows.
-
(1)
Initialization phase: Randomly generate SN initial food sources.
-
(2)
Employed bees search for new food sources in their respective food source neighborhoods and greedily keep the food source with the larger fitness value. When all employed bees have completed the search, they return to the dance area and share the food source information with the onlooker bees by means of a waggle dance.
-
(3)
Onlooker bees select food sources based on the information shared by the employed bees; the greater the fitness value, the greater the probability of being selected.
-
(4)
A food source is abandoned if it has not been updated after Limit cycles. The corresponding employed bee is converted into a scout bee, which uses the initialization formula to search randomly for a new food source.
-
(5)
Record the best solution so far.
-
(6)
Determine if the termination condition is met. If the termination condition is met, the optimal solution is output and the algorithm ends. Otherwise, go to (2).
In step (2), Eq. 1 is used to generate the neighbor food source:

\( v_{ij} = x_{ij} + \varphi_{ij} (x_{ij} - x_{kj} ) \)  (1)

Here \( x_i \) is the current food source, \( k \) is a randomly selected food source index with \( k \ne i \), \( j \) is a randomly selected dimension index, and \( \varphi_{ij} \) is a random number in [−1, 1].
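As a hedged illustration, Eq. 1 can be sketched in Python as follows; the population layout (a list of coordinate lists) and the function name are our own assumptions, not part of the original paper:

```python
import random

def abc_neighbor(x, i, j):
    """Eq. 1: perturb dimension j of food source x[i] toward or away
    from a randomly chosen other food source x[k]."""
    sn = len(x)
    k = random.choice([m for m in range(sn) if m != i])  # k != i
    phi = random.uniform(-1.0, 1.0)                      # phi_ij in [-1, 1]
    v = list(x[i])                                       # copy current source
    v[j] = x[i][j] + phi * (x[i][j] - x[k][j])           # Eq. 1
    return v
```

Only one randomly chosen dimension is perturbed, so the candidate stays close to the current food source; the greedy comparison in step (2) then decides whether to keep it.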
2.2 GABC Algorithm
The literature [7] proposed the GABC algorithm, which replaces Eq. 1 of the ABC algorithm with Eq. 2. Although the change is small, the effect is good.

\( v_{ij} = x_{ij} + \varphi_{ij} (x_{ij} - x_{kj} ) + \psi_{ij} (y_{j} - x_{ij} ) \)  (2)

Here \( y_{j} \) is the j-th component of the global best solution (Gbest), and \( \psi_{ij} \) is a random number in [0, C], where C is a non-negative constant. C plays a very important role in balancing local search and global search: when C = 0, Eq. 2 reduces to Eq. 1. From the literature [7] we know that the GABC algorithm works best when C = 1.5.
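A sketch of Eq. 2 in the same style (again, the data layout and function name are illustrative assumptions):

```python
import random

C = 1.5  # recommended value from [7]

def gabc_neighbor(x, gbest, i, j):
    """Eq. 2: the ABC perturbation plus a pull toward the global best."""
    sn = len(x)
    k = random.choice([m for m in range(sn) if m != i])  # k != i
    phi = random.uniform(-1.0, 1.0)   # phi_ij in [-1, 1]
    psi = random.uniform(0.0, C)      # psi_ij in [0, C]
    v = list(x[i])
    v[j] = (x[i][j]
            + phi * (x[i][j] - x[k][j])    # exploration term (as in Eq. 1)
            + psi * (gbest[j] - x[i][j]))  # exploitation term toward Gbest
    return v
```

The extra \( \psi_{ij}(y_j - x_{ij}) \) term biases the move toward the best solution found so far, which is what strengthens exploitation relative to plain ABC.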
3 Uniform Local Search Artificial Bee Colony Algorithm
3.1 Uniform Local Search
Uniform Design (UD) is an experimental design method jointly proposed by Professor Fang and the mathematician Wang in 1978 [8]. The basic idea of UD is to use number-theoretic methods to find a set of points spread uniformly over the experimental region, and then to arrange experiments at these points. Such experiments are representative, and the number of trials can be greatly reduced. Literature [9] showed that with k factors of q levels each, a comprehensive (full factorial) experiment requires \( q^k \) trials and an orthogonal design requires \( q^2 \) trials, whereas the uniform design uses uniform distribution theory to select only q points, so the number of experiments is q. When q is large, the superiority of uniform design is very prominent. Compared with the orthogonal design method, uniform design has the advantages of fewer experiments and better robustness.
Peng et al. proposed a Uniform Local Search (ULS) based on UD and applied it to the DE algorithm. The experimental results show that ULS can enhance the local search ability of the DE algorithm [6].
Like orthogonal design, uniform design has a set of tables for arranging experiments. A uniform design table is generally denoted \( U_{n} (q^{s} ) \); it has n rows and s columns, where n is the number of experiments, s is the maximum number of independent factors, and each factor has q = n levels. Table 1 is a uniform design table; as it shows, the maximum number of levels per factor equals the number of experiments. For ease of use, a number of ready-made uniform design tables are available; their construction and more tables can be found in Refs. [9, 10].
Literature [6] found that, for uniform local search, the \( {\text{U}}_{6} \left( {6^{6} } \right) \) uniform design table can be obtained by deleting the last row of \( {\text{U}}_{7} \left( {7^{6} } \right) \), for two reasons. First, the last row of \( {\text{U}}_{7} \left( {7^{6} } \right) \) corresponds to the original individual and is therefore redundant. Second, preliminary experiments showed that \( {\text{U}}_{6} \left( {6^{6} } \right) \) gives the best results: six levels per factor yields the best performance, while too many levels per factor would cost additional evaluations and hurt the performance of the algorithm [6].
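The deletion of the last row can be illustrated with one common construction of uniform design tables, the good lattice point method (the generator set {1, …, 6} for n = 7 is our assumption; this is a sketch, not the paper's code):

```python
def uniform_table(n, gens):
    """Good lattice point construction: u_ij = (i * h_j) mod n,
    with 0 mapped to n so that levels run from 1 to n."""
    return [[(i * h) % n or n for h in gens] for i in range(1, n + 1)]

U7 = uniform_table(7, [1, 2, 3, 4, 5, 6])   # U7(7^6)
# The last row is (7, 7, 7, 7, 7, 7): every factor at the same level,
# which reproduces the original individual and is redundant.
U6 = U7[:-1]                                # delete it to obtain U6(6^6)
```

With this construction the redundancy of the last row is visible directly: row i = 7 gives \( (7h \bmod 7) = 0 \) in every column, which maps to level 7 for all factors.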
The uniform local search procedure is shown in Algorithm 1. In ULS, if the problem dimension D is greater than six, we randomly decompose the D dimensions into six groups. ULS constructs six trial individuals, so it requires six additional fitness evaluations.
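Since Algorithm 1 is not reproduced here, the following Python sketch shows one plausible reading of ULS. The linear interpolation between the two parents (mapping levels 1–6 to weights 0–1) and the greedy replacement of the first parent are our assumptions, not statements from [6]:

```python
import random

def uls(pop, fit, f, U6):
    """Uniform local search (sketch). Two parents are picked at random,
    the D dimensions are split into six random groups, and each row of
    U6 builds one trial by mixing the parents group-wise."""
    i1, i2 = random.sample(range(len(pop)), 2)
    p1, p2 = pop[i1], pop[i2]
    dims = list(range(len(p1)))
    random.shuffle(dims)
    groups = [dims[g::6] for g in range(6)]   # six random dimension groups
    best_x, best_f = None, float('inf')
    for row in U6:                            # six trials, six evaluations
        trial = list(p1)
        for g, level in enumerate(row):
            w = (level - 1) / 5.0             # map level 1..6 to weight 0..1
            for d in groups[g]:
                trial[d] = p1[d] + w * (p2[d] - p1[d])
        ft = f(trial)
        if ft < best_f:
            best_x, best_f = trial, ft
    if best_f < fit[i1]:                      # greedy replacement
        pop[i1], fit[i1] = best_x, best_f
    return pop, fit

# U6(6^6) obtained by deleting the last row of U7(7^6) (see Sect. 3.1)
U6 = [[(i * h) % 7 or 7 for h in (1, 2, 3, 4, 5, 6)] for i in range(1, 7)]
```

Because only six extra evaluations are spent per call, the procedure is cheap enough to be embedded inside the main loop of a population-based algorithm.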
3.2 Uniform Local Search Artificial Bee Colony Algorithm Steps
The steps of UGABC are shown in Algorithm 2. We embed ULS in the main loop of GABC to enhance the algorithm's local optimization ability. Although ULS can greatly improve the convergence speed, it uses a greedy selection mechanism, so executing ULS too often may cause the algorithm to fall into local extrema. To strike a balance between convergence speed and population diversity, we perform ULS only once per cycle.
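The overall loop of Algorithm 2 might be sketched as below. Details not stated in the paper are illustrative assumptions: the search bounds, the \( 1/(1+f) \) fitness transform for minimization, roulette selection for onlookers, and the ULS interpolation scheme:

```python
import random

def ugabc(f, dim, sn=20, limit=50, max_evals=20000, C=1.5, lb=-5.0, ub=5.0):
    """Sketch of the UGABC loop for a non-negative minimization problem."""
    U6 = [[(i * h) % 7 or 7 for h in (1, 2, 3, 4, 5, 6)] for i in range(1, 7)]
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(sn)]
    fit = [f(x) for x in pop]
    trials = [0] * sn
    evals = sn
    gbest = min(range(sn), key=lambda i: fit[i])

    def search(i):                             # one Eq. 2 move, greedy accept
        nonlocal evals
        j = random.randrange(dim)
        k = random.choice([m for m in range(sn) if m != i])
        v = list(pop[i])
        v[j] += (random.uniform(-1, 1) * (pop[i][j] - pop[k][j])
                 + random.uniform(0, C) * (pop[gbest][j] - pop[i][j]))
        fv = f(v)
        evals += 1
        if fv < fit[i]:
            pop[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    while evals < max_evals:
        for i in range(sn):                    # employed bee phase
            search(i)
        total = sum(1.0 / (1.0 + x) for x in fit)
        for _ in range(sn):                    # onlooker bee phase (roulette)
            r, acc = random.uniform(0, total), 0.0
            for i in range(sn):
                acc += 1.0 / (1.0 + fit[i])
                if acc >= r:
                    search(i)
                    break
        for i in range(sn):                    # scout bee phase
            if trials[i] > limit:
                pop[i] = [random.uniform(lb, ub) for _ in range(dim)]
                fit[i] = f(pop[i])
                evals += 1
                trials[i] = 0
        a, b = random.sample(range(sn), 2)     # one ULS step per cycle
        dims = list(range(dim))
        random.shuffle(dims)
        groups = [dims[g::6] for g in range(6)]
        for row in U6:
            v = list(pop[a])
            for g, lvl in enumerate(row):
                for d in groups[g]:
                    v[d] = pop[a][d] + (lvl - 1) / 5.0 * (pop[b][d] - pop[a][d])
            fv = f(v)
            evals += 1
            if fv < fit[a]:
                pop[a], fit[a], trials[a] = v, fv, 0
        gbest = min(range(sn), key=lambda i: fit[i])
    return pop[gbest], fit[gbest]
```

Note that ULS appears exactly once per cycle, after the scout phase, matching the balance between convergence speed and diversity described above.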
4 Experimental Results and Analysis
4.1 Test Function and Experiment Setup
In order to verify the effectiveness and superiority of the UGABC algorithm, we selected 13 commonly used benchmark functions from literature [11] as the test set. In the simulation experiments, four algorithms (ABC, UABC, GABC and UGABC) were compared. For fairness, the parameters of the four algorithms were the same: SN = 50, Dim = 30, Limit = 50, MaxFEs = Dim * 5000, with C = 1.5 in the GABC and UGABC algorithms; each benchmark function was run independently 30 times. The hardware environment used in the experiments was an Intel i7 processor with 8 GB memory, and the software environment was Windows 7 and MATLAB 7.
In order to evaluate the experimental results objectively and fairly, they were analyzed using the Wilcoxon rank sum test and the Friedman test. The Wilcoxon rank sum test uses the rank sums of two samples to determine whether they come from the same population; here it is used to test whether there is a significant difference between each comparison algorithm and the UGABC algorithm over 30 independent runs on the benchmark functions. The Friedman test uses ranks to analyze whether there is a significant difference in the population distributions of multiple independent samples, and ranks algorithms by their mean rank; the smaller the mean rank, the better the result [12].
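For illustration, the Friedman mean ranks reported in the tables can be computed as in the following plain-Python sketch (the data layout is an assumption; smaller mean ranks are better, as stated above):

```python
def friedman_rank_means(results):
    """results[b][a]: error of algorithm a on benchmark b.
    Returns the mean rank of each algorithm across benchmarks,
    using average ranks for ties; smaller is better."""
    n_alg = len(results[0])
    sums = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1                       # extend the tie group
            avg = (i + j) / 2.0 + 1.0        # average rank for the ties
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for a in range(n_alg):
            sums[a] += ranks[a]
    return [s / len(results) for s in sums]
```

The mean ranks always sum to \( s(s+1)/2 \) for s algorithms, which is a quick sanity check on such tables.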
4.2 UGABC Algorithm Quality Analysis
The average errors and standard deviations of the UGABC algorithm and the other three algorithms are shown in Table 2. The significance level of the Wilcoxon rank sum test is 0.05. At the bottom of Table 2, the symbols "−", "+", and "≈" indicate that the corresponding algorithm is inferior, superior, and equivalent to the UGABC algorithm, respectively. The Friedman test ranks are given in Table 3. Figure 1 plots the average convergence curves of each algorithm over 30 independent runs on the 13 benchmark functions. It can be seen from Table 2 that the UGABC algorithm has strong convergence ability and good convergence precision. The Wilcoxon rank sum test results in the last three rows of Table 2 show that the UGABC algorithm is the best among the compared algorithms.
As can be seen from Fig. 1, the UGABC algorithm has stronger convergence ability than the other algorithms. Its convergence speed is slightly worse than the other algorithms only on \( f_{5} \) and \( f_{11} \); on the remaining functions the UGABC algorithm has an absolute advantage. Table 3 gives the Friedman test rankings of the compared algorithms: the UGABC algorithm ranks first, which illustrates its advantage from a statistical perspective.
4.3 Dimension Change Analysis
To further illustrate the advantages of the UGABC algorithm, we repeated the experiments in different dimensions and recorded the results. Table 4 gives the experimental results of the four algorithms in 50 dimensions. As in Table 2, in order to compare the algorithms objectively, the last three lines of Table 4 give the Wilcoxon rank sum test results of the four algorithms. Table 5 gives the Friedman test results for the 50-dimensional problems.
From Table 4, we find that on the 50-dimensional problems the UGABC algorithm is significantly better than the ABC algorithm on all benchmark functions. Compared with the UABC algorithm, UGABC obtains comparable results on \( f_{5} \) and \( f_{11} \) and is superior on the remaining benchmark functions. Compared with the GABC algorithm, UGABC is slightly worse on \( f_{5} \); both algorithms converge to 0 on \( f_{6} \); on the other functions UGABC's results are better than GABC's.
5 Conclusion
In order to improve the local search ability of the ABC algorithm, this paper proposes an artificial bee colony algorithm based on uniform local search. The algorithm introduces Gbest into the neighborhood search of the employed bee and onlooker bee phases, and embeds ULS after each cycle, which significantly improves the local search ability of the algorithm. The test results on 13 standard benchmark functions in different dimensions show that the UGABC algorithm is superior to the comparison algorithms in solution accuracy, convergence speed and running time.
References
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the 4th IEEE International Conference on Neural Networks, pp. 1942–1948. IEEE Service Center, Piscataway (1995)
Yang, X.S., Deb, S.: Cuckoo search via Lévy flights. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), pp. 210–214. IEEE (2009)
Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical report TR06, Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri (2005)
Karaboga, D., Akay, B.: A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 214(1), 108–132 (2009)
Karaboga, D., Basturk, B.: On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 8(1), 687–697 (2008)
Peng, H., Wu, Z., Deng, C.: Enhancing differential evolution with communal learning and uniform local search. Chin. J. Electron. 26(4), 725–733 (2017)
Zhu, G., Kwong, S.: Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 217(7), 3166–3173 (2010)
Wang, Y., Fang, K.: A note on uniform distribution and experimental design. Mon. J. Sci. 26(6), 485–489 (1981)
Fang, K.: Uniform design-number theory method in the application of experimental design. Acta Math. Appl. Sinica 04, 363–372 (1980)
Fang, K.: Uniform design. Tactical Missile Technol. 02, 56–69 (1994)
Yao, X., Liu, Y., Lin, G.: Evolutionary programming made faster. IEEE Trans. Evol. Comput. 3(2), 82–102 (1999)
Peng, H., Wu, Z., Zhou, X., et al.: Bare-bones differential evolution algorithm based on trigonometry. J. Comput. Res. Dev. 12, 2776–2788 (2015)
Acknowledgement
This work was supported by The National Science Foundation of China (No. 61763019), The Natural Science Foundation of Heilongjiang Province (General Program: F2017019), The Science and Technology Plan Projects of Jiangxi Province Education Department (No. GJJ161072, No. GJJ161076, No. GJJ170953), The Education Planning Project of Jiangxi Province (No. 15YB138, No. 17YB211).
© 2019 Springer Nature Singapore Pte Ltd.
Zhang, Y., Peng, H., Deng, C., Wang, X., Huang, H., Tan, X. (2019). Artificial Bee Colony Algorithm Based on Uniform Local Search. In: Peng, H., Deng, C., Wu, Z., Liu, Y. (eds) Computational Intelligence and Intelligent Systems. ISICA 2018. Communications in Computer and Information Science, vol 986. Springer, Singapore. https://doi.org/10.1007/978-981-13-6473-0_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-6472-3
Online ISBN: 978-981-13-6473-0