
1 Foreword

Engineering and optimization fields contain a large number of nonlinear, non-differentiable, multimodal optimization problems that traditional optimization methods struggle to solve. In recent years, intelligent algorithms proposed by scientists have proven effective on such problems. For example, Kennedy et al. [1] proposed the Particle Swarm Optimization (PSO) algorithm, which simulates the predation behavior of bird flocks. Yang et al. [2] proposed the Cuckoo Search (CS) algorithm, which simulates the parasitic brooding of cuckoos. In 2005, the Turkish scientist Karaboga [3] proposed the Artificial Bee Colony (ABC) algorithm, which simulates the foraging behavior of honey bees. Compared with traditional evolutionary algorithms, the ABC algorithm is simple, fast, strong in performance, and robust. It is very effective on continuous functions [4, 5] and has been widely applied and developed.

It is crucial for an optimization algorithm to strike a balance between local search and global search. Emphasizing local search improves convergence speed but makes it easy to fall into a local optimum; emphasizing global search helps discover new optimal solutions and avoids premature convergence, but slows convergence. The ABC algorithm is strong in global search but comparatively weak in local search, which makes it prone to falling into local optima. Peng et al. [6] proposed a uniform local search (ULS) method that randomly selects two individuals in the population and generates a new candidate individual through uniform design; this significantly enhances the local search ability of an algorithm and thereby improves its overall optimization performance.

The ABC algorithm is sensitive to its search strategy: different search strategies significantly affect its optimization performance. Scholars have therefore proposed many improvements to the ABC search strategy. Inspired by the PSO algorithm, Zhu and Kwong [7] proposed the GABC algorithm, which introduces the global best solution (Gbest) into the search strategy and thereby improves performance. Because the GABC algorithm changes the search strategy only slightly yet works well, it has received extensive attention.

Inspired by the GABC algorithm and the uniform local search method, we propose a new algorithm named Uniform Local Search Gbest Artificial Bee Colony (UGABC). The UGABC algorithm combines the strong local search ability of ULS with the GABC search strategy in order to improve optimization performance.

2 ABC Algorithm and GABC Algorithm

2.1 ABC Algorithm

In the ABC algorithm, a food source represents a feasible solution of the problem to be solved, and its quality is measured by a fitness value. The bees are divided into employed bees, onlooker bees and scout bees. The different bee types guide the entire colony toward high-quality food sources by sharing information and switching roles. At the beginning of the algorithm, all food sources are found by scout bees; the food sources are then exploited by employed bees and onlooker bees. Continued exploitation eventually exhausts a food source, and the employed bee of a depleted food source is converted into a scout bee to look for new food sources. For convenience, we take the minimization problem as an example. The steps of the algorithm are as follows.

  (1) Initialization phase: randomly generate SN initial food sources.

  (2) Each employed bee searches for a new food source in the neighborhood of its current food source and keeps the one with the larger fitness value by greedy selection. When all employed bees have completed their search, they return to the dance area and share the food source information with the onlooker bees by means of the waggle dance.

  (3) Onlooker bees select food sources based on the information shared by the employed bees; the greater the fitness value, the greater the probability of being selected.

  (4) A food source is abandoned if it has not been updated within Limit cycles. The corresponding employed bee is converted into a scout bee, which uses the initialization formula to search randomly for a new food source.

  (5) Record the best solution found so far.

  (6) Check the termination condition. If it is met, output the optimal solution and end the algorithm; otherwise, go to (2).
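The fitness-proportional selection used by the onlooker bees in step (3) can be sketched as follows. The fitness transform shown is the one commonly used in ABC implementations; this section does not state the exact formula, so treat it as an assumption.

```python
import random

def fitness(fx):
    """Map an objective value to a fitness for minimization.
    NOTE: this is the transform commonly used in ABC implementations;
    the paper does not state the exact formula."""
    return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

def roulette_select(fit_values):
    """Step (3): an onlooker bee picks food source i with probability
    proportional to fit_values[i] (roulette-wheel selection)."""
    total = sum(fit_values)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fit_values):
        acc += f
        if acc >= r:
            return i
    return len(fit_values) - 1
```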

In step (2), the neighbor food source is generated by Eq. 1.

$$ v_{ij} = x_{ij} + \varphi_{ij} (x_{ij} - x_{kj} ) $$
(1)

Here \( x_{ij} \) is the j-th component of the current food source, \( k \ne i \) is a randomly selected food source index, and \( \varphi_{ij} \) is a random number in [−1, 1].
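As a sketch, Eq. 1 can be implemented as follows; perturbing only one randomly chosen dimension j follows standard ABC practice and is an assumption where this section is not explicit.

```python
import random

def abc_neighbor(x, i):
    """Eq. 1: v_ij = x_ij + phi_ij * (x_ij - x_kj).
    Only one randomly chosen dimension j is perturbed, and k != i
    is a randomly chosen partner food source."""
    SN, D = len(x), len(x[0])
    j = random.randrange(D)
    k = random.choice([s for s in range(SN) if s != i])
    v = list(x[i])
    phi = random.uniform(-1.0, 1.0)
    v[j] = x[i][j] + phi * (x[i][j] - x[k][j])
    return v
```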

2.2 GABC Algorithm

The GABC algorithm proposed in [7] is based on the ABC algorithm but replaces Eq. 1 with Eq. 2. Although the change is small, the effect is good.

$$ v_{ij} = x_{ij} + \varphi_{ij} (x_{ij} - x_{kj} ) + \psi_{ij} (y_{j} - x_{ij} ) $$
(2)

Here \( y_{j} \) is the j-th component of the global best solution, and \( \psi_{ij} \) is a random number in [0, C], where C is a non-negative constant. C plays a very important role in balancing local and global search. When C = 0, Eq. 2 reduces to Eq. 1. The literature [7] reports that the GABC algorithm works best when C = 1.5.
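The GABC search of Eq. 2 differs from Eq. 1 only by the Gbest-guided term; a minimal sketch, again perturbing a single random dimension:

```python
import random

C = 1.5  # value recommended in [7]

def gabc_neighbor(x, gbest, i):
    """Eq. 2: v_ij = x_ij + phi_ij*(x_ij - x_kj) + psi_ij*(y_j - x_ij).
    phi_ij is uniform in [-1, 1]; psi_ij is uniform in [0, C]; the
    extra term pulls the candidate toward the global best solution."""
    SN, D = len(x), len(x[0])
    j = random.randrange(D)
    k = random.choice([s for s in range(SN) if s != i])
    phi = random.uniform(-1.0, 1.0)
    psi = random.uniform(0.0, C)
    v = list(x[i])
    v[j] = x[i][j] + phi * (x[i][j] - x[k][j]) + psi * (gbest[j] - x[i][j])
    return v
```

Setting C = 0 removes the last term and recovers the plain ABC search of Eq. 1.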

3 Uniform Local Search Artificial Bee Colony Algorithm

3.1 Uniform Local Search

Uniform Design (UD) is an experimental design method jointly proposed by Professor Fang and the mathematician Wang in 1978 [8]. The basic idea of UD is to use number-theoretic methods to find a set of points spread uniformly over the experimental region and then arrange the experiments at these points. The results of such experiments are representative, and the number of experiments is reduced. Literature [9] showed that for k factors with q levels each, an orthogonal design requires \( q^{2} \) experiments, whereas a uniform design uses uniform distribution theory to select only q points, so only q experiments are needed. When q is large, the superiority of uniform design is very prominent. Compared with orthogonal design, uniform design has the advantages of fewer experiments and better robustness.

Peng et al. proposed a Uniform Local Search (ULS) based on UD and applied it to the DE algorithm. The experimental results show that ULS can enhance the local search ability of the DE algorithm [6].

Like orthogonal design, uniform design has a set of tables for arranging experiments. A uniform design table is generally denoted \( U_{n} (q^{s} ) \): it has n rows and s columns, where n is the number of experiments, s is the maximum number of independent factors, and each factor has q = n levels. Table 1 shows a uniform design table; as it illustrates, the maximum number of levels per factor equals the number of experiments. For ease of use, uniform design provides many ready-made tables; their construction and further tables can be found in Refs. [9, 10].

Table 1. Uniform design table \( {\text{U}}_{7} \left( {7^{6} } \right) \)

Literature [6] found that in uniform local search the \( {\text{U}}_{6} \left( {6^{6} } \right) \) table obtained by deleting the last row of \( {\text{U}}_{7} \left( {7^{6} } \right) \) can be used instead, for two reasons. First, the last row of \( {\text{U}}_{7} \left( {7^{6} } \right) \) corresponds to the original individual and is therefore redundant. Second, preliminary experiments showed that \( {\text{U}}_{6} \left( {6^{6} } \right) \) gives the best results: six levels per factor suffice, and a larger number of levels would cost more evaluations and degrade the performance of the algorithm [6].
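A table such as \( {\text{U}}_{7} \left( {7^{6} } \right) \) can be generated with the standard good-lattice-point construction for prime n, sketched below. Note that the published tables in [9, 10] may use different column generators, so this is illustrative; conveniently, the construction makes row n the constant row (n, n, …, n), which matches the observation that deleting the last row removes the redundant original individual.

```python
def uniform_design_table(n, s):
    """Good-lattice-point construction of U_n(n^s) for prime n:
    u_ij = (i * j) mod n, with a result of 0 written as n.
    Row n is then the constant row (n, n, ..., n)."""
    assert s <= n - 1, "at most n - 1 independent columns"
    return [[(i * j - 1) % n + 1 for j in range(1, s + 1)]
            for i in range(1, n + 1)]
```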

The uniform local search procedure is shown in Algorithm 1. In ULS, if the problem dimension D is greater than six, the D dimensions are randomly decomposed into six groups. ULS constructs six trial individuals, so it costs six additional function evaluations.

Algorithm 1. Uniform local search (pseudocode figure)
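Since the Algorithm 1 listing is given only as a figure, here is one plausible Python reading of ULS: the dimensions are split randomly into six groups, each row of \( {\text{U}}_{6} \left( {6^{6} } \right) \) builds a trial individual whose group g is interpolated between two parents at a weight implied by the table level, and the best of the six trials is returned. The interpolation scheme and the choice of parents are assumptions.

```python
import random

# U6(6^6): the good-lattice U7(7^6) table with its constant last row
# deleted, as described in the text.
U6 = [[1, 2, 3, 4, 5, 6],
      [2, 4, 6, 1, 3, 5],
      [3, 6, 2, 5, 1, 4],
      [4, 1, 5, 2, 6, 3],
      [5, 3, 1, 6, 4, 2],
      [6, 5, 4, 3, 2, 1]]

def uls(parent_a, parent_b, f):
    """Uniform local search sketch (one plausible reading of [6]):
    each U6 row builds one trial; dimension group g is interpolated
    between the parents at the weight implied by the table level
    (an assumed scheme).  Costs exactly six evaluations of the
    objective f (minimization)."""
    D = len(parent_a)
    dims = list(range(D))
    random.shuffle(dims)
    buckets = [dims[g::6] for g in range(6)]      # six dimension groups
    best, best_f = None, float("inf")
    for row in U6:                                # six trial individuals
        trial = list(parent_a)
        for g, level in enumerate(row):
            w = (level - 1) / 5.0                 # level 1..6 -> weight 0..1
            for d in buckets[g]:
                trial[d] = parent_a[d] + w * (parent_b[d] - parent_a[d])
        ft = f(trial)
        if ft < best_f:
            best, best_f = trial, ft
    return best, best_f
```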

3.2 Uniform Local Search Artificial Bee Colony Algorithm Steps

The steps of UGABC are shown in Algorithm 2. We embed ULS into the main loop of GABC in order to enhance the algorithm's local search ability. Although ULS greatly improves convergence speed, it uses a greedy selection mechanism, so executing ULS too often may cause the algorithm to fall into a local extremum. To strike a balance between convergence speed and population diversity, we perform ULS only once per cycle.

Algorithm 2. UGABC algorithm (pseudocode figure)
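The Algorithm 2 listing likewise survives only as a figure, so the following self-contained sketch shows one way a UGABC cycle can fit together: GABC employed, onlooker and scout phases, followed by one ULS step per cycle. The bounds handling, the fitness transform (which assumes non-negative objective values), and applying ULS to the global best with a random mate are all assumptions where the figure leaves details open.

```python
import random

# U6(6^6) uniform design table (last row of U7(7^6) deleted)
U6 = [[1, 2, 3, 4, 5, 6], [2, 4, 6, 1, 3, 5], [3, 6, 2, 5, 1, 4],
      [4, 1, 5, 2, 6, 3], [5, 3, 1, 6, 4, 2], [6, 5, 4, 3, 2, 1]]

def ugabc(f, D, SN=20, limit=50, max_fes=10000, C=1.5, lb=-5.0, ub=5.0):
    """Minimal UGABC sketch: GABC phases plus ONE uniform local
    search per cycle.  Details not fixed by the paper are assumed."""
    X = [[random.uniform(lb, ub) for _ in range(D)] for _ in range(SN)]
    F = [f(x) for x in X]
    trials = [0] * SN
    fes = SN
    gi = min(range(SN), key=lambda i: F[i])
    gbest, gf = list(X[gi]), F[gi]

    def search(i):                       # GABC neighborhood search (Eq. 2)
        nonlocal fes, gbest, gf
        j = random.randrange(D)
        k = random.choice([s for s in range(SN) if s != i])
        v = list(X[i])
        v[j] = (X[i][j] + random.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
                + random.uniform(0.0, C) * (gbest[j] - X[i][j]))
        v[j] = min(max(v[j], lb), ub)
        fv = f(v)
        fes += 1
        if fv < F[i]:                    # greedy replacement
            X[i], F[i], trials[i] = v, fv, 0
            if fv < gf:
                gbest, gf = list(v), fv
        else:
            trials[i] += 1

    while fes < max_fes:
        for i in range(SN):              # employed bee phase
            search(i)
        total = sum(1.0 / (1.0 + fi) for fi in F)
        for _ in range(SN):              # onlooker bee phase (roulette)
            r, acc, pick = random.uniform(0.0, total), 0.0, SN - 1
            for s in range(SN):
                acc += 1.0 / (1.0 + F[s])
                if acc >= r:
                    pick = s
                    break
            search(pick)
        for i in range(SN):              # scout bee phase
            if trials[i] > limit:
                X[i] = [random.uniform(lb, ub) for _ in range(D)]
                F[i], trials[i] = f(X[i]), 0
                fes += 1
        mate = X[random.randrange(SN)]   # one ULS step per cycle
        dims = list(range(D))
        random.shuffle(dims)
        buckets = [dims[g::6] for g in range(6)]
        for row in U6:
            t = list(gbest)
            for g, level in enumerate(row):
                w = (level - 1) / 5.0
                for d in buckets[g]:
                    t[d] = gbest[d] + w * (mate[d] - gbest[d])
            ft = f(t)
            fes += 1
            if ft < gf:
                gbest, gf = t, ft
    return gbest, gf
```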

4 Experimental Results and Analysis

4.1 Test Function and Experiment Setup

In order to verify the effectiveness and superiority of the UGABC algorithm, we selected 13 commonly used benchmark functions from [11] as the test set. In the simulation experiments, four algorithms (ABC, UABC, GABC and UGABC) were compared. For fairness, all four algorithms use the same parameters: SN = 50, Dim = 30, Limit = 50, MaxFEs = Dim * 5000, with C = 1.5 in the GABC and UGABC algorithms; each benchmark function is run independently 30 times. The experiments were run on an Intel i7 processor with 8 GB of memory under Windows 7 and MATLAB 7.

In order to evaluate the experimental results objectively and fairly, they were analyzed with the Wilcoxon rank sum test and the Friedman test. The Wilcoxon rank sum test uses the rank sums of two samples to decide whether they come from the same population; here it tests whether there is a significant difference between each comparison algorithm and the UGABC algorithm over the 30 independent runs on the benchmark functions. The Friedman test uses ranks to analyze whether there is a significant difference among the population distributions of multiple related samples and ranks each algorithm by its mean rank; the smaller the mean rank, the better the result [12].
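As a sketch of how the two tests are applied, using SciPy on invented data (the samples below are illustrative only, not the paper's results):

```python
import numpy as np
from scipy import stats

# Synthetic final-error samples: 30 runs of two algorithms on one
# benchmark function (invented for illustration; NOT the paper's data).
rng = np.random.default_rng(0)
abc_err = rng.lognormal(mean=-2.0, sigma=0.5, size=30)
ugabc_err = rng.lognormal(mean=-4.0, sigma=0.5, size=30)

# Wilcoxon rank sum test at the 0.05 significance level
_, p = stats.ranksums(abc_err, ugabc_err)
print(f"rank-sum p = {p:.3g}, significant at 0.05: {p < 0.05}")

# Friedman test: needs three or more related samples, e.g. three
# algorithms measured over the same 30 runs
gabc_err = rng.lognormal(mean=-3.0, sigma=0.5, size=30)
_, p_f = stats.friedmanchisquare(abc_err, gabc_err, ugabc_err)
print(f"Friedman p = {p_f:.3g}")
```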

4.2 UGABC Algorithm Quality Analysis

The mean errors and standard deviations of the UGABC algorithm and the other three algorithms are shown in Table 2. The significance level of the Wilcoxon rank sum test is 0.05. At the bottom of Table 2, the symbols “−”, “+”, and “≈” indicate that the corresponding algorithm is inferior, superior, or equivalent to the UGABC algorithm, respectively. The Friedman test ranks are given in Table 3. Figure 1 plots the average convergence curves of each algorithm over 30 independent runs on the 13 benchmark functions. Table 2 shows that the UGABC algorithm has strong convergence ability and good convergence precision, and the Wilcoxon rank sum test results in the last three rows of Table 2 show that UGABC is the best among the compared algorithms.

Table 2. Mean error value and standard deviation of algorithms and comparison results based on Wilcoxon’s rank sum test

As can be seen from Fig. 1, the UGABC algorithm converges better than the other algorithms. Its convergence speed is slightly worse than the others on \( f_{5} \) and \( f_{11} \); on the remaining functions it has a clear advantage. Table 3 gives the Friedman test rankings of UGABC and the compared algorithms. The UGABC algorithm ranks first, which confirms its advantage from a statistical perspective.

Fig. 1. Convergence curves of the four algorithms on the benchmark functions

Table 3. Average rankings achieved by Friedman test

4.3 Dimension Change Analysis

To further illustrate the advantages of the UGABC algorithm, we repeated the experiments in different dimensions and recorded the results. Table 4 gives the results of the four algorithms at Dim = 50. As in Table 2, the last three rows of Table 4 give the Wilcoxon rank sum test results for the four algorithms, and Table 5 gives the corresponding Friedman test results.

Table 4. Mean error value and standard deviation of algorithms at dim = 50 and comparison results based on Wilcoxon’s rank sum test
Table 5. Average rankings achieved at dim = 50 by Friedman test

From Table 4, we find that on the 50-dimensional problems the UGABC algorithm is significantly better than the ABC algorithm on all benchmark functions. Compared with the UABC algorithm, UGABC obtains comparable results on \( f_{5} \) and \( f_{11} \) and is superior on the remaining benchmark functions. Compared with the GABC algorithm, UGABC is slightly worse on \( f_{5} \), both algorithms converge to 0 on \( f_{6} \), and UGABC is better on the other functions.

5 Conclusion

In order to improve the local search ability of the ABC algorithm, this paper proposes an artificial bee colony algorithm based on uniform local search. The algorithm introduces Gbest into the neighborhood search of the employed bee and onlooker bee phases and embeds ULS at the end of each cycle, which significantly improves the local search ability of the algorithm. The results on 13 standard benchmark functions in different dimensions show that the UGABC algorithm is superior to the compared algorithms in solution accuracy, convergence speed and running time.