1 Introduction

Optimization problems arise widely in mathematical programming, computer technology, engineering design, and related fields. Moreover, solving optimization problems by simulating group behaviors has become a popular research topic. Among the available techniques, Ant Colony Optimization (ACO) [1], proposed by Dorigo, and particle swarm optimization (PSO) [2], proposed by Kennedy, are relatively mature in this field.

In recent years, a new random search method in the optimization field has been developed, namely the artificial bee colony (ABC) algorithm, which was proposed by Karaboga and successfully applied to function optimization problems [3]. It simulates a bee colony that gathers honey intelligently and exchanges information about nectar sources according to the different divisions of labor within the colony until the optimal nectar source is found. In references [4, 5], the performance of the ABC algorithm was tested on five common benchmark functions, showing that the ABC algorithm has good optimization performance comparable to the Genetic Algorithm (GA), PSO, and Differential Evolution (DE). However, it has disadvantages such as a low convergence rate and a tendency toward premature convergence. Targeting these problems, the premature-convergence phenomenon in the ABC algorithm was overcome by combining it with the DE algorithm and introducing an elimination rule during iteration [6]. Chaos theory was introduced into the ABC algorithm in references [7, 8], and the characteristics of chaotic motion, such as randomness and ergodicity, were used to improve the global search ability of the algorithm.

In order to improve the convergence rate of the ABC algorithm and overcome its tendency to fall into local optima in a later period, an improved algorithm based on the self-adaptive random optimization strategy (SRABC) was proposed on the basis of references [6, 9, 10] to improve the local searchability of the algorithm by taking advantage of self-adaptive thought and the bidirectional random optimization mechanism. In addition, PSO was introduced at the initial stage of the improved algorithm to increase its convergence rate.

In order to improve the exploitation capability, the chaotic gradient ABC algorithm was proposed in Ref. [11]; its effectiveness was tested on three different benchmark text datasets, namely Reuters-21578, Classic4, and WebKB. The Astute Artificial Bee Colony (AsABC) algorithm proposed in Ref. [12] circumvents three problems: slow convergence, stagnation in local optima, and poor scalability. Moreover, it maintains a better trade-off between the two conflicting aspects of the search, exploration and exploitation.

A decentralized form of the ABC algorithm with dynamic multi-populations based on fuzzy C-means (FCM) clustering, proposed in Ref. [13], improves the convergence rate and strikes a balance between the global search and local tuning abilities. A novel ABC algorithm with dynamic population (ABC-DP), proposed in Refs. [14,15,16], incorporates an extended life-cycle evolving model to balance the exploration and exploitation tradeoff. ABC-DP was then used for solving the optimal power flow (OPF) problem in power systems, with the cost, loss, and emission impacts as the objective functions. Taking Kapur's entropy as the objective function, Refs. [17,18,19,20] put forward the modified quick ABC algorithm (MQABC), which employs a new distance strategy for neighborhood searches.

In order to overcome the poor exploitation of the ABC algorithm, ABCG, inspired by the gravity model, was proposed on the basis of a novel solution search equation combined with multiple solution search equations. Tests on benchmark functions show that ABCG is a competitive algorithm [21].

A novel strategy based on node electrical relevance and the ABC algorithm was proposed to minimize outage losses and make use of renewable energy sources. Tests on the IEEE 69-bus distribution system show that the proposed strategy is more feasible and efficient than other strategies from the literature [9].

On the basis of the ABC algorithm, a new topological shape optimization scheme was proposed and shown to be the most effective among the compared methods [22].

In order to overcome the slow convergence of ABC, CosABC, based on cosine similarity, was proposed. Its effectiveness was demonstrated on a test suite of twenty-four benchmark functions [7].

Recommender systems play an important role in electronic commerce sites, helping to achieve better customer satisfaction and to bring products to customers' attention. A movie recommender system based on the collaborative filtering technique was presented, in which ant colony optimization and artificial bee colony optimization were compared on the basis of CPU time and two standard functions.

The rest of the paper is organized as follows. Section 2 presents the principles of the ABC algorithm, and Sect. 3 presents the essential concepts of the SRABC and the strategy for self-adaptive random optimization. Section 4 discusses the performance of the proposed approach on various benchmark functions. Finally, conclusions are provided in Sect. 5.

2 Principles of the ABC algorithm

The ABC algorithm is a random search algorithm based on the behaviors of a bee colony [3]. In the ABC algorithm, the bees fall into three categories according to the division of labor: honey-gathering, observation, and investigation bees. Generally, the honey-gathering and observation bees each account for half of the colony, and there is exactly one honey-gathering bee per nectar source. During initialization, M initial solutions are first generated randomly; then, the optimal solution is found through the following search process: (1) the honey-gathering bee searches and memorizes the amount of honey in the nectar source, namely, the quality of the solution (fitness); (2) the observation bee selects a nectar source according to the yield information obtained by the honey-gathering bees and updates the memorized position; and (3) when an exhausted nectar source is given up, an investigation bee is produced to search for a new nectar source.

In the algorithm, in order to generate a new candidate position Vi according to the memorized position Xi, the following equation is used for updating:

$$v_{ij} = x_{ij} + \varphi_{ij} (x_{ij} - x_{kj} )$$
(1)

where k is the index of a nectar source different from i, j is a randomly selected dimension index, and φij is a random number in the range [− 1, 1]. According to the amount of honey at the nectar source, the probability that a nectar source is selected by an observation bee is
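The neighbor search of Eq. (1) can be sketched as follows. This is an illustrative minimal implementation, not the authors' code; in particular, clamping the new coordinate to the search range is an assumption, since the text does not specify how out-of-range candidates are handled.

```python
import random

def neighbor_search(X, i, lower, upper):
    """Generate a candidate V_i from X_i by perturbing one random
    dimension toward/away from a randomly chosen partner X_k (Eq. 1)."""
    S, D = len(X), len(X[0])
    k = random.choice([m for m in range(S) if m != i])  # partner source, k != i
    j = random.randrange(D)                             # random dimension index
    phi = random.uniform(-1.0, 1.0)                     # phi_ij in [-1, 1]
    v = list(X[i])
    v[j] = X[i][j] + phi * (X[i][j] - X[k][j])
    v[j] = min(max(v[j], lower), upper)                 # clamp (assumption)
    return v
```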

$$p_{i} = \frac{{fit(\theta_{i} )}}{{\sum\limits_{i = 1}^{S} {fit(\theta_{i} )} }}$$
(2)

where S is the total number of nectar sources, θi is the ith nectar source, fit(θi) is the fitness of nectar source θi, and i ∈ {1, 2, …, S}.
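Eq. (2) together with a roulette-wheel draw can be sketched as below; the roulette-wheel sampling is a common way to realize this fitness-proportional selection, though the paper does not specify the sampling scheme.

```python
import random

def selection_probabilities(fitness):
    """Fitness-proportional probabilities for the observation bees (Eq. 2)."""
    total = sum(fitness)
    return [f / total for f in fitness]

def roulette_select(fitness):
    """Pick one source index i with probability p_i (roulette wheel)."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(selection_probabilities(fitness)):
        acc += p
        if r <= acc:
            return i
    return len(fitness) - 1  # guard against floating-point round-off
```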

Suppose that after “limit” cycles of search and update the fitness of a nectar source has still not improved; the source is then given up, and the honey-gathering bee turns into an investigation bee. “Limit” is an important control parameter in the ABC algorithm governing the generation of investigation bees. The procedure by which the investigation bee finds a new nectar source to replace Xi is expressed as follows:

$$x_{i}^{j} = x_{\hbox{min} }^{j} + rand(0,1)(x_{\hbox{max} }^{j} - x_{\hbox{min} }^{j} )$$
(3)
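The abandonment rule and Eq. (3) can be sketched together; the per-source trial counters are an assumed bookkeeping device for tracking how long each source has gone without improvement.

```python
import random

def scout_phase(X, trials, limit, x_min, x_max):
    """If a source's trial counter exceeds 'limit', abandon it and let the
    investigation (scout) bee draw a fresh uniform-random source (Eq. 3)."""
    D = len(X[0])
    for i in range(len(X)):
        if trials[i] > limit:
            X[i] = [x_min[j] + random.random() * (x_max[j] - x_min[j])
                    for j in range(D)]
            trials[i] = 0  # reset the counter for the new source
```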

3 Improvement in the ABC algorithm

In order to improve the convergence rate of the ABC algorithm and overcome its tendency to fall into local optima in a later period, an improved algorithm based on the self-adaptive random optimization strategy is proposed on the basis of the above-stated ABC algorithm principle. Together with weighted PSO, the convergence rate and local searchability of the algorithm are improved by taking advantage of self-adaptive thought and bidirectional random optimization to improve the optimal search performance of this algorithm.

3.1 PSO algorithm

After the proposal of the basic PSO algorithm by Kennedy and Eberhart in 1995 [2], an improved PSO algorithm was proposed by Shi and Eberhart [10]; in it, an inertia factor w is added to the basic PSO algorithm, and the update equations for standard PSO are expressed as

$$V_{i}^{t + 1} = wV_{i}^{t} + c_{1} r_{1} (P_{i}^{t} - X_{i}^{t} ) + c_{2} r_{2} (P_{g}^{t} - X_{i}^{t} )$$
(4)
$$X_{i}^{t + 1} = X_{i}^{t} + V_{i}^{t + 1}$$
(5)

where Vi is the velocity of particle i, Xi is the position of particle i, c1 and c2 are positive acceleration factors, r1 and r2 are random numbers uniformly distributed in [0, 1], Pi is the best position found by the individual particle, Pg is the best position found by the entire swarm, and t denotes the iteration number. The basic steps of the PSO optimization process [22] are as follows:

Step 1: Initialize the PSO, set the initial positions and velocities of the m particles randomly, and calculate the fitness of each particle.

Step 2: For each particle, compare the fitness of the current position with that of its personal best position Pi, namely Pi,best. If the current position is better, update Pi and Pi,best; otherwise, they remain unchanged.

Step 3: For each particle, compare the fitness of the current position with that of the global best position Pg, namely Pg,best. If the current position is better, update Pg and Pg,best; otherwise, they remain unchanged.

Step 4: Adjust the velocity and position of each particle according to Eqs. (4) and (5).

Step 5: If the termination condition is met, stop; otherwise, return to Step 2.
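One iteration of Eqs. (4)–(5) plus the best-position updates can be sketched as below, for a minimization problem; the parameter values w = 0.7, c1 = c2 = 1.5 are illustrative assumptions, as the text does not fix them.

```python
import random

def pso_step(X, V, P, P_fit, f, w=0.7, c1=1.5, c2=1.5):
    """One iteration of standard PSO for minimization (Eqs. 4-5).
    X, V: current positions and velocities; P, P_fit: personal bests and
    their fitness values, all updated in place."""
    g = min(range(len(P)), key=lambda i: P_fit[i])      # global best index
    for i in range(len(X)):
        for j in range(len(X[i])):
            r1, r2 = random.random(), random.random()
            V[i][j] = (w * V[i][j]
                       + c1 * r1 * (P[i][j] - X[i][j])  # cognitive term
                       + c2 * r2 * (P[g][j] - X[i][j])) # social term
            X[i][j] += V[i][j]
        fx = f(X[i])
        if fx < P_fit[i]:                               # personal-best update
            P[i], P_fit[i] = list(X[i]), fx
```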

3.2 Self-adaptive random optimization strategy

3.2.1 Position-update equation for a self-adaptive bee colony

When adopting Eq. (1) for a position update, a larger φij drives the search away from local minima, whereas a smaller φij favors the convergence of the algorithm [22]. The best overall strategy is therefore to adopt a larger φij at the initial stage of the algorithm to locate promising nectar sources with a strong global search ability and improve the search precision, and a smaller φij in the later period to improve the local search ability and increase the convergence rate. Therefore, φij is made a function of the iteration count that decreases as the number of iterations increases. φij is defined as follows:

$$\varphi_{ij}^{k} = \varphi_{ij}^{k - 1} - \frac{{C(w_{\hbox{max} } - w_{\hbox{min} } )}}{{C_{\hbox{max} } }}$$
(6)

where wmax and wmin are the initial and final weights, respectively; Cmax is the maximum number of iterations; and C is the current number of iterations. As a result, Eq. (1) is redefined as

$$v_{ij} = x_{ij} + \varphi_{ij}^{k} (x_{ij} - x_{kj} )$$
(7)

Thus, to some degree, Eq. (7) guides the search trend for the position of a nectar source, overcoming disadvantages such as strong randomness and a low convergence rate.
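The decay of Eq. (6) and the update of Eq. (7) can be sketched as follows; the weight values w_max = 0.9 and w_min = 0.4 are assumptions for illustration, since the text does not specify them.

```python
def adaptive_phi(phi_prev, C, C_max, w_max=0.9, w_min=0.4):
    """Eq. (6): shrink the perturbation factor as iterations progress.
    w_max/w_min defaults are assumed, not given in the text."""
    return phi_prev - C * (w_max - w_min) / C_max

def adaptive_neighbor(x_i, x_k, j, phi_k):
    """Eq. (7): perturb dimension j of X_i with the decayed factor."""
    v = list(x_i)
    v[j] = x_i[j] + phi_k * (x_i[j] - x_k[j])
    return v
```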

3.2.2 Bidirectional random optimization mechanism

When calculating the fitness of a nectar source with the ABC algorithm, the observation bee selects a nectar source after comparing the ones around θi. The position near the nectar source is calculated as follows:

$$\theta_{i} (C + 1) = \theta_{i} (C) + \varphi_{i} (C)$$
(8)

where φi(C) is a step length produced randomly near θi. After calculating the fitness, if fit(θi(C + 1)) > fit(θi(C)), the observation bee chooses θi(C + 1); otherwise, θi remains unchanged. If the fitness of a nectar source is not improved after a finite number of cycles, it is given up, and the honey-gathering bee turns into an investigation bee according to Eq. (3). This method has a clear disadvantage: in each cycle, the nectar source is searched in a single direction only, so there is a tendency to fall into local optima. In references [23, 24], a bidirectional random optimization mechanism was proposed in research on the search hit rate and success rate in a dynamic network environment [23], which effectively improved the search performance of the network. Inspired by this idea, an improved mechanism is introduced here to improve the search direction for a nectar source; if

$$fit(\theta_{\text{i}} + l) < fit(\theta_{\text{i}} )$$
(9)

then

$$\theta_{\text{i}} = \theta_{\text{i}} + d$$
(10)

If

$$fit(\theta_{\text{i}} - l) < fit(\theta_{\text{i}} )$$
(11)

then

$$\theta_{\text{i}} = \theta_{\text{i}} - d$$
(12)

otherwise, θi remains unchanged.
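Eqs. (9)–(12) can be sketched on a scalar coordinate as below. Note one assumption: the text writes the trial step as l and the applied step as d; the sketch treats them as the same step length.

```python
def bidirectional_update(theta, fit, d):
    """Eqs. (9)-(12): probe a step of length d in both directions and
    keep whichever direction improves fitness; otherwise stay put.
    Assumes the trial step l and applied step d are the same length."""
    if fit(theta + d) < fit(theta):
        return theta + d
    if fit(theta - d) < fit(theta):
        return theta - d
    return theta
```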

3.2.3 Algorithm initialization realized by the particle swarm optimization algorithm

The convergence rate of the bee colony algorithm is low, whereas that of the PSO algorithm is relatively high, so PSO is introduced at the initial stage of the improved algorithm. That is, the global optimal solution is first obtained by PSO iterations; then, the positions of the nectar sources are randomly generated near this optimal solution, and the subsequent optimization of nectar-source positions proceeds within the ABC. The improved initial position of a nectar source in the ABC algorithm is as follows:

$$X_{i} = P_{g,best}^{M} + \varphi_{i}^{M} \cdot P_{g,best}$$
(13)

where \(P_{g,best}^{M}\) is an M-dimensional vector whose elements are all equal to Pg,best, and \(\varphi_{i}^{M}\) is an M-dimensional vector of random numbers in [− 1, 1]. According to the above-mentioned strategy, the specific procedure of the improved algorithm is as follows:
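The seeding of Eq. (13) can be sketched as below; drawing the random factor independently per dimension is one interpretation of the M-dimensional random vector in the text.

```python
import random

def init_sources_from_pso(g_best, S):
    """Eq. (13): seed S nectar sources around the PSO global best,
    X_i = P_g,best + phi_i * P_g,best, with each component of phi_i
    drawn uniformly from [-1, 1] (per-dimension draw is an assumption)."""
    M = len(g_best)
    return [[g_best[j] + random.uniform(-1.0, 1.0) * g_best[j]
             for j in range(M)]
            for _ in range(S)]
```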

  • Step 1 The related initial parameters for the ABC and PSO algorithm are set. The initial speeds and positions of M particles are randomly generated according to Eqs. (4) and (5).

  • Step 2 The optimal solution Pg,best within the number of cycles c is determined by calculating the fitness value of each particle for comparison.

  • Step 3 The honey-gathering bee searches for a new nectar source according to Eqs. (6) and (7) and calculates its fitness. If it is better than the original position, then the original position is replaced with the new one.

  • Step 4 The observation bee selects a nectar position according to the amount of honey in the nectar source with the probability in Eq. (2), generates a new position according to the bidirectional random optimization mechanism, and evaluates this position.

  • Step 5 If the nectar source is given up, then the honey-gathering bee at this nectar source would turn into an investigation bee according to Eq. (3).

  • Step 6 The current optimal position and fitness value are recorded.

4 Analysis of the simulation experiment

In order to verify the effectiveness of the proposed algorithm and analyze its performance, seven benchmark functions are selected for comparison and testing against the traditional ABC algorithm and the hybrid artificial bee colony (HABC) algorithm proposed in reference [25].

  1. Rastrigin function

     \(f(x) = \sum\limits_{i = 1}^{D} {\left( {x_{i}^{2} - 10\cos (2\pi x_{i} ) + 10} \right)}\) is a multimodal function whose local optima are distributed evenly; the search scope is [− 20, 20], and the global optimum is 0.

  2. Griewank function

     \(f(x) = \frac{1}{4000}\left( {\sum\limits_{i = 1}^{D} {x_{i}^{2} } } \right) - \left( {\prod\limits_{i = 1}^{D} {\cos (\frac{{x_{i} }}{\sqrt i })} } \right) + 1\) is a multimodal function whose local optima are distributed evenly and become more numerous as the number of dimensions increases. The search scope is [− 600, 600], and the global optimum is 0.

  3. Sphere function

     \(f(x) = \sum\limits_{i = 1}^{D} {x_{i}^{2} }\) is a continuous, convex, unimodal function; the search scope is [− 100, 100], and the minimum is 0.

  4. Rosenbrock function

     \(f(x) = \sum\limits_{i = 1}^{D - 1} {[100(x_{i + 1} - x_{i}^{2} )^{2} + (x_{i} - 1)^{2} ]}\); the search scope is [− 30, 30], and the global optimum is 0.

  5. Ackley function

     \(f(x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{D}\sum\limits_{i = 1}^{D} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{D}\sum\limits_{i = 1}^{D} {\cos (2\pi x_{i} )} } \right) + 20 + e\); the search scope is [− 30, 30], and the minimum is 0.

  6. Pathological function

     \(f(x) = \sum\limits_{i = 1}^{D - 1} {\left( {0.5 + \frac{{\sin^{2} (\sqrt {100x_{i}^{2} + x_{i + 1}^{2} } ) - 0.5}}{{(1 + 0.001(x_{i}^{2} + x_{i + 1}^{2} + 2x_{i} x_{i + 1} )^{2} )}}} \right)}\); the search scope is [− 100, 100], and the minimum is 0.

  7. Alpine function

     \(f(x) = \sum\limits_{i = 1}^{D} {\left| {x_{i} \sin (x_{i} ) + 0.1x_{i} } \right|}\); the search scope is [− 10, 10], and the minimum is 0.
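The seven benchmark functions translate directly into code; the definitions below are straightforward transcriptions (minimization, with global optimum 0 in each case). The Ackley definition follows the standard form, for which the stated minimum of 0 holds.

```python
import math

def rastrigin(x):
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def griewank(x):
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x):
        p *= math.cos(xi / math.sqrt(i + 1))  # 1-based index inside sqrt
    return s - p + 1

def sphere(x):
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    d = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / d))
            - math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / d)
            + 20 + math.e)

def pathological(x):
    return sum(0.5 + (math.sin(math.sqrt(100 * x[i] ** 2 + x[i + 1] ** 2)) ** 2 - 0.5)
               / (1 + 0.001 * (x[i] ** 2 + x[i + 1] ** 2 + 2 * x[i] * x[i + 1]) ** 2)
               for i in range(len(x) - 1))

def alpine(x):
    return sum(abs(xi * math.sin(xi) + 0.1 * xi) for xi in x)
```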

The parameters of the algorithms are set as follows. For the SRABC algorithm, the colony size S = 60, limit = 60, the number of PSO particles is 60, and the number of PSO cycles is c = 100. For the standard ABC algorithm, S = 60, limit = 60, and the maximum number of iterations is 2500; each algorithm is run 30 times independently. A comparison of the maximum, minimum, average, and variance obtained over the 30 independent runs for each function in different dimensions is summarized in Table 1.

Table 1 Test results for different functions

From Table 1, although the optimization results for the unimodal Sphere function in different dimensions do not improve greatly with the SRABC algorithm, they are still better than those obtained by the standard ABC and hybrid artificial bee colony (HABC) algorithms. The multimodal Griewank and Rastrigin functions pose complicated nonlinear global optimization problems. From the table, the precision of the simulation results for the seven functions in different dimensions with the SRABC algorithm is better than that of the ABC and HABC algorithms. In particular, the 60-dimensional Rastrigin function converges to 0 rapidly. The global optimum of the Rosenbrock function lies in a long, narrow, flat parabolic valley, and it is difficult to converge to this solution. From Table 1, although the average values for the Ackley, Alpine, and Pathological functions are large, the average values of the proposed algorithm are clearly smaller than those of the other two algorithms, and the minimum is close to the optimal solution. From the test of the Ackley function, the performance of the SRABC algorithm is not much better than that of the ABC and HABC algorithms when the number of dimensions is low; however, as the number of dimensions increases, the precision of the optimal solution of the SRABC algorithm becomes clearly better than that of the other two algorithms. From the test of the Alpine function in Table 1, the HABC algorithm performs better than the ABC algorithm, and the proposed SRABC algorithm performs much better than both. From the test of the Pathological function in Table 1, the search precision of the proposed algorithm is more stable than that of the other two when the number of dimensions is 30, and there is little difference between the three algorithms when the number of dimensions is 60.

Further, from Table 1, the improved algorithm maintains the features of the original algorithm while improving the calculation precision and stability compared to the traditional ABC and HABC algorithms.

In Figs. 1, 2, 3, 4, 5, 6 and 7, the curves showing the optimal values obtained by the ABC, HABC, and SRABC algorithms versus the number of iterations for each function with 30 and 60 dimensions are shown. In order to show the results clearly, the number of iterations is plotted along the x axis, and the optimal values are plotted along the y axis on a logarithmic scale.

Fig. 1 Comparison of different dimensions for the Sphere function

Fig. 2 Comparison of different dimensions for the Rastrigin function

Fig. 3 Comparison of different dimensions for the Griewank function

Fig. 4 Comparison of different dimensions for the Rosenbrock function

Fig. 5 Comparison of different dimensions for the Ackley function

Fig. 6 Comparison of different dimensions for the Alpine function

Fig. 7 Comparison of different dimensions for the Pathological function

In Figs. 1, 2, 3, 4, 5, 6 and 7, for the same number of iterations on the seven benchmark functions in different dimensions, improvement of the optimal value nearly stops in the later period for the ABC algorithm, and the HABC algorithm offers a certain improvement over it. Further, the performance of the SRABC algorithm is greatly improved by the proposed optimization process. From Fig. 1, the SRABC algorithm converges swiftly and obtains the optimal solution for the Sphere function. Since the optimization result of the initial PSO stage is taken as the initial value of the SRABC algorithm, the search space is greatly reduced, and the convergence rate is further improved. From Figs. 2 and 3, for a given number of iterations, the SRABC algorithm reaches the optimal values for the Griewank and Rastrigin functions at an early stage; its curves decrease nearly linearly, and it converges faster than the other two algorithms. From Figs. 4 and 5, the SRABC algorithm achieves greater precision than the ABC and HABC algorithms in 30 dimensions even when the number of iterations is low. Moreover, as the number of iterations increases, the SRABC algorithm converges to the optimal value gradually and steadily. The proposed algorithm exhibits more stable convergence and better local and global search ability when the number of dimensions is 60. From Figs. 6 and 7, compared with the other two algorithms, the proposed algorithm ensures a good initial search precision and converges to the optimal value as the number of iterations increases.

The self-adaptive random optimization strategy for the bee colony accelerates convergence to the optimal value throughout the optimization process and guides individuals toward the global optimum. There are substantial improvements in precision and convergence rate when comparing the SRABC algorithm with the ABC algorithm, and premature convergence is also prevented.

5 Conclusion

As a novel swarm intelligence optimization algorithm, the ABC algorithm is characterized by its easy implementation, simple operation, and few control parameters [21, 26]. Targeting the weak local search ability, low search precision, and low convergence rate of the ABC algorithm, a PSO algorithm was introduced at the initial stage to initialize the bee colony. On the basis of self-adaptive thought and the bidirectional random optimization mechanism, an improved algorithm based on the self-adaptive random optimization strategy was proposed to effectively overcome disadvantages such as strong randomness and a single search direction during the optimization process, which avoided the tendency to fall into local optima to a certain degree. Through a comparison with the traditional ABC algorithm and the HABC algorithm proposed in Ref. [6], the proposed algorithm was found to be effective on seven different benchmark functions, improving the optimization ability of the algorithm while raising its convergence rate.