
1 Introduction

Efficiently addressing different optimization problems requires strong optimization methods. Swarm intelligence optimization algorithms (SIOAs) have gradually attracted attention, for instance particle swarm optimization (PSO) [10, 19, 21, 28], the firefly algorithm (FA) [1, 17, 23], and the artificial bee colony (ABC) algorithm [2, 4, 9, 11, 16]. Compared with pure mathematical techniques, SIOAs feature easier implementation, simpler principles, and better search capabilities. Among these SIOAs, ABC is particularly popular.

Although ABC has achieved outstanding results on several optimization problems, it also has certain shortcomings, including slow convergence and poor exploitation. In particular, the probability selection used by onlooker bees may fail to choose sound solutions in the late search stage. To tackle these issues, different ABC variants have been proposed [3, 6, 15, 26]. In [29], inspired by PSO, the search equation was modified by adding a third term containing the optimal solution. Gao et al. [7] created a novel search strategy that closely resembles the mutation operation of differential evolution (DE). In [25], an evolutionary ratio that adequately reflects the performance of each search strategy was defined, and an adaptive mechanism based on this ratio was proposed. In [13], Karaboga et al. proposed a novel definition of the neighborhood to enhance the search equation, which significantly improves the exploitation capability of onlooker bees. In [4], Cui et al. defined a DFS framework and used elite solutions to modify the search equation. Kiran and Hakli [14] introduced five search strategies into ABC and, based on the success rate, chose the one best suited to the current search stage. Cui et al. [5] selected good solutions based on a ranking method, in which higher-ranked solutions have more chances of being selected. In [22], Wang et al. proposed a novel selection mechanism.

Although there are various ways for ABC to improve its convergence performance, this work proposes a novel ABC variant (named ASESABC) with adaptive search strategies and an elite selection mechanism. Firstly, three search strategies with different characteristics are put into a strategy pool. Then, to select an appropriate search strategy at each iteration, a tolerance-based strategy selection mechanism is designed. Moreover, since an elite selection approach can effectively improve the working efficiency of onlooker bees, it is used instead of the original probability selection. A set of 22 classical problems with \(D=30\) and 100 is employed to evaluate ASESABC's performance, and five modified ABC algorithms are compared with ASESABC.

The rest of this paper is organized as follows. The original ABC is briefly explained in Sect. 2. Section 3 presents the proposed approach ASESABC. Section 4 reports the experimental results. In Sect. 5, conclusions are drawn.

2 Artificial Bee Colony Algorithm

The foraging behavior of bees is a very interesting natural phenomenon. Based on this phenomenon, Karaboga et al. [12] put forward a mathematical model called ABC. ABC employs three different types of bees, each of which completes its own simple task while cooperating with the others. The employed bees explore existing food sources in search of potentially superior ones. The onlooker bees identify some promising nectar sources and scour their neighborhoods based on the search experience of the employed bees. When an employed bee cannot find a better solution and its food source is abandoned, it transforms into a scout bee and looks for a new nectar source.

When ABC is applied to a specific optimization problem [12], each nectar source represents a candidate solution. The initial population contains SN solutions, denoted as \(X_1\), \(X_2\), \(\ldots \), \(X_{SN}\), where \(X_i=(x_{i,1}, x_{i,2}, \ldots , x_{i,D})\) is the i-th individual and D is the problem dimension. The main steps of the standard ABC [12] are as follows.

Initialization\(--\)At this stage, SN initial solutions are randomly produced according to the following model.

$$\begin{aligned} x_{i,j} = x_{min,j}+ rnd \cdot (x_{max,j} - x_{min,j}), \end{aligned}$$
(1)

where \(i=1,2,\ldots , SN\), \(j=1,2,\ldots , D\), \([x_{min,j}, x_{max,j}]\) is the boundary constraint of the j-th dimension, and \(rnd \in [0,1]\) is a uniformly distributed random value.
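For illustration, a minimal Python sketch of this initialization (helper names such as `init_population` are ours, not from the paper):

```python
import numpy as np

def init_population(sn, d, x_min, x_max, rng):
    """Create SN random solutions inside [x_min, x_max] via Eq. (1)."""
    # One independent rnd value per dimension of every solution.
    return x_min + rng.random((sn, d)) * (x_max - x_min)

rng = np.random.default_rng(0)
pop = init_population(50, 30, -100.0, 100.0, rng)  # e.g. SN=50, D=30
```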

Employed bee phase\(--\)Using Eq. (2), a candidate individual \(V_i\) is generated around each \(X_i\).

$$\begin{aligned} v_{i,j}=x_{i,j}+\phi _{i,j}\cdot (x_{i,j}-x_{k,j}), \end{aligned}$$
(2)

where \(X_k\) is chosen at random from the SN solutions (\(k \ne i\)), \(j \in \{1, 2, \ldots, D \}\) is a random integer, and \(\phi _{i,j}\) is a random value in \([-1,1]\). Based on greedy selection, if the candidate individual \(V_i\) is fitter than \(X_i\), \(V_i\) replaces \(X_i\).
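A sketch of one employed bee update under these definitions (`f` is assumed to be the objective function of a minimization problem; names are ours):

```python
import numpy as np

def employed_bee_step(pop, i, f, rng):
    """Generate V_i around X_i via Eq. (2), then apply greedy selection."""
    sn, d = pop.shape
    k = rng.choice([t for t in range(sn) if t != i])  # random partner, k != i
    j = rng.integers(d)                               # random dimension index
    phi = rng.uniform(-1.0, 1.0)
    v = pop[i].copy()
    v[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])
    if f(v) < f(pop[i]):     # greedy selection for minimization
        pop[i] = v
        return True          # parent was replaced
    return False             # unsuccessful search
```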

Onlooker bee phase\(--\)The employed bees share their search experience with the onlooker bees. The onlooker bees then select some promising solutions and search their neighborhoods. The selection probability \(p_i\) of the i-th solution is calculated as follows

$$\begin{aligned} p_i=\frac{fit(X_i)}{\sum ^{SN}_{i=1}{fit(X_i)}}, \end{aligned}$$
(3)

where \(fit(X_i)\) represents the fitness value of the i-th solution, which is calculated as follows

$$\begin{aligned} fit(X_i) = \left\{ \begin{array}{ll} \frac{1}{1+f(X_i)}, & \text {if}~f(X_i) \ge 0 \\ 1+|f(X_i)|, & \text {if}~f(X_i)<0 \end{array} \right. , \end{aligned}$$
(4)

where \(f(X_i)\) is the objective value of the i-th individual. For each onlooker bee, the roulette wheel method is used to choose a good solution. Then, the onlooker bee also applies Eq. (2) to the chosen solution, and greedy selection is performed to update it.
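A brief sketch of the fitness transformation (Eq. (4)) and roulette wheel selection (Eq. (3)); the helper names are ours:

```python
import numpy as np

def fitness(fx):
    """Eq. (4): map objective values f(X_i) to positive fitness values."""
    fx = np.asarray(fx, dtype=float)
    return np.where(fx >= 0, 1.0 / (1.0 + fx), 1.0 + np.abs(fx))

def roulette_select(fx, rng):
    """Eq. (3): choose index i with probability fit(X_i) / sum of all fits."""
    fit = fitness(fx)
    return rng.choice(len(fit), p=fit / fit.sum())
```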

Scout bee phase\(--\)In the previous two phases, a solution may be updated by the greedy selection. When the search falls into a local minimum, it becomes difficult to update solutions through the neighborhood search. In the scout bee phase, every solution \(X_i\) has a counter \(trial_i\). If \(X_i\) is updated, the corresponding \(trial_i\) is reset to 0; otherwise \(trial_i\) is increased by 1. When the maximum \(trial_i\) value exceeds the threshold limit, the corresponding \(X_i\) may be trapped in a local minimum. Then, \(X_i\) is abandoned, and its employed bee transforms into a scout bee, which uses Eq. (1) to create a new solution.
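A minimal sketch of this bookkeeping; following the text, only the solution with the maximum trial counter is checked against the limit:

```python
import numpy as np

def scout_bee_step(pop, trial, limit, x_min, x_max, rng):
    """Replace the most stagnant solution once its trial counter exceeds limit."""
    i = int(np.argmax(trial))                 # solution with the largest counter
    if trial[i] > limit:
        d = pop.shape[1]
        pop[i] = x_min + rng.random(d) * (x_max - x_min)   # Eq. (1)
        trial[i] = 0
```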

3 Proposed Approach

3.1 Tolerance-Based Strategy Selection Method

In numerous research articles [8, 27], researchers have used multiple strategies to balance ABC's exploration and exploitation, which has proven to be an effective tactic. In [24], a multi-strategy ensemble of ABC (MEABC) was suggested, in which three search strategies make up the strategy pool. Then, according to the quality of the offspring, a straightforward random technique is used to alter the search strategy: if the current strategy fails to discover a superior solution, it is swapped for a different strategy chosen at random from the pool. Clearly, the search strategy may be modified often, which results in oscillation of the search.

This paper follows MEABC and utilizes the same strategy pool (SP), which includes the search strategies of ABC, GABC, and an improved ABC/best/1. The SP is defined as below.

$$\begin{aligned} SP_1{:}~v_{i,j}=x_{i,j}+\phi _{i,j}\cdot (x_{i,j}-x_{k,j}), \end{aligned}$$
(5)
$$\begin{aligned} SP_2{:}~v_{i,j}=x_{i,j}+\phi _{i,j}\cdot (x_{i,j}-x_{k,j})+\psi _{i,j}\cdot (gbest_j-x_{i,j}), \end{aligned}$$
(6)
$$\begin{aligned} SP_3{:}~v_{i,j}=gbest_j+\phi _{i,j}\cdot (gbest_j-x_{k,j}), \end{aligned}$$
(7)

where \(s=1,2,3\), \(SP_s\) is the s-th strategy in the SP, \(k\in \{1,2,\ldots ,SN\}\) is a random index (\(k \ne i\)), \(gbest_j\) is the j-th dimension of the global best individual, \(j\in \{1,2,\ldots ,D\}\) is a random index, \(\phi _{i,j}\) is a random value in \([-1,1]\), and \(\psi _{i,j}\) is a random value in [0, C].
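A sketch of the three pool strategies as interchangeable update rules (names are ours; the value of C is not given in this excerpt, and C = 1.5, as commonly used with GABC, is an assumption):

```python
import numpy as np

C = 1.5  # upper bound of psi; assumed here, following common GABC practice

def sp_update(s, pop, i, gbest, rng):
    """Apply strategy SP_s (Eqs. (5)-(7)) to X_i and return offspring V_i."""
    sn, d = pop.shape
    k = rng.choice([t for t in range(sn) if t != i])  # random partner, k != i
    j = rng.integers(d)                               # random dimension index
    phi = rng.uniform(-1.0, 1.0)
    v = pop[i].copy()
    if s == 1:    # SP_1: original ABC search
        v[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])
    elif s == 2:  # SP_2: GABC search guided by gbest
        psi = rng.uniform(0.0, C)
        v[j] = (pop[i, j] + phi * (pop[i, j] - pop[k, j])
                + psi * (gbest[j] - pop[i, j]))
    else:         # SP_3: improved ABC/best/1
        v[j] = gbest[j] + phi * (gbest[j] - pop[k, j])
    return v
```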

A search strategy that suits the current search stage can effectively improve performance. Therefore, it is crucial to choose a suitable search strategy. To this end, a tolerance-based adaptive method is proposed to choose search strategies during the run. At the beginning of every generation, a search strategy is selected from the SP. Then, the entire population uses the selected strategy to generate offspring. When the search situation changes, a new search strategy may be selected to replace the current one.

Algorithm 1. Tolerance-based strategy selection method.

Each search strategy \(SP_s\) is assigned a counter \(T_s\) to record the number of unsuccessful searches (\(s=1,2,3\)). Suppose that the current search strategy is \(SP_s\). If a parent solution \(X_i\) cannot be updated by its offspring \(V_i\), the current search is unsuccessful and the corresponding counter \(T_s\) is increased by 1. When every bee has completed its work, the counter \(T_s\) is compared with a preset threshold M. If \(T_s\) is larger than M, the current strategy \(SP_s\) is replaced by a new one, \(SP_h\), chosen at random from the strategy pool. Based on the above analysis, the tolerance-based strategy selection method is described in Algorithm 1.
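A minimal sketch of this adjustment rule; we assume the replacement strategy differs from the current one and that its counter is reset, since Algorithm 1 itself is not reproduced here:

```python
import numpy as np

def adjust_strategy(s, T, M, rng):
    """Swap SP_s for a random SP_h once its failure counter T_s exceeds M."""
    if T[s] > M:
        h = int(rng.choice([t for t in (1, 2, 3) if t != s]))
        T[s] = 0          # reset the abandoned strategy's counter (assumption)
        return h
    return s
```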

3.2 Elite Selection for the Onlooker Bee Phase

In the onlooker bee phase, individuals mainly search for good solutions near good nectar sources. A selection probability \(p_i\) computed by Eq. (3) is assigned to every individual, and \(fit(X_i)\) determines the probability \(p_i\). Solutions with better fitness values have a higher probability of being selected for further search. However, better solutions may not actually be chosen based on fitness values. As suggested in [24], the fitness definition of Eq. (4) is not suitable in the later search stage. When the function value of a solution is sufficiently small, its fitness value \(fit(X_i)\) tends to 1. Then, all such solutions may have nearly the same fitness values. As a result, the probability \(p_i\) cannot be used to choose better solutions for the onlooker bees.

In this section, an elite selection strategy is designed to choose some excellent solutions for the onlooker bees. Firstly, the population is sorted according to the fitness values of the solutions. A temporary set Q is created from the top m solutions, where \(m = \rho \cdot SN\) and \(\rho \in (0,1)\) is a control parameter. For each solution \(X_i \in Q\), the Euclidean distance \(dist_i\) between \(X_i\) and gbest is calculated. Then, the mean Euclidean distance of Q is computed as below.

$$\begin{aligned} mdist=\frac{\sum ^m_{i=1}{dist_i}}{m-1} \end{aligned}$$
(8)

First, gbest is added to the elite set eSet. Then, for each solution \(X_i \in Q\), the Euclidean distance dist(i, j) between \(X_i\) and each solution \(X_j \in eSet\) is calculated. If dist(i, j) is less than mdist for all \(X_j \in eSet\), the solution \(X_i\) is added to eSet. The detailed steps are listed in Algorithm 2.

Algorithm 2. Elite selection strategy.
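A sketch of this elite-set construction (Eq. (8) plus the distance filter), assuming minimization; it returns indices into the population for convenience, and the names are ours:

```python
import numpy as np

def build_elite_set(pop, fx, rho):
    """Return indices of the elite set eSet built as in Sect. 3.2."""
    sn = len(pop)
    m = max(2, int(rho * sn))              # size of the temporary set Q
    top = np.argsort(fx)[:m]               # top m solutions (ascending objective)
    gbest = pop[top[0]]
    dist = np.linalg.norm(pop[top] - gbest, axis=1)   # dist_i to gbest
    mdist = dist.sum() / (m - 1)                      # Eq. (8)
    eset = [top[0]]                        # gbest joins eSet first
    for idx in top[1:]:
        # X_i joins eSet if it is closer than mdist to every current member
        if all(np.linalg.norm(pop[idx] - pop[e]) < mdist for e in eset):
            eset.append(idx)
    return eset
```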

3.3 Framework of Proposed Approach

This study proposes a modified ABC (called ASESABC) that includes adaptive search strategies and an elite selection mechanism. Firstly, a strategy pool containing three search rules is constructed, and a tolerance-based strategy selection method is designed to choose suitable search strategies. Then, an elite selection tactic is utilized to help the onlooker bees select good solutions.

In Algorithm 3, the ASESABC framework is described, where FEs denotes the number of function evaluations, limit is a predetermined value, MaxFEs is the maximum number of FEs, and M is a threshold value. The main operations of ASESABC consist of six parts: initialization (Lines 1–3), the employed bee phase (Lines 5–14), the onlooker bee phase (Lines 15–25), the scout bee phase (Lines 26–30), strategy adjustment (Line 31), and elite set updating (Line 32).

Algorithm 3. Framework of ASESABC.
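For orientation, a condensed sketch of the whole loop, reusing the helpers sketched in the previous sections (init_population, sp_update, build_elite_set, adjust_strategy). This is our reading of Algorithm 3, not the authors' exact pseudocode; in particular, we assume each onlooker bee simply searches around a random member of eSet:

```python
import numpy as np

def asesabc(f, sn, d, x_min, x_max, max_fes, limit, M, rho, seed=0):
    """Condensed ASESABC loop (a sketch; details may differ from Algorithm 3)."""
    rng = np.random.default_rng(seed)
    pop = init_population(sn, d, x_min, x_max, rng)
    fx = np.array([f(x) for x in pop])
    fes, trial, s = sn, np.zeros(sn, dtype=int), 1
    T = {1: 0, 2: 0, 3: 0}                            # failure counters T_s
    while fes < max_fes:
        gbest = pop[int(np.argmin(fx))].copy()
        eset = build_elite_set(pop, fx, rho)          # elite indices, Sect. 3.2
        for i in range(sn):                           # employed bee phase
            v = sp_update(s, pop, i, gbest, rng)
            fv = f(v); fes += 1
            if fv < fx[i]: pop[i], fx[i], trial[i] = v, fv, 0
            else: trial[i] += 1; T[s] += 1
        for _ in range(sn):                           # onlooker bee phase
            i = int(eset[rng.integers(len(eset))])    # elite pick replaces Eq. (3)
            v = sp_update(s, pop, i, gbest, rng)
            fv = f(v); fes += 1
            if fv < fx[i]: pop[i], fx[i], trial[i] = v, fv, 0
            else: trial[i] += 1; T[s] += 1
        b = int(np.argmax(trial))                     # scout bee phase
        if trial[b] > limit:
            pop[b] = x_min + rng.random(d) * (x_max - x_min)
            fx[b] = f(pop[b]); fes += 1; trial[b] = 0
        s = adjust_strategy(s, T, M, rng)             # strategy adjustment
    b = int(np.argmin(fx))
    return pop[b], float(fx[b])

# Example call with the paper's settings for D=30 (rho is illustrative):
# asesabc(lambda x: float(np.sum(x**2)), sn=50, d=30, x_min=-100.0,
#         x_max=100.0, max_fes=150000, limit=1500, M=70, rho=0.2)
```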
Table 1. Results of ASESABC under different M values when \(D=30\).
Table 2. Mean ranking values of ASESABC under different M values.

4 Experimental Study

4.1 Benchmark Problems and Parameter Settings

This section presents the experimental results. In the following experiments, 22 benchmark problems are used to validate the capability of ASESABC [20]. The dimension D is set to 30 and 100. ABC and four ABC variants are taken as benchmark algorithms and compared with ASESABC. The compared algorithms are listed below.

  • ABC [12].

  • Quick ABC (qABC) [13].

  • ABC with variable search strategy (ABCVSS) [14].

  • Multi-strategy ensemble ABC (MEABC) [24].

  • Adaptive ABC with two search strategies (TSaABC) [18].

  • Proposed approach ASESABC.

For the above six ABCs, the common parameters swarm size (SN), maximum number of function evaluations (MaxFEs), and limit are set to 50, \(5000 \cdot D\), and \(SN \cdot D\), respectively. Throughout the experiments, each problem is run 30 times for every ABC, and the average results are reported.

Table 3. Results of ASESABC compared with the benchmark algorithms when \(D=30\).

4.2 Study on the Parameter M

In Sect. 3.1, a new parameter M is defined as a threshold value to control the tolerance for changing strategies. For a small M, the strategy may be changed frequently, which tends to cause oscillation in the search. For a large M, an inappropriate strategy may be used for a long time, which is not beneficial for balancing exploitation and exploration. Therefore, the value of M may influence the performance of ASESABC. To find the best M, different values are compared. In this section, M is set to 40, 50, 60, 70, and 80, respectively.

The results of ASESABC under various M values are shown in Table 1. As seen, ASESABC with \(M=70\) is not worse than the other cases except on problem \(f_{18}\), where \(M=60\) achieves the best result. To further determine the M value, the Friedman test is applied. The ranking values of ASESABC under various M values are shown in Table 2. According to the results, \(M=70\) obtains the best ranking value. This implies that \(M=70\) is the optimal setting.

Table 4. Results of ASESABC compared with the benchmark algorithms when \(D=100\).
Table 5. Mean ranking values of ASESABC and the benchmark algorithms.

4.3 Comparison of ASESABC with Other ABC Algorithms

The proposed ASESABC is compared with five other ABCs for \(D=30\) and 100. Table 3 lists the comparison results for \(D=30\), where the best outcomes among the six algorithms are displayed in bold. As the results show, ASESABC is not worse than the other ABCs on all test problems except \(f_4\) and \(f_6\). On \(f_4\), MEABC obtains the best solution, and qABC outperforms the other ABCs on \(f_6\). Compared with ABC, ABCVSS, and TSaABC, ASESABC achieves similar performance on 2, 8, and 12 problems, respectively. On all 22 test functions, ASESABC's results are better than or equal to those of ABC, ABCVSS, and TSaABC. Figure 1 shows the convergence curves of ASESABC and the benchmark algorithms on four test functions \(f_1\), \(f_2\), \(f_5\), and \(f_9\). Compared with the other ABCs, ASESABC shows the fastest convergence. On \(f_9\), TSaABC converges faster than ASESABC in the intermediate search stage, but ASESABC overtakes it in the later stage.

Fig. 1. The convergence curves of ASESABC and the benchmark algorithms on four test functions (\(f_1\), \(f_2\), \(f_5\), and \(f_{9}\)) when \(D=30\).

Table 6. p-values of the Wilcoxon tests between ASESABC and the benchmark algorithms.

Table 4 lists the comparison results between ASESABC and the five other ABCs for \(D=100\). On most test problems with dimension 100, ASESABC continues to perform better than the other ABC algorithms. Compared with ABC, qABC, and MEABC, ASESABC is not worse on all 22 problems. ABCVSS is superior to ASESABC on two problems, \(f_6\) and \(f_{15}\). On problem \(f_4\), TSaABC surpasses ASESABC.

To compare the six ABC algorithms, Friedman and Wilcoxon tests are conducted. Table 5 provides the mean rankings of the six compared algorithms for \(D=30\) and 100. According to the results, ASESABC achieves the best ranking value, which shows that, among the six ABC algorithms, ASESABC is the most effective one. The p-values of the Wilcoxon tests between ASESABC and the ABC variants are given in Table 6, where p-values below the 0.05 significance level are shown in bold. For \(D=30\), ASESABC performs significantly better than ABC, qABC, ABCVSS, MEABC, and TSaABC. For \(D=100\), ASESABC significantly outperforms ABC and qABC.

5 Conclusion

A novel ABC with adaptive search strategies and an elite selection mechanism (called ASESABC) is presented to address the shortcomings of the original ABC. Firstly, a strategy pool containing various search rules is built. Then, a tolerance-based strategy selection method is created to select a suitable search strategy at every iteration. Furthermore, an elite selection mechanism is employed to substitute for the original probability selection. In the experiments, 22 well-known problems with \(D=30\) and 100 are utilized.

The performance of ABC, qABC, ABCVSS, MEABC, and TSaABC is compared with that of ASESABC. The results demonstrate that ASESABC surpasses the other ABC algorithms on most problems. Even when the problem dimension is increased to 100, our approach still obtains the best results among the six ABC variants. The study on the parameter M shows that different M values affect the performance of ASESABC, and the statistical test demonstrates that \(M=70\) is the relatively best choice. In the experiments, ASESABC is only tested on some classical problems. In a further study, the performance of ASESABC on some complex benchmark sets will be investigated.