1 Introduction

Optimization can be defined as the selection of a vector from a given domain that yields an optimal value of an objective function. In the last two decades, several metaheuristic techniques have been developed to solve difficult optimization problems, such as navigation [1] and big data optimization [2]. The potential of metaheuristic optimization approaches for addressing various maximization/minimization problems, especially NP-hard problems, is well documented. This is evident from the sizeable number of recently proposed stochastic optimization methods. Some of the major metaheuristic methods that have been applied to challenging optimization problems are: differential evolution (DE) [3], evolutionary strategy (ES) [4], genetic algorithms (GAs) [5], artificial bee colony (ABC) [6], elephant herding optimization (EHO) [7], the moth search (MS) algorithm [8], harmony search (HS) [9], monarch butterfly optimization (MBO) [10], and particle swarm optimization (PSO) [11]. In fact, individuals in swarm intelligence algorithms play a role similar to that of a meme in memetic computing [12, 13].

Krill herd (KH) is a robust swarm intelligence algorithm originally proposed by Gandomi and Alavi [14]. KH has proved superior to many other metaheuristic approaches (e.g., GAs, ES, BBO, DE, PSO) on many benchmark problems [14]. This clearly indicates that KH is a generic stochastic optimization method with immense scope for further development. On the other hand, the ABC algorithm, motivated by the swarm behaviors of bee colonies, has a quite simple yet effective structure for solving optimization problems [6]. Hence, it has attracted the attention of many researchers.

It is known that metaheuristic methods require complementary exploration and exploitation schemes for solving problems of increasing dimensionality in the search space. Although KH generally explores the search space well and appears fully capable of locating the global optimum, its exploitation ability exhibits relatively poor performance in the later phase of a run [14]. On the other hand, the ABC method has strong exploration ability but poor exploitation. Therefore, neither the KH nor the ABC method, acting independently, exhibits the balance of exploration and exploitation needed to search the space effectively [15]. In order to address this, the present study investigates a hybridization of the ABC and KH methods for solving continuous global numerical optimization problems as well as discrete problems.

The rest of the paper is organized as follows: related work on KH and ABC is reviewed in Sect. 2. An overview of the basic KH and ABC is presented in Sect. 3. This is followed by the detailed hybridization process in Sect. 4. The performance evaluation is carried out in Sect. 5. The manuscript ends with conclusions and directions for future work in Sect. 6.

2 Review of related literature

Since this paper builds on the krill herd (KH) algorithm and the artificial bee colony (ABC) algorithm, some of the most representative works on KH and ABC are reviewed below.

2.1 KH algorithm

Wang et al. [16] introduced chaos theory into the KH optimization process. The range of each chaotic map is normalized to [0, 1]. Twelve chaotic maps were used to tune the inertia weights \((\omega _{n}\), \(\omega _{f})\) of KH on fourteen benchmarks. The best chaotic map (the Singer map) was selected to generate the chaotic KH (CKH) algorithm [16], which was further compared with other state-of-the-art metaheuristic algorithms.

Wang et al. [17] proposed a hybrid metaheuristic algorithm, named CSKH, that combines the advantages of cuckoo search (CS) and KH. In CSKH, two operators inspired by the CS algorithm, krill updating (KU) and krill abandoning (KA), were introduced into the basic KH. The KU operator encourages intensive exploitation and makes the krill individuals search the space carefully in the later phase of the search, while the KA operator further enhances the exploration of CSKH by replacing a fraction of the worst krill at the end of each generation.

Mukherjee et al. [18] used various chaotic maps to generate a chaotic KH (CKH) with the aim of improving the performance of the basic KH method. It was observed that the Logistic map-based CKH offers better results than the other chaotic maps.

Wang et al. [19] added an updated reproduction scheme, the stud selection and crossover (SSC) operator, to the basic KH algorithm. Accordingly, a new version of the KH algorithm termed Stud Krill Herd (SKH) was proposed. The added SSC operator is inspired by the Stud genetic algorithm: it selects the best krill (the Stud) to perform the crossover operator. This approach appears well capable of solving various benchmark functions.

Owing to space limitations, we review only some of the most representative KH papers. More related work on the KH algorithm can be found in Bolaji et al. [20].

2.2 ABC algorithm

Since the basic ABC was proposed, it has developed rapidly. The related literature on the ABC algorithm is reviewed below.

Bolaji et al. [21] proposed a novel hybrid ABC algorithm based on an integrated technique for tackling the university course timetabling problem. First, initial feasible solutions are generated using a combination of the saturation degree heuristic and a backtracking algorithm. Second, a hill climbing optimizer is embedded within the employed bee operator to enhance the local exploitation ability of the ABC while tackling the problem. Empirical results on standard problem instances validate the effectiveness and efficiency of the proposed algorithm for the university course timetabling problem.

Kıran and Gündüz [22] proposed a hybridization of the PSO and ABC approaches, named HPA. The global best solutions obtained by PSO and ABC are recombined, and the solution obtained from this recombination is given to the PSO population as the global best and to the ABC population as a neighbor food source for the onlooker bees. They used twelve basic benchmark functions, the CEC 2005 composite functions, and an energy demand estimation problem to verify the proposed HPA algorithm.

Awadallah et al. [23] proposed a metaheuristic technique called the hybrid artificial bee colony (HABC) for the nurse rostering problem (NRP). In HABC, the employed bee operator is replaced with a hill climbing optimizer (HCO) to strengthen the exploitation capability, and the usage of the HCO is controlled by a hill climbing rate (HCR) parameter. The performance of the proposed HABC was evaluated using the standard dataset published for the first international nurse rostering competition 2010 (INRC 2010).

Bullinaria and AlYahya [24] examined the performance of ABC for optimizing the connection weights of feed-forward neural networks for classification tasks, and presented a more rigorous comparison with the traditional Back-Propagation (BP) training algorithm. The empirical results for benchmark problems demonstrate that using the standard “stopping early” approach with optimized learning parameters leads to improved BP performance over the previous comparative study, and that a simple variation of the ABC approach provides improved ABC performance, too.

Li et al. [25] proposed an improved discrete ABC (DABC) algorithm to solve the hybrid flexible flowshop scheduling problem with dynamic operation skipping features in molten iron systems. First, each solution is represented by a two-vector-based representation, and a dynamic encoding mechanism is developed. Second, a flexible decoding strategy is designed. Next, a right-shift strategy based on the problem characteristics is developed, which clearly improves solution quality. Finally, an enhanced local search is embedded in the proposed algorithm to further improve its exploitation ability.

Another kind of bee algorithm, called the bee colony optimization (BCO) algorithm, has also been proposed. Krüger et al. [26] provided theoretical verification of the BCO algorithm by proving some of its convergence properties, thereby narrowing the gap between successful practice and missing theory.

Owing to space limitations, we review only some of the most representative ABC papers. More related work on the ABC algorithm can be found in Hussein et al. [27].

3 Background

In this section, the background of our work, namely the ABC and KH methods, is provided.

3.1 The ABC method

ABC is one of the seminal metaheuristic methods among various intelligent optimization techniques. It models the forage selection that emerges from the swarm intelligence of a honey bee colony. Based on this model, the three main components can be defined as follows [28]:

  • Food resource

In the simplest form, the value of a food source is described by only one quantity. In Fig. 1, A and B represent two food sources, and C and D represent two non-food sources. Furthermore, S, O, R, UF, and EF denote scouts, onlookers, recruits, unemployed foragers, and employed foragers, respectively.

  • Unemployed foragers

There are two sorts of unemployed foragers. One is the scout (S): a bee that begins searching autonomously, without any a-priori knowledge. The other is the onlooker (O): onlookers stay in the nest and seek a food source with the help of the information shared by the employed foragers.

  • Employed foragers

Every employed forager is associated with a food source that is currently being exploited, and shares this information with a certain probability. Three feasible choices, depending on the quantity of nectar, are available to the foraging bee. The first is to become an unemployed forager (UF): when the nectar falls below a fixed threshold, the foraging bee abandons the source and turns into an unemployed bee. The second is employed forager 1 (EF1): otherwise, it may dance and recruit nest mates. The third is employed forager 2 (EF2): it may continue foraging around the food source without recruiting.

Fig. 1 The behavior of honey bee foraging for nectar

Fig. 2 Flowchart of the KHABC algorithm

The artificial bee colony includes three types of bees: (1) employed bees, (2) onlookers, and (3) scouts. In the artificial colony, each food source corresponds to one employed bee; that is, the number of employed bees equals the number of food sources. The main steps of the search conducted by the artificial bees can be described as follows (a Python sketch follows the list):

  1. Step 1

    Initialize the population \(x_{ij}\);

  2. Step 2

    Repeat

  3. Step 3

    Generate new solutions \(\upsilon _{ij}\) around \(x_{ij}\) for the employed bees as

    $$\begin{aligned} \upsilon _{ij}=x_{ij} + \Phi _{ij} ({x_{ij} -x_{kj}}) \end{aligned}$$
    (1)
Fig. 3 3-D contours of the Griewank function

Fig. 4 The positions of the individuals and the best position after the a 1st iteration, b 2nd iteration, c 3rd iteration, and d 4th iteration

Here k is the index of a randomly selected solution other than i, j is a randomly chosen dimension, and \(\Phi _{ij}\) is a random number in [−1, 1].

  4. Step 4

    The greedy selection is used between \(x_{i}\) and \(\upsilon _{i}\);

  5. Step 5

    Calculate the probability \(P_{i}\) for \(x_{i}\) according to its fitness:

    $$\begin{aligned} P_i =\frac{f_i }{\sum \nolimits _{i=1}^{SN} {f_i } } \end{aligned}$$
    (2)

    SN is the number of food sources, and \(f_{i}\) is the fitness of the \(i\hbox {-th}\) solution;

  6. Step 6

    Normalize \(P_{i}\) into [0, 1];

  7. Step 7

    Generate new solutions \(\upsilon _{i}\) for the onlookers from the solutions \(x_{i}\), selected according to \(P_{i}\);

  8. Step 8

    The greedy selection is used for the onlookers between \(x_{i}\) and \(\upsilon _{i}\);

  9. Step 9

    Check whether any solution is to be abandoned. If so, the scout replaces it with a new one \(x_{i}\):

    $$\begin{aligned} x_{ij} =x_j^{\min } +\varphi _{ij} \left( {x_j^{\max } -x_j^{\min }} \right) \end{aligned}$$
    (3)

    Here \(\varphi _{ij}\) is a random number in [0, 1], and \(x_j^{\min }\) and \(x_j^{\max }\) are the lower and upper bounds of the \(j\hbox {-th}\) dimension.

  10. Step 10

    Save the best solution obtained up to now;

  11. Step 11

    Go to Step 2 until the termination criterion is satisfied.
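To make these steps concrete, the following is a minimal Python sketch of one ABC cycle (our illustration under common conventions, not the authors' implementation; in particular, the minimization fitness transform \(1/(1+f_i)\) applied before Eq. (2) and the limit-based scout trigger are assumptions):

```python
import numpy as np

def abc_cycle(pop, cost, obj, lb, ub, trials, limit, rng):
    """One ABC cycle: employed, onlooker, and scout phases (Steps 3-11).
    pop, cost, and trials are updated in place."""
    sn, dim = pop.shape

    def probe(i):
        # Eq. (1): perturb one random dimension j using a partner k != i.
        k = rng.choice([s for s in range(sn) if s != i])
        j = rng.integers(dim)
        v = pop[i].copy()
        v[j] = pop[i, j] + rng.uniform(-1.0, 1.0) * (pop[i, j] - pop[k, j])
        v = np.clip(v, lb, ub)
        fv = obj(v)
        if fv < cost[i]:              # greedy selection (Steps 4 and 8)
            pop[i], cost[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for i in range(sn):               # employed bee phase (Step 3)
        probe(i)
    fit = 1.0 / (1.0 + cost)          # assumed fitness transform (minimization)
    p = fit / fit.sum()               # Eq. (2), Steps 5-6
    for _ in range(sn):               # onlooker bee phase (Steps 7-8)
        probe(rng.choice(sn, p=p))
    stale = int(np.argmax(trials))    # scout bee phase (Step 9), Eq. (3)
    if trials[stale] > limit:
        pop[stale] = lb + rng.uniform(size=dim) * (ub - lb)
        cost[stale] = obj(pop[stale])
        trials[stale] = 0
```

Here limit is the stagnation threshold that triggers the scout phase; its value, like the helper names above, is a placeholder rather than part of the original description.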

3.2 The KH method

KH [14] is a classic metaheuristic method for function optimization. It simulates the herding behavior of krill individuals. The KH algorithm repeatedly applies three movements, taking search directions that converge toward the best solution. The position of each krill is mostly influenced by three movements:

  (i) Movement induced by other krill;

  (ii) Foraging action; and

  (iii) Random diffusion.

In KH, the Lagrangian model is used as shown below:

$$\begin{aligned} \frac{dX_i }{dt}=N_i +F_i +D_i \end{aligned}$$
(4)

where \(N_{i}\) is the motion induced by other krill; \(F_{i}\) is the foraging motion, and \(D_{i}\) is the physical diffusion.

For the first motion, its direction \((\alpha _{i})\) is determined by three factors: a target effect, a local effect, and a repulsive effect. For the \(i\hbox {-th}\) krill, it is defined as:

$$\begin{aligned} N_i^{new} =N^{\max }\alpha _i +\omega _n N_i^{old} \end{aligned}$$
(5)

where

$$\begin{aligned} \alpha _i =\alpha _i^{local} +\alpha _i^{target} \end{aligned}$$
(6)

and \(N^{\max }\) is the maximum induced speed, \(\omega _{n}\) is the inertia weight in [0, 1], \(N_i^{old}\) is the former motion, \(\alpha _i^{local}\) is the local effect, and \(\alpha _i^{target}\) is the target direction effect.

Table 1 Unconstrained benchmark functions
Table 2 Best function values of thirteen unconstrained benchmark functions
Table 3 Mean function values of thirteen unconstrained benchmark functions
Table 4 Worst function values of thirteen unconstrained benchmark functions

The formulation of the second motion is mostly determined by two main components: the food location and the previous experience. It can be expressed for the \(i\hbox {-th}\) krill as follows:

$$\begin{aligned} F_i =V_f \beta _i +\omega _f F_i^{old} \end{aligned}$$
(7)

where

$$\begin{aligned} \beta _i =\beta _i^{food} +\beta _i^{best} \end{aligned}$$
(8)

and \(V_{f}\) is the foraging speed, \(\omega _{f}\) is the inertia weight in [0, 1], \(F_i^{old}\) is the last motion, \(\beta _i^{food}\) is the food attraction, and \(\beta _i^{best}\) is the effect of the best position of the \(i\hbox {-th}\) krill so far.

In essence, the third motion is looked upon as a random process. It can be expressed as:

$$\begin{aligned} D_i =D^{\max }\delta \end{aligned}$$
(9)

where \(D^{\max }\) is the maximum diffusion speed, and \(\delta \) is the random directional vector.

According to the formulations of these actions for the \(i\hbox {-th}\) krill, the change in position of a krill from t to \(t+\Delta t\) can be represented by Eq. (10):

$$\begin{aligned} X_i \left( {t+\Delta t} \right) =X_i (t)+\Delta t\frac{dX_i }{dt} \end{aligned}$$
(10)

4 The KHABC method

In this section, we first present the main idea of the KHABC algorithm, and then use an example to show how KHABC works.

4.1 The main framework of the KHABC method

Based on the above analysis of the ABC, it can be seen that the standard ABC algorithm does not directly utilize the global best individual. In addition, in KH, if a krill gets trapped in a local optimum, it cannot escape by itself. To overcome these limitations, a hybrid metaheuristic method based on information exchange is presented. The hybridization process is similar to that proposed by Kıran and Gündüz [22].

Table 5 The Std of thirteen unconstrained benchmark functions
Table 6 Comparisons between KHABC and other methods at \(\alpha = 0.05\) on a two-tailed t-tests for thirteen unconstrained benchmark functions
Table 7 CEC 2017 constrained benchmark functions
Table 8 Best function values of twenty-one CEC 2017 constrained benchmark functions
Table 9 Mean function values of twenty-one CEC 2017 constrained benchmark functions

Information exchange, or crossover, is one of the most famous evolutionary operators. Here, it is used to yield a new solution, called TheBest. TheBest serves as \(K^{best}\) for the KH and as the food source of the onlooker bees for the ABC. To obtain TheBest, the fitness values of the \(K^{best}\) of the KH and of the optimal individual of the ABC are computed by Eq. (2). The probabilities used to select between the two solutions are given by Eqs. (11) and (12):

$$\begin{aligned} P_{best} =\frac{fit_{best} }{fit_{Kbest} +fit_{best} } \end{aligned}$$
(11)

where \(P_{best}\) is the probability of choosing the optimal individual of the ABC, and \(fit_{best}\) and \(fit_{Kbest}\) are the fitness values of the optimal individual of the ABC and of the \(K^{best}\) of the KH, respectively, computed according to Eq. (2).

$$\begin{aligned} P_{Kbest} =\frac{fit_{Kbest}}{fit_{Kbest} +fit_{best}} \end{aligned}$$
(12)

where \(P_{Kbest}\) is the probability of choosing the \(K^{best}\) of the KH.

When generating TheBest, a random number in the range [0, 1] is drawn for each dimension of the solution. If it does not exceed \(P_{best}\), the value for this dimension is taken from the optimal individual of the ABC; otherwise, it is taken from the \(K^{best}\) of the KH. This selection process is formulated in Eq. (13):

$$\begin{aligned} TheBest_i =\left\{ {{\begin{array}{ll} {Best_i} &{}\quad {if\;r<P_{best} } \\ {Kbest_i} &{}\quad {otherwise} \\ \end{array} }} \right. \end{aligned}$$
(13)

where \(TheBest_{i}\) is the \(i\hbox {-th}\) dimension of TheBest, \(Best_{i}\) is the \(i\hbox {-th}\) dimension of the best solution found by the ABC, \(Kbest_{i}\) is the \(i\hbox {-th}\) dimension of the \(K^{best}\) of the KH, and r is a random number in the range [0, 1].
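As a minimal sketch (our own illustration, taking the maximization-style fitness values of Eq. (2) as inputs), the recombination of Eqs. (11)–(13) amounts to a dimension-wise roulette between the two elite solutions:

```python
import numpy as np

def recombine_the_best(best_abc, kbest_kh, fit_best, fit_kbest, rng):
    """Build TheBest from the ABC elite and the KH elite (Eqs. 11-13)."""
    p_best = fit_best / (fit_kbest + fit_best)  # Eq. (11); Eq. (12) is 1 - p_best
    r = rng.uniform(size=np.shape(best_abc))    # one random number per dimension
    # Eq. (13): take a dimension from the ABC elite with probability p_best,
    # otherwise from the KH elite.
    return np.where(r < p_best, best_abc, kbest_kh)
```

Note that \(P_{Kbest} =1-P_{best}\), so only one of the two probabilities needs to be computed explicitly.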

Based on the information exchange described above, the connection between the krill and bees in the KHABC method can be stated as follows.

The global component of the KHABC method is TheBest. Through TheBest, not only is the ability of the KH to escape from local minima substantially improved, but the exploitation of the ABC is also significantly enhanced by the direct utilization of the global best solution. The \(K^{best}\) of the KH is updated in terms of TheBest accordingly, and TheBest is likewise passed on to the onlooker bees of the ABC as a neighbor food source.

In addition, a concentrated elitism strategy is introduced into KHABC to preserve the optimal solutions so that they are not ruined by the stochastic operators. This guarantees that the whole population can always proceed from a status at least as good as before. The main steps of the proposed KHABC algorithm can be described as follows (a toy code sketch is given at the end of this subsection):

  • Step 1 Initialize the KHABC (KH and ABC);

  • Step 2 Determine \(K^{best}\) of the KH and best of the ABC;

  • Step 3 Repeat

  • Step 4 Apply the recombination procedure to the \(K^{best}\) and best solutions;

  • Step 5 Save the KEEP best individuals as BEST;

  • Step 6 KH process

    • 1) Motion Induced by other individuals;

    • 2) Foraging motion;

    • 3) Physical diffusion;

    • 4) Determine the \(K^{best}\) of the population;

  • Step 7 ABC process

    • 1) Employed bee phase of the ABC;

    • 2) Onlooker bee phase of the ABC;

    • 3) Scout bee phase of the ABC;

    • 4) Determine the best of the population;

  • Step 8 Replace the KEEP worst individuals with the KEEP best individuals saved in BEST;

  • Step 9 Is the termination condition met? If yes, output the best solution; otherwise, go to Step 3.

From the analysis above, we can see that the KH process of Step 6 and the ABC process of Step 7 are implemented in parallel. In this way, KHABC can find the best solution much faster. The corresponding flowchart of the process is illustrated in Fig. 2.

Furthermore, regarding the complexity of the proposed KHABC algorithm: because KHABC introduces no operators beyond those used in the original KH and ABC, its complexity is no higher than that of KH and ABC. The proposed KHABC algorithm only exchanges the best individuals between KH and ABC; therefore, KHABC is as simple as the basic KH and ABC.
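To show how the pieces fit together, the following runnable toy version of the main loop keeps the information-exchange and elitism logic of Steps 1–9 but replaces the full KH and ABC generation updates with drastically simplified stand-ins; all step sizes and helper conventions below are our own placeholders, not the authors' implementation.

```python
import numpy as np

def khabc_sketch(obj, lb, ub, dim, pop_size=50, max_gen=50, keep=2, seed=0):
    """Toy KHABC loop: Steps 1-9 of Sect. 4.1 with simplified sub-updates."""
    rng = np.random.default_rng(seed)
    krill = lb + rng.uniform(size=(pop_size, dim)) * (ub - lb)    # Step 1
    bees = lb + rng.uniform(size=(pop_size, dim)) * (ub - lb)
    for _ in range(max_gen):                                      # Step 3
        f_k = np.apply_along_axis(obj, 1, krill)
        f_b = np.apply_along_axis(obj, 1, bees)
        kbest, best = krill[f_k.argmin()], bees[f_b.argmin()]     # Step 2
        fit_kb, fit_b = 1 / (1 + f_k.min()), 1 / (1 + f_b.min())  # Eq. (2) style
        r = rng.uniform(size=dim)
        the_best = np.where(r < fit_b / (fit_b + fit_kb),         # Step 4,
                            best, kbest)                          # Eqs. (11)-(13)
        both = np.vstack([krill, bees])
        elite = both[np.argsort(np.r_[f_k, f_b])[:keep]].copy()   # Step 5
        # Step 6 (stand-in KH move): drift toward TheBest plus small diffusion.
        krill += 0.1 * (the_best - krill) + 0.005 * rng.uniform(-1, 1, krill.shape)
        # Step 7 (stand-in ABC move): Eq. (1)-style step with TheBest playing
        # the neighbor role, followed by greedy selection.
        cand = np.clip(bees + rng.uniform(-1, 1, bees.shape) * (bees - the_best),
                       lb, ub)
        ok = np.apply_along_axis(obj, 1, cand) < f_b
        bees[ok] = cand[ok]
        # Step 8: replace the KEEP worst krill with the archived elites.
        worst = np.argsort(np.apply_along_axis(obj, 1, krill))[-keep:]
        krill[worst] = elite
    both = np.vstack([krill, bees])                               # Step 9
    return both[np.apply_along_axis(obj, 1, both).argmin()]
```

In the real algorithm the two stand-in moves are replaced by the full KH motions of Sect. 3.2 and the ABC phases of Sect. 3.1; only the recombination and elitism steps are faithful here.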

4.2 Convergence process of the KHABC algorithm

As an example, the proposed KHABC algorithm is demonstrated on the 2D Griewank function (see Fig. 3). The Griewank function is formulated as:

$$\begin{aligned} f(\vec {x})=\sum _{i=1}^n {\frac{x_{i}^2 }{4000}} -\prod _{i=1}^n {\cos \left( {\frac{x_i }{\sqrt{i}}} \right)} +1 \end{aligned}$$
(14)
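For reference, a direct NumPy implementation of Eq. (14) might read as follows (our sketch; the global minimum is \(f(\mathbf{0})=0\)):

```python
import numpy as np

def griewank(x):
    """Griewank function of Eq. (14); global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x * x / 4000.0) - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```

With the usual bounds \([-600, 600]^n\), this function can be plugged directly into the toy loop of Sect. 4.1, e.g. khabc_sketch(griewank, -600, 600, dim=2).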

For the purpose of clarifying the movement of the individuals in the KHABC algorithm, few individuals (10) and few generations (4) are used to find the minimum of this problem. The optimization process of the KHABC algorithm is shown in Fig. 4, where the positions of the individuals and the best solution are marked by blue \(\circ \) and red \(\bullet \) symbols, respectively. The spreads of the individuals after the 1st, 2nd, 3rd and 4th generations are shown in Fig. 4a–d, respectively. As is evident, at first the individuals are scattered over the search space, far away from the best solution (see Fig. 4a). Subsequently, all the individuals move towards the best solution as the generations increase (see Fig. 4b, c). Ultimately, all the individuals are located around the best solution (see Fig. 4d).

5 Simulations

In this section, the proposed KHABC algorithm is benchmarked on thirteen basic unconstrained benchmark functions, twenty-one CEC 2017 constrained optimization functions, and ten CEC 2011 real world problems (RWPs). To ensure a fair comparison, all the simulations were implemented in the same environment as in [29].

Here, the performance of KHABC was compared with ten nature-inspired methods viz. chaotic cuckoo search (CCS) [30], DE [3], ES [4], GA [5], HS [9], HSBBO [31], KH with elitism (KHE) [32], multi-swarm bat algorithm (MBA) [33], PSO [11], and bat algorithm with variable neighborhood search (VNBA) [34].

For KH and KHABC, the same parameters are used: \(V_{f} =0.02\), \(D^{\max }=0.005\), \(N^{\max }=0.01\), and KEEP = 2. For the other methods, the parameters are set as suggested in [31]. For CCS, HSBBO, KHE, MBA, and VNBA, the parameters are set as in the original papers.

It is well known that most metaheuristic methods rely on some type of stochastic distribution. To obtain representative performance, fifty trials are run for each method on each function. The optimal function value is highlighted in bold. Unless stated otherwise, the dimension of each benchmark is set to thirty. In addition, both the population size and the maximum number of generations are set to 50.
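Under this protocol, the statistics reported in the tables below correspond to a harness along the following lines (a sketch of our reading of the setup; khabc_sketch is the toy loop from Sect. 4.1 and stands in for any of the compared methods):

```python
import numpy as np

def run_trials(method, obj, lb, ub, dim=30, trials=50):
    """Fifty independent runs; report the best, mean, worst, and Std of
    the final function values, as in Tables 2-5."""
    finals = np.array([obj(method(obj, lb, ub, dim,
                                  pop_size=50, max_gen=50, seed=t))
                       for t in range(trials)])
    return finals.min(), finals.mean(), finals.max(), finals.std()
```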

Table 10 Worst function values of twenty-one CEC 2017 constrained benchmark functions
Table 11 The Std of twenty-one CEC 2017 constrained benchmark functions

5.1 Unconstrained optimization

Here, the proposed KHABC algorithm is verified by thirteen basic standard benchmark functions.

5.1.1 Benchmark evaluation

In order to validate KHABC, it has been applied to optimize a series of benchmark functions from previous studies (see Table 1) [35].

The characteristics of the thirteen functions, including optimal value, dimension, separability, modality, and lower and upper bounds, are provided in Table 1. Their best, average, worst, and Std values are recorded in Tables 2, 3, 4 and 5.

Table 12 Comparisons between KHABC and other methods at \(\alpha = 0.05\) on a two-tailed t-tests for twenty-one CEC 2017 constrained benchmark functions

From Table 2, it can be seen that HSBBO has the best performance on eleven of the thirteen test problems, and the performance of KHABC is second only to that of HSBBO. For the average solutions shown in Table 3, KHABC provides the best results on eleven of the thirteen test problems, while DE and MBA are second only to KHABC, each having the best performance on one function. For the worst-case performance shown in Table 4, KHABC performs best on eleven of the thirteen functions, and MBA performs better than the other comparative algorithms, finding the best solutions on two functions. Regarding the Std of the eleven algorithms on the thirteen functions (see Table 5), KHABC has the minimum Std on ten of the thirteen functions, which indicates that KHABC behaves in a more stable way.

5.1.2 Comparisons with other optimization methods by using t-test

Based on the final function values of fifty independent runs on the thirteen 30-D functions reported in Sect. 5.1.1, this section provides the t values of two-tailed t-tests at the 5% significance level between the KHABC method and the other ten metaheuristic methods on the thirteen test problems. The results are recorded in Table 6, where a t value with 98 degrees of freedom is significant at \(\alpha =0.05\) by a two-tailed test. Results are highlighted in bold where KHABC performs better than the comparative method. The "Better", "Equal", and "Worse" entries in the last three rows indicate, respectively, better, equal, and worse performance of KHABC compared with the comparative method. Taking the comparison between KHABC and MBA as an example: KHABC performs better than MBA on eleven test problems and worse on one, and their performance shows no significant difference on one test problem. In summary, KHABC outperforms MBA on most test problems. In addition, comparing KHABC and HSBBO, HSBBO performs better than KHABC on one test problem and worse on ten, with similar performance on two test problems. These two examples indicate that KHABC significantly outperforms MBA, while HSBBO outperforms KHABC, on most benchmarks. Looking carefully at the results in Table 6, it is safe to say that KHABC is a competitive and promising method in most cases when compared with the other ten methods.
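The test itself is standard; with two samples of fifty final values each, the entries of Table 6 could be reproduced along the following lines (with synthetic placeholder data, not the paper's results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
khabc_runs = rng.normal(0.01, 0.005, 50)  # placeholder final values, 50 runs
mba_runs = rng.normal(0.05, 0.020, 50)    # placeholder final values, 50 runs

# Two-tailed two-sample t-test with 50 + 50 - 2 = 98 degrees of freedom,
# as used in Tables 6, 12 and 18 (alpha = 0.05).
t, p = stats.ttest_ind(khabc_runs, mba_runs)
print(f"t = {t:.2f}, p = {p:.4g}, significant: {p < 0.05}")
```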

5.2 Constrained optimization

Here, the proposed KHABC algorithm is verified by twenty-one CEC 2017 constrained benchmark functions (C01–C21).

5.2.1 Benchmark evaluation

The characteristics of the twenty-one CEC 2017 constrained benchmark functions (C01–C21), including the number of decision variables and the numbers of inequality and equality constraints, are given in Table 7. Their best, average, worst, and Std values over fifty independent runs are recorded in Tables 8, 9, 10 and 11.

From Table 8, it can be seen that HSBBO has the best performance on four of the twenty-one test problems, while KHABC performs best on sixteen functions, far more than HSBBO. For the average solutions shown in Table 9, KHABC finds the best results on fourteen of the twenty-one test problems, while GA is second only to KHABC, having the best performance on only three functions (C06–C08). For the worst-case performance shown in Table 10, KHABC performs best on eleven of the twenty-one functions, and CCS finds the best solutions on three functions (C02, C03, and C18). Among the other algorithms, DE and HSBBO show similar performance, each performing best on two functions (C07 and C08, and C04 and C14, respectively). Regarding the Std of the eleven algorithms on the twenty-one constrained functions (see Table 11), although CCS and DE each have the minimum Std on four functions, they fall far behind KHABC, which has the smallest Std on ten functions. This indicates that KHABC is a more suitable algorithm for solving real-world problems.

Table 13 CEC 2011 real world application problems

5.2.2 Comparisons with other optimization methods by using t-test

Based on the final function values of fifty independent runs on the twenty-one 30-D CEC 2017 constrained functions reported in Sect. 5.2.1, this section provides the t values of two-tailed t-tests at the 5% significance level between the KHABC method and the other ten metaheuristic methods on the twenty-one test problems. The results are recorded in Table 12, where a t value with 98 degrees of freedom is significant at \(\alpha =0.05\) by a two-tailed test. Results are highlighted in bold where KHABC performs better than the comparative method. The "Better", "Equal", and "Worse" entries in the last three rows indicate, respectively, better, equal, and worse performance of KHABC compared with the comparative method. Taking the comparison between KHABC and HSBBO as an example: KHABC performs better than HSBBO on sixteen test problems and worse on two, with no significant difference on two test problems. In summary, KHABC outperforms HSBBO on most test problems. In addition, comparing KHABC and CCS, CCS performs better than KHABC on one test problem and worse on eighteen, with similar performance on two test problems. These two examples indicate that KHABC significantly outperforms HSBBO and CCS on almost all the benchmarks. Looking carefully at the results in Table 12, it is safe to say that KHABC is a competitive and promising method in most cases when compared with the other ten methods.

Table 14 Best function values of ten CEC 2011 real world application problems
Table 15 Mean function values of ten CEC 2011 real world application problems
Table 16 Worst function values of ten CEC 2011 real world application problems
Table 17 The Std of ten CEC 2011 real world application problems
Table 18 Comparisons between KHABC and other methods at \(\alpha = 0.05\) on a two-tailed t-tests for ten CEC 2011 real world application problems

5.3 Real world problems

The ultimate goal of designing a new metaheuristic algorithm is to solve practical engineering problems. Therefore, in addition to the benchmark evaluations conducted in Sects. 5.1 and 5.2, ten real world problems selected from CEC 2011 (R01–R10) are used to further verify the proposed KHABC algorithm.

5.3.1 Performance

The ten selected CEC 2011 real world problems are listed in Table 13; more information about these problems can be found in [36]. Their best, average, worst, and Std values over fifty independent runs are recorded in Tables 14, 15, 16 and 17.

From Table 14, it can be seen that CCS has the best performance on two of the ten real world problems, while KHABC performs best on five problems, clearly more than CCS. For the average solutions shown in Table 15, KHABC finds the best results on six of the ten real world problems, while CCS, DE, ES, and PSO show similar performance, each having the best performance on only one real world problem (R10, R09, R08, and R03, respectively). For the worst-case performance shown in Table 16, KHABC performs best on four of the ten real world problems, and CCS, DE, and ES rank second, third, and fourth, finding the best solutions on three, two, and one real world problems, respectively. Regarding the Std of the eleven algorithms on the ten real world problems (see Table 17), DE has the best performance, with the minimum Std values on three real world problems; CCS and GA show similar performance, each finding solutions with the minimum Std values on two real world problems; KHABC has the smallest Std on one real world problem.

Fig. 5 Performance of different methods for the thirteen unconstrained benchmark functions and the twenty-one CEC 2017 constrained functions. a F01–F04. b F05–F08. c F09–F12. d F13 and C01–C03. e C04–C07. f C08–C11. g C12–C15. h C16–C19. i C20–C21 and legend

5.3.2 Comparisons with other optimization methods by using t-test

Based on the final function values of fifty independent runs on the ten CEC 2011 real world problems reported in Sect. 5.3.1, this section provides the t values of two-tailed t-tests at the 5% significance level between the KHABC method and the other ten metaheuristic methods on the ten real world problems. The results are recorded in Table 18, where a t value with 98 degrees of freedom is significant at \(\alpha =0.05\) by a two-tailed test. Results are highlighted in bold where KHABC performs better than the comparative method. The "Better", "Equal", and "Worse" entries in the last three rows indicate, respectively, better, equal, and worse performance of KHABC compared with the comparative method. Taking the comparison between KHABC and CCS as an example: KHABC performs better than CCS on six real world problems and worse on two, with no significant difference on two real world problems. In summary, KHABC outperforms CCS on most real world problems. In addition, comparing KHABC and VNBA, VNBA performs worse than KHABC on seven real world problems, with similar performance on the remaining three RWPs. These two examples indicate that KHABC significantly outperforms CCS and VNBA on almost all the problems. Looking carefully at the results in Table 18, it is safe to say that KHABC is a competitive and promising method in most cases when compared with the other ten methods.

Furthermore, in order to demonstrate the advantages of the KHABC method over the other algorithms, convergence plots of five selected representative methods (CCS, HSBBO, KHABC, MBA, and VNBA) on thirty-four benchmark problems (F01–F13 and C01–C21) are given in Fig. 5. It can be observed from Fig. 5 that KHABC is capable of finding the minimum on most benchmark functions. Among the other four algorithms, HSBBO is slightly inferior to KHABC, though it performs similarly to KHABC on some benchmark functions.

All the experiments conducted in this section demonstrate the effectiveness and efficiency of the proposed KHABC algorithm, indicating that KHABC is a promising, robust optimization strategy for unconstrained, constrained, and real world problems.

6 Conclusions

In the present study, a hybridization of the KH and ABC methods, named KHABC, is proposed for continuous and discrete optimization. KHABC integrates the capabilities of the KH and the ABC in order to prevent the krill from being trapped in local optima. Moreover, a concentrated elitism scheme is applied to further enhance its performance. The effectiveness of the proposed methodology is tested on thirty-four high-dimensional benchmark problems as well as ten real world problems. The results clearly demonstrate the superiority of KHABC over KH, ABC, and other metaheuristic algorithms. Nevertheless, several issues merit further investigation, such as an analysis of the parameters used in the KHABC method. Future studies can focus on solving a broader set of continuous and discrete optimization problems. The combination of other search strategies based on various robust metaheuristic techniques, such as ACO and PSO, is a direction worth investigating. Finally, the CPU time used by the metaheuristic approaches needs attention in order to make the proposed method more practical for solving real engineering problems.