1 Introduction

In recent years, swarm intelligence-based algorithms have gained enormous attention for solving complex optimization problems because of their simple structure, easy implementation, flexibility and ability to avoid local optima. Swarm intelligence is a popular branch of nature-inspired algorithms in which information is exchanged through collaborative communication between search agents. This information-exchange mechanism helps in exploring the promising areas of the search space, and the communication between agents helps avoid local optima during the search. Numerous algorithms belonging to the category of swarm intelligence are available in the literature. The existence of such a large number of algorithms can be explained by the “no free lunch” (NFL) theorem [1], which was a revolutionary development in the field of nature-inspired algorithms. The NFL theorem rules out the existence of an ideal optimization algorithm that can solve all optimization problems. In other words, a particular algorithm may be the best on one set of problems but may perform poorly on another set. This theorem therefore continually motivates researchers to develop new algorithms or improve existing ones. Swarm intelligence-based algorithms are widely used because they require only a small number of parameters. Some well-known and widely applied swarm intelligence algorithms are particle swarm optimization (PSO) [2], ant colony optimization (ACO) [3], the artificial bee colony (ABC) algorithm [4], cuckoo search (CS) [5] and the firefly algorithm (FA) [6].

The artificial bee colony (ABC) algorithm was developed by Karaboga and Basturk [4] by mimicking the social and foraging behavior of honey bees. Although the classical ABC performs well on several real-life problems, in some cases it suffers from inefficient exploration, low exploitation and a slow convergence rate. Therefore, several attempts have been made in the literature to tackle these issues. For example, in [7], three modified search equations are introduced to improve the search ability of the bees in terms of exploration and exploitation. Zhu and Kwong [8] proposed a gbest-guided ABC inspired by PSO. Liu et al. [9] proposed a new search mechanism based on the information of the global best and previous best solutions; in this mechanism, two new scaling factors are introduced to maintain the balance between exploitation and exploration. In [10], the diversity of the bee colony and the exploitation ability of the bees are enhanced by modifying the search equations. To overcome issues of balance, search ability and convergence, several modified search equations are proposed in [11, 12]. In [13], a multistrategy ensemble ABC algorithm is proposed which tries to establish an appropriate balance between exploitation and exploration. In order to maintain diversity in the algorithm and to enhance the exploration ability of the bees, a multipopulation strategy is adopted in [14]. In [15], a Levy-flight-inspired search strategy is integrated into ABC to balance diversity and convergence. In [16], a modified ABC based on a neighborhood search strategy is introduced to enhance the exploration ability of the bees and to maintain diversity in the algorithm. In order to boost the exploration ability of the bees, ABC is hybridized with the grey wolf optimizer in [17]. In [18], a Cauchy operator is employed to balance global and local search strategies. In [19], the ABC algorithm is hybridized with the bat algorithm to enhance its exploration ability and convergence rate. The literature presented here does not cover all aspects of ABC; further developments can be found in [20].

Although various attempts have been made in the literature to improve the efficiency of ABC, it has been realized experimentally that, similar to other meta-heuristic algorithms, ABC often suffers from an imbalance between the exploration and exploitation operators. Therefore, there is room to improve the search ability of the classical ABC algorithm. Moreover, the NFL theorem permits improving the search efficiency of an algorithm so that it can be employed to solve complex real-life problems with comparatively better accuracy. One way to improve the performance of the ABC algorithm is hybridization with other algorithms. In the literature, various algorithms have been hybridized to propose comparatively better optimization algorithms [21,22,23,24,25,26]. In these hybrid algorithms, the advantages (better exploration or better exploitation) of the constituent algorithms are integrated. Inspired by these concepts of hybridization, the present paper attempts to enhance the search efficiency of the employed bees in the ABC algorithm through the sine cosine algorithm (SCA) [27]. The proposed hybrid method is named the sine cosine artificial bee colony (SCABC). In SCABC, modified SCA search equations are integrated into the employed bee phase to raise the exploration and exploitation ability of the search agents. The modification of the classical SCA search mechanism is made to sustain the balance of exploration in SCABC, because the classical search equations of SCA show an imbalance between exploration and exploitation [28]. The proposed SCABC is investigated on a well-known set of 23 benchmark problems. The numerical results and comparative analysis through various performance metrics verify the efficacy of the proposed SCABC algorithm. Moreover, SCABC is also employed to determine the optimal thresholds for the multilevel thresholding problem. The comparison of results on this problem confirms that SCABC is a comparatively better optimization method for finding the optimal thresholds of gray images.

The rest of the paper is arranged as follows: Sect. 2 provides an overview and conceptual description of the classical ABC and classical SCA. In Sect. 3, the proposed hybrid method, SCABC, is explained in detail. Section 4 provides the numerical experimentation and validation of SCABC on 23 classical benchmarks. In Sect. 5, SCABC is applied to the multilevel thresholding problem. Finally, the conclusions of the work are presented in Sect. 6 along with some future research directions.

2 Overview of classical ABC and SCA

In this section, an overview of the classical ABC and SCA is presented along with their conceptual details and working mechanisms.

2.1 Artificial bee colony algorithm

The ABC algorithm was designed by Karaboga [4] for global optimization, inspired by the collaborative foraging behavior of honey bees. In ABC, the determination of the optimal solution is regarded as the equivalent of the foraging process of the bees: the food sources represent the possible solutions to the optimization problem, and the amount of nectar in a food source denotes the fitness of the objective function. In the ABC algorithm, the food sources are evolved in three phases—(1) employed bee, (2) onlooker bee and (3) scout bee—and the numbers of food sources, employed bees and onlooker bees are taken to be equal.

In the employed bee phase, each employed bee is associated with a unique food source and tries to locate a new position based on the information of its associated food source and a randomly chosen food source, using the following search equation:

$$ u_{i,j} = x_{i,j} + \eta_{i,j} \cdot \left( {x_{i,j} - x_{k,j} } \right) $$
(1)

where \( j \) represents a randomly selected dimension coordinate from the set \( \left\{ {1,2, \ldots ,D} \right\} \), \( \eta_{i,j} \) is a uniformly distributed random number in the interval \( \left( { - \,1,\;1} \right) \), \( k \) denotes a randomly selected food source different from \( i \), and \( \vec{u}_{i} \) is the new food source evolved by the employed bee from \( \vec{x}_{i} \). If the newly generated solution \( \vec{u}_{i} \) is found to be better in terms of fitness value, the old solution \( \vec{x}_{i} \) is replaced by \( \vec{u}_{i} \).
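For concreteness, a minimal NumPy sketch of this update with greedy selection is given below; the function and variable names are illustrative rather than taken from the paper, and minimization of the objective is assumed.

```python
import numpy as np

def employed_bee_move(X, f, obj, i, rng):
    """Eq. (1) with greedy selection for the i-th food source.

    X   : (N_f, D) array of food sources, f : (N_f,) their objective values,
    obj : objective function (minimization assumed), rng : np.random.Generator.
    """
    n_food, dim = X.shape
    j = rng.integers(dim)                                  # random coordinate j
    k = rng.choice([s for s in range(n_food) if s != i])   # random source k != i
    eta = rng.uniform(-1.0, 1.0)                           # eta_{i,j} in (-1, 1)
    u = X[i].copy()
    u[j] = X[i, j] + eta * (X[i, j] - X[k, j])             # Eq. (1)
    fu = obj(u)
    if fu < f[i]:                                          # keep only an improvement
        X[i], f[i] = u, fu
    return X, f
```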

After the completion of the employed bee phase, the onlooker bee phase starts. In this phase, the locations and nectar amounts of the food sources are shared with the onlooker bees. The onlooker bees search for better locations of food sources based on the probability \( p_{i} \), which is calculated as follows:

$$ p_{i} = \frac{{{\text{fitness}}_{i} }}{{\mathop \sum \nolimits_{i = 1}^{{N_{f} }} {\text{fitness}}_{i} }} $$
(2)

where \( {\text{fitness}}_{i} \) represents the fitness (nectar amount) of the ith food source and \( N_{\text{f}} \) denotes the number of food sources. Based on these probabilities, the onlooker bees locally exploit the neighborhood areas of the food sources with the help of Eq. (1). As in the employed bee phase, a greedy selection is applied between the newly generated food source and its old location.
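A small sketch of the probability computation and the roulette-wheel choice made by an onlooker bee could look as follows; the fitness values are assumed to be nonnegative (in practice they are usually derived from the objective value via a transform such as \( 1/(1 + f) \), which this section does not specify).

```python
import numpy as np

def onlooker_probabilities(fitness):
    """Eq. (2): probability p_i proportional to the nectar amount (fitness)."""
    fitness = np.asarray(fitness, dtype=float)
    return fitness / fitness.sum()

def pick_food_source(fitness, rng):
    """Roulette-wheel choice of the source an onlooker bee will exploit;
    the chosen source is then perturbed with Eq. (1) and kept greedily."""
    p = onlooker_probabilities(fitness)
    return int(rng.choice(len(p), p=p))
```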

When the onlooker bee phase is completed, the scout bees are used to remove food sources that have not improved within a predefined limit. In the scout bee phase, the food sources that remain undeveloped beyond this threshold are replaced with uniformly distributed food sources using the following equation:

$$ u_{i,j} = {\text{lb}}_{j} + {\text{rand}} \cdot \left( {{\text{ub}}_{j} - {\text{lb}}_{j} } \right) $$
(3)

where \( {\text{lb}}_{j} \) and \( {\text{ub}}_{j} \) denote the lower and upper bounds for dimension \( j \) of the ith food source, and rand is a uniformly distributed random number in (0, 1).

The steps of ABC are presented as pseudo-code in Algorithm 1. Some key points of the pseudo-code are: (1) the termination criterion of the ABC algorithm is a fixed maximum number of function evaluations (FES); (2) whenever a bee is evaluated using the objective function, the FES counter is incremented by 1; and (3) the limit counter \( \left( l \right) \) of a food source is incremented by 1 when its newly generated position is not better than the old one.
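Putting the three phases and the bookkeeping points above together, the following compact Python sketch (a minimal illustration, not the authors' implementation) shows how the FES budget and the limit counters drive the search; the fitness transform used for the onlooker probabilities and the one-scout-per-cycle rule are assumptions of this sketch.

```python
import numpy as np

def abc_minimize(obj, lb, ub, n_food=50, limit=100, max_fes=200_000, seed=0):
    """Compact ABC sketch: FES-based termination (1), one FES per objective
    call (2), and limit counters incremented on failed trials (3)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = lb + rng.uniform(size=(n_food, dim)) * (ub - lb)
    f = np.array([obj(x) for x in X])
    fes = n_food                                   # (2) initial evaluations counted
    l = np.zeros(n_food, dtype=int)                # (3) limit counters l_i

    def trial(i):
        nonlocal fes
        j = rng.integers(dim)
        k = rng.choice([s for s in range(n_food) if s != i])
        u = X[i].copy()
        u[j] = X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j])   # Eq. (1)
        fu = obj(u)
        fes += 1                                   # (2) each evaluation costs one FES
        if fu < f[i]:
            X[i], f[i], l[i] = u, fu, 0
        else:
            l[i] += 1                              # (3) unsuccessful trial

    while fes < max_fes:                           # (1) termination criterion
        for i in range(n_food):                    # employed bee phase
            trial(i)
        fit = 1.0 / (1.0 + np.abs(f))              # assumed fitness transform
        p = fit / fit.sum()                        # Eq. (2)
        for _ in range(n_food):                    # onlooker bee phase
            trial(int(rng.choice(n_food, p=p)))
        worst = int(np.argmax(l))                  # scout bee phase
        if l[worst] > limit:
            X[worst] = lb + rng.uniform(size=dim) * (ub - lb)       # Eq. (3)
            f[worst] = obj(X[worst]); fes += 1; l[worst] = 0
    best = int(np.argmin(f))
    return X[best], f[best]

# Example: x_best, f_best = abc_minimize(lambda x: np.sum(x**2), [-100]*10, [100]*10)
```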

2.2 Sine cosine algorithm

The sine cosine algorithm (SCA) [27] uses the trigonometric sine and cosine functions in its search strategy. The algorithm was introduced by Mirjalili in 2016 [27] for global optimization problems. Similar to other optimization algorithms, SCA searches randomly under the guidance of the global best solution, which in SCA is referred to as the destination point. The search equations used in SCA to evolve the positions of the candidate solutions are defined as follows:

$$ \vec{x}_{i,t + 1} = \vec{x}_{i,t} + \alpha \times \sin \left( \psi \right) \times \left| {c \cdot \vec{x}_{D,t} - \vec{x}_{i,t} } \right| $$
(4)
$$ \vec{x}_{i,t + 1} = \vec{x}_{i,t} + \alpha \times \cos \left( \psi \right) \times \left| {c \cdot \vec{x}_{D,t} - \vec{x}_{i,t} } \right| $$
(5)

where \( \vec{x}_{i,t + 1} \) and \( \vec{x}_{i,t} \) denote the locations of candidate solution \( \vec{x}_{i} \) at iterations \( t + 1 \) and \( t \), respectively, and \( \vec{x}_{D,t} \) represents the position of the destination point obtained so far. \( \alpha ,\;\psi \) and \( c \) are random numbers that maintain randomness in the search process and prevent the search from getting trapped in a local optimum. Equations (4) and (5) can be written jointly as follows:

$$ \vec{x}_{i,t + 1} = \vec{x}_{i,t} + \left\{ {\begin{array}{*{20}l} {\alpha \times \sin \left( \psi \right) \times \left| {c \cdot \vec{x}_{D,t} - \vec{x}_{i,t} } \right|} \hfill & {{\text{if}}\;{\text{rand}} < 0.5} \hfill \\ {\alpha \times \cos \left( \psi \right) \times \left| {c \cdot \vec{x}_{D,t} - \vec{x}_{i,t} } \right|} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$
(6)

where rand represents a uniformly distributed random number from the interval (0, 1). This random number switches the update between the sine and cosine functions.

In the search mechanism of SCA, the parameter \( \alpha \) is responsible for the exploration and exploitation of the search space. The value of \( \alpha \) adopted in SCA is formulated as follows:

$$ \alpha = 2 - 2 \times \left( {\frac{t}{T}} \right) $$
(7)

where \( t \) represents the current iteration and \( T \) is the maximum number of iterations. From Eq. (7), it can be seen that \( \alpha \) decreases linearly from 2 to 0. When \( \alpha \) lies in the interval \( \left( {1,\;2} \right) \), it supports exploration, and new promising areas of the search space are discovered; when \( \alpha \) lies in the interval \( \left( {0,\;1} \right) \), the regions of the search space that have already been discovered are exploited. Thus, the parameter \( \alpha \) drives the transition from the exploration phase to the exploitation phase, and this transition maintains the balance between exploitation and exploration. The random number \( \psi \) decides whether the current solution \( \vec{x}_{i} \) moves toward or away from the destination point. The random number \( c \) is introduced to support the exploration of the search space when the parameter \( \alpha \) fails; it randomly emphasizes exploration and exploitation within the algorithm. The framework of the classical SCA is provided in Algorithm 1.
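A vectorized sketch of one SCA iteration based on Eqs. (4)–(7) is given below; the sampling ranges assumed for \( \psi \) and \( c \) follow common SCA implementations and are not specified in this section.

```python
import numpy as np

def sca_step(X, x_dest, t, T, rng):
    """One iteration of the classical SCA position update, Eqs. (4)-(7).

    X      : (N, D) positions of the candidate solutions at iteration t
    x_dest : (D,) destination point (best solution found so far)
    t, T   : current iteration and maximum number of iterations
    """
    alpha = 2.0 - 2.0 * (t / T)                         # Eq. (7): 2 -> 0 linearly
    N, D = X.shape
    psi = rng.uniform(0.0, 2.0 * np.pi, size=(N, D))    # assumed range for psi
    c = rng.uniform(0.0, 2.0, size=(N, D))              # assumed range for c
    r = rng.uniform(size=(N, D))                        # rand in (0, 1), Eq. (6)
    step = np.where(r < 0.5, np.sin(psi), np.cos(psi))  # sine or cosine branch
    return X + alpha * step * np.abs(c * x_dest - X)    # Eqs. (4)-(6)
```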

[Algorithm 1]

3 Proposed hybrid method: SCABC

This section introduces the proposed hybrid method, SCABC. First, the motivation for hybridizing SCA and ABC is provided in detail. Second, the search strategy of SCABC is discussed along with its pseudo-code.

3.1 Motivations

In the ABC algorithm, the employed bees are associated with the food sources in one-to-one correspondence. The information about the nectar amount available at each food source is passed to the onlooker bees by the employed bees so that the bees can concentrate on the more promising food sources. Thus, the employed bee phase plays an important role during the search process in exploring more promising regions. However, an employed bee searches for a new food source randomly, without any guidance, which may decrease the convergence rate; this slow convergence arises from the aimless and irregular exploration produced by the search equation of the classical ABC algorithm. Therefore, a more intelligent explorative behavior, with a proper balance between the exploration and exploitation operators, can be designed for the employed bee phase.

In the literature, it has been found that SCA is rich in exploration and tries to move from the exploration phase to the exploitation phase over the iterations. However, the algorithm sometimes faces the problem of high diversity in its early stages, which degrades its performance and causes it to skip over true solutions during the search. Low exploitation in the early iterations and low exploration in the later iterations may cause slow convergence and premature convergence, respectively. An appropriate balance between exploration and exploitation can therefore be maintained with the help of some memory-based information. The shortcoming of high diversity in SCA can be alleviated by integrating elite guidance, and this guidance may also help the employed bees avoid irregular exploration. Inspired by all the above observations, and to improve the efficiency of both algorithms, the present work hybridizes the ABC algorithm with SCA.

3.2 The conceptual description of proposed SCABC algorithm

The proposed SCABC algorithm can be considered an extended ABC algorithm in which the employed bee phase is improved by the modified search equations of SCA. The modifications to the search mechanism of the classical SCA are made to provide elite guidance for the candidate solutions. This integrated elite guidance mitigates the problem of high diversity in the classical SCA search equation. The modified search equation of SCA is as follows:

$$ \vec{x}_{i,t + 1} = \vec{x}_{D,t} + \left\{ {\begin{array}{*{20}l} {\left| {\frac{{f_{\text{best}} }}{{f_{\text{worst}} }}} \right| \times \sin \left( \psi \right) \times \left| {c \cdot \vec{x}_{D,t} - \vec{x}_{i,t} } \right| } \hfill & {{\text{if}}\;{\text{rand}} < 0.5} \hfill \\ {\left| {\frac{{f_{\text{best}} }}{{f_{\text{worst}} }}} \right| \times \cos \left( \psi \right) \times \left| {c \cdot \vec{x}_{D,t} - \vec{x}_{i,t} } \right| } \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$
(8)

where the coefficient \( c \) and the parameters \( \psi \) and rand are the same as defined in Sect. 2.2, \( f_{\text{best}} \) represents the objective function value at the destination point \( \vec{x}_{D,t} \), and \( f_{\text{worst}} \) represents the objective function value of the worst food source. Whenever \( f_{\text{worst}} = 0 \), it is replaced by a sufficiently large real number. From the search equation, it is clear that the ratio \( \left| {\frac{{f_{\text{best}} }}{{f_{\text{worst}} }}} \right| \) generates a real number between 0 and 1. This ratio controls the amplification of the difference vector \( \left| {c \cdot \vec{x}_{D,t} - \vec{x}_{i,t} } \right| \) and maintains the exploration and exploitation within the algorithm. The steps involved in SCABC are presented in Algorithm 2, and the flowchart of the algorithm is shown in Fig. 1.
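A sketch of the modified move of Eq. (8), as it could be applied to all employed bees at once, is shown below; the ranges assumed for \( \psi \) and \( c \) and the substitute value used when \( f_{\text{worst}} = 0 \) are illustrative assumptions of this sketch.

```python
import numpy as np

def scabc_employed_move(X, x_dest, f_best, f_worst, rng, big=1e10):
    """Eq. (8): modified SCA move used for the employed bees in SCABC.

    X       : (N, D) food sources
    x_dest  : (D,) destination point (best solution so far), f_best its value
    f_worst : objective value of the worst food source
    big     : assumed substitute when f_worst = 0 (a "sufficiently large" number)
    """
    if f_worst == 0:
        f_worst = big
    ratio = abs(f_best / f_worst)                       # amplification factor
    N, D = X.shape
    psi = rng.uniform(0.0, 2.0 * np.pi, size=(N, D))    # assumed range for psi
    c = rng.uniform(0.0, 2.0, size=(N, D))              # assumed range for c
    r = rng.uniform(size=(N, D))
    step = np.where(r < 0.5, np.sin(psi), np.cos(psi))
    U = x_dest + ratio * step * np.abs(c * x_dest - X)  # Eq. (8)
    return U  # candidate positions; greedy selection against X follows as in ABC
```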

[Algorithm 2]

Fig. 1 Flowchart of the proposed SCABC algorithm

4 Experimental validation of proposed SCABC algorithm

In this section, the proposed hybrid method, SCABC, is evaluated on a classical benchmark set. This test set contains 23 well-known benchmark problems collected from various sources, which many researchers have used to evaluate their algorithms [29,30,31,32]. The description of these problems is presented in Table 1. All problems are of minimization type: the first 10 are unimodal, while the remaining ones are multimodal. In the experiments, the number of search agents is fixed at 50 and the maximum number of function evaluations (the termination criterion) is fixed at \( 2 \times 10^{5} \).

Table 1 Description of the classical benchmark problems

4.1 Numerical results and analysis

In this section, the numerical results obtained by the classical ABC, the classical SCA and the proposed SCABC algorithm are recorded in Tables 2 and 3 for the 10- and 30-dimensional problems, respectively. In these tables, the best, average, median, worst and standard deviation of the objective function values are reported over 30 independent trials. Since the problems are of minimization type, the smaller of two objective function values for a particular problem is the better one; these better values are highlighted in boldface in Tables 2 and 3. The results obtained by the SCABC, ABC and SCA algorithms over the 30 independent trials are also presented as boxplots in Fig. 2. The boxplots clearly demonstrate that the results obtained by SCABC are satisfactory compared to those of SCA and ABC, since the 25th and 75th percentiles of the samples collected for the proposed SCABC algorithm decline toward the minimum solution within a narrow interquartile range.

Table 2 Comparison of results obtained by ABC, SCA and proposed SCABC algorithms on classical benchmark problems with dimension 10
Table 3 Comparison of results obtained by ABC, SCA and proposed SCABC algorithms on classical benchmark problems with dimension 30
Fig. 2 Comparison based on boxplots

4.1.1 Comparison of results in terms of exploitation strength

In the benchmark set, the problems F1 to F10 are unimodal; each contains only one minimum, which is the global minimum. Therefore, these problems are generally used to evaluate the exploitation strength and convergence rate of algorithms. On the test problems F1–F5, F7, F8 and F10, the proposed SCABC is better than the classical ABC and the classical SCA in all statistics—best, median, average, worst and standard deviation of the objective function—for both the 10- and 30-dimensional problems. For the 10-dimensional F6, SCABC provides better median and best objective function values than the other algorithms, ABC provides better average and worst values, and SCA provides a better standard deviation. For the 30-dimensional F6, the best value is better for ABC, the median value is better for SCABC, and the other statistics are better for SCA. For the 10-dimensional F9, the classical SCA is better than the classical ABC and the proposed SCABC in all statistics. For the 30-dimensional F9, SCABC is better than the classical ABC and SCA in terms of the average, worst and standard deviation values of the objective function, whereas SCA is better in terms of the best and median values. Hence, the comparison of results on the unimodal problems demonstrates the better search ability of SCABC in terms of exploitation strength and convergence rate compared to ABC and SCA.

4.1.2 Comparison of results in terms of exploration strength

The test problems F11 to F23 in the benchmark set are multimodal problems with several local optima. On the test problems F11, F13, F19 and F20, the proposed SCABC algorithm provides better results in all statistics than the classical ABC and SCA for both the 10- and 30-dimensional cases. On problem F18, SCABC and SCA both reach the optimum of the problem and are better than the classical ABC for both 10 and 30 dimensions. On problems F15, F16, F22 and F23, the proposed SCABC performs either better than or very competitively with ABC and SCA. On problem F12, both SCA and ABC are better than SCABC. The objective function of F12 is the Rastrigin function, which has a large number of local optima and therefore requires high diversity; since SCA and ABC exhibit high diversity in their search mechanisms, they outperform SCABC on this problem. SCABC tries to achieve a stable balance of exploration and exploitation, and for this purpose a leading guidance for the employed bees is added to the search mechanism; this guidance reduces the diversity, which is why the classical ABC performs better than SCABC on F12. On problem F14, the performance of the classical ABC and SCABC is very competitive. On the 10-dimensional F14, the classical SCA performs much better than SCABC, but for dimension 30, SCABC and SCA provide very competitive results; the high diversity of SCA is the reason for its better performance on this instance. On F17, the proposed SCABC is better than the classical ABC but worse than the classical SCA, because this problem has a large number of local optima, which requires high diversity, and SCA is known to be rich in diversity. On problem F21, all the algorithms reach the optimum (0). Hence, the performance of the proposed SCABC on most of the multimodal problems demonstrates its better search efficiency, in terms of exploration and local optima avoidance, compared with the classical ABC and SCA.

Thus, the overall comparison of results based on best, average, median, worst and standard deviation of the objective function values verifies that the proposed SCABC is a better optimization method than the classical ABC and classical SCA.

4.2 Statistical validity of the results

In order to demonstrate the significant difference between the proposed search strategy in SCABC and the ABC and SCA algorithms, a nonparametric Wilcoxon signed-rank test is applied to the results obtained by these algorithms. A nonparametric test is selected because it can be applied without any assumption about the distribution of the result data. In this test, the measure of central tendency is the median, which is a suitable representative for assessing significant improvement in an algorithm. The Wilcoxon test is applied here at the 5% significance level. The obtained p values along with the test outcomes are presented in Tables 4 and 5 for the 10- and 30-dimensional benchmark problems, respectively. In the tables, the signs ‘+/−/=’ denote that the proposed SCABC algorithm is better than, worse than or statistically similar to its competitor, respectively.
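As an illustration, such a pairwise test can be carried out with SciPy as sketched below; the inputs are the 30 trial results of SCABC and of one competitor on a single problem, and the sign convention follows the ‘+/−/=’ notation of the tables.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_algorithms(results_scabc, results_other, alpha=0.05):
    """Paired Wilcoxon signed-rank test over, e.g., 30 independent trials.

    Returns '+' if SCABC is significantly better (smaller median objective),
    '-' if significantly worse, '=' otherwise (5% significance level by default).
    Assumes not all paired differences are zero.
    """
    _, p_value = wilcoxon(results_scabc, results_other)
    if p_value >= alpha:
        return "=", p_value
    better = np.median(results_scabc) < np.median(results_other)
    return ("+" if better else "-"), p_value
```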

Table 4 Statistical outcomes obtained by applying Wilcoxon signed-rank test for 10-dimensional classical benchmark problems
Table 5 Statistical outcomes obtained by applying Wilcoxon signed-rank test for 30-dimensional classical benchmark problems

4.3 Convergence behavior analysis

In this section, convergence curves are plotted to observe the convergence rate and the ability of the algorithms to locate the optima of the test problems. The convergence curves in Figs. 3 and 4 are plotted using the median objective function values obtained over 30 independent trials of each algorithm: Fig. 3 shows the curves for the unimodal problems and Fig. 4 those for the multimodal problems. In these plots, the horizontal axis depicts the iterations and the vertical axis the objective function values. The convergence curves of the unimodal test problems confirm that, in terms of convergence rate, the SCABC algorithm is better than ABC and SCA. The convergence curves of the multimodal problems F11 to F23 show that, on most problems, SCABC exhibits better search efficiency than ABC and SCA.

Fig. 3 Convergence curves corresponding to the unimodal problems

Fig. 4 Convergence curves corresponding to the multimodal problems

4.4 Performance index analysis

In this section, the proposed hybrid method SCABC is compared with ABC and SCA in terms of success rate, computational time and error value using the performance index (PI) metric [33]. The relative performance of an algorithm A using PI is calculated as follows:

$$ {\text{PI}}_{\text{A}} = \frac{1}{{N_{\text{P}} }}\mathop \sum \limits_{i = 1}^{{N_{\text{P}} }} \left( {\mu_{1} \alpha_{1}^{i} + \mu_{2} \alpha_{2}^{i} + \mu_{3} \alpha_{3}^{i} } \right) $$
(9)

where \( \alpha_{1}^{i} = \frac{{{\text{SR}}^{i} }}{{{\text{TR}}^{i} }} \), \( \alpha_{2}^{i} = \frac{{{\text{MT}}^{i} }}{{{\text{AT}}^{i} }} \) and \( \alpha_{3}^{i} = \frac{{{\text{ME}}^{i} }}{{{\text{AE}}^{i} }} \). Here, \( {\text{SR}}^{i} \) is the number of successful runs for the ith problem, \( {\text{TR}}^{i} \) is the total number of runs conducted for the ith problem, \( {\text{MT}}^{i} \) is the minimum of the mean computational times taken by all the algorithms for the ith problem, \( {\text{AT}}^{i} \) is the mean computational time taken by the algorithm for the ith problem, \( {\text{ME}}^{i} \) is the minimum of the mean errors obtained by all the algorithms for the ith problem, \( {\text{AE}}^{i} \) is the mean error obtained by the algorithm for the ith problem, and \( N_{\text{P}} \) is the total number of problems. \( \mu_{1} ,\;\mu_{2} \) and \( \mu_{3} \) are nonnegative weights (\( \mu_{1} + \mu_{2} + \mu_{3} = 1 \)) associated with the success rate term \( \alpha_{1}^{i} \), the computational time term \( \alpha_{2}^{i} \) and the error term \( \alpha_{3}^{i} \). The three cases based on the weights assigned to the success, time and error terms are as follows (a short computational sketch is given after the list):

  • Case I: \( \mu_{1} = w,\;\mu_{2} = \frac{{\left( {1 - w} \right)}}{2} \) and \( \mu_{3} = \frac{{\left( {1 - w} \right)}}{2},\;0 \le w \le 1 \)

  • Case II: \( \mu_{1} = \frac{{\left( {1 - w} \right)}}{2},\;\mu_{2} = w \) and \( \mu_{3} = \frac{{\left( {1 - w} \right)}}{2},\;0 \le w \le 1 \)

  • Case III: \( \mu_{1} = \frac{{\left( {1 - w} \right)}}{2},\;\mu_{2} = \frac{{\left( {1 - w} \right)}}{2} \) and \( \mu_{3} = w,\;0 \le w \le 1 \)
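For reference, a small sketch of how the PI of Eq. (9) and, for example, the case I weighting could be computed is given below; the per-problem quantities are assumed to be available as arrays, and the names are illustrative.

```python
import numpy as np

def performance_index(SR, TR, MT, AT, ME, AE, mu1, mu2, mu3):
    """Eq. (9): performance index of one algorithm over N_P problems.

    SR, TR : per-problem successful runs and total runs of the algorithm
    MT, AT : per-problem minimum mean time (over all algorithms) and the
             algorithm's own mean time
    ME, AE : per-problem minimum mean error (over all algorithms) and the
             algorithm's own mean error
    mu1..mu3 : nonnegative weights with mu1 + mu2 + mu3 = 1
    """
    SR, TR, MT, AT, ME, AE = map(np.asarray, (SR, TR, MT, AT, ME, AE))
    a1, a2, a3 = SR / TR, MT / AT, ME / AE          # alpha_1^i, alpha_2^i, alpha_3^i
    return np.mean(mu1 * a1 + mu2 * a2 + mu3 * a3)  # average over the problems

def pi_case_one(SR, TR, MT, AT, ME, AE, w):
    """Case I weighting: mu1 = w, mu2 = mu3 = (1 - w) / 2."""
    return performance_index(SR, TR, MT, AT, ME, AE, w, (1 - w) / 2, (1 - w) / 2)
```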

The PI curves for the above three cases are plotted in Fig. 5. In case I, equal weights are given to the error and computational time terms. It can be observed from Fig. 5a that the PI of SCABC is higher than that of ABC for all weights and higher than that of SCA for \( w \ge 0.5 \). In case II, equal weights are given to the success and error terms. From Fig. 5b, it is clear that the PI of SCABC decreases as the weight \( w \) assigned to the computational time increases, which shows that SCABC takes more time to solve the problems than SCA and ABC. In case III, equal weights are given to the computational time and success terms. From Fig. 5c, the PI of the proposed SCABC algorithm is higher than that of ABC for all weights and higher than that of SCA for \( w \ge 0.4 \). Overall, to achieve comparatively higher success and lower error, the proposed SCABC algorithm can be recommended over SCA and ABC.

Fig. 5 Performance index (PI) for cases I, II and III

4.5 Comparisons of the proposed SCABC algorithm with some other nature-inspired algorithms

This section compares the performance of the SCABC algorithm with some other nature-inspired optimization algorithms. The comparison is performed with the same parameter settings as used for SCABC on the set of benchmark problems given in Table 1. The results obtained by the basic version of PSO (basic PSO) [2], modified PSO (mPSO) [34], harmony search (HS) [35], the salp swarm algorithm (SSA) [36], moth-flame optimization (MFO) [37], the firefly algorithm (FA) [38], gbest-guided ABC (GABC) [39] and the proposed SCABC are presented in Table 6, which reports the average error values of the objective fitness. From the table, it can be observed that the proposed SCABC algorithm outperforms basic PSO, mPSO, HS, SSA and MFO on both categories (unimodal and multimodal) of benchmark problems. The comparison with FA shows that SCABC performs better than FA on most of the problems except F7. Similarly, the comparison between SCABC and GABC shows that SCABC provides better results than GABC on all the unimodal test problems except F6, while the comparison on the multimodal problems shows that SCABC is competitive with GABC in terms of exploration strength.

Table 6 Comparison of various algorithms on 30 dimensional test problems

Hence, the overall analysis shows that the SCABC algorithm can be preferred over basic PSO, mPSO, HS, SSA, MFO, FA and GABC for the unimodal problems. On the multimodal problems, the SCABC algorithm can be considered a better optimization method than basic PSO, mPSO, HS, SSA, MFO and FA, while SCABC and GABC are very competitive with each other.

5 Application of SCABC algorithm in image segmentation

In the present section, the problem of multilevel thresholding of gray images is solved using Kapur’s entropy method [40] and the proposed SCABC algorithm. Image segmentation is the process of splitting an image into various segments, the main task being to simplify the representation of objects within the image for further processing. In multilevel thresholding, finding the optimal thresholds is crucial for the segmentation, and Kapur’s entropy-based method is a well-known approach to determine them. A brief description of Kapur’s entropy method is provided below.

Consider an image defined by a 2D gray-level intensity function \( f\left( {x,y} \right) \), where the value of \( f\left( {x,y} \right) \) is the gray level and lies in the range \( \left\{ {0, 1, 2, \ldots , L - 1} \right\} \). Let \( n_{i} \) be the number of pixels at intensity \( i \) and \( N \) the total number of pixels in the image. The probability of occurrence of the ith gray level is then \( p_{i} = \frac{{n_{i} }}{N} \). Kapur’s entropy method is defined as follows.

5.1 Kapur’s entropy method

In order to determine the threshold values, Kapur’s method [40] maximizes the entropy of the segmented classes, building on the concept of Shannon’s entropy [41]. In Kapur’s method, the entropy of the image is defined under the assumption that the image is represented by its gray-level histogram. If \( m \) thresholds \( \left( {T_{1} ,T_{2} , \ldots ,T_{m} } \right) \) are to be determined, dividing the image into \( m + 1 \) classes \( C_{0} ,C_{1} ,C_{2} , \ldots ,C_{m} \), then Kapur’s method determines them by maximizing the objective (fitness) function:

$$ f\left( {T_{1} ,T_{2} , \ldots T_{m} } \right) = E_{0} + E_{1} + \cdots + E_{m} $$
(10)

where

$$ \begin{aligned} & E_{0} = - \,\mathop \sum \limits_{i = 0}^{{T_{1} - 1}} \frac{{p_{i} }}{{\omega_{0} }}{ \ln }\left( {\frac{{p_{i} }}{{\omega_{0} }}} \right),\;\;\omega_{0} = \mathop \sum \limits_{i = 0}^{{T_{1} - 1}} p_{i} \\ & E_{k} = - \,\mathop \sum \limits_{{i = T_{k} }}^{{T_{k + 1} - 1}} \frac{{p_{i} }}{{\omega_{k} }}{ \ln }\left( {\frac{{p_{i} }}{{\omega_{k} }}} \right),\;\;\omega_{k} = \mathop \sum \limits_{{i = T_{k} }}^{{T_{k + 1} - 1}} p_{i} ,\;\;k = 1,2, \ldots ,m - 1 \\ & E_{m} = - \,\mathop \sum \limits_{{i = T_{m} }}^{L - 1} \frac{{p_{i} }}{{\omega_{m} }}{ \ln }\left( {\frac{{p_{i} }}{{\omega_{m} }}} \right),\;\;\omega_{m} = \mathop \sum \limits_{{i = T_{m} }}^{L - 1} p_{i} \\ \end{aligned} $$

where \( E_{0} ,E_{1} , \ldots ,E_{m} \) are the Kapur entropies and \( \omega_{0} ,\omega_{1} , \ldots ,\omega_{m} \) the class probabilities of the segmented classes \( C_{0} ,C_{1} ,C_{2} , \ldots ,C_{m} \), respectively.
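A compact sketch of the resulting fitness function of Eq. (10), assuming the normalized gray-level histogram is available as an array, is given below.

```python
import numpy as np

def kapur_entropy(hist_prob, thresholds):
    """Eq. (10): Kapur's entropy fitness for thresholds T_1 < T_2 < ... < T_m.

    hist_prob  : (L,) normalized gray-level histogram, p_i = n_i / N
    thresholds : sorted sequence of integer thresholds in [1, L-1]
    """
    hist_prob = np.asarray(hist_prob, dtype=float)
    edges = [0] + list(thresholds) + [len(hist_prob)]    # class boundaries
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):            # classes C_0, ..., C_m
        p = hist_prob[lo:hi]
        w = p.sum()                                      # class probability omega_k
        if w <= 0:
            continue                                     # an empty class adds nothing
        q = p[p > 0] / w
        total += -np.sum(q * np.log(q))                  # entropy E_k of the class
    return total                                         # to be maximized

# Example (illustrative): hist_prob = np.bincount(image.ravel(), minlength=256) / image.size
```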

5.2 Experimental setup

The present section provides a brief description of the experimental setup for the proposed hybrid method SCABC. In the present study of multilevel thresholding, six gray benchmark images—cameraman, clock, couple, boat, bridge and airport—are considered. These images are taken from the USC-SIPI Image Database and shown in Fig. 6.

Fig. 6 Benchmark test images: a cameraman, b clock, c couple, d boat, e bridge and f airport

Since the proposed method SCABC is a hybridized version of SCA and ABC, its performance is compared with that of ABC and SCA under the same parameter settings and termination criterion. In the proposed SCABC algorithm, 12 food sources (candidate solutions) are used, and 100 iterations are fixed as the termination criterion; the same settings are used for ABC and SCA. All the algorithms are implemented in MATLAB 2014a on an Intel Core i5 @ 2.30 GHz machine.

In order to evaluate the quality of the segmented images, the well-known peak signal-to-noise ratio (PSNR) metric is used in this paper. The PSNR depends directly on the image intensities and reflects the accuracy of the segmented image. The PSNR value is determined as follows:

$$ {\text{PSNR}} = 10\log_{10} \left( {\frac{{255^{2} }}{\text{MSE}}} \right) $$
(11)
$$ {\text{MSE}} = \frac{1}{MN}\mathop \sum \limits_{i = 1}^{M} \mathop \sum \limits_{j = 1}^{N} \left[ {I\left( {i,j} \right) - J\left( {i,j} \right)} \right]^{2} $$
(12)

where \( I \) and \( J \) represent the original and segmented images of size \( M \times N \), respectively.
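A short sketch of this quality measure, assuming 8-bit gray-level images of equal size, is given below.

```python
import numpy as np

def psnr(original, segmented):
    """Eqs. (11)-(12): PSNR between the original image I and segmented image J.

    Both images are assumed to be 8-bit gray-level arrays of equal size M x N.
    """
    I = np.asarray(original, dtype=float)
    J = np.asarray(segmented, dtype=float)
    mse = np.mean((I - J) ** 2)                  # Eq. (12)
    if mse == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)     # Eq. (11)
```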

5.3 Results and analysis

This section presents the numerical results of the proposed multilevel thresholding scheme based on SCABC. The results are produced with the parameter settings given in Sect. 5.2. The quality of the segmented images is compared using the objective fitness given by Kapur’s entropy method and the PSNR measure.

The best objective function values obtained by the SCABC, ABC and SCA algorithms using Kapur’s method are presented in Table 7, and the corresponding threshold values in Table 8. The mean objective function values are reported in Table 9. For a small number of thresholds (m = 2 or 3), the objective function values of the algorithms are practically the same; therefore, results are presented for larger numbers of thresholds (m = 4, 5 and 6). From the presented results, it can be observed that the SCABC algorithm achieves a higher objective function value on all test images compared with ABC and SCA. An example of a segmented test image (boat) obtained by the SCABC algorithm is presented in Fig. 7, together with the fitted histogram and the threshold locations. The mean PSNR values of the segmented images obtained by SCABC, ABC and SCA are presented in Table 10; the table clearly indicates that the segmented images obtained by the proposed hybrid method SCABC are of either competitive or better quality than those of the other algorithms. Hence, the overall analysis in terms of the various performance metrics signifies the better ability of the proposed hybrid method SCABC.

Table 7 Comparison of best fitness values obtained by ABC, SCA and proposed SCABC algorithm
Table 8 Optimal threshold values obtained by ABC, SCA and proposed SCABC algorithm
Table 9 Comparison of mean fitness values obtained by ABC, SCA and proposed SCABC algorithm
Fig. 7 Thresholded boat test image obtained by the SCABC algorithm using Kapur’s method: a–d the segmented images with three, four, five and six classes, respectively; a′–d′ the thresholds for the segmented images on the fitted histograms

Table 10 Comparison of mean PSNR values obtained by ABC, SCA and proposed SCABC algorithm

6 Conclusions

In this paper, a hybrid algorithm called SCABC has been proposed for global optimization. First, the problems of high diversity in the early generations and insufficient diversity in the later generations of SCA are alleviated by modifying its search equations and integrating memory-based information. Second, the employed bee phase of the classical ABC algorithm is replaced with the modified search equations of SCA in order to explore and exploit the search space more effectively. The proposed search strategy for the employed bees can be considered more efficient than the original one. To verify this, a classical benchmark set of 23 problems has been considered. The effectiveness and efficiency of the proposed SCABC algorithm against SCA and ABC are confirmed by its performance on the considered benchmarks. Various other analyses, such as convergence analysis, performance index analysis and statistical analysis, also confirm the better convergence rate and the significant improvement in the search strategy of the SCABC algorithm. Therefore, the overall analysis recommends SCABC as a better optimization method than the SCA and ABC algorithms. The comparison with some other algorithms also verifies the better or competitive search ability of SCABC. To demonstrate the effectiveness of SCABC on real-life applications, the multilevel thresholding problem is considered; the performance analysis on this problem, based on various metrics, shows the better ability of SCABC compared with ABC and SCA.

In the future, we will focus on other real-life applications of the proposed SCABC algorithm, based on its performance on the test problems. Multi-objective and binary versions of SCABC can also be developed.