
1 Introduction

In the last two decades, optimization algorithms have become more and more attractive. New algorithms have been presented, such as particle swarm optimization (PSO) [1], artificial bee colony (ABC) [2], firefly algorithm (FA) [3], genetic algorithm (GA) [4], differential evolution (DE) [5], evolution strategy (ES) [6], ant colony optimization (ACO) [7], cuckoo search (CS) [8], and gray wolf optimizer (GWO) [9]. The common feature of these algorithms is that they are all inspired by how animals search for food in nature, a process that is simulated through mathematical formulas. These algorithms share outstanding features such as simple programming, fast convergence, and accuracy with acceptable errors, and they have been applied in almost all fields, including economic systems and engineering problems. This raises the question of which algorithm can solve a given problem with suitably high reliability. Therefore, this paper compares the reliability of four algorithms: PSO, ABC, CS, and GWO. PSO and ABC are known as classical algorithms whose significant features are widely acknowledged, while CS and GWO have recently been proposed as new algorithms whose main characteristic is improved accuracy. In order to comprehensively evaluate the advantages and disadvantages of these algorithms, the paper investigates a search space with 30 dimensions and compares the performance of all four algorithms in two respects: convergence rate and accuracy.

2 Swarm Intelligence Algorithms

Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) was proposed by Eberhart and Kennedy [1]. The algorithm is based on balancing the global best (Gbest) and the local best (Lbest,i) during the velocity update. Thus, each candidate solution \(X_{i} (i = 1,2, \ldots ,n)\) moves at iteration (t + 1) to a new position that is closer to the local best or the global best position. During this movement, candidate solutions also explore new regions of the search space through the two learning factors c1 and c2. The velocity and position updates of the PSO algorithm are described in Eqs. 1 and 2.

$$V_{i}^{t + 1} = wV_{i}^{t} + c_{1} r_{1} \left( {L_{{{\text{best}},i}} - X_{i}^{t} } \right) + c_{2} r_{2} \left( {G_{\text{best}} - X_{i}^{t} } \right)$$
(1)
$$X_{i}^{t + 1} = X_{i}^{t} + V_{i}^{t + 1}$$
(2)

where r1 and r2 are random values in the range [0, 1], and w denotes an inertia weight that controls the velocity.
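
As an illustration, the update in Eqs. 1 and 2 can be sketched in a few lines of Python; the parameter values w, c1, and c2 below are placeholders, not those used in the paper:

```python
import numpy as np

def pso_step(X, V, L_best, G_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO update following Eqs. 1 and 2.

    X, V, L_best : (n_particles, n_dims) arrays of positions,
    velocities, and personal-best positions; G_best is the
    (n_dims,) global-best position. w, c1, c2 are illustrative
    parameter values.
    """
    r1 = np.random.rand(*X.shape)  # r1, r2 ~ U[0, 1] in Eq. 1
    r2 = np.random.rand(*X.shape)
    V_new = w * V + c1 * r1 * (L_best - X) + c2 * r2 * (G_best - X)  # Eq. 1
    X_new = X + V_new                                                # Eq. 2
    return X_new, V_new
```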

Artificial Bee Colony (ABC)

The artificial bee colony (ABC) algorithm [2] is a swarm-based meta-heuristic introduced by Karaboga in 2005. Its search process has three major steps:

  • Send the employed bees to a food source and estimate their nectar quality following Eq. 3;

    $$v_{i,j} = x_{i,j} + \phi_{i,j} \left( {x_{i,j} - x_{k,j} } \right)$$
    (3)

    where \(k \in \left\{ {1,2, \ldots ,{\text{SN}}} \right\}\) is a randomly chosen index that must differ from i, and \(\phi_{i,j}\) is a random number in the range [−1, 1].

  • Onlooker bees select the food sources based on information collected from the employed bees and estimate their nectar quality as expressed in Eq. 4;

    $$p_{i} = \frac{{{\text{fit}}_{i} }}{{\sum\nolimits_{i = 1}^{\text{SN}} {{\text{fit}}_{i} } }}$$
    (4)

    where \({\text{fit}}_{i}\) is the fitness value of the i-th solution.

  • Determine the scout bees and send them to possible food sources for exploration.

The general structure of the algorithm is summarized in the sketch below.

A food source whose nectar is abandoned is replaced by the scouts with a new, randomly generated food source when the position cannot be improved further. The control parameter limit determines the abandonment of a food source within the predetermined number of cycles.
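
A minimal Python sketch of one ABC cycle, assuming a maximization-style fitness (e.g., fit = 1/(1 + f) for minimization problems) and an illustrative limit value, could read:

```python
import numpy as np

def abc_cycle(X, fit, trials, f, lb, ub, limit=50):
    """One ABC cycle: employed-bee, onlooker-bee, and scout-bee phases.

    X : (SN, n) food-source positions; fit : non-negative fitness
    values (higher is better); trials : improvement-failure counters;
    f : fitness function; lb, ub : search-space bounds.
    limit is the abandonment threshold (illustrative value).
    """
    SN, n = X.shape

    def try_neighbor(i):
        # Eq. 3: perturb one dimension using a random partner k != i
        k = np.random.choice([s for s in range(SN) if s != i])
        j = np.random.randint(n)
        v = X[i].copy()
        v[j] = X[i, j] + np.random.uniform(-1, 1) * (X[i, j] - X[k, j])
        v = np.clip(v, lb, ub)
        fv = f(v)
        if fv > fit[i]:               # greedy selection
            X[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for i in range(SN):               # employed-bee phase
        try_neighbor(i)

    p = fit / fit.sum()               # Eq. 4: selection probabilities
    for _ in range(SN):               # onlooker-bee phase
        i = np.random.choice(SN, p=p)
        try_neighbor(i)

    worst = int(np.argmax(trials))    # scout-bee phase:
    if trials[worst] > limit:         # abandon an exhausted source
        X[worst] = np.random.uniform(lb, ub, n)
        fit[worst] = f(X[worst])
        trials[worst] = 0
    return X, fit, trials
```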

Cuckoo Search Algorithm (CS)

The cuckoo search algorithm (CS) is also a nature-inspired algorithm, based on the brood parasitism of cuckoo birds in nature. It was introduced by Yang and Deb [8]. The algorithm relies on Lévy flights with step length s to orient each new move. An outstanding advantage of this approach is that the Lévy distribution produces step lengths that can be either positive or negative. Thus, the position at iteration (t + 1) follows Eq. 5

$$X_{i}^{t + 1} = X_{i}^{t} + \alpha L\left( {s,\lambda } \right)$$
(5)

where

  • \(\alpha\): Scaling factor;

  • \(X_{i}^{t+1}\) and \(X_{i}^{t}\) are the new and current positions of the i-th cuckoo;

  • \(L\left( {s,\lambda } \right)\): the Lévy distribution, used to define the step size of the random walk.
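
The paper does not specify how the Lévy step is drawn; one standard choice is Mantegna's algorithm. A sketch of Eq. 5 under that assumption (with an illustrative exponent λ = 1.5 and placeholder scaling factor α) could read:

```python
import numpy as np
from math import gamma

def levy_step(n_dims, lam=1.5):
    """Draw a Lévy-distributed step via Mantegna's algorithm
    (a standard approximation; lam is the Lévy exponent)."""
    sigma = (gamma(1 + lam) * np.sin(np.pi * lam / 2)
             / (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0, sigma, n_dims)
    v = np.random.normal(0, 1, n_dims)
    return u / np.abs(v) ** (1 / lam)   # steps may be positive or negative

def cuckoo_move(X_i, alpha=0.01):
    """Eq. 5: new position of one cuckoo via a Lévy flight."""
    return X_i + alpha * levy_step(X_i.size)
```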

Gray Wolf Optimizer (GWO)

The gray wolf optimizer (GWO) was proposed by Mirjalili et al. [9]. It is based on the hunting behavior of gray wolves, with a division of tasks among the wolves: the leader of the pack is called alpha (\(\alpha\)) and is considered the best solution, while the second and third best solutions are named beta (\(\beta\)) and delta (\(\delta\)), respectively. The process of hunting and moving toward the prey can be simulated in mathematical form as:

$$\overrightarrow {D_{\alpha } } = \left| {\overrightarrow {C_{1} } \cdot \overrightarrow {X_{\alpha } } - \overrightarrow {X} } \right|,\;\overrightarrow {D_{\beta } } = \left| {\overrightarrow {C_{2} } \cdot \overrightarrow {X_{\beta } } - \overrightarrow {X} } \right|,\;\overrightarrow {D_{\delta } } = \left| {\overrightarrow {C_{3} } \cdot \overrightarrow {X_{\delta } } - \overrightarrow {X} } \right|$$
(6)
$$\overrightarrow {X_{1} } = \overrightarrow {X_{\alpha } } - \overrightarrow {A_{1} } \cdot \overrightarrow {D_{\alpha } },\;\overrightarrow {X_{2} } = \overrightarrow {X_{\beta } } - \overrightarrow {A_{2} } \cdot \overrightarrow {D_{\beta } },\;\overrightarrow {X_{3} } = \overrightarrow {X_{\delta } } - \overrightarrow {A_{3} } \cdot \overrightarrow {D_{\delta } }$$
(7)
$$\overrightarrow {X} \left( {t + 1} \right) = \frac{\overrightarrow {X_{1} } + \overrightarrow {X_{2} } + \overrightarrow {X_{3} }}{3}$$
(8)

where

$$\begin{aligned} \overrightarrow {A_{1,2,3} } & = 2\overrightarrow {a} \,\overrightarrow {r_{1} } - \overrightarrow {a} \\ \overrightarrow {C_{1,2,3} } & = 2\overrightarrow {r_{2} } \\ \end{aligned}$$
(9)

In Eq. 9, the components of the vector \(\overrightarrow {a}\) decrease linearly from 2 to 0 over the course of the iterations, and \(\overrightarrow {r_{1} }\) and \(\overrightarrow {r_{2} }\) are random vectors whose components take values in the range [0, 1].
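
As an illustrative sketch, Eqs. 6–9 for a single wolf can be written in Python as follows; the linear decrease of a is one common choice, assumed here:

```python
import numpy as np

def gwo_step(X, X_alpha, X_beta, X_delta, a):
    """Eqs. 6-9: move one wolf X toward the alpha, beta, and delta
    wolves. a decreases from 2 to 0 over the iterations."""
    def guided(X_leader):
        r1, r2 = np.random.rand(X.size), np.random.rand(X.size)
        A = 2 * a * r1 - a            # Eq. 9
        C = 2 * r2                    # Eq. 9
        D = np.abs(C * X_leader - X)  # Eq. 6
        return X_leader - A * D       # Eq. 7
    X1, X2, X3 = guided(X_alpha), guided(X_beta), guided(X_delta)
    return (X1 + X2 + X3) / 3         # Eq. 8

# One common schedule for a at iteration t: a = 2 - 2 * t / max_iter
```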

3 Numerical Examples

To compare the efficiency of the PSO, ABC, CS, and GWO algorithms, the first ten benchmark test functions given in Table 1 were selected for examination. All functions have the same search space with n = 30 dimensions, and the results are presented in two respects: convergence rate and accuracy for each function.

Table 1 First ten classical benchmark functions

For a fair comparison, all algorithms use the same number of initial solution candidates, N = 30, and a total of 1000 iterations. The convergence trends of all functions are shown in Fig. 1. The comparison results obtained from 30 independent random runs of each algorithm are given in Table 2 for the 30-dimensional case, reporting the best fitness, mean, and standard deviation over the 30 runs.
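
As a sketch of this protocol (the sphere function is assumed here as F1; the exact definitions of Table 1 are not reproduced), the statistics could be gathered as:

```python
import numpy as np

def sphere(x):
    """F1, the sphere function: a common first benchmark
    (assumed here; the paper's Table 1 definition may differ)."""
    return np.sum(x ** 2)

def evaluate(optimizer, f, n_dims=30, n_pop=30, n_iter=1000, n_runs=30):
    """Protocol of Sect. 3: 30 independent runs, each with a
    population of 30 and 1000 iterations; report best, mean, std.
    `optimizer(f, n_dims, n_pop, n_iter)` stands for any of the
    four algorithms and returns the best fitness found in one run."""
    results = np.array([optimizer(f, n_dims, n_pop, n_iter)
                        for _ in range(n_runs)])
    return results.min(), results.mean(), results.std()
```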

Fig. 1 Convergence trends of the ten benchmark functions

Table 2 Results of the ten benchmark functions compared among the four algorithms with dimension n = 30

Based on the convergence trends shown in Fig. 1, it can be seen that two algorithms, CS and GWO, deliver the better convergence rates on functions F1–F5 and F9–F10. In particular, on function F9, the CS and GWO algorithms reach their best values at around 200 and 400 iterations, respectively. Meanwhile, the ABC algorithm shows a good convergence rate on function F8, and PSO records the worst convergence rate among the four algorithms.

Based on the statistics given in Table 2, it can be seen that the CS and GWO algorithms provide the best accuracy on almost all of the functions investigated. Especially on functions F1–F5, the CS algorithm shows superiority, with a best fitness value far ahead of the other algorithms. Meanwhile, the accuracy of PSO and ABC remains worse than that of the CS and GWO algorithms. However, on function F6, the ABC algorithm achieves the highest accuracy.

4 Conclusion

In this paper, the PSO, ABC, CS, and GWO algorithms were compared for analyzing optimization problems. The first ten benchmark functions were used as numerical examples to investigate the convergence rate and accuracy of the algorithms. The results show that the CS algorithm is a stable and highly reliable algorithm for solving real-world problems, while the PSO and ABC algorithms exhibit instability on some particular problems.