1 Introduction

Swarm intelligence (SI) is one of the most prominent fields of artificial intelligence and has found many applications over the last few decades. Its major source of inspiration is the collective behaviour observed in different types of social swarms, e.g. ants, bees, termites, worms, flocks of birds and schools of fish [1]. To achieve their goals, the individuals of these swarms operate as a coordinated, interactive system, which emerges from the well-organized natural behaviour of every member of the swarm. SI techniques have been applied to problems in various fields, e.g. optimization, robot control, load balancing, applications demanding robustness and flexibility, and routing in mobile telecommunication networks. Beni [2] used the term for the first time in the context of cellular robotic systems consisting of simple agents that organize themselves through neighbourhood interaction. For solving problems of an optimization nature, several SI techniques have been developed, e.g. particle swarm optimization (PSO) [3], ant colony optimization (ACO) [4] and artificial bee colony optimization (ABC) [5]. More recently, further swarm-based optimization algorithms have been introduced, e.g. cuckoo search [6], the firefly algorithm (FA) [7], the bat algorithm [8], the krill herd bio-inspired optimization algorithm [9], glow-worm optimization [10,11,12], the grey wolf optimizer [13], the ant lion optimizer [14], the whale optimization algorithm [15], some hybrid algorithms [16], the moth flame optimization algorithm [17] and clustering algorithms [18]. The firefly algorithm was developed by Yang [19], taking inspiration from the flash-light signals with which fireflies attract potential mates. All fireflies are treated as unisexual and attract each other according to the intensity of their flashes.
The higher the flash intensity, the stronger the attraction, and vice versa. For solving optimization problems, the brightness of the flash is associated with the fitness function to be optimized. FA is a heuristic algorithm: it discovers solutions by trial and error, so it cannot guarantee that the optimal solution of a problem will be found within a predefined interval of time. FA is also meta-heuristic in nature, meaning that its operational steps trade off exploration (finding new regions of the search space) against exploitation (refining solutions within the local search space). An imbalance between exploration and exploitation yields poor local or global convergence rates, ultimately leading to degraded solution quality. Researchers have addressed this issue in the standard FA, in improved FA variants and in hybrids of FA with other optimization algorithms. The authors in [20,21,22] applied the standard FA to different optimization problems, using lower-level heuristics to refine the solutions discovered so far (exploitation) while relying on randomization to reach new regions of the search space and avoid entrapment in local optima (exploration). To better balance exploration and exploitation, many researchers have modified the standard FA to improve its efficiency: improved versions of FA were introduced by the authors in [23,24,25], all of whom modified the initialization stage or the position-update stage. Others have balanced exploration and exploitation by hybridizing FA with other optimization algorithms.
The authors in [26,27,28] used FA in hybrid schemes with other optimization algorithms, but they concentrated on solving specific problems rather than on the operational steps of the standard FA itself. A common problem of almost all meta-heuristic algorithms is that they may fail to reach the optimal value even when the maximum number of iterations has been exhausted. The major target of our proposed method is to improve the performance of the standard FA by introducing pattern search at the termination stage. When the FA has run for a fixed number of iterations without obtaining the optimal solution, or when no further improvement is observed, the values obtained by the FA are handed over as initial points to another optimization algorithm that continues the search and improves the solution. In our case, this task is performed by pattern search: the values obtained from running the FA for a fixed number of iterations are given as initial points to pattern search, which further improves the solution quality, yielding the optimal solution or at least a better one than the standard FA provides. The major advantage of the proposed method is therefore that whenever the standard FA fails to reach the optimal value, pattern search takes the FA's output as its input and further improves the solution quality. The facts, figures and results reported below indicate that the proposed solution is simple, easily implementable, robust and applicable to different optimization problems irrespective of the problem nature.

2 Standard FA operation

The firefly algorithm (FA) is a relatively recent stochastic, nature-inspired meta-heuristic that has attracted the attention of researchers since its inception for solving various types of optimization problems. FA is inspired by the light that fireflies emit for strong inter-coordination in achieving different goals: the emitted light is the major source of attraction among fireflies, and the brighter a firefly, the stronger its attraction for other fireflies, and vice versa. To solve optimization problems, Yang associated the intensity of the emitted light with the objective function to be optimized. By the inverse-square law of light, the intensity I(r) observed at distance r from a firefly is given by Eq. (1):

$$I(r) = \frac{I_0}{r^2}$$
(1)

where I0 represents the light intensity at the source. If γ is the absorption coefficient of the medium, then the light intensity I at distance r is given by Eq. (2):

$$I = I_0 \exp(-\gamma r^2)$$
(2)

where r represents the distance between the light source and the observation point. This light intensity is associated with the attractiveness β between fireflies, which is given by Eq. (3):

$$\beta = \beta_0 \exp(-\gamma r^m), \quad m \ge 1$$
(3)

where β0 represents the attractiveness at distance r = 0. The Euclidean distance r_ij between two fireflies i and j at positions xi and xj is given by Eq. (4):

$$r_{ij} = \left\| x_i - x_j \right\| = \sqrt{\sum_{k=1}^{d} (x_{i,k} - x_{j,k})^2}$$
(4)

In each generation, the position of firefly i is updated by moving it towards a brighter firefly j according to Eq. (5):

$$x_i = x_i + \beta_0 \exp(-\gamma r_{ij}^2)\,(x_j - x_i) + \alpha\varepsilon$$
(5)

where α is the randomization parameter, used to control the extent of the search, and ε is a random number drawn from a Gaussian distribution [19].
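As an illustration, one generation of the update rule in Eq. (5) can be sketched as follows. This is a minimal Python sketch, not the authors' MATLAB implementation; the names beta0, gamma and alpha follow the symbols above, and brightness is taken here as the value of a cost function to be minimized (lower cost, brighter firefly).

```python
import numpy as np

def firefly_step(x, fitness, beta0=2.0, gamma=1.0, alpha=0.2, rng=None):
    """One generation of the standard FA position update (Eq. 5).

    x       : (n, d) array of firefly positions
    fitness : callable mapping a position to a scalar cost (minimized)
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    cost = np.array([fitness(xi) for xi in x])       # lower cost = brighter
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if cost[j] < cost[i]:                    # firefly j is brighter
                r2 = np.sum((x[i] - x[j]) ** 2)      # squared distance r_ij^2 (Eq. 4)
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness (Eq. 3)
                # move i towards j plus Gaussian randomization (Eq. 5)
                x_new[i] += beta * (x[j] - x[i]) + alpha * rng.standard_normal(d)
    return x_new
```

With alpha set to 0 the move is purely deterministic: the dimmer firefly drifts towards the brighter one by a fraction beta of their separation, which makes the exploitation role of Eq. (5) easy to see.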

3 Proposed method

In the basic operation of the standard FA, the algorithm terminates in two cases: the required optimal value is obtained, or the maximum number of iterations is reached. If the algorithm terminates after obtaining the required optimal value, the approach has achieved its goal and no further processing is required. If the algorithm terminates because the maximum number of iterations is reached without the optimal value being found, the number of iterations can be increased; if even then the optimal value is not obtained, some other technique can be applied to reach the optimal value, or at least a value better than that obtained by the standard FA within its iteration budget [29]. In the proposed approach, pattern search (PS) is applied to address this issue and to improve the results obtained by the standard FA. PS takes the value obtained by the standard FA as its starting point and performs further processing to obtain the minimum or maximum of the minimization or maximization function, respectively. The pseudo code of the proposed approach is shown in Fig. 1 and its flowchart in Fig. 2. Pattern search is the most important part of the algorithm: it is a derivative-free optimization method with a strong capability of solving various kinds of optimization problems on which other standard optimization algorithms fail to find the optimal solution, and its easy implementation, conceptual simplicity and computational efficiency make it widely applicable. Its basic operation consists of a few technical steps. First, the mesh size, the expansion factor, the contraction factor and the maximum number of iterations are specified. After specifying these parameters, the starting points of PS are either set directly or taken from some other algorithm.
In our work, these points are taken from the standard FA. From these starting points, the mesh points and pattern vectors are created, and the objective function to be optimized is evaluated at the mesh points. If the values obtained are better than the previous ones, the algorithm is moving in the right direction: the mesh is expanded and the procedure continues. If they are not better, the mesh is contracted and new points are set as starting points. This procedure continues until the termination condition is met or the maximum number of iterations is reached. The main target of the proposed approach is to improve convergence rates and obtain the optimal solution. Viewed critically, the complexity of the proposed method may be somewhat higher than that of the standard FA because another algorithm is embedded in it, but our current concern is performance rather than complexity; once the optimal value is reliably obtained, our next target will be the reduction of complexity and computation time. The proposed methodology is motivated by the No Free Lunch theorem, which states that any two problem-solving techniques are equivalent when their performance is averaged over all possible problems. This suggests that the performance of a solver on a particular class of problems can be improved by hybridizing it with another solver; hence, for a better convergence rate, another technique can be incorporated into FA to improve its performance [30].
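The poll-expand-contract loop described above can be sketched as follows. This is an illustrative coordinate (compass) variant of pattern search, not the exact routine used in the paper; the expansion factor 2, contraction factor 0.5 and mesh tolerance are assumed values.

```python
import numpy as np

def pattern_search(f, x0, mesh=1.0, expand=2.0, contract=0.5,
                   max_iter=500, tol=1e-8):
    """Compass pattern search: poll f at x +/- mesh*e_k for every
    coordinate k, expand the mesh after a successful poll and
    contract it after a failed one, stopping when the mesh is
    finer than tol or the iteration budget is spent."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    d = len(x)
    directions = np.vstack([np.eye(d), -np.eye(d)])  # pattern vectors
    for _ in range(max_iter):
        improved = False
        for v in directions:
            trial = x + mesh * v                     # a mesh point
            ft = f(trial)
            if ft < fx:                              # better value found
                x, fx = trial, ft
                improved = True
                break
        mesh = mesh * expand if improved else mesh * contract
        if mesh < tol:                               # mesh too fine: stop
            break
    return x, fx
```

In the hybrid method, x0 would be the best position returned by the standard FA, so PS only has to refine an already promising solution.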

Fig. 1

Proposed FA-PS approach pseudo code

Fig. 2

Proposed FA-PS approach for optimization

4 Experimental setup

All the experiments were carried out on an Intel(R) Core(TM) i5-3570 CPU @ 3.40 GHz with MATLAB R2013a. In the standard FA, the initial population was 60 fireflies, and the number of iterations, gamma, beta and alpha were set to 300, 1, 2 and 0.2, respectively. After running the FA for a fixed number of iterations, PS starts and executes for a specific number of iterations. The parameters above are the final values obtained after tuning: the initial population of fireflies was varied from 30 to 80 in increments of 5; the maximum number of iterations was varied from 100 to 400 in increments of 20; and gamma, beta and alpha were varied from 0.5 to 2, 1 to 3 and 0.1 to 0.4, respectively, to find a combination giving a better convergence rate and hence a higher-quality solution. All these parameters were tuned for each of the following minimization and maximization functions.
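The tuning sweep above amounts to a simple parameter grid, sketched below. The text gives increments only for the population size (5) and the iteration count (20); the levels chosen for gamma, beta and alpha are assumptions within the stated ranges, purely for illustration.

```python
import itertools

# Sweep ranges from the experimental setup; the gamma/beta/alpha
# levels are assumed, as the text does not state their step sizes.
populations = list(range(30, 85, 5))      # 30, 35, ..., 80 fireflies
iterations  = list(range(100, 420, 20))   # 100, 120, ..., 400
gammas      = [0.5, 1.0, 1.5, 2.0]
betas       = [1.0, 2.0, 3.0]
alphas      = [0.1, 0.2, 0.3, 0.4]

# Every (n, max_iter, gamma, beta0, alpha) configuration to be tested
grid = list(itertools.product(populations, iterations, gammas, betas, alphas))
```

Note that the final configuration reported in the paper (60 fireflies, 300 iterations, gamma = 1, beta = 2, alpha = 0.2) is one point of this grid.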

4.1 Minimization functions

A total of eight minimization benchmark functions were used. These functions are given in Eqs. 6–13 and their convergence rates are shown in Figs. 3, 4, 5, 6, 7, 8, 9 and 10.

Fig. 3

F1 minimization function

Fig. 4

F2 minimization function

Fig. 5

F3 minimization function

Fig. 6

F4 minimization function

Fig. 7

F5 minimization function

Fig. 8

F6 minimization function

Fig. 9

F7 minimization function

Fig. 10

F8 minimization function

$$F1 = e^{-(x_1-4)^2-(x_2-4)^2} + e^{-(x_1+4)^2-(x_2-4)^2} + 2e^{-x_1^2-(x_2+4)^2} + 2e^{-x_1^2-x_2^2}$$
(6)

where xi ∈ [−5, 5]

$$F2 = (|x_1| - 5) + (|x_2| + 5)^2$$
(7)

where xi ∈ [−10, 10]

$$F3 = 100(x_2 - x_1^2)^2 + (6.4(x_2 - 0.5)^2 - x_1 - 0.6)^2$$
(8)

where xi ∈ [−5, 5]

$$F4 = 100(x_2 - x_1^2)^2 + (x_1 - 1)^2$$
(9)

where xi ∈ [−2.048, 2.048]

$$F5 = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$$
(10)

where xi ∈ [−5, 15]

$$F6 = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2)$$
(11)

where xi ∈ [−1, 1]

$$F7 = (x_1 + 2x_2 - 7)^2 + (2x_1x_2 - 5)^2$$
(12)

where xi ∈ [−10, 10]

$$F8 = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1x_2 - 4x_2^2 + 4x_2^4$$
(13)

where xi ∈ [−5, 5].
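To make the test problems concrete, two of the benchmarks above can be written down directly: F4 (Eq. 9), the Rosenbrock function, and F6 (Eq. 11). This is only an illustrative sketch of the objective functions, not the authors' code.

```python
import numpy as np

def f4(x1, x2):
    """F4, Eq. (9): the Rosenbrock function; global minimum 0 at (1, 1)."""
    return 100.0 * (x2 - x1**2)**2 + (x1 - 1.0)**2

def f6(x1, x2):
    """F6, Eq. (11): highly multimodal; global minimum -2 at the origin."""
    return x1**2 + x2**2 - np.cos(18.0 * x1) - np.cos(18.0 * x2)
```

F4 is unimodal but has a long curved valley that slows convergence, while the cos(18x) terms give F6 many local minima inside [−1, 1]; together they probe the exploitation and exploration sides of the hybrid.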

As shown in the figures, the pattern search algorithm is introduced into the standard FA once no significant improvement is observed in the convergence rate. For functions 1, 2 and 5, pattern search was introduced after iteration 200, whereas for the remaining functions it was introduced after iteration 250. For functions 1, 2, 4, 7 and 8, the solution quality of the proposed approach outperforms the standard FA and GA, whereas for functions 3, 5 and 6 the convergence rate of the proposed approach is slightly worse than that of GA but better than that of the standard FA.
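The switch-over criterion of "no significant improvement in the convergence rate" can be made precise with a small stall test, sketched below. In the paper the hand-over happens after a fixed, per-function iteration (200 or 250); the adaptive window and threshold here are illustrative assumptions.

```python
def should_switch_to_ps(history, window=50, min_delta=1e-6):
    """Decide when to hand over from FA to PS (minimization case).

    history : best-so-far objective values, one entry per FA iteration
    Returns True once the best value has improved by less than
    min_delta over the last `window` iterations.
    """
    if len(history) <= window:
        return False          # not enough iterations observed yet
    return history[-window - 1] - history[-1] < min_delta
```

With such a test the hybrid does not waste FA iterations after the swarm has stagnated, and PS starts from the stalled best position.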

4.2 Maximization functions

Equations 14–19 give the mathematical formulas of the maximization functions used in the experiments; their convergence rates are shown in Figs. 11, 12, 13, 14, 15 and 16.

Fig. 11

F9 maximization function

Fig. 12

F10 maximization function

Fig. 13

F11 maximization function

Fig. 14

F12 maximization function

Fig. 15

F13 maximization function

Fig. 16

F14 maximization function

$$F9 = -\left(20 + (x_1^2 - 10\cos(2\pi x_1)) + (x_2^2 - 10\cos(2\pi x_2))\right)$$
(14)

where xi ∈ [−2.048, 2.048]

$$F10 = \cos x_1 \cos x_2\, e^{-(x_1-\pi)^2 - (x_2-\pi)^2}$$
(15)

where xi ∈ [−20, 20]

$$F11 = 280 - \frac{x_1^4 - 16x_1^2 - 5x_1}{2} - \frac{x_2^4 - 16x_2^2 - 5x_2}{2}$$
(16)

where xi ∈ [−5, 5]

$$F12 = \ln\left\{\left[\left(\sin\left((\cos x_1 + \cos x_2)^2\right)\right)^2 - \left(\cos\left((\sin x_1 + \sin x_2)^2\right)\right)^2 + x_1\right]^2\right\} - 0.1\left((x_1 - 1)^2 + (x_2 - 1)^2\right)$$
(17)

where xi ∈ [−10, 10]

$$F13 = \left[\sum_{i=1}^{5} i\cos\left((i+1)x_1 + i\right)\right]\left[\sum_{i=1}^{5} i\cos\left((i+1)x_2 + i\right)\right]$$
(18)

where xi ∈ [−10, 10]

$$F14 = 660 - (x_1^2 + x_2 - 11)^2 - (x_1 + x_2^2 - 7)^2$$
(19)

where xi ∈ [−6, 6].
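To make the maximization setting concrete, the first maximization function (Eq. 14), a negated Rastrigin function, can be written down directly; its maximum value 0 is attained at the origin, since each term x² − 10cos(2πx) is bounded below by −10. A sketch, not the authors' code:

```python
import numpy as np

def f_rastrigin_neg(x1, x2):
    """Eq. (14): negated 2-D Rastrigin function; global maximum 0 at (0, 0)."""
    return -(20.0 + (x1**2 - 10.0 * np.cos(2.0 * np.pi * x1))
                  + (x2**2 - 10.0 * np.cos(2.0 * np.pi * x2)))
```

Because the cosine ripples create a dense field of local maxima around the global one, this function is a natural stress test for the exploration ability of FA before PS refines the final point.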

As shown in the figures, the pattern search algorithm is introduced into the standard FA once no significant improvement is observed in the convergence rate. For functions 10, 11 and 12, pattern search was introduced after iteration 200, whereas for the remaining functions it was introduced after iteration 250. For functions 9, 11, 12, 13 and 14, the solution quality of the proposed approach outperforms the standard FA and GA, whereas for function 10 the convergence rate of the proposed approach is slightly worse than that of GA but better than that of the standard FA.

The convergence rates of GA, FA and the proposed FA-PS for the six maximization functions are shown in Figs. 11, 12, 13, 14, 15 and 16. It is evident from the figures that the proposed approach obtains the most optimal values for all the considered functions except function 10, shown in Fig. 12. The complete statistical analysis of all the functions for GA, FA and FA-PS is given in Table 1.

Table 1 Statistical analysis of all functions

5 Conclusions

In this work, a hybrid technique combining the standard FA with PS has been developed for solving optimization problems. During the operation of the standard FA, the solution quality often stops improving after a few iterations and the algorithm fails to reach the optimal value. This issue has been targeted in this work: when no considerable improvement is observed in the solution quality of the standard FA, PS takes the value obtained by the FA as its starting point and improves the solution quality further. Although the solution obtained by introducing PS may not be the optimal one, it is better than the solution provided by the standard FA. The proposed approach was applied to 14 standard benchmark functions and its performance was compared with the standard FA and GA. The proposed approach outperforms both FA and GA on 11 of the 14 benchmark functions, and outperforms FA on all of the considered functions.