
1 Introduction

The firefly algorithm (FA) is a recently developed swarm intelligence algorithm [1]. It mimics the social behavior of fireflies, based on their flashing and mutual attraction. In FA, each firefly represents a candidate solution, and its brightness is associated with the objective function of a given problem. The attraction among fireflies is based on their differences in brightness: a less bright firefly moves towards a brighter one. During the search process, fireflies move to new positions through this attraction and thereby find new candidate solutions.

Since its introduction, FA has attracted much attention. Many researchers have proposed different FA variants and used FA to solve various real-world optimization problems [2–5]. In [6], Fister et al. proposed a memetic FA (MFA) to solve combinatorial optimization problems, in which the parameter α is dynamically decreased. This helps accelerate convergence, but it has been pointed out that premature convergence may occur if α is reduced too quickly. The attractiveness β is limited to the range between 0.2 and 1.0. Moreover, the parameter α is multiplied by the length of the search range of the given problem. Simulation results show that MFA achieves much better solutions than the standard FA on some classical benchmark functions. On the basis of MFA, Wang et al. [7] proposed a new FA variant called RaFA, which employs two strategies: random attraction and Cauchy mutation. The first aims to reduce the number of attractions and the computational time complexity; the second focuses on enhancing the global search ability. Though the random attraction can accelerate convergence, it runs a risk of falling into local minima. The Cauchy mutation conducted on the global best firefly may help trapped fireflies escape from local optima. Computational results show that RaFA outperforms the standard FA and MFA in terms of solution accuracy and convergence speed. Based on MFA and random attraction, Wang et al. [8] introduced a neighborhood search strategy (NSRaFA), which consists of one local and two global neighborhood search operators. Experimental results show that NSRaFA performs better than MFA and RaFA. Like other stochastic search algorithms, FA is sensitive to its control parameters, and several strategies have been proposed to select good settings. In [9], twelve chaotic maps were used to update the light absorption coefficient γ and the attractiveness coefficient β. Results show that the proposed chaotic FA (CFA) can find better solutions than the standard FA on two benchmark functions. Yu et al. [10] designed a variable step size FA (VSSFA), in which the parameter α is not fixed but varies with the iteration count. Simulation results show that VSSFA performs better than the standard FA on sixteen benchmark functions. However, most of these functions are low-dimensional; our further experiments suggest that VSSFA is not suitable for high-dimensional functions, e.g., D ≥ 30, where D is the problem dimension. In [11], Wang et al. investigated the relations between the control parameters (α and β) and the convergence characteristics. That study concluded that α should tend to zero as FA converges, and that β may be adapted to suit the search.

In FA, the movement of fireflies is determined by the attractions, which are associated with the brightness of fireflies. Darker fireflies move towards brighter ones because of the attraction. This means that a darker (worse) firefly has more chances of moving to new positions than a brighter (better) one, which may not be suitable for the search. If both the darker (worse) and the brighter (better) fireflies had the same chance of moving to new positions, the brighter firefly might find more accurate solutions than the darker one. Motivated by this, we propose a hybrid FA called HFA, which employs a local search operator inspired by differential evolution (DE). It is expected that the DE-based local search can help the brighter fireflies find better candidate solutions. Moreover, HFA uses the same parameter control strategies as MFA. To verify the performance of HFA, thirteen benchmark functions are used in the experiments. Computational results show that HFA finds more accurate solutions than the standard FA, MFA, VSSFA, and CFA on the majority of test functions.

The rest of this paper is organized as follows. The standard FA is briefly described in Sect. 2. In Sect. 3, the proposed approach HFA is introduced. Experimental results and discussions are given in Sect. 4. Finally, the summary and future work are presented in Sect. 5.

2 Firefly Algorithm

The idea of FA is inspired by the social behavior of fireflies. On clear summer nights, many fireflies can be seen giving off flashes of light. There are about 2,000 firefly species in the world, and each species has a special flicker pattern to attract mates of the same species. The FA developed by Yang is based on the attraction of fireflies. To model FA, Yang proposed three assumptions as follows [12]:

(1) all fireflies are unisex;

(2) the attractiveness of a firefly is proportional to its brightness. For any two fireflies, the less bright one moves towards the brighter one, and their attractiveness and brightness decrease as their distance increases;

(3) the brightness of a firefly is affected or determined by the landscape of the objective function for a given problem.

In FA, the light intensity I can be approximated as [12]:

$$ I(r) = I_{0} e^{{ - \gamma r^{2} }} , $$
(1)

where \( I_{0} \) is the original light intensity, and γ is the light absorption coefficient.

As the attractiveness is proportional to the light intensity, the attractiveness β of a firefly can be defined by [12]:

$$ \beta (r) = \beta_{0} e^{{ - \gamma r^{2} }} , $$
(2)

where \( \beta_{0} \) is the attractiveness at r = 0. For two fireflies \( X_{i} \) and \( X_{j} \), their distance \( r_{i,j} \) can be calculated by [12]:

$$ r_{i,j} = \left\| {X_{i} - X_{j} } \right\| = \sqrt {\sum\limits_{d = 1}^{D} {(x_{i,d} - x_{j,d} )^{2} } } , $$
(3)

where D is the problem size.

For two fireflies \( X_{i} \) and \( X_{j} \), if firefly \( X_{j} \) is brighter than firefly \( X_{i} \), then \( X_{i} \) is attracted by \( X_{j} \). Due to this attraction, firefly \( X_{i} \) moves towards firefly \( X_{j} \) as follows [12]:

$$ x_{i,d} (t + 1) = x_{i,d} (t) + \beta_{0} e^{{ - \gamma r_{i,j}^{2} }} \left( {x_{j,d} (t) - x_{i,d} (t)} \right) + \alpha \varepsilon_{i,d} (t) , $$
(4)

where \( x_{i,d} \) and \( x_{j,d} \) are the dth dimensions of \( X_{i} \) and \( X_{j} \), respectively, \( \alpha \in [0, 1] \) is the randomization (step size) parameter, \( \varepsilon_{i,d} \) is a Gaussian random number for the dth dimension, and t indicates the generation index.
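For illustration, Eqs. (2)–(4) can be sketched in Python (NumPy) as follows. The function name fa_move and its default parameter values are ours, chosen only for illustration; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def fa_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """Move firefly x_i towards a brighter firefly x_j, following Eq. (4).

    x_i, x_j: 1-D NumPy arrays holding the D-dimensional positions.
    """
    rng = rng or np.random.default_rng()
    r2 = np.sum((x_i - x_j) ** 2)          # squared distance r_{i,j}^2, Eq. (3)
    beta = beta0 * np.exp(-gamma * r2)     # attractiveness beta(r), Eq. (2)
    eps = rng.standard_normal(x_i.shape)   # Gaussian perturbation epsilon_{i,d}
    return x_i + beta * (x_j - x_i) + alpha * eps
```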

3 Proposed Approach

As mentioned before, the movement of fireflies is determined by the attractions among them. In the standard FA, less bright fireflies move towards brighter ones. For two fireflies \( X_{i} \) and \( X_{j} \), if \( X_{j} \) is brighter than \( X_{i} \), \( X_{i} \) moves towards \( X_{j} \) by the attraction. However, the brighter (better) \( X_{j} \) does not perform any search operation (except for the random walk). If the brighter \( X_{j} \) could also move to other positions, it might have more chances of finding better candidate solutions than moving \( X_{i} \) alone. Motivated by this, we propose a hybrid FA (HFA), which uses the DE scheme to search the neighborhoods of the brighter fireflies in the population.

DE is an effective meta-heuristic for global optimization [13]. In this work, we embed DE into FA. Assume that \( X_{j} \) is the jth firefly in the population, where j = 1, 2, …, N, and N is the population size. First, a mutant firefly \( V_{j} \) is generated for the brighter firefly \( X_{j} \):

$$ v_{j,d} (t) = x_{j1,d} (t) + F \cdot \left( {x_{j2,d} (t) - x_{j3,d} (t)} \right) , $$
(5)

where d = 1, 2, …, D, and j1, j2, and j3 are mutually different random integers between 1 and N. The parameter F, called the scale factor, controls the amplification of the difference vector.

Second, a crossover operator is conducted on X j and V j , and a new candidate solution U j is generated as follows:

$$ u_{j,d} (t) = \begin{cases} v_{j,d} (t), & \text{if } rand_{d} \le CR \, \vee \, d = l \\ x_{j,d} (t), & \text{otherwise} \end{cases} $$
(6)

where d = 1, 2, …, D, the parameter \( CR \in (0, 1) \) is called the crossover rate, \( rand_{d} \) is a random value between 0 and 1, and \( l \in \{ 1, 2, \ldots, D \} \) is a random index.

Third, a selection operator is employed to select the better of \( X_{j} \) and \( U_{j} \) as the new \( X_{j} \) entering the next generation. Without loss of generality, this paper only considers minimization problems. The selection process is as follows:

$$ X_{j} (t + 1) = \begin{cases} U_{j} (t), & \text{if } f\left( U_{j} (t) \right) \le f\left( X_{j} (t) \right) \\ X_{j} (t), & \text{otherwise} \end{cases} $$
(7)
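A compact Python sketch of the three steps in Eqs. (5)–(7), applied to a single firefly, might look as follows. The name de_step and its defaults are illustrative; excluding j itself when drawing {j1, j2, j3} is a common DE convention assumed here, not stated in the text above.

```python
import numpy as np

def de_step(pop, fit, j, f_obj, F=0.5, CR=0.9, rng=None):
    """One DE mutation/crossover/selection step (Eqs. (5)-(7)) for firefly j.

    pop: (N, D) array of positions; fit: length-N array of objective values.
    Updates pop[j] and fit[j] in place when the trial solution is better.
    """
    rng = rng or np.random.default_rng()
    N, D = pop.shape
    # three mutually different random indices (assumed distinct from j as well)
    j1, j2, j3 = rng.choice([k for k in range(N) if k != j], 3, replace=False)
    v = pop[j1] + F * (pop[j2] - pop[j3])   # mutant firefly V_j, Eq. (5)
    mask = rng.random(D) <= CR              # rand_d <= CR
    mask[rng.integers(D)] = True            # random index l keeps one mutant gene
    u = np.where(mask, v, pop[j])           # trial solution U_j, Eq. (6)
    fu = f_obj(u)
    if fu <= fit[j]:                        # greedy selection, Eq. (7)
        pop[j], fit[j] = u, fu
```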

In HFA, the parameters α and β are updated with the same strategies as in MFA [6]:

$$ \alpha (t + 1) = \left( {\frac{1}{9000}} \right)^{{\frac{1}{t}}} \alpha (t). $$
(8)
$$ \beta = \beta_{\hbox{min} } + \left( {\beta_{0} - \beta_{\hbox{min} } } \right)e^{{ - \gamma r_{i,j}^{2} }} . $$
(9)

where \( \beta_{\min} \) is the lower bound of the attractiveness β.
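For clarity, Eqs. (8) and (9) can be written as two small helper functions; the names are ours and purely illustrative:

```python
import math

def update_alpha(alpha, t):
    """Eq. (8): shrink the step size alpha as the generation index t grows."""
    return (1.0 / 9000.0) ** (1.0 / t) * alpha

def attractiveness(r2, beta0=1.0, beta_min=0.2, gamma=1.0):
    """Eq. (9): attractiveness for squared distance r2, bounded below by beta_min."""
    return beta_min + (beta0 - beta_min) * math.exp(-gamma * r2)
```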

The main steps of the proposed HFA are presented in Algorithm 1, where N is the population size, FEs is the number of fitness evaluations, and Max_FEs is the maximum number of fitness evaluations.
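Algorithm 1 appears as a figure in the original paper and is not reproduced here. To make the overall flow concrete, the following is a minimal, self-contained Python sketch of the HFA main loop under the parameter settings of Sect. 4.2. The function name hfa, the bound handling, and applying the DE operator to every firefly (rather than only to the brighter fireflies selected in Algorithm 1) are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def hfa(f_obj, D=30, N=20, max_fes=500_000, lower=-100.0, upper=100.0,
        alpha=0.2, beta0=1.0, beta_min=0.2, gamma=1.0, F=0.5, CR=0.9, seed=0):
    """Minimal sketch of HFA: FA attraction moves plus DE-based local search."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, (N, D))     # random initial fireflies
    fit = np.array([f_obj(x) for x in pop])
    fes, t = N, 1
    while fes < max_fes:
        # Standard FA movement: X_i moves towards every brighter X_j, Eq. (4)
        for i in range(N):
            for j in range(N):
                if fit[j] < fit[i]:             # X_j is brighter (smaller f)
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta_min + (beta0 - beta_min) * np.exp(-gamma * r2)  # Eq. (9)
                    pop[i] = np.clip(pop[i] + beta * (pop[j] - pop[i])
                                     + alpha * rng.standard_normal(D), lower, upper)
                    fit[i] = f_obj(pop[i])
                    fes += 1
        # DE-based local search, Eqs. (5)-(7); applied to each firefly in this sketch
        for j in range(N):
            j1, j2, j3 = rng.choice([k for k in range(N) if k != j], 3, replace=False)
            v = pop[j1] + F * (pop[j2] - pop[j3])           # mutation, Eq. (5)
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True                    # keep one mutant gene
            u = np.clip(np.where(mask, v, pop[j]), lower, upper)  # crossover, Eq. (6)
            fu = f_obj(u)
            fes += 1
            if fu <= fit[j]:                                # greedy selection, Eq. (7)
                pop[j], fit[j] = u, fu
        alpha = (1.0 / 9000.0) ** (1.0 / t) * alpha         # Eq. (8)
        t += 1
    best = np.argmin(fit)
    return pop[best], fit[best]
```

For example, hfa(lambda x: np.sum(x ** 2)) would run this sketch on the Sphere function \( f_{1} \) with the settings of Sect. 4.2.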

4 Experiments

4.1 Test Problems

To test the performance of HFA, thirteen benchmark functions are used in the experiments [14, 15]. All functions are minimization problems, and their dimensions are set to 30. These functions are described in detail as follows.

(1) Sphere

$$ f_{1} \left( x \right) = \sum\nolimits_{i = 1}^{D} {x_{i}^{2} } $$

where \( x_{i} \in [-100, 100] \), and the global optimum is 0.

(2) Schwefel 2.22

$$ f_{2} \left( x \right) = \sum\nolimits_{i = 1}^{D} {\left| {x_{i} } \right|} + \prod\nolimits_{i = 1}^{D} {\left| {x_{i} } \right|} $$

where \( x_{i} \in [-10, 10] \), and the global optimum is 0.

(3) Schwefel 1.2

$$ f_{3} \left( x \right) = \sum\nolimits_{i = 1}^{D} {\left( {\sum\nolimits_{j = 1}^{i} {x_{j} } } \right)}^{2} $$

where \( x_{i} \in [-100, 100] \), and the global optimum is 0.

(4) Schwefel 2.21

$$ f_{4} \left( x \right) = \max_{i} \left( {\left| {x_{i} } \right|,\,1 \le i \le D} \right) $$

where \( x_{i} \in [-100, 100] \), and the global optimum is 0.

(5) Rosenbrock

$$ f_{5} \left( x \right) = \sum\nolimits_{i = 1}^{D - 1} {\left[ {100\left( {x_{i}^{2} - x_{i + 1} } \right)^{2} + \left( {x_{i} - 1} \right)^{2} } \right]} $$

where \( x_{i} \in [-30, 30] \), and the global optimum is 0.

(6) Step

$$ f_{6} \left( x \right) = \sum\nolimits_{i = 1}^{D} {\left( {\left\lfloor {x_{i} + 0.5} \right\rfloor } \right)}^{2} $$

where \( x_{i} \in [-100, 100] \), and the global optimum is 0.

(7) Quartic with noise

$$ f_{7} \left( x \right) = \sum\nolimits_{i = 1}^{D} {ix_{i}^{4} } + rand\left[ {0,1} \right) $$

where \( x_{i} \in [-1.28, 1.28] \), and the global optimum is 0.

(8) Schwefel 2.26

$$ f_{8} \left( x \right) = \sum\nolimits_{i = 1}^{D} { - \,x_{i} } \sin \left( {\sqrt {\left| {x_{i} } \right|} } \right) $$

where \( x_{i} \in [-500, 500] \), and the global optimum is −12569.5.

(9) Rastrigin

$$ f_{9} \left( x \right) = \sum\nolimits_{i = 1}^{D} {\left[ {x_{i}^{2} - 10\,\cos \left( {2\pi x_{i} } \right) + 10} \right]} $$

where \( x_{i} \in [-5.12, 5.12] \), and the global optimum is 0.

(10) Ackley

$$ f_{10} \left( x \right) = - 20\,\exp \left( { - 0.2\,\sqrt {\frac{1}{D}\sum\nolimits_{i = 1}^{D} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{D}\sum\nolimits_{i = 1}^{D} {\cos \left( {2\pi x_{i} } \right)} } \right) + 20 + e $$

where \( x_{i} \in [-32, 32] \), and the global optimum is 0.

(11) Griewank

$$ f_{11} (x) = \frac{1}{4000}\sum\nolimits_{i = 1}^{D} {x_{i}^{2} } - \prod\nolimits_{i = 1}^{D} {\cos \left( {\frac{{x_{i} }}{\sqrt i }} \right)} + 1 $$

where \( x_{i} \in [-600, 600] \), and the global optimum is 0.

(12) Penalized 1

$$ f_{12} (x) = 0.1\left\{ \sin^{2} (3\pi x_{1} ) + \sum\nolimits_{i = 1}^{D - 1} {(x_{i} - 1)^{2} \left[ 1 + \sin^{2} (3\pi x_{i + 1} ) \right]} + (x_{D} - 1)^{2} \left[ 1 + \sin^{2} (2\pi x_{D} ) \right] \right\} + \sum\nolimits_{i = 1}^{D} {u(x_{i} , 10, 100, 4)} $$

where \( x_{i} \in [-50, 50] \), \( u(x_{i}, a, k, m) \) is the penalty function defined in [14], and the global optimum is 0.

(13) Penalized 2

$$ f_{13} (x) = \frac{\pi }{D}\left\{ 10\sin^{2} (3\pi y_{1} ) + \sum\nolimits_{i = 1}^{D - 1} {(y_{i} - 1)^{2} \left[ 1 + \sin^{2} (3\pi y_{i + 1} ) \right]} + (y_{D} - 1)^{2} \left[ 1 + \sin^{2} (2\pi x_{D} ) \right] \right\} + \sum\nolimits_{i = 1}^{D} {u(x_{i} , 5, 100, 4)} $$

where \( y_{i} = 1 + (x_{i} + 1)/4 \), \( x_{i} \in [-50, 50] \), and the global optimum is 0.
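As a cross-check of the definitions above, a few of these benchmark functions can be sketched in NumPy as follows (each takes a length-D array x); these are straightforward transcriptions of the formulas, not the authors' test code:

```python
import numpy as np

def f1_sphere(x):
    """f1: sum of squares."""
    return np.sum(x ** 2)

def f9_rastrigin(x):
    """f9: Rastrigin function."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10_ackley(x):
    """f10: Ackley function."""
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```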

4.2 Comparisons of HFA with Other FA Variants

In this section, we compare the proposed HFA with the standard FA, VSSFA [10], MFA [6], and CFA [9] on the test suite. For the sake of fair comparison, the parameter settings are as follows. The population size N and Max_FEs are set to 20 and 5.0E+05, respectively. For MFA and HFA, the initial α, \( \beta_{\min} \), \( \beta_{0} \), and γ are set to 0.2, 0.2, 1.0, and 1.0, respectively. For the other parameters of HFA, F = 0.5 and CR = 0.9 are used. In the standard FA, α, \( \beta_{0} \), and γ are set to 0.2, 1.0, and \( 1/\Gamma^{2} \), respectively, where Γ is the length of the search range of a given problem. For VSSFA and CFA, we use the parameter settings suggested in their original literature [9, 10]. For each test function, each algorithm is run for 30 trials, and the mean best fitness values are recorded.

Table 1 presents the mean best fitness values achieved by the standard FA, VSSFA, MFA, CFA, and HFA. The best results among the five FA variants are shown in bold. From the results, HFA achieves better solutions than the standard FA and VSSFA on all test functions. MFA, CFA, and HFA can converge to the global optimum on \( f_{6} \). For the remaining twelve functions, HFA outperforms MFA and CFA. Figure 1 shows the convergence curves of the five FA variants on four selected functions. As seen, HFA converges faster than the other four algorithms.

Table 1. Comparison results of HFA with FA, VSSFA, MFA, and CFA.
Fig. 1. The convergence curves of FA, VSSFA, MFA, CFA, and HFA.

Table 2 presents the results achieved by the Friedman test. It can be seen that HFA achieves the best mean rank (the smallest value). CFA obtains the second best rank, while VSSFA takes the worst rank. Results show that HFA is the best algorithm among the five FA variants.

Table 2. Results achieved by the Friedman test.

5 Conclusions

In the standard FA, the movement of fireflies is determined by the attractions, which are associated with the brightness of fireflies. Less bright fireflies move towards brighter ones because of the attraction. This means that a less bright firefly has more chances of moving to new positions than a brighter one, which may not be suitable for the search. In this paper, we present a hybrid FA (HFA) variant, in which the DE scheme is utilized to enhance the search of the brighter fireflies. In addition, HFA uses the parameter control method suggested in MFA. Experiments are conducted on thirteen benchmark functions. Computational results show that HFA achieves more accurate solutions than the standard FA, VSSFA, MFA, and CFA.