1 Introduction

Optimization problems arise in many engineering fields. As problem complexity grows, stronger optimization algorithms are required. In the past decades, several new optimization techniques inspired by swarm intelligence have been proposed, such as artificial bee colony (ABC) [1,2,3,4], bat algorithm (BA) [5,6,7], firefly algorithm (FA) [8,9,10], cuckoo search (CS) [11, 12], fruit fly optimization (FFO) [13], and the artificial plant optimization algorithm [14, 15]. FA is a new optimization technique originally invented by Prof. Yang [8]. In FA, every firefly moves toward new positions and finds potential solutions through the attraction of other, brighter fireflies. Recent studies have shown that FA is an efficient optimization tool.

However, the standard FA still has some drawbacks. For instance, FA is sensitive to its control parameters, and its convergence speed is slow. To tackle these issues, several improved strategies have been designed. Recently, a memetic FA (MFA) was proposed, in which two improved strategies are employed [16]. First, the step factor α is dynamically decreased based on an empirical model. Second, the attractiveness β is constrained to a predefined range [0.2, 1.0]. Moreover, the factor α is multiplied by the length of the search interval. Gandomi et al. [17] presented a chaotic FA, in which different chaotic maps were used to adjust the step factor α and the light absorption coefficient γ. Besides Gandomi's work, other researchers have also combined FA with chaos to obtain good performance. In [18], an adaptive selection method was used to choose the parameter α from a candidate set. In [19], Wang et al. investigated the relationship between convergence and the parameter α. Their results show that α should tend to zero as FA converges to a solution. Based on this principle, Wang et al. designed two dynamic methods to adjust α, in which α is gradually decreased under different models as the generation count increases [19, 20].

In this paper, we present a new adaptive firefly algorithm (AFA), which is an enhanced version of MFA. In AFA, we combine MFA with a new adaptive parameter strategy to dynamically adjust the step factor α. Thirteen famous test functions are used for performance verification. Results show that AFA is superior to the standard FA, MFA [16], chaotic FA (CFA) [17], and FA with random attraction (RaFA) [9].

2 Firefly Algorithm

For two fireflies \( X_i \) and \( X_j \), the attractiveness β is defined as follows [21].

$$ \beta (r_{ij} ) = \beta_{0} e^{{ - \gamma r_{ij}^{2} }} $$
(1)

where \( \beta_0 \) is the attractiveness at \( r = 0 \), γ is the light absorption coefficient, and \( r_{ij} \) is the distance between \( X_i \) and \( X_j \). The distance \( r_{ij} \) is computed as follows [21].

$$ r_{ij} = \left\| {X_{i} - X_{j} } \right\| = \sqrt {\sum\limits_{d = 1}^{D} {(x_{id} - x_{jd} )^{2} } } $$
(2)

where D is the problem size.

When \( X_j \) is brighter (better) than \( X_i \), \( X_i \) moves towards \( X_j \) because of the attraction. In the standard FA, this movement is defined as follows [21].

$$ x_{id} (t + 1) = x_{id} (t) + \beta (r_{ij} ) \cdot \left( {x_{jd} (t) - x_{id} (t)} \right) + \alpha \left( {rand - \frac{1}{2}} \right) $$
(3)

where \( x_{id} \) and \( x_{jd} \) are the dth dimensions of \( X_i \) and \( X_j \), respectively, \( \alpha \in [0, 1] \) is called the step factor, and rand is a random value within [0, 1].
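Eqs. (1)–(3) can be sketched in Python as follows. This is only an illustrative sketch, not the authors' implementation: the function and parameter names are ours, the default values are placeholders, and minimization is assumed (a lower fitness means a brighter firefly).

```python
import numpy as np

def fa_move(X, fitness, beta0=1.0, gamma=1.0, alpha=0.2, bounds=(-100.0, 100.0)):
    """One generation of the standard FA movement (Eqs. (1)-(3)).

    X: (N, D) array of firefly positions; fitness: length-N array of
    objective values (minimization, so lower is brighter).
    """
    N, D = X.shape
    X_new = X.copy()
    for i in range(N):
        for j in range(N):
            if fitness[j] < fitness[i]:                  # X_j is brighter than X_i
                r = np.linalg.norm(X_new[i] - X[j])      # distance, Eq. (2)
                beta = beta0 * np.exp(-gamma * r ** 2)   # attractiveness, Eq. (1)
                # movement toward X_j plus a random perturbation, Eq. (3)
                X_new[i] += beta * (X[j] - X_new[i]) \
                            + alpha * (np.random.rand(D) - 0.5)
    # keep fireflies inside the search range
    return np.clip(X_new, bounds[0], bounds[1])
```

In a full run, this step would be repeated for each generation, with the brightness recomputed from the updated positions.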

3 Proposed Approach

Recently, a memetic firefly algorithm (MFA) was designed to enhance the performance of FA [16]. MFA makes three improvements. First, the step factor α is dynamically updated as follows.

$$ \alpha (t + 1) = \left( {\frac{1}{9000}} \right)^{{\frac{1}{t}}} \alpha (t) $$
(4)

where t indicates the generation index. We can see that the value of α decreases as t grows.

Second, the definition of the attractiveness β is modified. The new β is calculated as follows.

$$ \beta (r_{ij} ) = \beta_{\hbox{min} } + \left( {\beta_{0} - \beta_{\hbox{min} } } \right)e^{{ - \gamma r_{ij}^{2} }} $$
(5)

where \( \beta_{\min} \) is the minimum value of the attractiveness β, so that β is constrained to the range \( [\beta_{\min}, \beta_0] \). In [16], \( \beta_{\min} \) and \( \beta_0 \) are set to 0.2 and 1.0, respectively.

Third, the step factor α is multiplied by the length of the search range, following the suggestions of [21]. The movement equation is then modified as follows.

$$ x_{id} (t + 1) = x_{id} (t) + \beta (r_{ij} ) \cdot \left( {x_{jd} (t) - x_{id} (t)} \right) + \alpha (t) \cdot s_{d} \cdot \left( {rand - \frac{1}{2}} \right) $$
(6)

where \( s_d \) is the length of the search interval of the dth dimension.
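The three MFA modifications (Eqs. (4)–(6)) can be summarized in a short Python sketch. The function names are ours, and the defaults follow the parameter values quoted above from [16]; this is a sketch of the formulas, not the reference implementation.

```python
import numpy as np

def mfa_alpha_update(alpha, t):
    """MFA step-factor schedule, Eq. (4); t is the generation index (t >= 1)."""
    return (1.0 / 9000.0) ** (1.0 / t) * alpha

def mfa_attractiveness(r, beta0=1.0, beta_min=0.2, gamma=1.0):
    """Bounded attractiveness, Eq. (5): beta always lies in [beta_min, beta0]."""
    return beta_min + (beta0 - beta_min) * np.exp(-gamma * r ** 2)

def mfa_move_dim(x_i, x_j, r, alpha, s, beta0=1.0, beta_min=0.2, gamma=1.0):
    """Movement of one dimension of X_i toward a brighter X_j, Eq. (6).

    r is the distance between the two fireflies (Eq. (2)), and s is the
    length of the search interval of this dimension.
    """
    beta = mfa_attractiveness(r, beta0, beta_min, gamma)
    return x_i + beta * (x_j - x_i) + alpha * s * (np.random.rand() - 0.5)
```

Note that the bounded attractiveness of Eq. (5) guarantees a minimum pull of \( \beta_{\min} \) even between distant fireflies, which the original Eq. (1) does not.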

Based on MFA, we propose an enhanced version by employing a new adaptive parameter strategy to adjust the step factor α. In our approach AFA, Eq. (4) is modified as follows.

$$ \alpha (t + 1) = \left( {1 - \frac{FEs}{MaxFEs}} \right)^{m} \alpha (t) $$
(7)

where \( m > 0 \), \( FEs \) denotes the number of fitness evaluations consumed so far, \( MaxFEs \) indicates the maximum number of fitness evaluations, and t is the generation number. When \( 0 < m < 1 \), the weighting term \( (1 - FEs/MaxFEs)^m \) stays larger than its linear counterpart, which prevents the step factor \( \alpha(t) \) from vanishing too early in the search. In our experiments, \( m \) is set to 0.5. In fact, Eq. (7) is a general version of our previous work [19].
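The AFA schedule of Eq. (7) reduces to a one-line update; a minimal sketch follows (the function name is ours, and m = 0.5 matches the experimental setting).

```python
def afa_alpha_update(alpha, fes, max_fes, m=0.5):
    """AFA adaptive step-factor schedule, Eq. (7).

    alpha: current step factor alpha(t); fes: fitness evaluations used so
    far; max_fes: evaluation budget. The weight (1 - fes/max_fes)**m
    shrinks from 1 to 0 as the budget is consumed.
    """
    return (1.0 - fes / max_fes) ** m * alpha
```

For example, with m = 0.5 the weight at half the budget is \( 0.5^{0.5} \approx 0.71 \) rather than 0.5, so α decays more slowly than under a linear schedule.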

The framework of AFA is given in Fig. 1. Compared to MFA, AFA modifies only the updating strategy of the step factor α. Therefore, AFA has the same complexity as MFA.

Fig. 1. The framework of our approach AFA.

4 Experimental Study

4.1 Experimental Setup

In the experiment, thirteen benchmark functions are utilized for performance verification [22,23,24,25,26]. All test functions are minimization problems. Table 1 gives a brief description of these functions. More detailed descriptions of these functions can be found in [27,28,29,30].

Table 1. A brief description of the test functions.

In the comparison, AFA is compared with four other FAs. The algorithms are listed as follows.

  • FA

  • MFA [16]

  • CFA [17]

  • RaFA [9]

  • Proposed AFA

The population size \( N \) and \( MaxFEs \) are set to \( 20 \) and \( 5.0E{+}05 \), respectively. In the standard FA, the parameters α, \( \beta_0 \), and γ are set to 0.2, 1.0, and \( 1/\Gamma^2 \), respectively. For MFA, RaFA, and AFA, the initial α, \( \beta_{\min} \), \( \beta_0 \), and γ are set to 0.5, 0.2, 1.0, and 1.0, respectively. The parameter m used in AFA is set to 0.5. Besides AFA, RaFA is also an improved version of MFA.

4.2 Results

Table 2 presents the results of AFA, FA, MFA, CFA, and RaFA on the thirteen test functions, where "Mean" is the mean best fitness value over 30 runs. From the results, AFA outperforms the standard FA on 11 functions, while AFA achieves worse solutions on two functions, \( f_3 \) and \( f_7 \). For function \( f_3 \), AFA is trapped in local optima and can hardly obtain reasonable solutions. For function \( f_7 \), the standard FA is slightly better than AFA. Compared to MFA, the proposed adaptive parameter strategy helps AFA achieve significant improvements, especially on \( f_1 \), \( f_2 \), and \( f_{10} \)–\( f_{13} \). CFA outperforms AFA on three functions, \( f_3 \), \( f_5 \), and \( f_8 \). For function \( f_6 \), AFA, CFA, RaFA, and MFA find the same solution. RaFA is better than AFA on 6 functions, while AFA achieves more accurate solutions than RaFA on 6 functions.

Table 2. Results for different FAs.

Figures 2 and 3 show the convergence graphs on some unimodal functions and multimodal functions, respectively. As seen, both RaFA and AFA converge faster than FA, MFA, and CFA.

Fig. 2. The convergence processes of different FAs on some unimodal functions.

Fig. 3. The convergence processes of different FAs on some multimodal functions.

To compare the optimization performance of the five FA variants on the whole test set, we calculate the mean rank values by the Friedman test. Table 3 gives the mean rank values of the five algorithms; the highest rank is marked in boldface. It is obvious that AFA obtains the highest rank, which demonstrates that AFA is the best among AFA, FA, CFA, MFA, and RaFA.

Table 3. Results for the Friedman test.

5 Conclusions

In this paper, an adaptive firefly algorithm (AFA) is proposed as an enhanced version of MFA. In AFA, a new adaptive parameter strategy is designed to adjust the step factor. In the experiments, thirteen test functions are used for performance verification. Simulation results show that AFA is superior to the standard FA, MFA, CFA, and RaFA. The adaptive parameter strategy is a general version of our previous work; in this paper, an empirical value of m is used. More investigations will be conducted in future work.