
1 Introduction

The fireworks algorithm (FWA) [1] is a relatively new member of the evolutionary computation community that repeatedly simulates the explosion process of real fireworks to find the global optimum. Many powerful variants of FWA have emerged by incorporating various effective search mechanisms, such as the enhanced FWA (EFWA) [2], dynamic FWA (dynFWA) [3], adaptive FWA (AFWA) [4], guided FWA (GFWA) [5] and others [6,7,8,9]. Thanks to their excellent characteristics, they have also successfully solved many complex real-world applications, including multilevel image thresholding [10], RFID network planning [11] and privacy preserving [12]. Although they have achieved gratifying results, there is still plenty of room to further improve FWA performance.

The primary objective of this paper is to propose a new type of weight-based guiding spark individual to accelerate the convergence of FWA. The first strategy assigns different weights to spark individuals to generate the proposed weight-based guiding spark individuals, and the second strategy dynamically decides the number of guiding spark individuals, whereas GFWA always uses only one. The secondary objective is to analyze the effect and applicability of our proposal and to point out some open topics for discussion.

Following this introductory section, we briefly summarize the optimization principles of FWA and give a short introduction to GFWA in Sect. 2. The two proposed strategies are described in detail in Sect. 3. We evaluate the performance of our proposal using 28 benchmark functions in 5 different dimensions in Sect. 4. Finally, we discuss some topics arising from the evaluation results in Sect. 5 and conclude our work in Sect. 6.

Fig. 1. Search process of FWA. (a) The initial firework individuals are generated randomly, (b) explosion spark individuals (blue solid points) and mutation spark individuals (green irregular points) are generated, and (c) firework individuals in the next generation are selected from all individuals in (b). Steps (b) and (c) are iterated until a termination condition is satisfied. (Color figure online)

2 Optimization Mechanisms of Fireworks Algorithm

Many sparks are generated around a real firework launched into the sky, which can be considered a local search around a specific point. Inspired by this explosion process, FWA assigns a different explosion amplitude and number of generated spark individuals to each firework individual to balance exploitation and exploration. These explosion processes are repeated until a termination condition is satisfied. Figure 1 illustrates the general framework of FWA, which consists of three major operations: explosion, mutation and selection.

Fig. 2. (a) A guiding spark individual is calculated by adding a guiding vector to a firework individual. The guiding vector points from the centroid of the poor spark individuals to that of the better spark individuals. (b) An example case in which the guiding vector points in a wrong direction.

GFWA is one of the most powerful variants of FWA. Its core idea is to divide spark individuals into two groups according to their fitness, determine a guiding vector from the centroid of the poor group to that of the better group, and evolve a firework individual in the guiding direction (Fig. 2(a)). However, an incorrect guiding vector may hinder the convergence of a firework individual, as shown in Fig. 2(b). Since we do not focus on GFWA itself, we refer the reader to [5] for the detailed implementation. We propose two strategies that avoid poor guidance by generating multiple high-precision guiding vectors.
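To make the centroid-based guiding idea concrete, the following minimal Python sketch computes a GFWA-style guiding spark individual under a minimization assumption; the ratio sigma and all identifiers are illustrative choices of ours rather than the exact notation of [5].

```python
import numpy as np

def gfwa_guiding_spark(firework, sparks, fitness, sigma=0.2):
    """GFWA-style guiding spark: shift the firework by the vector from the
    centroid of the worst sigma*n sparks to the centroid of the best
    sigma*n sparks (smaller fitness = better, i.e. minimization)."""
    sparks = np.asarray(sparks)
    k = max(1, int(sigma * len(sparks)))
    order = np.argsort(fitness)                 # ascending: best sparks first
    guiding_vector = sparks[order[:k]].mean(axis=0) - sparks[order[-k:]].mean(axis=0)
    return firework + guiding_vector

# toy usage on the 2-D sphere function
rng = np.random.default_rng(0)
fw = np.array([1.0, -2.0])
sp = fw + rng.normal(scale=0.5, size=(20, 2))
print(gfwa_guiding_spark(fw, sp, np.sum(sp ** 2, axis=1)))
```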

3 Two Proposed Strategies for GFWA

We propose two strategies to further improve GFWA performance by introducing the concept of weights and generating multiple potential guiding vectors. The first strategy, the weight-based guiding strategy, assigns different weights to generated spark individuals according to their fitness, which is expected to find a more effective guiding direction. The second strategy, the quantitative increase strategy, increases the number of weight-based guiding spark individuals based on previous searches to avoid falling into a local area.

3.1 Weight-Based Guiding Strategy

The top \(\sigma \) spark individuals among those generated by a firework individual \(\varvec{x}^i\), ranked by fitness, are copied into a pool, and \(\hat{N}\) spark individuals are randomly selected from the pool to calculate a guiding vector, i.e. we obtain \(\hat{N}\) vectors from the firework individual to these selected spark individuals.

The next problem is how to handle the weights. There are many possible methods to assign weights to these vectors. In this paper, we simply use the fitness difference between a firework individual and a selected spark individual to determine the weights, which means that more promising directions are given larger weights. Thus, the i-th guiding spark individual \(\varvec{g}^i\) is calculated by weighting these vectors using Eq. (1). The i-th guiding vector is defined as the vector from the i-th firework individual \(\varvec{x}^i\) to the guiding spark individual \(\varvec{g}^i\).

$$\begin{aligned} \varvec{g}^i = \varvec{x}^i + \sum _{j=1} ^ {\hat{N}} \frac{\left| f(\varvec{s}^i_j) - \max _{1 \le k \le \hat{N}} f(\varvec{s}^i_k) \right| }{\sum _{l=1}^{\hat{N}} \left| f(\varvec{s}^i_l) - \max _{1 \le k \le \hat{N}} f(\varvec{s}^i_k) \right| } \left( \varvec{s}^i_j - \varvec{x}^i \right) \end{aligned}$$
(1)

where \(\varvec{s}^{i}_{j}\) is the j-th spark individual selected from the pool of the i-th firework individual \(\varvec{x}^i\) (\(1 \le j \le \hat{N}\)), and \(f(\cdot)\) is the fitness function (to be minimized).

Note that

  • if the i-th firework individual does not survive into the next generation, the pool is cleared. Otherwise, the pool is kept and newly generated better spark individuals are recorded into the pool until its upper limit is reached. Once the pool is full, newcomers replace poorer individuals in the pool in turn.

  • \(\hat{N}\) is less than the pool size.

  • if the j-th spark individual \(\varvec{s}^i_j\) is worse than the i-th firework individual \(\varvec{x}^i\), the weight of the vector from the firework individual to the spark individual, \(\varvec{s}^i_j - \varvec{x}^i\), is set to 0.
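As a minimal sketch of Eq. (1) together with the notes above (minimization assumed), the weight of each sampled spark can be computed from its fitness gap to the worst sampled spark, with sparks no better than the firework receiving zero weight; the plain-list pool and all identifiers below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_guiding_spark(firework, f_firework, pool, pool_fitness, n_hat, rng):
    """Weight-based guiding spark following Eq. (1): sample n_hat sparks from
    the pool, weight each vector (spark - firework) by its fitness gap to the
    worst sampled spark, zero out sparks that are not better than the firework,
    normalize, and add the weighted sum of vectors to the firework."""
    idx = rng.choice(len(pool), size=n_hat, replace=False)
    sparks = np.asarray(pool)[idx]
    fit = np.asarray(pool_fitness)[idx]
    weights = np.abs(fit - fit.max())        # gap to the worst sampled spark
    weights[fit >= f_firework] = 0.0         # ignore sparks worse than the firework
    if weights.sum() == 0.0:                 # no useful direction in this sample
        return np.array(firework, dtype=float)
    weights /= weights.sum()
    return firework + weights @ (sparks - firework)
```

Under this reading of Eq. (1), the normalized weights sum to one, so the guiding spark individual is a convex combination of the sampled sparks that are better than the firework and cannot overshoot far beyond them.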

3.2 Quantitative Increase Strategy

The second strategy is used only when a firework individual has survived into the next generation without evolving. Multiple guiding spark individuals are then generated using the first strategy to help the firework individual evolve. Suppose the total number of spark individuals generated by the firework individual in the coming explosion operation is M. When the case mentioned above happens, we reduce the number of spark individuals generated by the explosion operation to \(\alpha \times M\) and make up the difference by generating \((1 - \alpha ) \times M\) guiding spark individuals. We set \(\alpha \) to 0.9 in our experimental evaluations.

The next key problem is how to generate multiple guiding vectors. Since the pool can provide a variety of spark individuals, we randomly select half of the spark individuals in the pool to calculate a guiding vector and repeat this operation \((1-\alpha ) \times M\) times to provide multiple different guiding spark individuals.
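Building on the sketch in Sect. 3.1, the budget redistribution could then look as follows; \(\alpha = 0.9\) follows the text, while the function names are our own illustrative choices.

```python
def extra_guiding_sparks(firework, f_firework, pool, pool_fitness,
                         total_sparks, rng, alpha=0.9):
    """Quantitative increase strategy: when the firework survived without
    improving, spend only alpha * M of the spark budget on explosion sparks
    and generate (1 - alpha) * M guiding sparks, each built from a random
    half of the pool via the weighted_guiding_spark sketch above."""
    n_explosion = int(alpha * total_sparks)
    n_guiding = total_sparks - n_explosion
    half = max(1, len(pool) // 2)
    guiding = [weighted_guiding_spark(firework, f_firework, pool, pool_fitness,
                                      half, rng)
               for _ in range(n_guiding)]
    return n_explosion, guiding
```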

Algorithm 1 outlines the flow of FWA combined with our proposed strategies.


4 Experimental Evaluations

To evaluate the performance of our proposed strategies, we combine the original guiding strategy in [5] and our proposal with three different FWA variants, EFWA [2], dynFWA [3] and AFWA [4], respectively. Each benchmark function from the CEC2013 benchmark test suite [13] is run 51 times independently in 5 dimension settings of D = 10, 30, 50, 70 and 100.

These functions are designed for real-parameter single-objective optimization, and their landscape characteristics include shifted, rotated, global optimum on bounds, unimodal and multi-modal. The parameter settings used in our experimental evaluations are as follows: the number of firework individuals is set to 1, and the total number of spark individuals is set to 200. The ratio \(\sigma \) used for selecting the top spark individuals is set to 0.2. The explosion amplitude used in EFWA is set to 80. All other parameter settings of EFWA, dynFWA and AFWA are exactly the same as in the original references [2, 3] and [4], respectively. The dimension selection mechanism is not used in these evaluations in order to increase population diversity.

We use the number of fitness evaluations rather than generations to evaluate convergence fairly. The maximal number of evaluations, i.e. the termination condition, of each run is 10,000\(\times D\). We apply the Wilcoxon signed-rank test and Holm's multiple comparison test to the fitness values at the termination condition to check for significant differences between the original guiding strategy in [5] and our proposed strategies. Tables 1, 2 and 3 show the results of these statistical tests.

Table 1. Wilcoxon signed-rank test and Holm's multiple comparison test results for the average fitness of 3 methods over 51 trial runs. \(\gg \), >, and \(\approx \) mean significant difference at the 1% level, significant difference at the 5% level, and no significant difference, respectively. \(\backslash \) means that there is no significant difference among them. 0, 1, and 2 denote (EFWA \(+\) the original guiding strategy in [5]), (EFWA \(+\) the proposed strategy 1), and (EFWA \(+\) the proposed strategies 1 and 2), respectively.
Table 2. Wilcoxon signed-rank test and Holm's multiple comparison test results for the average fitness of 3 methods over 51 trial runs. The symbols used in this table have the same meaning as in Table 1. 0, 1, and 2 denote (dynFWA \(+\) the original guiding strategy in [5]), (dynFWA \(+\) the proposed strategy 1), and (dynFWA \(+\) the proposed strategies 1 and 2), respectively.
Table 3. Wilcoxon signed-rank test and Holm's multiple comparison test results for the average fitness of 3 methods over 51 trial runs. The symbols used in this table have the same meaning as in Table 1. 0, 1, and 2 denote (AFWA \(+\) the original guiding strategy in [5]), (AFWA \(+\) the proposed strategy 1), and (AFWA \(+\) the proposed strategies 1 and 2), respectively.
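For reference, the pairwise comparisons reported in Tables 1, 2 and 3 can be reproduced with standard tooling once the 51 end-of-run fitness values per method are available; the sketch below uses scipy and statsmodels as an assumed tool choice, not necessarily the software used for the paper.

```python
import itertools
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_methods(final_fitness, alpha=0.05):
    """final_fitness: dict mapping a method label (e.g. '0', '1', '2') to the
    51 end-of-run fitness values of one benchmark function. Returns each pair
    of methods with its Holm-adjusted Wilcoxon signed-rank p-value and the
    test decision at the given significance level."""
    pairs = list(itertools.combinations(final_fitness, 2))
    pvals = [wilcoxon(final_fitness[a], final_fitness[b]).pvalue for a, b in pairs]
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="holm")
    return [(a, b, p, r) for (a, b), p, r in zip(pairs, p_adj, reject)]
```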

5 Discussions

5.1 Discussion on the Proposed Strategies

We begin our discussion with an explanation of the superiority of our proposal. The first strategy, the weight-based guiding strategy, uses only spark individuals whose fitness is better than that of a firework individual to construct multiple vectors from the firework individual to the selected spark individuals. Different weights based on their fitness differences are given to these potential vectors to calculate a guiding vector. Consequently, the probability of obtaining a better guiding spark individual by following the guiding vector from the firework individual becomes high.

A guiding spark individual also has an anti-noise property that avoids over-preference for a certain direction, because it aggregates multiple potential directions. Although the first strategy increases the computing cost of the weight processing operation, it adds only one additional fitness evaluation, which is acceptable. We can say that it is a low-cost, high-return strategy from a cost-performance point of view.

The second strategy, the quantitative increase strategy, reduces the number of spark individuals generated by an explosion operation and generates the same number of guiding spark individuals to speed up unevolved firework individuals. Since the guiding operation is more likely to favor potential directions than a random search, multiple guiding vectors may help a firework individual jump out of the current local area.

To solve the key problem of how to generate diversified guiding vectors, a spark pool is adopted to use information efficiently by storing many excellent spark individuals generated in the past. This strategy needs no additional fitness evaluations; it simply redistributes the proportion of the two different types of spark individuals. We can say that it is a low-risk, easy-to-use strategy.

5.2 Discussion on Experimental Result

The next discussion concerns the effectiveness and applicability of our proposal. To evaluate its performance, we compare it with the original guiding strategy in [5] and apply both to three different baseline algorithms, EFWA, AFWA and dynFWA, respectively. We apply the Wilcoxon signed-rank test and Holm's multiple comparison test to the average fitness of 51 trial runs at the termination condition and check for significant differences between the two guiding methods. From the results of these statistical tests, we found that our proposed strategies performed better on both unimodal and multimodal tasks in all 5 different dimensions. This may be because our proposal provides more precise multiple guiding directions that accelerate the convergence of FWA. The results show that our proposal can be successfully applied to various variants of FWA, which implies a wide range of applicability.

Finally, we discuss several potential approaches to further improve the performance of our proposed strategies. As the next improvement, we may use fitness gradient information instead of fitness differences to handle the weights. How to further improve the accuracy of guiding spark individuals and how to use them to accelerate FWA are also left as future work.

6 Conclusion

We proposed two effective strategies that improve the guiding information of the original GFWA and further increase its optimization ability. The first strategy uses current and historical information to construct guiding vectors more reasonably, and the second strategy increases the number of guiding spark individuals to provide multiple potential guiding directions. The experiments confirmed that our proposal can significantly improve the performance of GFWA.

In our future work, we will continue to explore and exploit hidden information to accelerate convergence and propose new methods to handle the weights more reasonably. We will also apply the proposed strategies to practical problems.