1 Introduction

Metaheuristic algorithms have been widely used to solve complex and highly nonlinear optimization problems. Among them, swarm intelligence methods such as particle swarm optimization, ant colony optimization, and the firefly algorithm (FA) simulate group behavior in nature and have performed well on complex optimization problems. Proposed in 2008 by Dr. Xin-She Yang of the University of Cambridge, FA simulates the group behavior of fireflies. Each firefly signals its position by its own fluorescence, through whose intensity it can attract other fireflies, catch prey, or send out warning signals, and it moves individually to carry out a search of the solution space. The algorithm has seen many improvements. An improved FA in which fireflies learn from mates was proposed, extending the original FA, whose fireflies are all of a single sex, and improving its optimization performance [14]. Wang et al. [23] proposed a Ying-Yang FA based on a dimensional Cauchy mutation, whose randomized attraction model promoted convergence. To better apply FA to global continuous optimization problems, Wu et al. designed a logarithmic spiral search method to improve an individual's exploration capability and a dynamic search method to improve its exploitation capability [25]. Nand et al. [11] proposed an improved FA called FA-Step, with a new step search strategy and a step strategy hybridized with CMA-ES. Liu et al. proposed a dynamic adaptive FA, which improved the convergence speed and solution accuracy and kept the algorithm from premature convergence [9].

FA has been successfully used in many engineering applications. A hybrid cooperative FA was used to solve the restricted vehicle routing problem, achieving a good balance between intensification and diversification [1]. Cheng et al. proposed a hybrid algorithm based on FA and GA whose group attraction operator reduced the time complexity; three combined mutation strategies were introduced, and the algorithm was applied to constrained optimization problems. Tao et al. [18] proposed an adaptive strategy in the FA attraction model to design random models and penalty functions, and used it to solve constrained engineering optimization problems, achieving rapid convergence and high accuracy. Krishna et al. designed an improved random-attraction FA for fuzzy image clustering problems [6]. Tian et al. [20] improved individual fireflies by combining mutation operations, used them to guide the direction of the group's search, and applied the algorithm to the design of the SMC-PHD filter. Xie et al. [26] proposed two improved methods based on FA to overcome the local convergence problem in k-means clustering. Ireneus applied FA to the problem of entity selection in machine learning [4]. Wang et al. [24] designed an improved chaotic FA for wireless sensor resource allocation in the IoT environment. Dimitra et al. [22] designed an individual encoding and decoding strategy to apply FA to the prize-collecting vehicle routing problem. Cheng et al. [2] improved the FA attraction model to promote convergence at lower cost, designed three staged adaptive firefly functions to determine its parameters, and used the resulting SAFA for the UAV charging planning problem in wireless sensor networks. Dash et al. designed an improved FA-based optimal design of special signal-blocking IIR filters [5]. Rohulla et al. designed a constrained multi-objective optimization model for the privacy protection problem in social networks and combined FA with fuzzy clustering to solve it [8]. Tian et al. [19] designed an improved FA for particle filtering and used it for multi-target tracking, achieving better accuracy.

This paper addresses the problem of FA easily falling into local optima. Our improved FA based on opposition-based learning (OL), FA-OL, broadens the search space of the algorithm and improves its ability to cover the feasible solution area. The remainder of this article is organized as follows. Section 2 introduces the principle of the standard FA and of FA based on a proportionally decreasing random factor. Section 3 introduces the principle of OL and its application to FA. Section 4 presents experiments comparing our algorithm with seven others and discusses how the setting of the random parameter affects our algorithm. Section 5 presents our conclusions.

2 Introduction to FA

2.1 FA Concept

The two most basic behaviors of fireflies are attracting fireflies of the opposite sex in the same population and attracting prey, both of which depend on the fluorescence they emit; this is the fundamental principle behind the design of FA. From optics we know that light is attenuated because it is absorbed by the air, and the degree of attenuation is directly related to the absorption of the medium and the distance of propagation. The intensity of light also decreases as the distance between the light source and the observer increases.

In FA, each feasible solution is expressed as the position of a firefly, and a firefly's fluorescence intensity is used as the objective function value. In the standard FA, the fireflies are all of the same sex, so sex need not be considered in the attraction between them. In addition, FA assumes that the degree of attraction between fireflies is proportional to their fluorescence intensity: individuals with lower fluorescence intensity are attracted to, and fly toward, fireflies with higher fluorescence intensity. This attraction behavior between fireflies is also the convergence behavior of the algorithm, i.e., the fireflies search for areas with better objective function values.

The individual position update equation of FA has three parts: the individual's flight inertia; the movement of the i-th firefly attracted by the j-th firefly, which corresponds to the convergence behavior of fireflies during evolution; and a random search term, completely unrelated to fireflies i and j, through which random disturbances move the fireflies to new search positions. The update is as follows:

$$\begin{aligned} \begin{aligned} x_{i}^{k}\left( {t + 1} \right) =&x_{i}^{k}\left( t \right) + \beta _{ij}\left( t \right) \cdot \left( {x_{j}^{k}\left( t \right) - x_{i}^{k}\left( t \right) } \right) \\&+ \alpha \left( t \right) \cdot \left( {rand}_{ij}^{k}\left( t \right) - 0.5 \right) \end{aligned} \end{aligned}$$
(1)
$$\begin{aligned} \beta _{ij}\left( t \right) = \beta _{0} \cdot e^{- \gamma r_{ij}^{2}(t)} \end{aligned}$$
(2)
$$\begin{aligned} r_{ij}\left( t \right) = \left\| {x_{j}\left( t \right) - x_{i}\left( t \right) } \right\| = \sqrt{\sum \limits _{k = 1}^{D}\left( x_{j}^{k}\left( t \right) - x_{i}^{k}\left( t \right) \right) ^{2}} \end{aligned}$$
(3)

where \(x_{i}^{k}\left( t \right) \) and \(x_{j}^{k}\left( t \right) \) respectively refer to the positions of fireflies i and j in the k-th dimension at the t-th iteration; \(r_{ij}\left( t \right) \) is the distance between fireflies i and j at the t-th iteration; D is the problem dimension; rand is a uniformly distributed random number in [0, 1]; \(\alpha \) is the random factor; \(\gamma \) is the absorption coefficient; \(\beta \) represents the attraction between two fireflies; and \(\beta _{0}\) is the attraction when the distance between two fireflies is zero, i.e., the maximum attraction between them. Regarding the parameters of FA, Yang [27, 28] proposed that, when implementing and applying FA, \(\beta _{0} = 1\), \(\gamma \in \left[0.01,100 \right]\) is a constant generally set to 1, and \(\alpha \in [0,1]\) is a constant generally set to 0.5.
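The position update of Eqs. (1)–(3) can be sketched as follows. This is a minimal Python illustration, not the paper's reference implementation: the function name `fa_update` and the full pairwise attraction sweep are our assumptions, with Yang's recommended parameter values as defaults and a minimization convention (a lower objective value means a brighter firefly).

```python
import numpy as np

def fa_update(x, f, beta0=1.0, gamma=1.0, alpha=0.5, rng=None):
    """One sweep of the standard FA position update, Eqs. (1)-(3).

    x : (N, D) array of firefly positions; f : objective to minimize.
    Defaults follow the parameter recommendations cited in the text.
    """
    rng = rng or np.random.default_rng()
    n, d = x.shape
    fitness = np.array([f(xi) for xi in x])   # brightness of each firefly
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:       # j is brighter, so i moves toward j
                r2 = np.sum((x[j] - x_new[i]) ** 2)      # squared distance, Eq. (3)
                beta = beta0 * np.exp(-gamma * r2)       # attraction, Eq. (2)
                # inertia + attraction + random search term, Eq. (1)
                x_new[i] += beta * (x[j] - x_new[i]) + alpha * (rng.random(d) - 0.5)
    return x_new
```

In a full run this sweep is repeated until the evaluation budget is exhausted, with the brightness values recomputed each iteration.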

2.2 Improved FA Based on a Proportionally Decreasing Random Factor

From the position update equation of FA, we know that when the absorption coefficient is zero, the update equation is

$$\begin{aligned} x_{i}^{k}\left( t + 1 \right) = x_{j}^{k}\left( t \right) + \alpha \left( t \right) \cdot \left( rand_{ij}^{k}\left( t \right) - 0.5 \right) \end{aligned}$$
(4)

and when the absorption coefficient tends to infinity, the update equation is

$$\begin{aligned} x_{i}^{k}\left( t + 1 \right) = x_{i}^{k}\left( t \right) + \alpha \left( t \right) \cdot \left( rand_{ij}^{k}\left( t \right) - 0.5 \right) \end{aligned}$$
(5)

It can be seen from formulas (4) and (5) that, whether or not the fluorescence is absorbed, the random search term is an important part of the firefly position update, and the random search ability is affected by the random factor. We studied the random factor and proposed control methods based on linear decline, proportional decline, parabolic decline, and a fixed constant. Experimental and statistical analysis showed that the improved FA adopting the proportionally decreasing method had the best comprehensive optimization performance among the compared algorithms. The proportionally decreasing random factor is updated as

$$\begin{aligned} \alpha \left( t + 1 \right) = k \cdot \alpha \left( t \right) ,\alpha \left( 0 \right) = 0.5 \end{aligned}$$
(6)

where k is the common ratio, a constant less than 1. Optimizing the standard test functions with k in the interval [0.97, 0.998] yields reasonable results, and experiments showed the best optimization performance when k = 0.9902 [15]. FA based on a proportionally decreasing random factor is referred to as FA-Prop in this article.
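The schedule of Eq. (6) has the closed form \(\alpha(t) = \alpha(0) \cdot k^{t}\), which the following small sketch makes explicit (the function name is ours; the defaults are the values given in the text):

```python
def prop_decreasing_alpha(t, k=0.9902, alpha0=0.5):
    """Random factor after t iterations under Eq. (6): alpha(t) = alpha0 * k**t.

    k is the common ratio (< 1); alpha0 = alpha(0) = 0.5 as in the text.
    """
    return alpha0 * k ** t
```

With k = 0.9902, the random factor shrinks geometrically, so early iterations keep a strong random search while late iterations rely almost entirely on attraction.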

3 Firefly Algorithm with Opposition-Based Learning

3.1 OL Principle

Generally speaking, heuristic algorithms are iterative: starting from an initial solution, a heuristic algorithm tries to improve candidate solutions through continuous iteration, and the search terminates when the stopping condition is met. For the case where prior information about the solution is lacking, Tizhoosh et al. [21] proposed the OL mechanism, which uses a random guess to generate candidate solutions and whose computation time is related to the distance between the initial and optimal solutions. Optimization algorithms with superior performance often have a high probability of jumping out of local optima and finding the global optimum, which is closely related to the ability of the particles in the algorithm to explore unknown areas. OL can effectively broaden the search space and cover the feasible solution area.

Definition 1 Opposite number. If \(x \in \left[{a,b} \right]\) and \(x \in R\), then the opposite number of x is \(x^{*} = a + b - x\).

Definition 2 Opposite point. If \(P = \left( {x_{1},x_{2},\cdots ,x_{D}} \right) \) is a point in a D-dimensional space, \(x_{i} \in R\), and \(x_{i} \in \left[{a_{i},b_{i}} \right]\left( i = 1,2,\cdots ,D \right) \), then its corresponding opposite point is \(P^{*} = \left( x_{1}^{*},x_{2}^{*},\cdots ,x_{D}^{*} \right) \), where \(x_{i}^{*} = a_{i} + b_{i} - x_{i}\).

Fig. 1. Opposition-based learning in one dimension [16]

Opposition-based optimization works as follows. Assume \(P = \left( x_{1},x_{2},\cdots ,x_{D} \right) \) is a point in a D-dimensional space (it can be viewed as a candidate solution), with opposite point \(P^{*} = \left( x_{1}^{*},x_{2}^{*},\cdots ,x_{D}^{*} \right) \), and let \(f\left( \cdot \right) \) be an evaluation function that calculates the fitness of a candidate solution. In each iteration, calculate \(f\left( P \right) \) and \(f\left( P^{*} \right) \). If \(f\left( P^{*} \right) \le f\left( P \right) \), replace P with \(P^{*}\); otherwise continue iterating with P. The one-dimensional OL process is shown in Fig. 1, where \(O_{k}\) is the k-th OL reference point, whose value is the midpoint \(\left( a_{i} + b_{i} \right) /2\); this is called the symmetric OL mechanism. If \(O_{k}\) is taken at a random position, the mechanism is random OL.
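The replacement rule above can be sketched in a few lines of Python (the function name `opposition_step` is ours; minimization is assumed, matching the \(f(P^{*}) \le f(P)\) test):

```python
import numpy as np

def opposition_step(p, a, b, f):
    """One opposition-based learning step (Definitions 1-2).

    p lies in the box [a, b]^D; its opposite point is p* = a + b - p
    (elementwise). Keep whichever of p and p* has the better (lower)
    fitness under the evaluation function f.
    """
    p_star = a + b - p
    return p_star if f(p_star) <= f(p) else p
```

Because both the current point and its opposite are evaluated in the same iteration, the search effectively probes two mirrored regions of the box at the cost of one extra function evaluation.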

Zhou et al. proposed normalized OL [29]. Let \(\varphi \left( x \right) = \varDelta - x\), so that \(x^{*} = \varDelta - x\), where \(\varDelta \) is a calculated value and the corresponding opposite solution satisfies \(x^{*} \in \left[{\varDelta - b,\varDelta - a} \right]\). Compared with the original OL, the difference lies in the symmetric position of the current and opposite points: in the original OL they are symmetric with regard to \((a+b)/2\), while in normalized OL the symmetry is with regard to \(\left( {2\varDelta - a - b} \right) /2\). Let \(\varDelta = \eta \left( {a + b} \right) \), where \(\eta \) is a real number. Then normalized OL is

$$\begin{aligned} x^{*} = \eta \left( {a + b} \right) - x \end{aligned}$$
(7)

According to the value of \(\eta \), there are four types of OL. When \(\eta = 0\), the current and opposite solutions are symmetric with regard to the origin, called OL based on solution symmetry; when \(\eta = {1/2}\), the search interval is symmetric about the origin, called OL based on interval symmetry; \(\eta = 1\) recovers the original OL as a special case; and when \(\eta \) is a random number, it is called random OL.
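Eq. (7) and its four special cases reduce to a one-line mapping; the sketch below (function name ours) shows how the choice of \(\eta \) selects the variant:

```python
def normalized_opposite(x, a, b, eta):
    """Normalized OL of Eq. (7): x* = eta * (a + b) - x.

    eta = 1 recovers the original OL (x* = a + b - x); eta = 0 reflects
    the solution through the origin; eta = 0.5 makes the search interval
    symmetric about the origin; a random eta gives random OL.
    """
    return eta * (a + b) - x
```

For example, with x = 2 on the interval [0, 10], the original OL (eta = 1) maps it to 8, while eta = 0 maps it to -2.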

When the OL mechanism is introduced into a swarm intelligence algorithm, an individual can search at the current point and the opposite point at the same time. If the opposite point has a better fitness value than the current point, the area near the opposite point has a better chance of containing the optimum, and the individual can jump directly to the opposite point to continue searching. This quickly expands the solution space and covers the feasible solution area on a larger scale, ultimately increasing the probability of finding the global optimal solution.

Fig. 2.
figure 2

Flowchart of FA-OL

Table 1. Experimental results of FA-OL on different random parameters
Table 2. Ranking results of FA-OL on different random parameters (ANOVA)

3.2 Opposition Learning FA

In the standard FA and FA-Prop, when the fluorescence intensities of two fireflies are compared, the individual with the weaker fluorescence intensity learns from the individual with the stronger, i.e., the weaker individual's position is updated according to formulas (1)–(3). In FA-OL, to improve the algorithm's ability to jump out of local optima, the individual with the higher fluorescence intensity expands the search space with probability Prob through the OL mechanism, i.e., its position is updated as

$$\begin{aligned} x_{i}^{k}\left( {t + 1} \right) = Ub + Lb - x_{i}^{k}\left( t \right) + {rand}_{i}^{k}\left( t \right) \cdot x_{i}^{k}\left( t \right) \end{aligned}$$
(8)

where Ub and Lb are the upper and lower bounds, respectively, of the search space, and rand is a uniformly distributed random number in [0, 1] drawn independently in each of the D dimensions. It can be seen from formula (8) that after an individual performs an opposition search, it can quickly search for the best point in a new search area, which improves the global search capability of the algorithm. The flowchart of FA-OL is shown in Fig. 2.
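The opposition jump of Eq. (8) can be sketched as below. This is our illustrative reading of the rule, not the paper's code: the function name is ours, Prob = 0.01 is taken from the tuning result in Sect. 4.2, and clipping the jumped position back into the bounds is an added safeguard that Eq. (8) itself does not specify.

```python
import numpy as np

def fa_ol_jump(x, lb, ub, prob=0.01, rng=None):
    """Opposition jump of FA-OL, Eq. (8), applied with probability prob.

    x is the position (length-D vector) of the brighter of two compared
    fireflies; with probability prob it jumps to the opposite region of
    [lb, ub], perturbed by a random multiple of its current position.
    Clipping to the bounds is our added safeguard, not part of Eq. (8).
    """
    rng = rng or np.random.default_rng()
    if rng.random() < prob:
        x = ub + lb - x + rng.random(x.shape) * x   # Eq. (8)
    return np.clip(x, lb, ub)
```

In the full algorithm this jump is tried for the winner of every pairwise brightness comparison, while the loser still moves by Eqs. (1)–(3), so both individuals of a comparison keep learning.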

4 Experiment

4.1 Experiment Setup

We used nine standard test functions [15] to analyze the performance of FA-OL. We compared it with the latest improved FA, FACL [14]; the improved artificial bee colony algorithm NABC [12]; the improved cuckoo search algorithm MSSCS [13]; and the classic improved algorithms PSO with inertia weight (PSO-In) [17], PSO with constriction factor (PSO-Co) [3], Gaussian bare-bones PSO (GBBPSO) [7], and PSO with lbest (PSO-Lb) [10]. The dimension of the test functions was \(D=30\), the population size of each algorithm was 30, and the maximum number of objective function evaluations was 60,000. Other parameters of each algorithm took the values recommended in the corresponding literature. Each algorithm independently and randomly optimized each function 25 times, and the average and standard deviation of the best results were recorded.

Table 3. Comparison results of FA-OL with seven other algorithms

4.2 Effect Analysis of Random Parameters

In the proposed FA-OL, the execution of OL depends on the random parameter Prob. To understand the impact of its setting on the optimization performance of the algorithm, we set its value to 0.5, 0.1, 0.05, 0.04, 0.03, 0.02, and 0.01. Each of the nine test functions was solved randomly and independently 25 times, with averages and standard deviations as displayed in Table 1, where the first row of data for each parameter setting is the average of the best solutions and the second row is the standard deviation. It can be seen from Table 1 that the introduction of OL significantly improved the optimization performance of FA on most test functions, especially F1, F2, F5, and F6, on which higher solution accuracy was attained.

Table 4. Ranking results of 8 algorithms (ANOVA)

We used analysis of variance (ANOVA) to test for significant differences between the mean values on each test function at the 0.05 level. The score of each parameter setting on each function is shown in Table 2, together with the total scores; the smaller the score, the better the comprehensive optimization performance. It can be seen from Table 2 that the comprehensive optimization performance of FA-OL is best when the random parameter is 0.01.

4.3 Comparison with Other Algorithms

Table 3 shows the experimental results of FA-OL and the seven other algorithms, and Table 4 shows their scores on the nine test functions. It can be seen from Table 3 that the proposed algorithm achieves the best optimization results on six functions (F1, F2, F5, F6, F7, and F9); on F1, F2, and F5 in particular it located the position of the optimal solution, showing a strong global search capability. From the statistical scores in Table 4, FA-OL and MSSCS achieve the best statistical results, followed by NABC.

5 Conclusion

We proposed FA-OL, whose OL mechanism can effectively expand the current search area, help individuals jump out of local optima, enhance individuals' search capability, and improve the algorithm's global search capability. The OL mechanism and the attraction mechanism of FA act on the two individuals whose fitness values are compared, so that the learning capability of the group is enhanced and the optimization capability improved. The optimization effect of the proposed algorithm was verified on nine standard test functions and compared against seven improved algorithms, with the results analyzed by ANOVA. The experiments show that FA-OL achieves the best optimization performance and robustness.