
1 Introduction

The Particle Swarm Optimization (PSO) algorithm was developed by Eberhart and Kennedy in 1995 [1]. It is inspired by the intelligent behaviour of birds searching for food. The PSO algorithm is used to solve a variety of complex optimization problems in economics, engineering, biology, industry, and other real-world domains [2]. PSO can be applied to non-linear, non-differentiable problems with huge search spaces and gives good results with high accuracy [3].

For an n-dimensional search space, the velocity and position of the \(i^{th}\) particle are represented as \(V_{i}=(v_{i1}, v_{i2},...,v_{in})^{T}\) and \(X_{i}=(x_{i1}, x_{i2},...,x_{in})^{T}\) respectively, where \(v_{id}\) and \(x_{id}\) are the velocity and position of the \(i^{th}\) particle in dimension d. The velocity of a particle is updated as follows:

$$\begin{aligned} v_{id}(new)=v_{id}(old)+c_{1}r_{1}(p_{id}-x_{id})+c_{2}r_{2}(p_{gd}-x_{id}) \end{aligned}$$
(1)
$$\begin{aligned} x_{id}(new)=x_{id}(old)+v_{id}(new) \end{aligned}$$
(2)

where d = 1, 2, ..., n denotes the dimension and i = 1, 2, ..., N the particle index, N is the size of the swarm, \(c_{1}\) and \(c_{2}\) are the cognitive and social scaling parameters respectively, which determine the magnitude of the random force in the direction of the particle’s previously best visited position \((p_{id})\) and the best particle \((p_{gd})\), and \(r_{1}\), \(r_{2}\) are uniform random variables in [0, 1]. The maximum velocity (\(V_{max}\)) serves as a constraint to keep the particles' positions within the solution search space.

Further, Shi and Eberhart [4] developed the concept of an inertia weight (IW) in 1998 to ensure an optimal trade-off between the exploration and exploitation mechanisms of the swarm population. This strategy was intended to eliminate the need for the maximum velocity (\(V_{max}\)). The inertia weight controls a particle's movement by retaining part of its previous velocity. The velocity update equation becomes:

$$\begin{aligned} v_{id}(new)=w*v_{id}(old)+c_{1}r_{1}(p_{id}-x_{id})+c_{2}r_{2}(p_{gd}-x_{id}) \end{aligned}$$
(3)

This paper discusses 30 different inertia weight strategies for the PSO algorithm on 10 benchmark functions. A comprehensive review of these 30 inertia weight strategies is presented in the next section.
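The update rules of Eqs. (2) and (3) can be sketched as a minimal PSO implementation. This is an illustrative sketch, not the paper's exact code: the sphere objective, the bounds, and the parameter values below are assumptions chosen only to make the example self-contained.

```python
import random

def pso(f, dim, bounds, n_particles=50, iters=200, w=0.7, c1=2.0, c2=2.0):
    """Minimal PSO (minimization) following Eqs. (2) and (3)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions p_i
    p_val = [f(x) for x in X]
    g = P[min(range(n_particles), key=lambda i: p_val[i])][:]  # global best p_g
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (3): inertia + cognitive + social components
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                # Eq. (2): position update, clamped to the search space
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            val = f(X[i])
            if val < p_val[i]:
                p_val[i], P[i] = val, X[i][:]
                if val < f(g):
                    g = X[i][:]
    return g, f(g)

sphere = lambda x: sum(v * v for v in x)      # illustrative test function
best, best_val = pso(sphere, dim=5, bounds=(-5.0, 5.0))
```

Here the constant w plays the role of the inertia weight; the strategies reviewed in the next section differ mainly in how w is chosen or varied over iterations.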

2 A Review on Different Inertia Weight Strategies for PSO

The inertia weight plays an important role in providing a trade-off between the diversification and intensification skills of the PSO algorithm. When an inertia weight strategy is applied to the PSO algorithm, the particles move around the search space while adjusting their velocities and positions according to Eqs. (3) and (2).

In 1998, Shi and Eberhart [4] first proposed the concept of a constant inertia weight. A large inertia weight facilitates exploration of the search space, while a small inertia weight facilitates exploitation. Eberhart and Shi [5] later proposed a random inertia weight strategy, enhancing the performance and efficiency of the PSO algorithm.

The linearly decreasing strategy [6] increases the convergence speed of the PSO algorithm in the early iterations of the search. The inertia weight starts at a large value and then decreases linearly to a smaller one; it provides excellent results when decreased from 0.9 to 0.4. In the global-local best inertia weight [7], the inertia weight is based on the global best and local best of the swarm in each generation; it takes neither a linearly decreasing time-varying value nor a constant value, and it increases the capabilities of the PSO algorithm.
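The linearly decreasing strategy is commonly written as \(w(t) = w_{max} - (w_{max} - w_{min})\,t/T\), interpolating from 0.9 down to 0.4 over the run. A one-line sketch (the function name is an illustrative choice):

```python
def linear_decreasing_w(t, T, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at t = 0 down to w_min at t = T."""
    return w_max - (w_max - w_min) * t / T
```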

Fayek et al. [8] introduced a particle swarm simulated annealing technique (PSOSA), in which the inertia weight is optimized using simulated annealing, improving the searching capability.

Chen et al. [9] presented two natural exponent inertia weight strategies, e1-PSO and e2-PSO, which are based on exponentially decreasing the inertia weight. Experimentally, these strategies fall victim to premature convergence, despite their quick convergence towards optimal positions at the early stage of the search process.
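The exact e1-PSO and e2-PSO formulas of [9] are not reproduced here; a generic member of this exponentially decreasing family might look like the following sketch, where the decay rate k is an assumed illustrative parameter:

```python
import math

def exp_decreasing_w(t, T, w_start=0.9, w_end=0.4, k=5.0):
    """Generic exponentially decreasing inertia weight (illustrative sketch):
    decays quickly from w_start early on, then flattens out near w_end."""
    return w_end + (w_start - w_end) * math.exp(-k * t / T)
```

The fast early decay explains the behaviour noted above: the weight spends most of the run near its small (exploitative) value, which speeds convergence but risks premature stagnation.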

Using the merits of chaotic optimization, a chaotic inertia weight was proposed by Feng et al. [10], giving the PSO algorithm better global search ability, convergence precision, and faster convergence.
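One widely used chaotic formulation (which may differ in detail from [10]) perturbs a linear decrease with the logistic map \(z \leftarrow 4z(1-z)\); all parameter values below are illustrative assumptions:

```python
def chaotic_decreasing_w(T, w1=0.9, w2=0.4, z0=0.37):
    """Yield one chaotic inertia weight per iteration: a linear decrease from
    w1 towards w2, perturbed by the logistic map z <- 4*z*(1 - z)."""
    z = z0
    for t in range(T):
        z = 4.0 * z * (1.0 - z)          # chaotic sequence in (0, 1)
        yield (w1 - w2) * (T - t) / T + w2 * z

ws = list(chaotic_decreasing_w(1000))
```

The chaotic term keeps the weight irregular rather than smoothly decreasing, which is what gives the strategy its improved global search ability.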

Malik et al. [11] presented a sigmoid increasing inertia weight (SIIW) and a sigmoid decreasing inertia weight (SDIW). These strategies provide better performance, with quick convergence and aggressive movement narrowing towards the solution region.
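A sigmoid decreasing weight can be sketched as below; this is a generic logistic form, not necessarily the exact SDIW formula of [11], and the sharpness parameter u is an assumption:

```python
import math

def sigmoid_decreasing_w(t, T, w_start=0.9, w_end=0.4, u=10.0):
    """Sigmoid decreasing inertia weight (illustrative sketch): stays near
    w_start early, then drops sharply towards w_end around t = T/2."""
    return w_end + (w_start - w_end) / (1.0 + math.exp(u * (t / T - 0.5)))
```

The flat start and flat finish are what produce the "aggressive narrowing" behaviour: exploration is preserved for roughly half the run, then the swarm switches quickly to exploitation.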

The oscillating inertia weight [12] provides a balance between diversification and intensification waves; this strategy appears to be competitive and, in some cases, performs better in terms of consistency.

Gao et al. [13] proposed a logarithmic decreasing inertia weight with a chaos mutation operator. The chaos mutation operator enhances the ability to escape premature convergence and improves convergence speed and accuracy.

To overcome the stagnation and premature convergence of the PSO algorithm, Gao et al. [14] proposed an exponent decreasing inertia weight (EDIW) with stochastic mutation (SM). The stochastic mutation is used to enhance the diversity of the swarm, while the EDIW improves the convergence speed of the individuals (Table 1).

Table 1. Inertia weight strategies

A linearly decreasing inertia weight was also proposed by Shi and Eberhart [4], greatly improving accuracy and convergence speed. A large inertia weight facilitates exploration at the initial phase of the search and then decreases linearly to a small value.

Adewumi et al. [25] proposed the swarm success rate random inertia weight (SSRRIW) and the swarm success rate descending inertia weight (SSRDIW). These strategies use the swarm success rate as a feedback parameter, enhancing the effectiveness of the algorithm regarding convergence speed and global search ability.

Shen et al. [18] proposed a dynamic adaptive inertia weight, used to solve complex and multi-dimensional function optimization problems. This strategy can adjust the particle speed in a timely manner, jump out of locally optimal solutions, and improve the convergence speed.

Ting et al. [24] proposed the exponent inertia weight, with two important parameters: a local attractor (a) and a global attractor (b). This method controls the population diversity by adaptive adjustment of these two attractors.

Chatterjee and Siarry [22] proposed a nonlinear decreasing inertia weight strategy with a nonlinear modulation index. This strategy is quite effective and avoids premature convergence. Lei et al. [17] proposed an adaptive inertia weight, which automatically harmonizes global and local search ability and obtains the global optimum.

The linear or non-linear decreasing inertia weight [23] has global search ability and is helpful in finding a better optimal solution; it overcomes the weakness of premature convergence and converges faster at the early stage of the search process. Jiao et al. [19] proposed the decreasing inertia weight (DIW). This strategy provides the algorithm with dynamic adaptability and controls the population diversity by adaptive adjustment of the inertia weight.

Li et al. [21] proposed the tangent decreasing inertia weight (TDIW), based on the tangent function (TF). This strategy increases the diversity of the swarm for more exploration of the search space in the initial iterations and later exploits the search area, thereby providing more accurate results.

Chauhan et al. [2] proposed the double exponential dynamic inertia weight (DEDIW), in which the inertia weight is calculated for the whole swarm iteratively using the Gompertz function; it provides a stagnation-free environment with better accuracy. Peram et al. [20] proposed a new inertia weight that is less susceptible to premature convergence and less likely to get stuck in local optima. Hsieh et al. [16] introduced a fixed inertia weight (FIW), which provides better convergence speed with less computational effort.

The decreasing exponential function inertia weight (DEFIW) [15] decreases the value of the inertia weight iteratively as the algorithm approaches its equilibrium state, and it outperforms its competitors in fitness quality.

Arasomwan et al. [15] proposed the chaotic adaptive inertia weights CAIWS-D and CAIWS-R. These strategies simply combine chaotic mapping with the swarm success rate as a feedback parameter, harnessing both chaotic and adaptive characteristics. They provide more refined accuracy, faster convergence, and global search ability.

3 Experimental Results

To evaluate the performance of the inertia weight strategies, they are tested over 10 different benchmark functions (\(F_{1}\) to \(F_{10}\)), as given in Table 2.

3.1 Parameter Settings

The following experimental settings are adopted:

  • \(G_{0} = 100\) and \(\alpha = 20\) [26],

  • Number of runs = 30,

  • Number of populations = 50,

  • Maximum number of iterations (T) = 1000,

  • \(c_{1} = c_{2} = 2.0\) [25].
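For reproducibility, the settings above can be collected in one place; the key names below are illustrative assumptions, while the values are those listed in Sect. 3.1:

```python
# Experimental settings from Sect. 3.1, shared by all 30 strategies.
SETTINGS = {
    "G0": 100,               # [26]
    "alpha": 20,             # [26]
    "runs": 30,
    "population_size": 50,
    "max_iterations": 1000,  # T
    "c1": 2.0,               # [25]
    "c2": 2.0,               # [25]
}
```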

3.2 Results and Discussion

In this section, the 30 different inertia weight strategies are analyzed on 10 benchmark problems in terms of the average number of function evaluations (AFEs), mean error (ME), and standard deviation (SD). The AFEs, ME, and SD are presented in Tables 3, 4, and 5 respectively; boxplots of these measures are shown in Figs. 1, 2, and 3 respectively.

Table 2. Test problems, D: Dimensions, AE: Acceptable Error
Fig. 1.
figure 1

Boxplots for average number of function evaluations of 30 different Inertia Weight strategies on 10 benchmark functions as per Table 3

It is clear from the reported results that most of the inertia weight strategies produce poor results on the Michalewicz function (\(F_{5}\)). Figure 1 shows that the constant inertia weight and the linearly decreasing inertia weight (LDIW) are the best and worst strategies respectively in terms of AFEs. It is observed from Fig. 2 that the chaotic random inertia weight strategy and the global-local best inertia weight strategy attain the minimum and maximum mean error respectively, compared to the other inertia weight strategies.

Table 3. Average number of function evaluations of different inertia weight strategies for different benchmark functions
Table 4. Mean error value of different inertia weight strategies for different benchmark functions
Table 5. Standard deviation value of different inertia weight strategies for different benchmark functions
Fig. 2.
figure 2

Mean error value of 30 different Inertia Weight strategies on 10 benchmark functions as per Table 4

Fig. 3.
figure 3

Standard Deviation value of 30 different Inertia Weight strategies on 10 benchmark functions as per Table 5

If the comparison is made through standard deviations (SDs), the chaotic random inertia weight produces near-optimal solutions in comparison to the other inertia weight strategies, as shown in Fig. 3. The summary results of the inertia weight strategies are shown in Table 6.

Table 6. Summary Results for Inertia Weight

4 Conclusion

This paper presents the significance of inertia weight strategies in the solution search process of particle swarm optimization (PSO). A total of 30 inertia weight strategies in PSO are analyzed in terms of efficiency, reliability, and robustness over 10 complex test functions. Through boxplots and success rates, it is found that the chaotic random inertia weight is better in terms of accuracy, while the constant inertia weight performs best in terms of efficiency among the considered inertia weight strategies.