
1 Introduction

Particle Swarm Optimization (PSO) is an optimization method modeled on the social behavior of groups of organisms in their natural environment [1]. First proposed by Kennedy and Eberhart in 1995 [2, 3], it is now one of the most frequently used evolutionary methods. Because of its many advantages, such as simplicity, few parameters to adjust and easy implementation, it has been applied in almost all fields of science and engineering where optimization is required [4,5,6,7,8,9]. A parameter that significantly influences the effectiveness of the particle swarm optimization method is the inertia weight. Its role is to control the deviation of the particles from their original direction and to keep a balance between local and global exploration. An incorrectly selected value of the inertia weight disturbs this balance and can negatively affect the performance of the algorithm.

The inertia weight factor was first proposed and introduced into the PSO method by Shi and Eberhart [10, 11]. In the subsequent years, a lot of research on the inertia weight has been undertaken. Clerc [12] and Trelea [13] suggested that the inertia weight should rather be a constant value. Shi and Eberhart [11, 14] recommended a linearly decreasing inertia weight (LDW). A flexible inertia weight, which can be a positive or negative real number, was used by Han et al. [15]. In order to avoid the drawbacks of the LDW strategy, namely a poor local search ability at the beginning of the run and a lack of global search ability at its end, Zhang et al. [16] proposed a random inertia weight (RNW). PSO with a random inertia weight was also proposed by Niu et al. [17] and Eberhart and Shi [18]. Three different concepts of the inertia weight were investigated by Arumugam and Rao [19, 20] and Umapathy et al. [21]. They considered a constant inertia weight (CIW), a time-varying inertia weight (TVIW) and a global-local best inertia weight (GLbestIW) and reported that GLbestIW outperforms the other variants in terms of solution quality, consistency, convergence speed and accuracy. Another approach was developed by Yang et al. [22]. In their study, the inertia weight depends on two parameters, an aggregation degree factor and an evolution speed factor, and is different for each individual of the swarm. A modified inertia weight was also considered by Miao et al. [23]. In this case, the weight is updated by dispersion degree and advance degree factors. The performance of PSO with nonlinear strategies was examined by Chauhan et al. [24]. The authors studied an exponential self-adaptive (DESIWPSO) and a dynamic (DEDIVPSO) inertia weight based on the Gompertz function, as well as a fine-grained inertia weight (FGIWPSO). Another exponential inertia weight was also developed by Ememipour et al. [25], Borowska [26] and Ghali et al. [27]. A fuzzy adaptive inertia weight was proposed by Shi and Eberhart [28]. Approaches based on fuzzy systems were also described in [29,30,31,32,33].

This paper presents a modified PSO method named DWPSO, in which a new strategy for the inertia weight is developed. In DWPSO, the value of the inertia weight changes dynamically and is determined on the basis of the fitness function. The proposed weight is a function of the best and the worst fitness of the particles. The new method was tested on a set of benchmark functions, and the results were compared with those of the RNW-PSO method with a random inertia weight [16], the LDW-PSO method with a linearly decreasing inertia weight, and the nonlinear EWPSO method [26].

2 The PSO Method

The PSO algorithm belongs to the group of population-based optimization methods. In the case of PSO, the population is called a swarm and consists of individuals named particles. Each particle is a point in the space of feasible solutions. The movement of a particle in this space is governed by its velocity vector. The initial location and velocity of each particle are randomly generated at the beginning of the algorithm. The quality of the particles is evaluated according to the fitness function of the optimization problem. In each iteration, every particle updates information about the best position it has found so far (named pbest). The position of the particle with the best fitness among all particles in the whole swarm (named gbest) is also remembered and updated in every iteration. The velocity and location of the particles are updated according to the formulas:

$$ V_{i} = wV_{i} + c_{1} r_{1} (pbest_{i} - X_{i} ) + c_{2} r_{2} (gbest - X_{i} ) $$
(1)
$$ X_{i} = X_{i} + V_{i} $$
(2)

where Vi = (vi1, vi2, …, viD) is the velocity vector of particle i in the D-dimensional search space. Vector Xi = (xi1, xi2, …, xiD) represents the location of particle i. Factor w is the inertia weight. Vector pbesti = (pbesti1, pbesti2, …, pbestiD) denotes the personal best location of particle i, and gbest = (gbest1, gbest2, …, gbestD) denotes the location of the particle with the best fitness among all particles in the whole swarm. The variables c1 and c2 are acceleration coefficients; they determine how strongly the particle is influenced by its knowledge of its pbest and of gbest. Parameters r1 and r2 are randomly generated numbers between 0 and 1 that help maintain the diversity of the population.
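
The update in Eqs. 1 and 2 can be illustrated with a short sketch. The snippet below is a minimal NumPy implementation of one synchronous PSO step; the function and variable names are illustrative and not taken from the original implementation.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1, c2, rng):
    """One synchronous update of all particles according to Eqs. 1 and 2.

    X, V and pbest are (N, D) arrays; gbest is a (D,) array.
    """
    r1 = rng.random(X.shape)  # random factors r1, r2 in [0, 1]
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. 1
    X = X + V                                                  # Eq. 2
    return X, V
```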

3 The Proposed DWPSO Algorithm

The proposed DWPSO algorithm is a variant of the particle swarm optimization method in which a new strategy for determining the inertia weight is introduced. In DWPSO, the commonly used linear weight is replaced with an exponential inertia weight. In the proposed approach, the inertia weight changes dynamically based on the fitness of the particles in the swarm. In each iteration, the particles of the swarm move in the search space according to Eqs. 1 and 2. After evaluating the quality of the new locations of the particles, the individuals with the best and the worst fitness are found and recorded. On their basis, the new value of the weight is computed. The new weight is represented by a nonlinear function of the best and the worst fitness of the particles. In each iteration, a different inertia weight is calculated and applied to the whole swarm. The proposed strategy is defined as follows:

$$ dw = \frac{\left( gbest - f_{max}/2 \right)^{-1}}{-\ln\left( f_{min} \right)} \cdot \frac{fh}{10^{5}} $$
(3)
$$ w(t + 1) = w(t) - dw(t) $$
(4)

where fmax and fmin are the values of maximal and minimal fitness in the current iteration, respectively. Factor fh is a randomly generated number in the range [0, 1].
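
For illustration, the weight update in Eqs. 3 and 4 might be implemented as in the following sketch. It is a literal transcription of the formulas under the assumption that gbest in Eq. 3 denotes the fitness value of the global best particle and that a minimization problem is solved; the function name `dwpso_weight` is only illustrative.

```python
import numpy as np

def dwpso_weight(w, gbest_fit, f_max, f_min, rng):
    """Return the inertia weight for the next iteration (Eqs. 3 and 4).

    gbest_fit: fitness of the global best particle (assumed interpretation),
    f_max, f_min: worst and best fitness in the current iteration,
    rng: a numpy random Generator supplying fh in [0, 1].
    """
    fh = rng.random()
    # Eq. 3; note that -ln(f_min) is positive only for 0 < f_min < 1,
    # which the formula implicitly assumes.
    dw = (gbest_fit - f_max / 2.0) ** (-1) / (-np.log(f_min)) * fh / 1e5
    # Eq. 4: the weight is decreased by dw in every iteration.
    return w - dw
```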

4 Results

The simulation tests of the DWPSO method with the proposed strategy were carried out on the set of nonlinear benchmark functions presented in Table 1.

Table 1. Optimization test functions
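
The contents of Table 1 are not reproduced here. For reference, the standard forms of the benchmark functions named in this study (Rosenbrock, Rastrigin, Ackley and Griewank) can be written as below; the exact variants and search ranges used in Table 1 may differ.

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rastrigin(x):
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def ackley(x):
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```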

The results of the tests were compared with the performance of PSO with a random inertia weight (RNW-PSO), LDW-PSO with a linearly decreasing weight, and EWPSO with an exponential inertia weight.

The DWPSO algorithm started with an inertia weight of 0.7, which then changed according to Eqs. 3 and 4. The acceleration coefficients c1 and c2 were set to 1.6. The simulations were performed for dimension sizes D = 10, 20 and 30 and for swarms of N = 20, 40, 60 and 80 particles. Each experiment was run 50 times. In all cases, the number of iterations was 1000.
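
As an illustration of how these settings combine, the driver loop below runs the sketches from the previous sections (`pso_step`, `dwpso_weight` and `rosenbrock`) with N = 40, D = 30, w = 0.7, c1 = c2 = 1.6 and 1000 iterations; the initialization range is an assumption, not a setting reported in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, iters = 40, 30, 1000
w, c1, c2 = 0.7, 1.6, 1.6

X = rng.uniform(-30.0, 30.0, (N, D))   # assumed initialization range
V = np.zeros((N, D))
fit = np.array([rosenbrock(x) for x in X])
pbest, pbest_fit = X.copy(), fit.copy()
g = int(np.argmin(fit))
gbest, gbest_fit = X[g].copy(), fit[g]

for _ in range(iters):
    X, V = pso_step(X, V, pbest, gbest, w, c1, c2, rng)
    fit = np.array([rosenbrock(x) for x in X])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = X[better], fit[better]
    g = int(np.argmin(fit))
    if fit[g] < gbest_fit:
        gbest, gbest_fit = X[g].copy(), fit[g]
    w = dwpso_weight(w, gbest_fit, fit.max(), fit.min(), rng)

print(gbest_fit)   # averaging over 50 such runs corresponds to the tables
```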

Exemplary results of the tests performed for swarms of 20, 40 and 80 particles are presented in Tables 2, 3 and 4. The presented values were averaged over 50 trials.

Table 2. Performance of the LDW-PSO, RNW-PSO, EWPSO and DWPSO algorithms for Rosenbrock function
Table 3. Performance of the LDW-PSO, RNW-PSO, EWPSO and DWPSO algorithms for Rastrigin function
Table 4. Performance of the LDW-PSO, RNW-PSO, EWPSO and DWPSO algorithms for Griewank function

The average best fitness over the iterations for the DWPSO, EWPSO, RNW-PSO and LDW-PSO algorithms, for a swarm of 40 particles and 30 dimensions, is illustrated in Figs. 1, 2, 3 and 4. The vertical coordinates indicate the average best fitness on a logarithmic scale.

Fig. 1. The average best fitness for Rosenbrock30 and the population of 40 particles

Fig. 2. The average best fitness for Rastrigin30 and the population of 40 particles

Fig. 3. The average best fitness for Ackley30 and the population of 40 particles

Fig. 4. The average best fitness for Griewank30 and the population of 40 particles

The results of the simulations show that the proposed DWPSO method is more effective than the other methods investigated in this study. The dynamic inertia weight strategy introduced in DWPSO helps the algorithm maintain the diversity of the individuals in the search space and helps overcome the problem of premature convergence.

In almost all cases (except D = 10), the average function values found by DWPSO were lower than the results achieved by LDW-PSO and RNW-PSO, and lower than, or in rare cases comparable to, those obtained by EWPSO. Moreover, the lowest standard deviation, reported for DWPSO in most cases, indicates its better stability compared with the remaining investigated methods. Furthermore, the minimum value was also lower in the case of the DWPSO algorithm. Additionally, in most simulations, DWPSO converged faster than LDW-PSO, RNW-PSO and EWPSO (Figs. 1, 2, 3 and 4); only in the first two hundred iterations did it converge slightly slower than EWPSO or RNW-PSO, after which it was the fastest. The RNW-PSO algorithm converged more slowly than EWPSO but still faster than LDW-PSO.

A different behavior of the algorithms was observed only for the Griewank and Rastrigin functions with small dimensions. For the Griewank function with dimension size D = 10, the DWPSO algorithm performed slightly worse than EWPSO but better than RNW-PSO and LDW-PSO. For the Rastrigin function with D = 10 and swarms of 20 and 40 particles, DWPSO achieved worse average results than the remaining algorithms, even though its minimum value was lower.

5 Summary

In this study, a modified particle swarm optimization algorithm named DWPSO, with a novel strategy for the inertia weight, has been proposed. In the considered approach, instead of a commonly used constant or linearly decreasing inertia weight, a dynamically changing weight was adopted. The values of the inertia weight coefficient depend on the fitness of the individuals of the population. The effectiveness of the proposed strategy was tested on a set of benchmark test functions. The results of the simulations were compared with those obtained with the nonlinear EWPSO method, the RNW-PSO method with a random inertia weight and the LDW-PSO method with a linearly decreasing inertia weight.

The use of the proposed strategy helps maintain the diversity of the individuals in the population, and the resulting algorithm performs better than the other investigated methods. Furthermore, the DWPSO algorithm converges faster and is more efficient in avoiding premature convergence than the other methods.