1 Introduction

Particle swarm optimization (PSO) is a method modeled on the behavior of swarms of insects in their natural environment. It was developed by Kennedy and Eberhart in 1995 [1,2,3] and has since been successfully applied in many areas of science and engineering connected with optimization [4,5,6,7,8,9,10]. However, like other evolutionary optimization methods, PSO can suffer from slow convergence and difficulty escaping local optima [11]. Important parameters that affect the effectiveness of PSO are the acceleration coefficients, called the cognitive and social coefficients. The cognitive coefficient affects local search [12], whereas the social coefficient maintains the right velocity and direction of the global search. Fine-tuning the social coefficient can help overcome premature convergence and improve the efficiency of PSO.

Many approaches to the right choice of coefficients have been studied. Eberhart and Kennedy [13] recommended a fixed value of the acceleration coefficients. In their opinion, the social and cognitive coefficients should be the same and equal to 2.0. Ozcan [14] and Clerc [15] agreed that the coefficients should be the same but argued they should rather be equal to 1.494, which results in faster convergence. According to Venter and Sobieszczański [16], the algorithm performs better when the coefficients differ, and they proposed applying a small cognitive coefficient (c1 = 1.5) and a large social coefficient (c2 = 2.5). A different approach was proposed by Ratnaweera et al. [17]. The authors examined the efficiency of a self-organizing PSO with time-varying acceleration coefficients. They concluded that PSO performance can be greatly improved by simultaneously decreasing the cognitive coefficient and increasing the social coefficient. A hierarchical PSO with jumping time-varying acceleration coefficients for real-world optimization was proposed by Ghasemi et al. [18]. Particle swarm optimization with time-varying acceleration coefficients was also explored in [19,20,21,22]. A different approach, based on a nonlinear acceleration coefficient driven by the algorithm's performance, was recommended by Borowska [23]. According to Mehmod et al. [24], PSO performs faster with fitness-based acceleration coefficients. A PSO based on a self-adaptive acceleration coefficient strategy was suggested by Guo and Chen [25]. A new model of the social coefficient was presented by Cai et al. [26]. In order not to lose useful information inside the swarm, they proposed using a dispersed social coefficient with an information index capturing the differences between particles. Additionally, to preserve the diversity of the particles, a mutation strategy was also introduced.
A novel particle swarm concept based on the idea of two types of agents in the swarm with adaptive acceleration coefficients was considered by Ardizzon et al. [27].

This paper presents an improved, effective particle swarm optimization algorithm named SCPSO. In SCPSO, a new approach to the social coefficient is proposed in order to better determine the velocity value and search direction of the particles and, consequently, to improve the convergence speed as well as to find a better solution. The presented method was tested on a set of benchmark functions, and the results were compared with those obtained with MPSO-TVAC with time-varying acceleration coefficients [17], the standard PSO (SPSO), and DPSO with the dispersed strategy [26]. The simulation results indicate that SCPSO is an effective optimization method.

2 The Standard PSO Method

In the PSO method, the optimization process is based on the behavior of a swarm of individuals. In practice, a swarm represents a set of particles, each of which is a point in the coordinate system that possesses a position represented by a vector Xi = (xi1, xi2, …, xiD) and a velocity represented by a vector Vi = (vi1, vi2, …, viD). In the first step of the algorithm, both the position and velocity of each particle are randomly generated. In subsequent iterations, their positions and velocities in the search space are updated according to the formulas:

$$ V_{i} = wV_{i} + c_{1} r_{1} (pbest_{i} - X_{i} ) + c_{2} r_{2} (gbest - X_{i} ) $$
(1)
$$ X_{i} = X_{i} + V_{i} $$
(2)

where the inertia weight w controls the impact of a particle's previously found velocity on its current velocity. Factors c1 and c2 are the cognitive and social acceleration coefficients, respectively. They decide how much the particles are influenced by their own best found positions (pbest) and by the best position found by the swarm (gbest). The variables r1 and r2 are random real numbers selected between 0 and 1. In each iteration, the locations of the particles are evaluated with the objective function, whose form depends on the optimization problem. Based on this assessment, each particle keeps its knowledge of the best position it has found so far, and the highest quality position found by the swarm is recorded. This knowledge is updated in each step of the algorithm. In this way, the particles move through the search space towards the optimum.
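The update rules (1)–(2) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name, array shapes, and the default coefficient values (taken from the fixed-coefficient recommendation cited in the introduction) are assumptions:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=1.494, c2=1.494, rng=None):
    """One iteration of the standard PSO update, Eqs. (1)-(2).

    X, V, pbest : arrays of shape (n_particles, dim)
    gbest       : array of shape (dim,), best position found by the swarm
    """
    rng = rng or np.random.default_rng()
    # Fresh random factors r1, r2 in [0, 1) for every particle and dimension
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. (1)
    X = X + V                                                  # Eq. (2)
    return X, V
```

After each step, the objective function would be evaluated at the new positions, and pbest and gbest updated accordingly.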

3 The SCPSO Strategy

The proposed SCPSO is a new variant of the PSO algorithm in which a new approach to the social acceleration coefficient has been introduced. In the original PSO, to find an optimal value, particles follow two best values: the best position found by the particle itself and the best position achieved by the swarm. The direction and rate of particle movement are affected by the acceleration coefficients, which are real numbers, the same for the whole swarm, scaled by random factors selected between 0 and 1. This implies that some information connected with swarm behaviour can be lost or not taken into account. Owing to this, a new approach to calculating the social coefficient has been introduced. The value of this coefficient is not constant but changes dynamically as a function of the maximal and minimal fitness of the particles. It also depends on the current and total numbers of iterations. In each iteration, a different social coefficient is established. The equations describing this relationship are as follows:

$$ c_{2} = c_{2} + \frac{(f_{\min}/f_{\max})\,k^{-1}}{iter_{\max}} $$
(3)
$$ V_{i} = wV_{i} + c_{1} r_{1} (pbest_{i} - X_{i} ) + c_{2} r_{2} (gbest - X_{i} ) $$
(4)

where k denotes the current iteration number, fmin and fmax are the minimal and maximal fitness values in the current iteration, respectively, and itermax is the maximal number of iterations. To preserve the diversity of the particles and help avoid local optima, a mutation operator is applied. The proposed approach helps maintain the right velocity values and search direction of the particles and improves the convergence speed.
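The dynamic social coefficient of Eq. (3) can be written directly as a small update function (a sketch; the function name and argument order are illustrative):

```python
def update_social_coefficient(c2, f_min, f_max, k, iter_max):
    """Dynamic social coefficient, Eq. (3).

    c2       : social coefficient from the previous iteration
    f_min    : minimal fitness value in the current iteration
    f_max    : maximal fitness value in the current iteration
    k        : current iteration number (k >= 1)
    iter_max : total number of iterations
    """
    # The increment (f_min/f_max) * k^-1 / iter_max shrinks as the
    # iteration counter k grows, so c2 changes less in later iterations.
    return c2 + ((f_min / f_max) * k**-1) / iter_max
```

The updated c2 is then substituted into the velocity update of Eq. (4), which otherwise matches Eq. (1).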

4 Simulation Results

The presented SCPSO method was tested on the benchmark functions described in Table 1. The results of these tests were compared with the simulation results of the dispersed particle swarm optimization (DPSO) [26], the standard PSO, and the modified version (MPSO-TVAC) with time-varying acceleration coefficients [17].

Table 1. Optimization test functions

For all tested functions, the population consisted of 100 particles with four different dimension sizes, D = 30, 50, 100, and 200. The inertia weight was linearly decreased from 0.9 to 0.4. All settings (the acceleration coefficients and their values) regarding MPSO-TVAC and DPSO were adopted from Ratnaweera [17] and Cai [26], respectively. For the modified MPSO-TVAC, the c1 coefficient decreases from 2.5 to 0.5 while the c2 coefficient increases from 0.5 to 2.5. In the DPSO algorithm, the c1 and c_up coefficients are equal to 2.0 while the c_low coefficient is set to 1.0. Detailed parameter settings of all tested methods are collected in Table 2. The details of MPSO-TVAC and DPSO used for comparison can be found in [17] and [26], respectively. For each case, the simulations were run 20 times. The maximum number of iterations depends on the dimension and was equal to 1500 for 30 dimensions, 2500 for 50 dimensions, 5000 for 100 dimensions, and 10000 for 200 dimensions.
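The linearly decreasing inertia weight used in all runs can be reproduced with a one-line schedule (a sketch under the assumption that w interpolates from 0.9 at the first iteration to 0.4 at the last; the function name is hypothetical):

```python
def inertia_weight(k, iter_max, w_start=0.9, w_end=0.4):
    """Inertia weight decreasing linearly from w_start to w_end
    as the iteration counter k runs from 0 to iter_max."""
    return w_start - (w_start - w_end) * k / iter_max
```

For example, with iter_max = 1500 (the 30-dimensional setting), the weight passes through 0.65 at the midpoint of the run.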

Table 2. Parameter values of algorithms.

The exemplary results of the simulations are shown in Tables 3, 4, 5, 6 and 7. The presented values have been averaged over 20 trials.

Table 3. Performance of the MPSO-TVAC, DPSO, SPSO and SCPSO algorithms for Schwefel 2.26 function.
Table 4. Performance of the MPSO-TVAC, DPSO, SPSO and SCPSO algorithms for Rastrigin function.
Table 5. Performance of the MPSO-TVAC, DPSO, SPSO and SCPSO algorithms for Ackley function.
Table 6. Performance of the MPSO-TVAC, DPSO, SPSO and SCPSO algorithms for Brown function.
Table 7. Performance of the MPSO-TVAC, DPSO, SPSO and SCPSO algorithms for Zakharov function.

The results of the performed simulations show that the SCPSO algorithm achieves superior optimization performance over the remaining tested algorithms. In almost all considered cases, the average function values found by SCPSO were lower than those achieved by the other methods. The mean values for the Rastrigin, Brown and Schwefel problems achieved by SCPSO are lower, despite higher (in most cases) standard deviations. For the Ackley function, in almost all cases (except the case with 30 dimensions), the mean values achieved by SCPSO were lower than those achieved by the other algorithms. The standard deviations were also lower, which indicates greater stability of SCPSO. For 30 dimensions, the outcomes found by SCPSO were slightly worse than the mean values found by MPSO-TVAC but better than the results achieved by DPSO and SPSO.

The proposed approach extends the adaptive capability of the particles and improves their search direction. The mutation strategy helps SCPSO maintain diversity between particles in the search space and helps it avoid premature convergence.

Increasing the number of particles (for the same dimension) resulted in faster convergence of the algorithms and allowed better optima to be found. In turn, decreasing the number of particles in the swarm increased the dispersion of the results and deteriorated their quality.

5 Summary

In this paper, a novel version of the particle swarm optimization algorithm, called SCPSO, has been implemented. The changes concern social cooperation and the movement of the particles in the search space, and have been introduced to improve the convergence speed and to find better quality solutions. The influence of the introduced changes on swarm motion and the performance of the SCPSO algorithm was studied on a set of well-known benchmark functions.

The results of the described investigations were compared with those obtained with MPSO-TVAC with time-varying acceleration coefficients, the standard PSO, and the dispersed particle swarm optimization (DPSO). The proposed strategy improved the algorithm's performance. The new algorithm was more effective than MPSO-TVAC, SPSO and DPSO in almost all cases.