Abstract
Particle swarm optimization (PSO) is a population-based stochastic search algorithm that has been widely used to solve many real-world problems. However, like other evolutionary algorithms, PSO suffers from premature convergence and entrapment in local optima when addressing complex multimodal problems. In this paper, we propose a chaos-embedded particle swarm optimization algorithm (CEPSO). In CEPSO, chaos-based swarm initialization is first applied to yield high-quality initial particles with better stability. Afterwards, the chaotic inertia weight and chaotic sequence based random numbers are introduced into the velocity update scheme of PSO to improve its global and local search capabilities. In addition, two different mutation strategies (Gaussian and chaos) are utilized to enhance the swarm diversity without being trapped in local optima. Finally, the proposed CEPSO is compared with several classical PSOs on a set of well-known benchmark functions. Experimental results show that CEPSO achieves better performance than several other PSO variants in terms of solution accuracy and convergence rate.
Keywords
- Particle swarm optimization
- Chaos
- Swarm diversity
- Inertial weight
- Premature convergence
- Local optima
- Mutation strategy
1 Introduction
Particle swarm optimization (PSO) is a stochastic population-based method motivated by the intelligent collective behaviour of animals such as flocks of birds and schools of fish [1]. The advantages of PSO include fast convergence toward the global optimum, easy implementation and few parameters to adjust, all of which make it a promising method for solving optimization problems in a wide variety of applications, such as text mining [2], data clustering [3], image processing [4], optimal scheduling [5] and machine learning [6]. Meanwhile, numerous PSO variants have been proposed in the literature, and most of them achieve encouraging results and impressive performance. However, PSO still suffers from premature convergence and entrapment in local optima like other stochastic search techniques, especially on complex multimodal optimization problems. To tackle these issues, a great number of PSO variants have been developed. From the literature, these previous works can be roughly divided into the following categories: (i) Swarm initialization. Some PSO variants are initialized with chaotic sequences [4, 7], opposition-based learning [2, 8], or other initialization strategies [9] instead of a purely random mechanism to improve performance. In particular, chaos-based swarm initialization has been extensively studied in our prior works [10, 11] with significantly improved performance. (ii) Parameter selection. The inertia weight [4], acceleration coefficients [10] and random numbers [2] have attracted much attention and become a focus of PSO research in recent years. (iii) Non-parametric update. In this paradigm, there is no need to tune any algorithmic parameters, since all parameters are removed from the standard particle swarm optimization [12, 13]. (iv) Multi-swarm scheme.
In (iv), the whole swarm is divided into several sub-swarms during the search process so as to explore different sub-regions of the solution space with different search strategies [14, 15]. (v) Hybrid mechanism. In (v), different evolutionary algorithms (such as the genetic algorithm [8], cuckoo search [16], differential evolution [17], simulated annealing [18], the artificial bee colony [19] and the firefly algorithm [20]) and evolutionary operators (like crossover [21] and mutation [11]) are integrated to improve the performance of PSO.
Motivated by the above PSO research, we put forward a chaos-embedded particle swarm optimization (CEPSO). On the one hand, chaos-based swarm initialization is applied to yield high-quality initial particles with better stability. On the other hand, the chaotic inertia weight and chaotic sequence based random numbers are introduced into the velocity update scheme of PSO to balance exploration and exploitation and thus obtain better solutions. In addition, two different mutation operators (Gaussian and chaos) are utilized to enhance the swarm diversity and avoid premature convergence of the CEPSO. Extensive experiments validate that the proposed PSO outperforms several state-of-the-art PSOs in effectiveness and efficiency on numerical function optimization. The rest of this paper is organized as follows. Section 2 introduces the standard PSO. In Sect. 3, CEPSO is elaborated from the three aspects of swarm initialization, parameter selection and mutation strategy. Experimental results on a set of well-known benchmark functions are reported in Sect. 4. Finally, concluding remarks and future work are provided in Sect. 5.
2 Standard PSO
Particle swarm optimization [1] is a population-based meta-heuristic algorithm whose basic principle mimics swarm social behaviour such as bird flocking and fish schooling. In PSO, the population is called a swarm and each individual in the swarm is referred to as a particle. Each particle in the swarm represents a potential solution to an optimization problem. Specifically, the position of the ith particle can be expressed as a D-dimensional vector Xi = [xi1,xi2,…,xiD], where xij ∈ [xmin,xmax] denotes the position of the jth dimension of the ith particle; the corresponding velocity is Vi = [vi1,vi2,…,viD], where vij ∈ [vmin,vmax] is bounded to reduce the likelihood of particles flying out of the search space. The best previous position (the position giving the best fitness value) of the ith particle is denoted by pbesti = (pbesti1,pbesti2,…,pbestiD), while the global best position of the whole swarm found so far is denoted by gbest = (gbest1,gbest2,…,gbestD). To start with, the particles are randomly distributed over the search space with random velocities. Then each particle's velocity is updated using its own previous best experience (pbest) and the whole swarm's best experience (gbest) until a global optimal solution is found. In PSO, each particle is thus associated with two properties, a velocity vector V and a position vector X, and it moves in the search space with a velocity that is dynamically adjusted according to pbest and gbest simultaneously. Mathematically, the velocity and position of the particles are updated according to the following formulas:

$$ v_{ij} (t + 1) = \omega v_{ij} (t) + c_{1} r_{1ij} \left( {pbest_{ij} - x_{ij} (t)} \right) + c_{2} r_{2ij} \left( {gbest_{j} - x_{ij} (t)} \right) $$
(1)

$$ x_{ij} (t + 1) = x_{ij} (t) + v_{ij} (t + 1) $$
(2)
where ω is the inertia weight used for balancing the global and local search. In general, a large inertia weight facilitates the global exploration while a small inertia weight tends to facilitate the local exploitation. c1 and c2 are positive constants and called the acceleration coefficients reflecting the weight of the stochastic acceleration terms that pull each particle toward pbesti and gbest positions, respectively. r1ij and r2ij denote two random numbers uniformly distributed in the interval [0,1].
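As a minimal sketch (not the authors' code), one synchronous update step of the standard PSO described above can be written in Python as follows; the function name and bound defaults are ours:

```python
import numpy as np

def pso_update(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0,
               vmin=-1.0, vmax=1.0, xmin=-10.0, xmax=10.0, rng=None):
    """One velocity/position update of standard PSO.

    X, V  : (N, D) arrays of positions and velocities
    pbest : (N, D) personal best positions
    gbest : (D,)   global best position
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)          # uniform random numbers in [0, 1]
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    V = np.clip(V, vmin, vmax)        # keep velocities inside [vmin, vmax]
    X = np.clip(X + V, xmin, xmax)    # keep positions inside the search space
    return X, V
```

Clipping V to [vmin, vmax] implements the velocity bound mentioned above, which reduces the likelihood of particles flying out of the search space.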
3 Proposed CEPSO
3.1 Swarm Initialization
Swarm initialization plays a crucial role in any population-based evolutionary algorithm, as it affects the convergence speed and the quality of the final solution. In general, random initialization is the most frequently used method to generate the initial swarm in the absence of any information about the solution. From the literature, it can be seen that different initialization strategies have been tested with PSO to improve its performance. In particular, initialization based on chaotic rather than random sequences is a powerful strategy to diversify the particles of the swarm and improve the performance of PSO by preventing premature convergence [7]. Moreover, it is reported that the stability of PSOs and the quality of the final solution can also be improved to some extent [4, 10]. Based on this argument, the commonly used logistic map is employed to generate the initial positions instead of uniform ones, which can be described as follows:

$$ Ch_{i + 1} = \mu Ch_{i} \left( {1 - Ch_{i} } \right),\quad i = 0,1,2, \ldots $$
(3)
where Chi denotes the ith chaotic variable in the interval (0,1), with the initial value Ch0 ∈ (0,1) and Ch0 ∉ {0, 0.25, 0.5, 0.75, 1}. μ is a predetermined constant called the bifurcation coefficient. As μ increases from zero, the number of periodic points of the dynamic system generated by Eq. (3) doubles from one to two, four, …, up to 2^i, and these period-doubling bifurcations accumulate in narrower and narrower intervals of μ. However, μ has a limit value μt = 3.569945672. When μ approaches μt, the period becomes infinite or even non-periodic, and the whole system evolves into a chaotic state. On the other hand, when μ is greater than 4, the system becomes unstable. Hence the range [μt, 4] is considered the chaotic region of the system. Its bifurcation diagram is illustrated in Fig. 1.
As described above, the pseudocode of logistic map can be described as below, which is able to generate chaotic sequence and avoid plunging into the small periodic cycles effectively.
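A Python sketch of the logistic-map generator and of the chaos-based initialization it drives might look as follows; the function names and the choice of Ch0 = 0.63 are illustrative, not taken from the paper:

```python
import numpy as np

def logistic_sequence(n, ch0=0.63, mu=4.0):
    """n chaotic numbers in (0, 1) from the logistic map Ch_{i+1} = mu*Ch_i*(1 - Ch_i).

    ch0 must lie in (0, 1) and avoid {0.25, 0.5, 0.75}, which would collapse
    the sequence into a fixed point or a short periodic cycle.
    """
    assert 0.0 < ch0 < 1.0 and ch0 not in (0.25, 0.5, 0.75)
    seq, ch = [], ch0
    for _ in range(n):
        ch = mu * ch * (1.0 - ch)
        seq.append(ch)
    return seq

def chaotic_init(n_particles, dim, xmin, xmax, ch0=0.63):
    """Chaos-based swarm initialization: map chaotic numbers into [xmin, xmax]."""
    seq = np.array(logistic_sequence(n_particles * dim, ch0))
    return xmin + (xmax - xmin) * seq.reshape(n_particles, dim)
```

With μ = 4 the iterates stay inside (0, 1), so a simple affine map stretches them over the allowable search space of each dimension.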
3.2 Parameter Selection
Proper selection of PSO parameters can significantly affect the performance of particle swarm optimization. It is generally believed, as many researchers have observed, that a larger inertia weight facilitates global exploration while a smaller inertia weight facilitates local exploitation to fine-tune the current search region; by changing the inertia weight dynamically, the search capability of PSO can be adjusted accordingly. Based on the ergodicity and non-repetition of chaos mentioned in Sect. 3.1, the chaotic inertia weight [22] is applied in this work to strike a better balance between exploration and exploitation. It is defined by adding a chaotic term to the linearly decreasing inertia weight as follows:

$$ \omega (t) = (\omega_{max} - \omega_{min} ) \cdot \frac{t_{max} - t}{t_{max}} + \omega_{min} \cdot ch $$
(4)
where ch ∈ (0,1) is a chaotic number generated by the logistic map, wmax and wmin denote the initial and final values of the inertia weight respectively, t is the current iteration of the algorithm, and tmax is the maximum number of iterations.
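A minimal sketch of this chaotic inertia weight in Python (parameter names are ours; in CEPSO, ch would be drawn from the logistic map of Sect. 3.1 at each iteration):

```python
def chaotic_inertia_weight(t, t_max, ch, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight perturbed by a chaotic number ch in (0, 1):
    w(t) = (w_max - w_min) * (t_max - t) / t_max + w_min * ch
    """
    return (w_max - w_min) * (t_max - t) / t_max + w_min * ch
```

For ch = 0.5, for example, the weight decreases from 0.7 at t = 0 to 0.2 at t = t_max; since ch keeps varying chaotically, the weight never settles into a strictly monotone schedule, which is the intended source of extra exploration.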
Note that the random numbers r1 and r2 are key components affecting the convergence behavior of PSO, since its convergence is strongly connected to the stochastic sequences drawn for these parameters during a run. In particular, it has been shown that the final optimization results may be very close, but not equal, when different random sequences are used during the PSO search process [23]. Thus the chaotic sequence generated by Eq. (3) is used to substitute for the random numbers r1 and r2 in the velocity update equation (the value of Ch varies between 0 and 1) [2].
3.3 Mutation Strategy
Mutation strategy is an important part of evolutionary computation, as it can effectively prevent the loss of population diversity and allow a greater region of the search space to be covered. Based on this fact, different mutation operators, such as wavelet [4], Gaussian [11], Cauchy [11] and Levy [24], have been widely used in evolutionary computation, especially in PSO, to enhance its global search ability. However, these mutation techniques are not suitable for all kinds of problems; that is to say, most of them are problem-oriented. Thus the two most commonly used mutation strategies, Gaussian and chaos, are alternately exploited to mutate pbest and gbest according to the swarm diversity defined below, which is expected to guide the search process of the particle swarm well:

$$ Div(t) = \frac{1}{N}\sum\limits_{i = 1}^{N} {\sqrt {\frac{1}{D}\sum\limits_{j = 1}^{D} {\left( {x_{ij} (t) - \overline{{x_{j} (t)}} } \right)^{2} } } } $$
where N is the swarm size, D denotes the space dimension, xij(t) is the jth value of ith particle in the tth iteration and \(\overline{{x_{j} (t)}}\) is the average value of the jth dimension over all the particles in the swarm.
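As a sketch, this diversity measure (mean distance of the particles from the swarm centroid) can be computed as follows; the exact normalization used by the authors may differ slightly:

```python
import numpy as np

def swarm_diversity(X):
    """Mean distance of particles from the swarm centroid.

    X is an (N, D) array of particle positions; each particle's distance is
    the root-mean-square deviation from the centroid over the D dimensions.
    """
    x_bar = X.mean(axis=0)                            # average of each dimension
    per_particle = np.sqrt(np.mean((X - x_bar) ** 2, axis=1))
    return float(per_particle.mean())
```

When all particles collapse onto one point the diversity is exactly 0, which is why a small threshold on this quantity can trigger the mutation step.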
So far, the procedure of CEPSO can be succinctly described as follows.
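The pieces described above can be assembled into the following Python sketch of the overall procedure. The mutation details (Gaussian scale, mutating gbest only) and the chaotic-number bookkeeping are illustrative assumptions rather than the authors' exact settings:

```python
import numpy as np

def cepso(f, dim=30, n=40, t_max=1000, xmin=-10.0, xmax=10.0,
          c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, div_th=2.8e-20, ch0=0.63):
    """Sketch of CEPSO: chaos-based initialization, chaotic inertia weight and
    chaotic 'random' numbers, plus a diversity-triggered mutation of gbest."""
    ch = ch0
    def next_ch():
        nonlocal ch
        ch = 4.0 * ch * (1.0 - ch)          # logistic map, Eq. (3)
        return ch

    # chaos-based swarm initialization
    X = np.array([[xmin + (xmax - xmin) * next_ch() for _ in range(dim)]
                  for _ in range(n)])
    V = np.zeros((n, dim))
    fit = np.array([f(x) for x in X])
    pbest, pfit = X.copy(), fit.copy()
    g = int(np.argmin(pfit))
    gbest, gfit = pbest[g].copy(), float(pfit[g])

    for t in range(t_max):
        # chaotic inertia weight and chaotic substitutes for r1, r2
        w = (w_max - w_min) * (t_max - t) / t_max + w_min * next_ch()
        r1, r2 = next_ch(), next_ch()
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, xmin, xmax)
        fit = np.array([f(x) for x in X])
        better = fit < pfit
        pbest[better], pfit[better] = X[better], fit[better]
        g = int(np.argmin(pfit))
        if pfit[g] < gfit:
            gbest, gfit = pbest[g].copy(), float(pfit[g])
        # diversity-triggered mutation (Gaussian shown; chaos mutation analogous)
        div = np.mean(np.sqrt(np.mean((X - X.mean(axis=0)) ** 2, axis=1)))
        if div < div_th:
            cand = np.clip(gbest + 0.1 * (xmax - xmin)
                           * np.random.standard_normal(dim), xmin, xmax)
            if f(cand) < gfit:
                gbest, gfit = cand, float(f(cand))
    return gbest, gfit
```

The mutation is accepted greedily here (only if it improves gbest), one common design choice; other acceptance rules are equally plausible under the paper's description.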
4 Experimental Results and Discussion
To evaluate the performance of the proposed CEPSO, extensive experiments are conducted on a set of well-known benchmark functions comprising four global optimization problems. In particular, the performance of PSO with different swarm initializations and different inertia weights is compared and investigated. Note that all the test functions are to be minimized; they are numbered f1–f4 in Table 1, which lists their expressions, dimensions, allowable search spaces, global optima and properties.
(1) Sphere function
$$ \min f_{1} (x) = \sum\limits_{i = 1}^{n} {x_{i}^{2} } $$where the global optimum x* = 0 and f(x*) = 0 for −10 ≤ xi ≤ 10.
(2) Schwefel function
$$ \min f_{2} (x) = \sum\limits_{i = 1}^{n} {\left| {x_{i} } \right|} + \prod\limits_{i = 1}^{n} {\left| {x_{i} } \right|} $$where the global optimum x* = 0 and f(x*) = 0 for −100 ≤ xi ≤ 100.
(3) Rastrigin function
$$ \min f_{3} (x) = \sum\limits_{i = 1}^{n} {[x_{i}^{2} - 10\cos (2\pi x_{i} ) + 10]} $$where the global optimum x* = 0 and f(x*) = 0 for −5.12 ≤ xi ≤ 5.12.
(4) Ackley function
$$ \min f_{4} (x) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {\cos (2\pi x_{i} )} } \right) + 20 + e $$where the global optimum x* = 0 and f(x*) = 0 for −32 ≤ xi ≤ 32.
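The four benchmarks translate directly into Python; a compact sketch (function names are ours):

```python
import numpy as np

def sphere(x):                     # f1: unimodal, separable
    return float(np.sum(x ** 2))

def schwefel_2_22(x):              # f2: sum of |x_i| plus product of |x_i|
    a = np.abs(x)
    return float(np.sum(a) + np.prod(a))

def rastrigin(x):                  # f3: highly multimodal, many local optima
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):                     # f4: narrow global basin, many minor optima
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```

All four attain their global minimum value 0 at x* = 0 within the stated search ranges.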
Figure 2 depicts two-dimensional surface plots of the four well-known benchmark functions.
To illustrate the effect of the different initialization methods, inertia weights and mutation strategies mentioned above, different combinations of PSO with random or chaos-based swarm initialization, standard or chaotic parameters, and with or without mutation are tested. For readability, the different PSO paradigms are specified in Table 2 by their acronyms.
The experimental settings are as follows: the swarm size is N = 40; the standard parameters are wmax = 0.9, wmin = 0.4 and acceleration coefficients c1 = c2 = 2 for all related PSO variants. The swarm diversity threshold is set to Divth = 2.8e−20, determined by trial and error. For each test function, each PSO performs 30 independent runs of 1000 iterations each; an algorithm terminates when it reaches the maximum number of allowed iterations.
From Table 3, it can be clearly observed that CEPSO evidently outperforms the other PSO variants on all four test functions. Taking f1 and f2 as examples, note that under the same swarm initialization and related parameters, the PSOs with mutation strategies markedly surpass those without mutation, which validates the importance of mutation operators for sustaining swarm diversity without being trapped in local optima. It is also worth noting that the performance of PSO with chaos-based swarm initialization is far superior to that of the corresponding PSOs with random initialization, except for PSO-Chs-S-W versus PSO-Rnd-S-W on function f1, which implies that chaos-based initialization can alleviate PSO's inherent defect of low stability. In other words, just as concluded in references [4, 7], this fully demonstrates the significance of high-quality initial particles to the convergence performance of PSO. In addition, PSO with chaotic parameters achieves better performance than PSO with standard parameters under otherwise identical experimental conditions. As for test function f3, since it has a large number of local optima, the global optimum is difficult to find; nevertheless, its optimal value can be found by PSO-Rnd-S-M, PSO-Rnd-C-M, PSO-Chs-S-M and PSO-Chs-C-M, which further shows the importance of the mutation operation. Likewise, CEPSO is able to obtain better solutions for function f4 even though it has one narrow global optimum basin and many minor local optima. In sum, the promising results obtained by CEPSO are largely attributed to the chaos-based swarm initialization, the chaotic inertia weight and chaotic sequence based random numbers, as well as the two different mutation strategies (Gaussian and chaos); all of these are complementary, and their appropriate combination makes them benefit from each other.
To illustrate the particle search process, Fig. 3 depicts the evolution curves of PSO with different configurations for all four test functions; it clearly shows that PSO-Chs-C-M performs much better than the other PSO variants in almost all cases. To be specific, taking Fig. 3(a) as an example, the evolution curve of PSO-Chs-C-M consistently decreases faster than those of PSO-Rnd-S-W, PSO-Rnd-C-W and PSO-Chs-S-W. In contrast, the convergence rates of PSO-Rnd-S-M and PSO-Chs-S-M are not as fast as those of PSO-Chs-C-M, PSO-Chs-C-W and PSO-Rnd-C-M; both evolve slowly as the search proceeds but still outperform PSO-Rnd-S-W, PSO-Rnd-C-W and PSO-Chs-S-W. On the other hand, it is interesting to note that PSO-Chs-C-M yields the fastest convergence rate within the first 150 iterations in Fig. 3(c). Compared with PSO-Rnd-S-W, PSO-Rnd-C-W, PSO-Chs-S-W and PSO-Chs-C-W, the other three PSOs with mutation also evolve quickly in the first 500 iterations and finally converge to the global optima.
To further validate the effectiveness of the proposed algorithm, CEPSO is compared in Table 4 with seven other PSO variants, including LPSO [25], FIPSO [26], HPSO-TVAC [27], DMSPSO [28], CLPSO [29], LFPSO [30] and OPSO [31]. Note that the dimension of all the functions is set to 30 in this section. The mean best solution (Mean) and standard deviation (Std.) are used to measure performance. From the results shown in Table 4, we can see that CEPSO ranks first three times, despite ranking third on function f4. According to the final rank, CEPSO clearly achieves better performance than the others in terms of the average best solution and standard deviation. To summarize, the encouraging results obtained by CEPSO are largely ascribed to the chaos-based swarm initialization, chaotic parameters and multi-mutation strategies exploited in this work.
5 Conclusions and Future Work
In this paper, we propose a chaos-embedded particle swarm optimization algorithm (CEPSO). On the one hand, chaos-based swarm initialization is first used to yield high-quality initial particles with better stability. On the other hand, the chaotic inertia weight and chaotic sequence based random numbers are introduced into the velocity update scheme of PSO to improve its global and local search abilities. Meanwhile, two different mutation operators (Gaussian and chaos) are used to enhance the swarm diversity and avoid premature convergence. Finally, extensive experiments on a set of well-known benchmark functions demonstrate that CEPSO is considerably more effective than several other PSOs at numerical function optimization.
As future work, we plan to compare CEPSO with more state-of-the-art PSO variants on complex multi-optima and multi-objective problems, and even on real-world applications from other fields, to further investigate its effectiveness. Another interesting direction is to introduce other common chaotic maps, viz. the Tent map, Lozi map, Arnold's cat map, Sinai map, Burgers map, Dissipative standard map, Tinkerbell map, Circle map and Sinusoidal map, into PSO to investigate how they improve its performance without being trapped in local optima. Meanwhile, we also intend to delve deeper into the parallelization of CEPSO for large-scale optimization problems and to explore the use of different chaotic parameters for PSO in different scenarios, especially for adequate parameter tuning across a wide range of problems. Lastly, and arguably most importantly, the qualitative relationship between chaos-based swarm initialization and the stability of PSO will be elaborated and proved comprehensively from a mathematical viewpoint.
References
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: IEEE Conference on Neural Networks (ICNN 1995), pp. 1942–1948 (1995)
Bharti, K., Singh, P.: Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering. Appl. Soft Comput. 43, 20–34 (2016)
Chuang, L., Hsiao, C., Yang, C.: Chaotic particle swarm optimization for data clustering. Expert Syst. Appl. 38(12), 14555–14563 (2011)
Tian, D., Shi, Z.: MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 41, 49–68 (2018)
Petrović, M., Vukovic, N., Mitic, M., et al.: Integration of process planning and scheduling using chaotic particle swarm optimization algorithm. Expert Syst. Appl. 64, 569–588 (2016)
Das, P., Behera, H., Panigrahi, B.: A hybridization of an improved particle swarm optimization and gravitational search algorithm for multi-robot path planning. Swarm Evol. Comput. 28, 14–28 (2016)
Tian, D.: Particle swarm optimization with chaos-based initialization for numerical optimization. Intell. Autom. Soft Comput. 24(2), 331–342 (2018)
Dong, N., Wu, C., Ip, W., et al.: An opposition-based chaotic GA/PSO hybrid algorithm and its application in circle detection. Comput. Math. Appl. 64, 1886–1902 (2012)
Xue, B., Zhang, M., Browne, W.: Particle swarm optimisation for feature selection in classification: novel initialisation and updating mechanisms. Appl. Soft Comput. 18, 261–276 (2014)
Tian, D., Zhao, X., Shi, Z.: Chaotic particle swarm optimization with sigmoid-based acceleration coefficients for numerical function optimization. Swarm Evol. Comput. 51, 126–145 (2019)
Tian, D., Zhao, X., Shi, Z.: DMPSO: Diversity-guided multi-mutation particle swarm optimizer. IEEE Access 7(1), 124008–124025 (2019)
Beheshti, Z., Shamsuddin, S.: Non-parametric particle swarm optimization for global optimization. Appl. Soft Comput. 28, 345–359 (2015)
Liu, Z., Ji, X., Liu, Y.: Hybrid non-parametric particle swarm optimization and its stability analysis. Expert Syst. Appl. 92, 256–275 (2018)
Chen, Y., Li, L., Peng, H., et al.: Dynamic multi-swarm differential learning particle swarm optimizer. Swarm Evol. Comput. 39, 209–221 (2018)
Xia, X., Gui, L., Zhan, Z.: A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting. Appl. Soft Comput. 67, 126–140 (2018)
Bouyer, A., Hatamlou, A.: An efficient hybrid clustering method based on improved cuckoo optimization and modified particle swarm optimization algorithms. Appl. Soft Comput. 67, 172–182 (2018)
Mao, B., Xie, Z., Wang, Y., et al.: A hybrid differential evolution and particle swarm optimization algorithm for numerical kinematics solution of remote maintenance manipulators. Fusion Eng. Des. 124, 587–590 (2017)
Javidrad, F., Nazari, M.: A new hybrid particle swarm and simulated annealing stochastic optimization method. Appl. Soft Comput. 60, 634–654 (2017)
Li, Z., Wang, W., Yan, Y., et al.: PS-ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems. Expert Syst. Appl. 42(22), 8881–8895 (2015)
Aydilek, İ: A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl. Soft Comput. 66, 232–249 (2018)
Meng, A., Li, Z., Yin, H., et al.: Accelerating particle swarm optimization using crisscross search. Inf. Sci. 329, 52–72 (2016)
Feng, Y., Yao, Y., Wang, A.: Comparing with chaotic inertia weights in particle swarm optimization. In: IEEE Conference on Machine Learning and Cybernetics (ICMLC 2007), pp. 329–333 (2007)
Alatas, B., Akin, E., Ozer, A.: Chaos embedded particle swarm optimization algorithms. Chaos Solitons Fractals 40(4), 1715–1734 (2009)
Wang, H., Wang, W., Wu, Z.: Particle swarm optimization with adaptive mutation for multimodal optimization. Appl. Math. Comput. 221, 296–305 (2013)
Kennedy, J., Mendes, R.: Population structure and particle swarm performance. In: IEEE Congress on Evolutionary Computation (CEC 2002), pp. 1671–1676 (2002)
Mendes, R., Kennedy, J., Neves, J.: The fully informed particle swarm: simpler, maybe better. IEEE Trans. Evol. Comput. 8(3), 204–210 (2004)
Ratnaweera, A., Halgamuge, S., Watson, H.: Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 8(3), 240–255 (2004)
Liang, J., Suganthan, P.: Dynamic multi-swarm particle swarm optimizer. In: IEEE Conference on Swarm Intelligence Symposium (SIS 2005), pp. 124–129 (2005)
Liang, J., Qin, A., Suganthan, P., et al.: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10(3), 281–295 (2006)
Haklı, H., Uğuz, H.: A novel particle swarm optimization algorithm with Levy flight. Appl. Soft Comput. 23, 333–345 (2014)
Ho, S., Lin, H., Liauh, W., et al.: OPSO: Orthogonal particle swarm optimization and its application to task assignment problems. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 38(2), 288–298 (2008)
Acknowledgment
This work is fully supported by the Program of the Science and Technology Department of Xinjiang Uygur Autonomous Region (No. 2022D01A16) and the Program of the Applied Technology Research and Development of Kashi Prefecture (No. KS2021026).
© 2022 IFIP International Federation for Information Processing
Tian, D., Li, B., Liu, C., Li, H., Yuan, L. (2022). Comparative Study of Chaos-Embedded Particle Swarm Optimization. In: Shi, Z., Zucker, JD., An, B. (eds) Intelligent Information Processing XI. IIP 2022. IFIP Advances in Information and Communication Technology, vol 643. Springer, Cham. https://doi.org/10.1007/978-3-031-03948-5_2
Print ISBN: 978-3-031-03947-8
Online ISBN: 978-3-031-03948-5