Abstract
Particle swarm optimization (PSO) is a population-based heuristic optimization algorithm inspired by the collective behavior of bird flocks. Since its introduction in 1995, a large number of PSO variants have been introduced that improve its performance. The performance of the algorithm relies largely on the inertia weight and optimal parameter settings. The inertia weight balances exploitation and exploration while searching for the optimal solution within the search region. This paper presents a new improved version of PSO that uses an adaptive inertia weight technique based on cumulative binomial probability (CBPPSO). The proposed approach, along with four other PSO variants, is tested on a set of ten well-known optimization test problems. The results confirm that the proposed algorithm (CBPPSO) performs better than the other PSO variants in most cases. The proposed algorithm has also been evaluated on three real-world engineering problems, with promising results.
1 Introduction
Particle swarm optimization is a population-based meta-heuristic optimization technique first proposed by Russell Eberhart and James Kennedy [1]. The algorithm is inspired by the social behavior of animal groups such as fish schools and bird flocks. As in other meta-heuristic optimization algorithms, in PSO a swarm of candidate solutions is refined over successive iterations. PSO is relatively easy to understand and implement, since very few parameters need to be tuned in comparison to other optimization strategies.
In PSO, each particle represents a prospective solution to an optimization problem. The group of particles (the swarm) flies through the search space by following the current optimum solutions. Every particle remembers the best position it has attained so far in the search region; this position is called pbest. Similarly, the best position found so far by the whole swarm, called gbest, is also remembered. At each time step, every particle adjusts its velocity toward its pbest and the gbest locations. The basic PSO [1] is slow in most cases and converges prematurely to local optima. The remedy for this problem is the inertia weight. The inertia weight [2] is a primary parameter of PSO and plays an important role in balancing exploration and exploitation. After its introduction, many versions of the PSO algorithm were presented aiming at a balanced exploration–exploitation trade-off.
Shi and Eberhart [2] examined different constant values of the inertia weight and concluded that, within a specific range, PSO finds the global optimum in an acceptable number of iterations. With a lower inertia weight, PSO mostly converges to a local optimum region; with a higher value, it diverges and hence performs little exploitation in the search region. For finding an optimum solution in a dynamic environment, Eberhart and Shi [3] used a random inertia weight strategy varying in the range [0.5, 1]. This approach alternates exploitation and exploration randomly.
A large number of inertia weight variants use a time-varying method in which the iteration number determines the value of the inertia weight. The most famous among them is the linearly decreasing inertia weight technique [4], in which the inertia weight drops linearly from \({\omega _{\rm{max} }}\) to \({\omega _{\rm{min} }}\). It has been found empirically that in most cases an inertia weight in the range [0.4, 0.9] provides the optimum result. Other time-varying techniques include simulated annealing [5], sigmoid [6], exponentially decreasing [7], and logarithmically decreasing [8] strategies. Nickabadi and Ebadzadeh [9] suggested an adaptive inertia weight law that controls population diversity and enhances searching ability. Lei et al. [10] suggested the Sugeno function as an inertia weight method to auto-tune the global and local search ability and avoid premature convergence.
Zhan et al. [11] proposed an evolutionary state estimation technique that uses a fuzzy classification method to determine the inertia weight. A Gaussian perturbation-based elitist learning strategy is also adopted, which helps particles move out of local optima regions. Nickabadi et al. [12] used, as a feedback parameter, the percentage of particles that improved compared to the last iteration, and applied this strategy to solve a real-world engineering problem efficiently. Kessentini and Barchiesi [13] presented an adaptive inertia method based on the standard deviation of the particles' dimensions. This technique improves the exploration–exploitation balance relative to other algorithms, which helps in converging to the optimum.
Bansal et al. [14] reviewed 15 famous PSO variants and compared their performance. Jordehi and Jasni [15] and Wang et al. [16] reviewed popular PSO variants applied to discrete optimization problems. Ghamisi and Benediktsson [17] applied a hybrid of PSO and a genetic algorithm to feature selection. A similar hybrid approach, a PSO-based artificial neural network, was used by Momeni et al. [18] to predict the compressive strength of rock. Gao et al. [19] suggested a novel strategy in which better-informed particles lead the remaining particles. An improved version of PSO has been applied to data classification [20]. Various modified particle swarm optimization algorithms and their applications are discussed by Tian and Shi [21].
The remainder of the paper is structured as follows: Section 2 reviews the background of PSO, and Section 3 presents the proposed inertia weight law. Sections 4 and 5 describe the set of optimization test problems used to evaluate the proposed technique, the other inertia weight methods compared with it, the simulation settings, and the results. Section 6 presents the real-world engineering optimization problems used to evaluate and compare the proposed method with other methods. Finally, Section 7 concludes the paper.
2 Background
PSO is basically a cooperative approach in which N particles fly through the n-dimensional problem search space. Each particle's position depends on its own and its neighbors' past best positions. The ith particle's position is the n-dimensional vector \(\overrightarrow {{x_i}} =\left( {{x_{i1}},{x_{i2}},{x_{i3}}, \ldots ,{x_{in}}} \right)\); similarly, its velocity is the n-dimensional vector \(\overrightarrow {{v_i}} =\left( {{v_{i1}},{v_{i2}},{v_{i3}}, \ldots ,{v_{in}}} \right)\). At each iteration \(t\), the particles of the swarm adjust their velocity and position according to the following equations [12, 14]:
$${v_{id}}(t+1)=\omega {v_{id}}(t)+{c_1}{R_{1id}}\left( {pbes{t_{id}} - {x_{id}}(t)} \right)+{c_2}{R_{2id}}\left( {gbes{t_d} - {x_{id}}(t)} \right)$$
$${x_{id}}(t+1)={x_{id}}(t)+{v_{id}}(t+1)$$
where ω is the inertia weight, \({R_{1id}}\) and \({R_{2id}}\) are uniform random numbers in the range (0,1), \(d=1,2, \ldots ,n\) indexes the dimensions, \({c_1}\) and \({c_2}\) are the cognitive and social learning parameters, respectively, \(pbes{t_{id}}\) is the ith particle's previous best position, and \(gbes{t_d}\) is the globally best position among all particles of the swarm. The model becomes social-only when \({c_1}=0\) (and \({c_2} \ne 0\)), and cognition-only when \({c_2}=0\) (and \({c_1} \ne 0\)). For the success of an optimization algorithm, the balance between local and global search during a run is important. In most cases, a higher inertia weight enhances the ability to explore the search space, while a lower inertia weight improves the capacity to concentrate the search in the vicinity of a promising region and refine the prospective solution. Therefore, to maintain the exploration–exploitation trade-off effectively, we introduce a new inertia weight law that is adaptive in nature. During the run, the inertia weight is adjusted dynamically using feedback on the particles' best positions to alternate between exploration and exploitation.
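The update rules can be sketched in a few lines. The following is a minimal NumPy sketch; the function name, array layout, and fixed seed are our own illustrative choices, not part of the original algorithm description:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """One PSO velocity/position update for the whole swarm.

    x, v, pbest have shape (N, n); gbest has shape (n,).
    R1 and R2 are drawn uniformly in (0, 1) per particle and dimension.
    """
    r1 = rng.random(x.shape)  # R1
    r2 = rng.random(x.shape)  # R2
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

With `c1 = c2 = 0` and `w = 0` the update degenerates to a standstill, which is a quick sanity check that both attraction terms and the inertia term enter the velocity as written.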
3 Proposed inertia weight strategy
The performance of a swarm-based optimization algorithm depends mainly on the exploration–exploitation trade-off. The proposed inertia weight strategy, CBPPSO, applies the idea that, in order to balance local and global search, particles whose positions have not improved must move toward particles whose positions have improved with respect to the global optimum. The movement of each particle thus depends on an adaptive inertia weight that uses the cumulative binomial probability of finding the global optimum by particles with improved positions.
For calculating the inertia weight with this method, the status of the population is determined at each iteration. The indicator function that denotes the improvement in the position of particle i at time t is given as [12, 22]:
$$S(i,t)=\left\{ {\begin{array}{*{20}{l}} {1,}&{f\left( {pbes{t_i}(t)} \right)<f\left( {pbes{t_i}(t - 1)} \right)} \\ {0,}&{{\rm{otherwise}}} \end{array}} \right.$$
The total number of particles with improved positions at iteration t is then calculated as:
$$X(i,t)=\sum\limits_{{i=1}}^{N} {S(i,t)}$$
Consider each iteration as an experiment in which each particle represents a trial that ends in either a success or a failure state. The trials are independent, so the particles are independent of one another, and the probability of success \(p\) and of failure \(q\) is 0.5 for each particle. The population size N is constant across iterations. Hence \(X(i,t)\), the total number of particles with improved positions, is a binomial random variable [22]. The cumulative binomial probability, i.e., the probability that \(X(i,t)\) or fewer particles improve their position, is given by the following equation:
$$P(X)=\sum\limits_{{k=0}}^{X} {\left( {\begin{array}{*{20}{c}} N \\ k \end{array}} \right){p^k}{q^{N - k}}}$$
where \(X=X(i,t)\). The inertia weight at time t is a function of \(P(X)\), represented as:
$$\omega (t)=f\left( {P(X)} \right)$$
In our algorithm, the linear function used to map \(P(X)\) to the inertia weight is given as:
$$\omega (t)={\omega _{\rm{min} }}+\left( {{\omega _{\rm{max} }} - {\omega _{\rm{min} }}} \right)P(X)$$
The velocity of a particle plays an important role: it must not be so low that particles cannot jump out of local optima regions, nor so high that they overshoot the region containing the global optimum. Initially, when many particles improve their positions while exploring, the cumulative binomial probability of finding the global optimum by the particles that improve their position is high; it later decreases as the swarm converges toward the global optimum. Figure 1 shows this property of the proposed adaptive inertia weight when tested on the sphere function. The inertia weight varies dynamically in the range [0.4, 0.9]; to avoid premature convergence, the lower bound is set to 0.4. This strategy enhances the algorithm's ability to balance local and global search.
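A minimal sketch of the proposed inertia weight computation follows. The linear map onto [0.4, 0.9] is our reading of the range stated above, and the function names are illustrative:

```python
from math import comb

def binom_cdf(x, n, p=0.5):
    """Cumulative binomial probability P(X <= x) for X ~ Binomial(n, p)."""
    q = 1.0 - p
    return sum(comb(n, k) * p ** k * q ** (n - k) for k in range(x + 1))

def cbp_inertia_weight(improved, swarm_size, w_min=0.4, w_max=0.9):
    """Adaptive inertia weight from the cumulative binomial probability.

    improved: S(t)-style count of particles whose pbest improved this
    iteration; each particle is a Bernoulli trial with p = q = 0.5.
    """
    return w_min + (w_max - w_min) * binom_cdf(improved, swarm_size)
```

Early in the run, when most particles improve, the cumulative probability is close to 1 and ω stays near 0.9 (exploration); as improvements become rare, ω falls toward 0.4 (exploitation).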
4 Experimental setup
4.1 Optimization test problems
To measure the performance of the introduced PSO variant, we use ten well-known benchmark functions [23–29]. CBPPSO is compared over these benchmark functions with four other significant PSO algorithms based on different inertia weight methods. Table 1 lists the benchmark functions, where D is the number of dimensions, taken as 30.
Functions \({f_1}\), \({f_3}\), \({f_4}\), \({f_6}\) are bowl-shaped functions with only a global minimum; they are convex, continuous and unimodal. Functions \({f_2}\), \({f_5}\) are valley-shaped functions, mostly used to test gradient-based optimization algorithms; they are also unimodal. Functions \({f_7},~{f_8},{f_9}\) have multiple local minima. The outer region of \({f_7}\) is nearly flat, with a large pit at the centre. Functions \({f_{8~}}\) and \({f_9}\) have multiple regularly distributed local minima; they are highly multimodal, and optimization algorithms are likely to become stuck in one of the local minima. All the problems are minimization problems. The best solution and best fitness of a problem are denoted by \({x^*}\) and \(f\left( {{x^*}} \right)\), respectively.
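Since Table 1 is not reproduced here, the following sketch shows three representative benchmark functions, one from each group described above (which \(f_i\) each corresponds to in Table 1 is left open):

```python
import numpy as np

def sphere(x):
    """Bowl-shaped, convex, unimodal; f(x*) = 0 at x* = (0, ..., 0)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rosenbrock(x):
    """Valley-shaped, unimodal; f(x*) = 0 at x* = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

def rastrigin(x):
    """Highly multimodal, regularly distributed local minima; f(x*) = 0 at 0."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))
```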
4.2 Inertia weight methods used for comparison
In this paper, for comparison with the proposed approach, we carefully selected PSO algorithms that study the impact of the inertia weight strategy. Based on previous comparative performance studies [12, 14], the introduced algorithm is simulated and compared with the following time-varying and adaptive algorithms:
A. GPSO [4]:
$$\omega =0.9 - 0.5\left( {\frac{t}{T}} \right) \in [0.4,0.9]$$
where t is the current iteration number and T is the maximum number of iterations.
B. Sugeno [10]:
$$\omega =0.4+0.5\frac{{1 - \frac{t}{T}}}{{1+s\left( {\frac{t}{T}} \right)}} \in [0.4,0.9]$$
where s is a constant, taken as 10.
C. AIWPSO [12]:
$$\omega =\left( {\frac{{S(t)}}{N}} \right) \in [0,1]$$
where \(S(t)\) is the number of particles with improved best positions and \(N\) is the swarm size.
D. w-PSO [13]:
$$\omega =0.9 - 0.4\left( {\frac{{d(k)}}{{{{\rm{max} }_{1 \leq k \leq K}}\{ d(k)\} }}} \right) \in [0.5,0.9]$$
where K is a constant (taken as 1000) and d(k) is an adaptive vector of K elements.
4.3 Evaluation and comparison criteria
The four inertia weight methods, along with CBPPSO, are tested on the ten well-known benchmark functions listed in Table 1, and their results are compared. All parameters are kept constant throughout the experiment for a fair comparison of the different inertia weight strategies, so that the impact of the inertia weight on the algorithm can be analyzed in isolation. The swarm size is 20 and the dimension is 30 [12]. According to the rule presented by Martínez and Gonzalo [30], i.e., \({c_1}+{c_2}<4(1+\omega ),\) we take \({c_1}={c_2}=2\). The inertia weight varies in the range [\({\omega _{\rm{min} }},\) \({\omega _{\rm{max} }}\)], where \({\omega _{\rm{min} }}=0.4\) and \({\omega _{\rm{max} }}=0.9\) [4]. The maximum number of function evaluations (FEs) that each algorithm is allowed is fixed at 200,000 for a proper comparison among all algorithms. The average results are recorded over 30 runs of each experiment.
5 Results and discussion
The average and standard deviation of the best fitness over 30 independent runs are listed in Table 2 for each algorithm. The outcomes are compared on the basis of convergence speed and final accuracy [13].
- For functions \({f_1}\) and \({f_6}\), CBPPSO yields the best solution, followed by AIWPSO. Its convergence curve is almost a straight line, indicating that it approaches the optimal solution at a constant convergence rate.
- For functions \({f_2}\) and \({f_9}\), CBPPSO outperforms the other algorithms, with a higher convergence rate. A horizontal line indicates an algorithm's inability to converge further toward the optimal solution.
- For functions \({f_3}\) and \({f_5}\), w-PSO and CBPPSO both provide the best solutions, with CBPPSO converging slightly faster than w-PSO. Both algorithms, followed by AIWPSO, converge much faster toward the optimal solution than the other algorithms.
- For functions \({f_4}\) and \({f_{10}}\), the best solution is provided by AIWPSO, followed by CBPPSO. AIWPSO and CBPPSO converge toward the best solution faster than the other algorithms.
- For function \({f_7}\), GPSO, Sugeno and w-PSO yield better solutions than CBPPSO and AIWPSO. However, CBPPSO converges fastest toward the best solution, followed by AIWPSO.
- For function \({f_8}\), w-PSO outperforms the other algorithms in terms of the best solution. AIWPSO and CBPPSO converge to the vicinity of the best solution at a faster rate than the other algorithms.
The outcomes show that CBPPSO yields the best accuracy on 6 of the 10 benchmark problems, i.e., functions \({f_1},{f_2},{f_3},{f_5},{f_6}\) and \({f_9}\). The convergence curves on a logarithmic scale for all the algorithms are shown in Fig. 2, illustrating these results. The performance of CBPPSO is comparable for functions \({f_4},{f_8}\) and \({f_{10}}\). The algorithms are scored according to the number of times they provide the best or a comparable result: w-PSO, AIWPSO, Sugeno, GPSO and CBPPSO score 7, 7, 5, 4 and 9, respectively. Most of the time, the best positions are discovered by algorithms with an adaptive inertia weight approach, and these algorithms converge toward the optimal solution faster than the time-varying inertia weight algorithms. In particular, CBPPSO converges toward the best solution at a faster rate than the other algorithms most of the time.
The proposed approach has also been tested on real-world engineering benchmark problems, and its performance is evaluated and compared with other existing algorithms on the standard parameters reported in the literature. This is discussed in the next section.
6 Real world engineering optimization problem
Apart from the well-known benchmark functions, the potential of CBPPSO is also investigated on three real-world engineering problems. The swarm size is 20, and the minimum result of CBPPSO is recorded over 10 independent runs of 2500 iterations each [15, 31]. For comparison, results for problems 1, 2 and 3 are taken from [12, 31, 32], respectively, as reported in the literature. The best minimum optimization result for each problem is indicated in bold in Tables 3, 4 and 5.
6.1 Problem 1: Design of a pressure vessel [33]
The pressure vessel design is a constrained engineering design problem whose aim is to minimize the manufacturing cost of the pressure vessel. The problem is given as follows [12]:
Minimize:
$$f(x)=0.6224{x_1}{x_3}{x_4}+1.7781{x_2}x_3^2+3.1661x_1^2{x_4}+19.84x_1^2{x_3}$$
Subject to:
$${g_1}(x)= - {x_1}+0.0193{x_3} \leq 0$$
$${g_2}(x)= - {x_2}+0.00954{x_3} \leq 0$$
$${g_3}(x)= - \pi x_3^2{x_4} - \frac{4}{3}\pi x_3^3+1{,}296{,}000 \leq 0$$
$${g_4}(x)={x_4} - 240 \leq 0$$
The shell thickness \({T_s}\)(\({x_1}\)), spherical head thickness \({T_h}\)(\({x_2}\)), radius of the cylindrical shell \(R\)(\({x_3}\)) and shell length \(L\)(\({x_4}\)) are the four design parameters of the pressure vessel design problem, as shown in Fig. 3. Here \({x_1}\) and \({x_2}\) are integer multiples of 0.0625, \({x_3} \in [40,80]\) and \({x_4} \in [20,60]\).
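Under the standard formulation of this problem, with coefficients as commonly reported in the cited literature, the objective and a feasibility check can be sketched as:

```python
import math

def pressure_vessel_cost(x1, x2, x3, x4):
    """Manufacturing cost: material, forming and welding terms."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pressure_vessel_feasible(x1, x2, x3, x4):
    """Check the four standard inequality constraints g_i(x) <= 0."""
    g = (
        -x1 + 0.0193 * x3,                      # shell thickness vs radius
        -x2 + 0.00954 * x3,                     # head thickness vs radius
        -math.pi * x3 ** 2 * x4
            - (4.0 / 3.0) * math.pi * x3 ** 3
            + 1_296_000,                        # minimum enclosed volume
        x4 - 240.0,                             # maximum shell length
    )
    return all(gi <= 0.0 for gi in g)
```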
Table 3 summarizes the optimization results on the pressure vessel design problem for CBPPSO, AIWPSO [12], Sandgren [33], CODEQ [35], harmony search [36], and a genetic algorithm [37]. Among the six algorithms, CBPPSO gives the best result.
6.2 Problem 2: Design of a tension/compression spring [39]
The goal of this problem is to minimize the weight of a tension/compression spring, subject to constraints on surge frequency, minimum deflection, shear stress, outer diameter and the design variables. The problem is presented as:
Minimize:
$$f(x)=\left( {{x_3}+2} \right){x_2}x_1^2$$
Subject to:
$${g_1}(x)=1 - \frac{{x_2^3{x_3}}}{{71785x_1^4}} \leq 0$$
$${g_2}(x)=\frac{{4x_2^2 - {x_1}{x_2}}}{{12566\left( {{x_2}x_1^3 - x_1^4} \right)}}+\frac{1}{{5108x_1^2}} - 1 \leq 0$$
$${g_3}(x)=1 - \frac{{140.45{x_1}}}{{x_2^2{x_3}}} \leq 0$$
$${g_4}(x)=\frac{{{x_1}+{x_2}}}{{1.5}} - 1 \leq 0$$
The design variables are the mean coil diameter, \(D\)\(({x_2})\), the number of active coils, \(P\) \(({x_3})\), and the wire diameter, \(d\) \(({x_1})\), as shown in Fig. 4. The design variables have the following ranges: \(0.05 \leq {x_1} \leq 2,\) \(0.25 \leq {x_2} \leq 1.3,\) and \(2 \leq {x_3} \leq 15.\) Table 4 summarizes the best results of Belegundu [38], Arora [39], He and Wang [32], Lobato et al. [31], and CBPPSO. CBPPSO provides a better result than the other approaches.
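Using the standard formulation of this problem, with coefficients as commonly reported in the cited literature, the objective and a feasibility check can be sketched as:

```python
def spring_weight(x1, x2, x3):
    """Spring weight: f(x) = (x3 + 2) * x2 * x1**2."""
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_feasible(x1, x2, x3):
    """Check the four standard inequality constraints g_i(x) <= 0."""
    g = (
        1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4),           # minimum deflection
        (4.0 * x2 ** 2 - x1 * x2)
            / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
            + 1.0 / (5108.0 * x1 ** 2) - 1.0,               # shear stress
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),                 # surge frequency
        (x1 + x2) / 1.5 - 1.0,                              # outer diameter
    )
    return all(gi <= 0.0 for gi in g)
```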
6.3 Problem 3: Design of a welded beam [39]
The aim of this problem is to minimize the cost of a welded beam subject to constraints on the end deflection of the beam (δ), the bending stress in the beam (σ), the shear stress (τ), the buckling load on the bar (\({P_C}\)), and side constraints. The problem is presented as:
Minimize:
$$f(x)=1.10471x_1^2{x_2}+0.04811{x_3}{x_4}\left( {14.0+{x_2}} \right)$$
Subject to:
$${g_1}(x)=\tau (x) - {\tau _{\rm{max} }} \leq 0$$
$${g_2}(x)=\sigma (x) - {\sigma _{\rm{max} }} \leq 0$$
$${g_3}(x)={x_1} - {x_4} \leq 0$$
$${g_4}(x)=0.10471x_1^2+0.04811{x_3}{x_4}\left( {14.0+{x_2}} \right) - 5.0 \leq 0$$
$${g_5}(x)=0.125 - {x_1} \leq 0$$
$${g_6}(x)=\delta (x) - 0.25 \leq 0$$
$${g_7}(x)=P - {P_C}(x) \leq 0$$
where,
$$\tau (x)=\sqrt {{{(\tau ')}^2}+2\tau '\tau ''\frac{{{x_2}}}{{2R}}+{{(\tau '')}^2}} ,\quad \tau '=\frac{P}{{\sqrt 2 {x_1}{x_2}}},\quad \tau ''=\frac{{MR}}{J}$$
$$M=P\left( {L+\frac{{{x_2}}}{2}} \right),\quad R=\sqrt {\frac{{x_2^2}}{4}+{{\left( {\frac{{{x_1}+{x_3}}}{2}} \right)}^2}} ,\quad J=2\left\{ {\sqrt 2 {x_1}{x_2}\left[ {\frac{{x_2^2}}{{12}}+{{\left( {\frac{{{x_1}+{x_3}}}{2}} \right)}^2}} \right]} \right\}$$
$$\sigma (x)=\frac{{6PL}}{{{x_4}x_3^2}},\quad \delta (x)=\frac{{4P{L^3}}}{{Ex_3^3{x_4}}},\quad {P_C}(x)=\frac{{4.013E\sqrt {x_3^2x_4^6/36} }}{{{L^2}}}\left( {1 - \frac{{{x_3}}}{{2L}}\sqrt {\frac{E}{{4G}}} } \right)$$
with \(P=6000~{\rm{lb}}\), \(L=14~{\rm{in}}\), \(E=30 \times {10^6}~{\rm{psi}}\), \(G=12 \times {10^6}~{\rm{psi}}\), \({\tau _{\rm{max} }}=13{,}600~{\rm{psi}}\) and \({\sigma _{\rm{max} }}=30{,}000~{\rm{psi}}\).
The problem has four design variables, as shown in Fig. 5, i.e., \(t\)\(({x_3}),\) \(l\)\(({x_2}),\) \(h\)\(({x_1}),\) and \(b\)\(({x_4})\). The variables have the following ranges: \({x_1},{x_4} \in [0.1,2],\) and \({x_2},{x_3} \in [0.1,10].\) Table 5 summarizes the best results of Deb [40], Coello [41], He and Wang [32], Coello and Montes [42], and CBPPSO. Here also, CBPPSO provides a better result than the other algorithms.
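A sketch of the objective and the simple side constraints follows, using the coefficients commonly reported in the cited literature; the stress, deflection and buckling constraints require the auxiliary quantities τ, σ, δ and \(P_C\) and are omitted for brevity:

```python
def welded_beam_cost(x1, x2, x3, x4):
    """Fabrication cost: weld material plus bar material (standard form)."""
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def welded_beam_side_constraints(x1, x2, x3, x4):
    """Only the simple side constraints (each <= 0 when satisfied)."""
    return (
        x1 - x4,                                                   # weld vs bar thickness
        0.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,  # cost bound
        0.125 - x1,                                                # minimum weld size
    )
```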
7 Conclusion
This paper introduces a new adaptive inertia weight approach (CBPPSO) based on the cumulative binomial probability of the number of particles that have improved their position. In this approach, the cumulative binomial probability provides feedback on the particles' current state in the search region, and the inertia weight is adjusted accordingly. Experimental results clearly show that the introduced approach improves the search behavior of the particles and is more effective than other successful PSO variants. The comparisons and analysis are done in terms of solution accuracy and convergence speed.
We also applied CBPPSO to solve three real-world engineering problems and compared the results with other approaches; the results provided by CBPPSO are better than those of the other strategies. Since CBPPSO is simple and efficient, it can be applied effectively to other problems.
References
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, Perth, vol 4, pp 1942–1948
Shi Y, Eberhart R (1998) A modified particle swarm optimizer. In: Evolutionary computation proceedings, IEEE world congress on computational intelligence, The IEEE international conference, Anchorage, pp 69–73
Eberhart RC, Shi Y (2001) Tracking and optimizing dynamic systems with particle swarms. In: Evolutionary computation. Proceedings of the IEEE congress, Seoul, vol 1, pp 94–100
Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. In: Proceedings of the IEEE congress, CEC 99, Washington, DC, vol 3, pp 1945–1950
Al-Hassan W, Fayek MB, Shaheen SI (2006) Psosa: an optimized particle swarm technique for solving the urban planning problem. In: International conference on computer engineering and systems, pp 401–405
Malik RF, Rahman TA, Hashim SZM, Ngah R (2007) New particle swarm optimizer with sigmoid increasing inertia weight. Int J Comput Sci Secur 1(2):35–44
Chen G, Huang X, Jia J, Min Z (2006) Natural exponential inertia weight strategy in particle swarm optimization. In: 6th world congress on intelligent control and automation, vol 1, pp 3672–3675
Gao YL, An XH, Liu JM (2008) A particle swarm optimization algorithm with logarithm decreasing inertia weight and chaos mutation. In: Computational intelligence and security, IEEE international conference, vol 1, pp 61–65
Nikabadi A, Ebadzadeh M (2008) Particle swarm optimization algorithms with adaptive inertia weight: a survey of the state of the art and a novel method. IEEE J Evol Comput
Lei K, Qiu Y, He Y (2006) A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization. In: 1st IEEE international symposium on systems and control in aerospace and astronautics, Harbin, pp 977–980
Zhan ZH, Zhang J, Li Y, Chung HSH (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern Part B Cybern 39(6):1362–1381
Nickabadi A, Ebadzadeh MM, Safabakhsh R (2011) A novel particle swarm optimization algorithm with adaptive inertia weight. Appl Soft Comput 11(4):3658–3670
Kessentini S, Barchiesi D (2015) Particle swarm optimization with adaptive inertia weight. Int J Mach Learn Comput 5(5):368–373
Bansal JC, Singh PK, Saraswat M, Verma A, Jadon SS, Abraham A (2011) Inertia weight strategies in particle swarm optimization. In: Nature and biologically inspired computing (NaBIC), third world congress, pp 633–640
Jordehi AR, Jasni J (2015) Particle swarm optimization for discrete optimization problems: a review. Artif Intell Rev 43(2):243–258
Wang D, Tan D, Liu L (2018) Particle swarm optimization algorithm: an overview. Soft Comput 22(2):387–408
Ghamisi P, Benediktsson JA (2015) Feature selection based on hybridization of genetic algorithm and particle swarm optimization. IEEE Geosci Remote Sens Lett 12(2):309–313
Momeni E, Armaghani DJ, Hajihassani M, Amin MFM (2015) Prediction of uniaxial compressive strength of rock samples using hybrid particle swarm optimization-based artificial neural networks. Measurement 60:50–63
Gao Y, Du W, Yan G (2015) Selectively-informed particle swarm optimization. Sci Rep 5:9295
Satapathy SC, Chittineni S, Krishna SM, Murthy JVR, Reddy PP (2012) Kalman particle swarm optimized polynomials for data classification. Appl Math Model 36(1):115–126
Tian D, Shi Z (2018) MPSO: modified particle swarm optimization and its applications. Swarm Evol Comput 41:49–68
Agrawal A et al (2018) Particle swarm optimization with probabilistic inertia weight. In: Harmony search and nature inspired optimization algorithms, ICHSA 2018, pp 239–248
Surjanovic S, Bingham D (2013) Virtual library of simulation experiments: test functions and datasets [online]. http://www.sfu.ca/~ssurjano/. Accessed Dec 2017
Molga M, Smutnicki C (2005) Test functions for optimization needs. http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf. Accessed Dec 2017
Global optimization test problems. http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm. Accessed Dec 2017
Dixon LCW, Szego GP (1978) The global optimization problem: an introduction. Towards Glob Optim 2:1–15
Back T (1996) Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press, New York
Laguna M, Marti R (2002) Experimental testing of advanced scatter search designs for global optimization of multimodal functions. http://www.uv.es/rmarti/paper/docs/global1.pdf. Accessed Dec 2017
Global optimization test functions index. http://infinity77.net/global_optimization/test_functions.html#test-functions-index. Accessed Dec 2017
Martínez JF, Gonzalo EG (2009) The PSO family: deduction, stochastic analysis and comparison. Swarm Intell 3(4):245–273
Lobato FS, Steffen Jr V (2014) Fish swarm optimization algorithm applied to engineering system design. Latin Am J Solids Struct 11(1):143–156
He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99
Sandgren E (1990) Nonlinear integer and discrete programming in mechanical design optimization. J Mech Des 112(2):223–229
Dong M, Wang N, Cheng X, Jiang C (2014) Composite differential evolution with modified oracle penalty method for constrained optimization problems. Math Probl Eng 2014:617905
Omran MG (2010) CODEQ: an effective metaheuristic for continuous global optimization. Int J Metaheuristics 1(2):108–131
Lee KS, Geem ZW (2005) A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput Methods Appl Mech Eng 194(36):3902–3933
Wu SJ, Chow PT (1995) Genetic algorithms for nonlinear mixed discrete-integer optimization problems via meta-genetic parameter optimization. Eng Optim 24(2):137–159
Belegundu AD (1982) A study of mathematical programming methods for structural optimization. Dept. of Civil Environ. Eng., Iowa Univ
Arora JS (1989) Introduction to optimum design. McGraw-Hill, New York
Deb K (1991) Optimal design of a welded beam via genetic algorithms. AIAA J 29(11):2013–2015
Coello CAC (2000) Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 41(2):113–127
Coello CAC, Montes EM (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 16(3):193–203
Zheng H, Zhou Y (2013) A cooperative coevolutionary cuckoo search algorithm for optimization problem. J Appl Math 2013:912056
Agrawal, A., Tripathi, S. Particle swarm optimization with adaptive inertia weight based on cumulative binomial probability. Evol. Intel. 14, 305–313 (2021). https://doi.org/10.1007/s12065-018-0188-7