Abstract
An extensive survey of numerous traditional metaheuristic algorithms (MAs) is presented in this paper. Among successful MAs, differential evolution (DE) and particle swarm optimization (PSO) have been widely recognized for solving complex optimization problems and have received much attention from researchers. Therefore, DE and PSO are chosen in the present study, and an extensive survey of their recent variants and hybrids is carried out. An advanced DE (ADE), an advanced PSO (APSO), and their hybrid (AHDEPSO) are then proposed for unconstrained optimization problems. In ADE, a novel mutation strategy, crossover probability, and a random selection scheme (to avoid premature convergence) are introduced; in APSO, novel gradually varying parameters (to avoid stagnation) are introduced. Hence, ADE and APSO afford different convergence characteristics over the solution space. Further, to balance exploration and exploitation, in AHDEPSO the population is divided (multi-population approach) and the sub-populations are merged in a pre-defined way. Thus, AHDEPSO achieves better solutions and is expected to obtain productive solutions with an increasing success rate at each cycle. To verify their performance, all three proposed algorithms, i.e. ADE, APSO, and AHDEPSO, are applied to 23 basic and 30 IEEE CEC 2017 unconstrained benchmark functions and 3 real-world problems. Several numerical and graphical analyses are performed to verify the performance of the proposed algorithms robustly. Additionally, statistical and comparative analyses confirm the superiority of the proposed algorithms over traditional DE and PSO, their recent variants and hybrids, and many state-of-the-art algorithms. Finally, among the three proposed algorithms, the best one, i.e. AHDEPSO, is recommended for solving unconstrained optimization problems.
1 Introduction
Many optimization methods have been developed to solve multifaceted optimization problems. However, conventional optimization methods have certain inherent drawbacks, such as high computational complexity, stagnation in local optima, and reliance on derivative information of the search space [1]. It is also difficult for them to find the optimal solution during the solving process. Presently, to overcome the drawbacks of conventional optimization methods, a class of optimization methods known as metaheuristic algorithms (MAs) has been introduced. The mechanisms behind MAs are simple and based on natural processes, yet they are very efficient in solving complex global optimization problems. According to their mechanisms, MAs can be categorized into four groups: swarm intelligence algorithms, evolutionary algorithms, physics-based algorithms, and human-behavior-based algorithms. Some recent instances of these algorithms are depicted in Fig. 1 and reviewed briefly as follows.
Genetic algorithm (GA), the foundation for many evolutionary algorithms, was described by David Goldberg in 1989 [2]. It is a search algorithm built on the mechanics of natural genetics and selection, simulating natural evolutionary processes such as selection, mutation, and crossover. Later, in 1995, inspired by the flocking behavior of birds, Eberhart and Kennedy proposed particle swarm optimization (PSO) [3]. PSO has acquired immense popularity among researchers because of its simplicity and effectiveness in plenty of scientific and industrial applications. In PSO, particles update themselves from an initial velocity; all individuals learn from the others and adapt themselves by trying to emulate the behavior of the fittest individuals. On the other hand, differential evolution (DE), one of the most popular evolutionary metaheuristic algorithms, was introduced by Storn and Price in 1997 [4]. DE works in two phases: initialization and evolution. In the first phase the population is generated randomly, and in the second phase the generated population undergoes mutation, crossover, and selection. Due to advantages such as easy implementation and strong global search capability, DE has become a well-liked choice among researchers for solving various optimization problems in different sectors. Then, in 1998, the photosynthetic learning algorithm (PLA) was introduced by Murase and Wadano [5]. It utilizes the rules governing the conversion of carbon molecules from one substance to another in the Benson-Calvin cycle (comprising three phases, i.e. carboxylation, reduction, and regeneration of the acceptor) and photorespiration reactions. Further, in 2000, de Castro and Von Zuben developed the clonal selection algorithm (CSA) [6], which is related to the artificial immune system and describes a general learning strategy.
The principal immune aspects taken into consideration in CSA were: maintenance of the memory cells, selection and cloning of the most stimulated cells, death of non-stimulated cells, affinity maturation, and re-selection of the clones.
With the help of an analogy to the music performance process, Geem et al. [7] devised the harmony search (HS) algorithm. In this algorithm a new vector is produced by considering all existing vectors instead of only two (parents) as in the genetic algorithm; HS does not need initial values of the decision variables, and it has performed well on many combinatorial and continuous problems. In 2003, Eusuff and Lansey proposed the shuffled frog leaping algorithm (SFLA) [8]. It mimics the cooperative behavior displayed by frogs while they search for food in a swamp and uses memetic evolution in the form of the infection of ideas from one individual to another in a local search. A shuffling technique also permits the exchange of information between local searches to move toward a global optimum. Again, in 2004 the BeeHive algorithm was devised by Wedde et al. [9]; it is inspired by the communicative and evaluative procedures of honey bees. In this algorithm, bee agents pass through network regions called foraging zones, and their information on the network state is used to update the local routing tables. Further, Pinto et al. in 2005 proposed wasp swarm optimization (WSO) [10]. It is a bidding algorithm in which the wasps take the role of bidders trying to acquire finite resources, and it was designed specifically with reference to logistics system optimization. Then, Du et al. used theories of the small-world phenomenon to construct the small-world optimization algorithm (SWOA) [11] in 2006, in which a local shortcuts search and a random long-range search operator are employed to solve optimization problems. Later in 2006, Mehrabian and Lucas proposed a general-purpose optimization algorithm inspired by weed colonization, named invasive weed optimization (IWO) [12]. In this algorithm, invasive weeds reproduce quickly by producing seeds and increasing their population.
Moreover, their behavior changes with time as the colony becomes dense, leaving less opportunity of life for the ones with lesser fitness. Again, in 2007, Karaboga and Basturk presented the artificial bee colony (ABC) algorithm [13], which contains three groups of bees: employed bees, onlookers, and scouts, with a three-step cycle: employed bees are sent to the food sources and their nectar amounts are measured; onlookers select the food sources after the employed bees share their information and the nectar amounts of the foods are determined; and lastly the scout bees are determined and directed onto possible food sources.
Moving to the year 2008, Havens et al. invented roach infestation optimization (RIO) [14], whose main aim is to look into the effect of adapting PSO to the social behavior of cockroaches. In this algorithm cockroaches make an effort to find the darkest place; the fitness of a cockroach is proportional to the darkness of its location. They communicate with each other with a predefined probability, and at a certain point in time the cockroaches become hungry and leave the darkness to search for food. Another population-based metaheuristic algorithm, biogeography based optimization (BBO), was proposed by Simon in 2008 [15]. It is inspired by the geographical distribution of biological organisms and encloses two main operators: migration (which updates each individual by sharing the characteristics of other individuals) and mutation (which helps to enhance diversity and the chances for a good solution). Later, in 2009, Yang and Deb devised the cuckoo search (CS) algorithm [16], in which the brood-parasitic behavior of cuckoo species is simulated along with the Lévy flight action of birds and fruit flies. There are three rules in CS: (i) each cuckoo lays one egg at a time and leaves it in a randomly chosen nest; (ii) nests with high-quality eggs survive; (iii) the number of host nests is constant, and an egg can be noticed by the host bird with a probability pa ∈ [0, 1]. Further, in the area of swarm intelligence, Yang introduced the firefly algorithm (FA) in 2009 [17], inspired by the flashing behavior of fireflies. The two main purposes of such flashes are attracting mating partners and warning against predators; the flashing can be formulated as an objective function that needs to be optimized. Also in the same year, Rashedi et al. proposed the gravitational search algorithm (GSA) [18]. It is based on the law of gravity, in which agents are considered as objects and their performance is measured by their masses.
Each agent has four properties (determined by the fitness function value): position, inertial mass, active gravitational mass, and passive gravitational mass. Then, in 2010, the bat algorithm (BA) was proposed by Yang [19]. Its main motivation is the echolocation behavior of bats, which can find their prey and differentiate different kinds of insects even in complete darkness.
Further, Rao et al. introduced teaching-learning based optimization (TLBO) in the year 2011 [20]. It is a training-based model in which the population evolves through two phases, i.e. a teacher phase (learning from the teacher) and a learner phase (learning through interaction between learners). Then, in the year 2012, the water cycle algorithm (WCA) was introduced by Eskandar et al. [21]. Its elementary concepts are inspired by nature and rely on the observation of the water cycle process. In WCA, raindrops are considered as the initial population; the best raindrop (best individual) is chosen as the sea, good raindrops are taken as rivers, and the remaining raindrops are chosen as streams, which flow to the rivers and the sea. Again in 2012, Gandomi and Alavi devised an algorithm named krill herd (KH) [22], which is based on the herding behavior of krill. Its three main actions are: movement influenced by the presence of other individuals, foraging activity, and random diffusion, determined by the time-dependent position of the individuals. Later, in 2013, social spider optimization (SSO) was introduced by Cuevas et al. [23], inspired by a simulation of the cooperative behavior of social spiders. In this algorithm, individual spiders (solutions) are simulated according to the biological laws of the cooperative colony. Males and females form two kinds of search agents (spiders) in SSO, and depending on its kind each individual is conducted by different evolutionary operators. Again, spider monkey optimization (SMO) was introduced by Bansal et al. in 2014 [24], inspired by the intelligent foraging behavior of animals with a fission-fusion social structure. It has two control parameters, i.e. the global leader limit and the local leader limit, which help the local and global leaders to take appropriate decisions. In the same year, 2014, Mirjalili et al. proposed the grey wolf optimizer (GWO) [25], which mimics the leadership hierarchy and hunting approach of grey wolves.
It employs four types of grey wolves (alpha, beta, delta, and omega) to simulate the leadership hierarchy and three steps (searching for prey, encircling prey, and attacking prey) for hunting. Later, in 2015, inspired by shallow water wave theory, Zheng devised an algorithm called water wave optimization (WWO) [26], in which three operators are implemented depending upon the phenomena of water flow: propagation (which makes high-fitness waves search small areas while low-fitness waves explore), refraction (which helps waves escape search stagnation), and breaking (which enables an intensive search around a promising area). Then Mirjalili proposed moth-flame optimization (MFO) [27] in the same year, 2015, inspired by the navigation technique (transverse orientation) of moths. In MFO, a mathematical model of the spiral flying path of moths around artificial lights (flames) is developed, in which moths are considered as candidate solutions and the problem's variables are the positions of the moths in the space.
In the year 2016, Mirjalili and Lewis proposed the whale optimization algorithm (WOA) [28]. This metaheuristic is inspired by the social behavior of humpback whales and consists of three steps: encircling prey, the bubble-net attacking method, and searching for prey. Later in 2016, a swarm-intelligence-based technique, the dragonfly algorithm (DA), was proposed by Mirjalili [29]. It is inspired by the static and dynamic swarming behaviours of dragonflies and has two phases, exploration and exploitation, which model how dragonflies search for food, navigate, and avoid enemies in a swarm. Further, in 2017, the grasshopper optimization algorithm (GOA) was modeled mathematically by Saremi et al. [30]; it mimics the behavior of grasshopper swarms and is designed to simulate the repulsion forces (which help to explore the search space) and attraction forces (which help to exploit promising regions) between the grasshoppers. Later, in 2018, Pierezan and Dos Santos Coelho, inspired by the Canis latrans species, introduced a population-based metaheuristic named the coyote optimization algorithm (COA) [31]. It especially focuses on the social structure and exchange of experiences among coyotes and has two parameters: the number of packs and the number of coyotes per pack. In the recent year 2019, search and rescue optimization (SAR) was developed by Shabani et al. [32], inspired by the search and rescue operations of human beings; it consists of two phases: a social phase (which selects the search direction based on the position of a clue) and an individual phase (which searches around the best clue). Then, early in 2020, based on the sense of smell and movement mechanism of bears, Marzbali presented the bear smell search algorithm (BSSA) [33]. It has two mechanisms: an olfactory bulb mechanism connected to the brain, and a mesh mechanism employed to move to the next position.
The primary advantage of these algorithms is their use of the "trial-and-error" principle in searching for solutions. Thus, these algorithms have been successfully applied to solve global optimization problems. Among successful MAs, DE and PSO have been widely recognized for solving complex optimization problems and have received much attention from many researchers [34,35,36,37,38,39]. Therefore, DE and PSO are chosen in the present study. However, some shortcomings of DE and PSO limit their application in complex optimization environments. Therefore, to avoid the shortcomings of these algorithms, many variants and hybridizations have been introduced in the literature.
Although a large number of MAs have been introduced in the literature, as the No-Free-Lunch (NFL) theorem [40] implies, none of them is able to solve every variety of problem. Moreover, a method may produce suitable results for some problems but not for others. Thus, there is a need to introduce effective algorithms that solve a wider range of problems. This is the motivation of this study to present novel variants of DE and PSO along with their hybridization.
Moreover, after an extensive and rigorous literature review of the different variants of DE and PSO and their hybridizations (Section 2, Related work), the following points were analyzed and served as motivation.
-
(i).
In DE, the mutation and crossover strategies with their associated control parameters are utilized to produce the global best solution, which is beneficial for improving the convergence behavior. Therefore, the most appropriate strategies for DE and their associated parameter values are considered a vital research topic.
-
(ii).
The performance of PSO greatly depends on its parameters, such as the acceleration coefficients and the inertia weight, which guide the particles to the optimum and balance diversity respectively. Hence, many researchers have tried to modify the control parameters of PSO to achieve better accuracy and higher speed.
-
(iii).
Hybrid algorithms (combining the advantages of different algorithms) have aroused the interest of researchers due to their effectiveness for complex optimization problems. Since DE and PSO have complementary properties, their hybrids have gained prominence recently. To the best of our knowledge, finding ways to combine DE and PSO is still an open problem.
Inspired and motivated by the above observations and our survey of the literature, the following plans of action (major contributions) have been outlined for solving complex unconstrained optimization problems.
-
(i).
Developed an advanced differential evolution (ADE) in which a novel mutation strategy and crossover probability, along with a slightly changed selection scheme, are introduced.
-
(ii).
Suggested an advanced particle swarm optimization (APSO) which consists of novel gradually varying (decreasing and/or increasing) parameters.
-
(iii).
Designed an advanced hybrid algorithm (AHDEPSO) by hybridizing the advanced DE and PSO based on a multi-population approach.
The rest of this paper is organized as follows: Section 2 reviews the related work on different and hybrid variants of DE and PSO. Section 3 describes the proposed algorithms. The proposed algorithms are verified on a wide set of benchmark functions and real-world engineering applications in Section 4. Section 5 concludes this study with future works.
2 Related work
During the past decade, the development of several powerful MAs to solve high-dimensional optimization problems has become a popular study area. Among successful MAs, DE and PSO have been widely recognized for solving complex optimization problems and have received much attention from many researchers. The basics of the original DE and PSO are presented as follows.
2.1 Differential evolution (DE)
DE is an evolutionary approach proposed by Storn and Price [4]. The key idea behind DE is to use vector differences to perturb the vector population. In a D-dimensional search space it randomly initializes a population of np individuals within the lower and upper boundaries (xl, xu). After initialization, DE is conducted by the three main operations defined as follows.
Mutation: for each target vector (\( {x}_{i,j}^t \)) a mutant vector (\( {v}_{i,j}^t \)) at iteration t is generated as follows:

\( {v}_{i,j}^t={x}_{r_1,j}^t+F\left({x}_{r_2,j}^t-{x}_{r_3,j}^t\right) \)
where r1, r2, r3 ∈ {1, 2,...,np} are randomly chosen integers with r1 ≠ r2 ≠ r3 ≠ i, and F denotes the scaling factor employed to control the amplification of the differential variation.
Crossover: a trial vector \( \left({u}_{i,j}^t\right) \) is produced by combining the target (\( {x}_{i,j}^t \)) and mutant (\( {v}_{i,j}^t \)) vectors as follows:

\( {u}_{i,j}^t=\begin{cases}{v}_{i,j}^t & \text{if } rand\le Cr\\ {x}_{i,j}^t & \text{otherwise}\end{cases} \)
where i ∈ [1, np], j ∈ [1, D], rand ∈ U[0, 1] (a uniformly distributed random number between 0 and 1), and Cr ∈ [0, 1] denotes the crossover rate that controls how many components are inherited from the mutant vector.
Selection: the trial vector \( {u}_{i,j}^t \) replaces the target vector \( {x}_{i,j}^t \) in the next generation if it yields an equal or better objective value; otherwise the target vector is retained. The mutation, crossover, and selection operators are applied repeatedly to produce offspring until a predefined stopping criterion is met.
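The three operations above can be sketched compactly in code. The following is a minimal, illustrative Python implementation of the classic DE/rand/1/bin scheme; the function and parameter names are ours, and the j_rand safeguard (which guarantees at least one component is inherited from the mutant vector) follows Storn and Price's original scheme:

```python
import numpy as np

def de(fobj, xl, xu, np_=30, F=0.5, Cr=0.9, max_iter=200, seed=0):
    """Minimal DE/rand/1/bin sketch; np_, F, Cr follow the notation in the text."""
    rng = np.random.default_rng(seed)
    xl, xu = np.asarray(xl, float), np.asarray(xu, float)
    D = len(xl)
    # initialization: np individuals uniformly within the bounds (xl, xu)
    pop = xl + rng.random((np_, D)) * (xu - xl)
    fit = np.array([fobj(x) for x in pop])
    for _ in range(max_iter):
        for i in range(np_):
            # mutation: three mutually distinct indices, all different from i
            r1, r2, r3 = rng.choice([k for k in range(np_) if k != i], 3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), xl, xu)
            # binomial crossover; j_rand guarantees one mutant component survives
            j_rand = rng.integers(D)
            mask = rng.random(D) <= Cr
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # greedy selection: keep the trial vector if it is at least as good
            fu = fobj(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]
```

For example, calling `de(lambda x: float(np.sum(x**2)), [-5]*5, [5]*5)` minimizes a 5-dimensional sphere function and converges close to the origin.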
2.2 Particle swarm optimization (PSO)
PSO was originally proposed by Eberhart and Kennedy in 1995 [3]. It simulates social and/or group behaviors in animals, insects, and humans. In classical PSO, the swarm flies through the D-dimensional search space to seek the global optimum. Each ith particle of the swarm has its own position xi = (xi,1, xi,2, …, xi,D) and velocity vi = (vi,1, vi,2, …, vi,D). During the evolution, each particle tracks its individual best pbesti = (pbesti,1, pbesti,2, …, pbesti,D) and the global best gbest = (gbest1, gbest2, …, gbestD); the velocity and position of the ith particle are updated as follows at each iteration:
\( {v}_{i,j}^{t+1}=w\,{v}_{i,j}^t+{c}_1{r}_1\left({pbest}_{i,j}-{x}_{i,j}^t\right)+{c}_2{r}_2\left({gbest}_j-{x}_{i,j}^t\right) \)

\( {x}_{i,j}^{t+1}={x}_{i,j}^t+{v}_{i,j}^{t+1} \)

where t is the iteration index, \( {v}_{i,j}^t \) is the velocity of the ith particle in the jth dimension at the tth iteration, c1 is the cognitive acceleration coefficient, c2 is the social acceleration coefficient, r1 and r2 are two uniform random numbers in the range [0, 1], and w is the inertia weight.
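As a companion sketch, the classical PSO update loop described above can be written as follows. The names are illustrative; initializing velocities to zero and clipping positions to the bounds are common choices, not requirements of the algorithm:

```python
import numpy as np

def pso(fobj, xl, xu, n_particles=30, w=0.7, c1=1.5, c2=1.5, max_iter=200, seed=0):
    """Minimal classical PSO sketch; w, c1, c2 follow the notation in the text."""
    rng = np.random.default_rng(seed)
    xl, xu = np.asarray(xl, float), np.asarray(xu, float)
    D = len(xl)
    x = xl + rng.random((n_particles, D)) * (xu - xl)
    v = np.zeros((n_particles, D))  # zero initial velocities (one common choice)
    pbest, pbest_f = x.copy(), np.array([fobj(p) for p in x])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(max_iter):
        r1 = rng.random((n_particles, D))
        r2 = rng.random((n_particles, D))
        # velocity update: inertia term + cognitive term + social term
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, xl, xu)  # position update, clipped to the bounds
        f = np.array([fobj(p) for p in x])
        # track each particle's personal best and the swarm's global best
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

On the same 5-dimensional sphere function used above, this loop also converges close to the origin, though with different convergence behavior than DE.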
Some deficiencies of DE (low local exploitation ability and loss of diversity) and PSO (easily getting stuck in a local optimal region and a low convergence rate) limit their application to typical optimization problems. Therefore, to avoid the shortcomings of these algorithms, many variants and hybridizations have been introduced in the literature. Some recent variants of DE and PSO, as well as their hybrids, are reviewed below and illustrated in Fig. 2.
2.3 DE variants
DE shows remarkable performance and has become a powerful optimizer in the field of real-world problems. However, it has a few issues, such as its convergence rate and local exploitation ability. In order to overcome these shortcomings, many robust and effective DE variants have been designed in the literature. A detailed survey of DE variants can be found in [34, 35]. Moreover, a brief survey of significant DE variants is summarized as follows.
As DE is effective in solving difficult search problems, Joshi and Sanderson in 1997 [41] applied the DE approach to solve the minimal representation problem in multisensor fusion. In 1998, Cheng and Hwang developed the DE algorithm (DEA) [42], which represents the continuous parameters by floating-point numbers rather than by binary bit-strings; it was applied to the design of an optimal PID controller. Later, in 1999, Lee et al. proposed a modified DE (MDE) [43], which employs a local search to improve computational efficiency as well as modified heuristic constraints to lessen the search space size; it was applied to the continuous methyl methacrylate-vinyl acetate (MMA-VA) copolymerization reactor problem. Then, in 2000, Kyprianou et al. [44] employed DE to identify the optimal parameter values of a highly nonlinear dynamic system, the Freudenberg hydromount model. Further, Ruzek and Kvasnicka pointed out the practical applicability of DE to the problem of the kinematic location of the earthquake hypocenter in 2001 [45]; it was found that the sensitivity of DE essentially retains its favorable properties over most of the admissible range. Then Chen et al. proposed an improved differential evolution (IDEP) [46] in 2002. It employs a flip operation (to adjust prior-knowledge-violating networks), and Levenberg-Marquardt descent and a random perturbation strategy are adopted to speed up the convergence of DE and prevent it from being locally trapped; it was applied to modeling chemical curves with an increasing monotonicity constraint in network training. Later, in 2003, Ilonen et al. [47] used DE to train feedforward multilayer perceptron neural networks toward their optima, because DE imposes no major restrictions on the error function or on the regularization methods. Proceeding to 2004, Kapadi and Gudi [48] analysed the computational aspects of DE with an augmented Lagrangian including the dynamic penalty method.
It was then applied to fed-batch fermentation processes involving multiple feeds, as it provides a path to obtain a feasible optimal environment in the fermentation broth and avoid inhibition. In 2005, Rane et al. [49] worked on the process of recrystallization using cellular automata (CA), where DE is employed to search for the value of the nucleation rate, providing an acceptable match between the theoretical and experimentally observed values of the recrystallized fraction. Further, Babu and Angira in 2006 developed a modified DE (MDE) [50]. It uses a single array, as compared to two arrays in traditional DE, which reduces the memory and computational effort; it was applied to solve non-linear chemical engineering problems. The next year, in 2007, Chang et al. proposed a robust searching hybrid DE (RSHDE) [51]. It comprises two schemes, a multi-direction search and a search space reduction scheme, to enhance the search ability in the initial stages. RSHDE is used to solve the capacitor placement problem in distribution systems.
In 2008, Noman and Iba [52] investigated the potential of DE for solving economic load dispatch (ELD) problems in power systems, where DE improves the satisfaction of the power balance constraint and other boundary constraints by using a reflection mechanism. Further, Das and Konar presented automatic fuzzy clustering DE (AFDE) in 2009 [53]. It incorporates two novel parameter tuning strategies to escape from stagnation and/or premature convergence, and it was applied to the fuzzy clustering task in the intensity space of an image. Later, in the year 2010, Amjady and Sharifzadeh introduced a modified DE (MDE) [54]. This framework features a new mutation operator and a selection mechanism inspired by GA, PSO, and simulated annealing (SA); it was applied to the non-convex economic dispatch problem considering the valve loading effect. Then, in 2011, Uyar et al. [55] proposed a novel way of employing DE for the short-term electrical power generation scheduling problem. This problem is divided into two subproblems, where DE is applied with binary decision variables in a way that lowers the cost of scheduling power generators while satisfying some operational constraints. Proceeding to the year 2012, Santos et al. [56] designed a new chaotic DE optimization approach based on the Ikeda map (CDEK) to tune the crossover rate and mutation factor; it was applied to the identification of a thermal process. Later, in 2013, Tsai et al. presented an improved DE algorithm (IDEA) [57] based on cost and time models for the cloud computing environment. It combines the Taguchi method (to exploit better individuals in the micro space) and DE (which has a powerful global exploration capability); IDEA is used to optimize task scheduling and resource allocation in a cloud computing environment. Further, Baskan and Ceylan proposed a modified DE (MODE) [58] in 2014.
It develops a mutation strategy consideration rate (MSCR) and a local search operator that enhance the convergence rate of DE without it being trapped in a bad local optimum; this algorithm deals with determining the optimal link capacity expansions for a given road network. Guo and Yang [59] developed a DE utilizing an eigenvector-based crossover operator in 2015. It introduces a rotationally invariant crossover (based on the eigenvectors of the covariance matrix) and a new parameter P (the eigenvector ratio) to control the ratio between the binomial and eigenvector-based crossover and to preserve the population diversity, respectively; it was applied to real-world optimization problems. Ayala et al. proposed beta DE (BDE) in 2015 [60]. It applies the beta probability distribution to tune the F and CR parameters, as it is flexible for modelling data; BDE is employed to select the thresholds for segmenting images. Further, in 2015, Chen et al. [61] employed DE in a human detection approach based on the histograms of oriented gradients (HOG) feature, termed HOG-SVM-DE. Here DE is applied instead of scanning the detection windows in a sliding fashion, so as to achieve fast and accurate detection.
Further, in 2016, Do et al. devised a modified DE (mDE) [62]. In this framework, best-individual-based mutation and elitist selection techniques are combined with a modified scale and crossover factor to escalate the exploitation ability and/or convergence speed of DE; it was applied to the form-finding of tensegrity structures. In the same year, Sethanan and Pitakaso proposed modified DE algorithms [63], in which two additional steps, a reincarnation and a survival process, are included to improve the solution quality; this algorithm is used to determine the routes for raw milk collection for a dairy factory. Then, Basu developed quasi-oppositional DE (QODE) [64] in 2016. It adds quasi-oppositional based learning (QOBL) for population initialization and also for generation jumping; QODE is used to solve the reactive power dispatch problem of a power system. Proceeding to 2017, Vivekanandan and Iyengar proposed a modified DE [65] in which the mutation strategy DE/rand/2-wt/exp is employed; modified-DE-based feature selection is adapted to perform feature selection for cardiovascular disease. Further, Suresh and Lal proposed a modified DE (MDE) [66] in 2017; it uses DE for the exploration phase and cuckoo search (CS) for the exploitation phase, which ensures an increased convergence rate and avoids premature convergence. MDE is employed for enhancing the contrast and brightness of satellite images. Later, Sakr et al. proposed a modified DE algorithm (MDEA) [67] in 2017; it comprises a self-adaptive scaling factor that dynamically adapts the global and local searches to eliminate local optima trapping, and it was implemented for solving the optimal reactive power management (ORPM) problem.
Then, in 2018, Qiu et al. designed minimax DE (MMDE) [68], in which a novel bottom-boosting mechanism is introduced to maintain reliability and identify promising solutions. It also applies a partial-regeneration strategy and the mutation operator DE/current/1 to provide in-depth exploration of the solution space; it was applied to the robust optimal design of a two-link robotic manipulator. Continuing in the same year, Yuzgec and Eser proposed another DE variant named chaotic based DE (CDE) [69]. To maintain diversity in the initial population, it includes four chaotic systems, namely the Lorenz, Rossler, Chua, and Mackey-Glass functions, for selecting candidates from the population in the mutation and crossover operations; it is used for the optimization of the baker's yeast drying process. Again, in 2018, Buba and Lee [70] proposed a DE approach to optimize the urban transit network design problem (UTNDP). Identical point mutation and uniform route crossover with 0/1 crossover masks are used in this algorithm to increase the diversity of noisy random vectors. Early in 2019, a new DE variant, direction averaged DE (daDE), was proposed by Yang et al. [71]. It creates a modified mutation rule which utilizes the information of the current and the former individuals together; daDE is employed to solve quantum state and gate preparation problems. Further, in 2019, Awad et al. proposed a new DE algorithm named DEa-AR [72]. It uses arithmetic recombination crossover and a scaling factor based on the Laplace distribution; additionally, an archive strategy is incorporated to use inferior individuals' information to find new good solutions. DEa-AR is propounded to solve stochastic optimal power flow (OARPD) problems. Later, in 2019, DE with a biologically based mutation operator (DEHeO) was proposed by Prabha and Yadav [73].
In this algorithm, a mutation operator (a hemostatic operator influenced by the biological phenomenon of hemostasis) is introduced, which gives promising solutions and helps to enhance diversity in the earlier stages, thereby avoiding stagnation during the later stages; it is applied to real-world optimization problems. Recently, in 2020, Li et al. developed an enhanced adaptive DE (EJADE) [74]. In EJADE, a sorting mechanism is evolved to rationally assign CR values to each individual according to their fitness values. Moreover, a dynamic population reduction strategy is employed to speed up the convergence rate and maintain diversity; it is applied to photovoltaic systems optimization. Further, Hu et al. proposed Boltzmann annealing DE (BADE) [75] in 2020. It introduces an annealing strategy into the DE algorithm that allows exploring more of the search space. Additionally, different strategies are employed at different stages of annealing (high and low temperature) to accelerate the convergence; it is employed to optimize the inversion problem in directional resistivity logging-while-drilling (DRLWD) measurements.
2.4 PSO variants
PSO has attracted attention for solving many complex optimization problems due to its efficient search ability and simplicity. However, its main drawback is that it may easily get stuck in a locally optimal region. Therefore, accelerating the convergence speed and avoiding local optima are the two critical issues in PSO. To overcome these issues, various modifications of PSO have been proposed in the literature. A comprehensive survey of PSO variants can be found in [36, 37]. Brief reviews of some noteworthy PSO variants are summarized as follows.
Zhenya et al. proposed a modified version of PSO in 1998 [76], where each particle uses the current best performance of its neighbours to replace its own previous best. Also, to accelerate the search procedure, a non-accumulative rate of change replaces the accumulative one. It is applied to train a fuzzy neural network. Then, in 1999, Eberhart and Hu used PSO to analyze human tremor (essential tremor and Parkinson's disease) [77]; PSO is used to evolve neural network weights and, indirectly, the network structure. Further, Naka and Fukuyama proposed hybrid PSO (HPSO) in 2000 [78]. It replaces agent positions with low evaluation values by those with high evaluation values using the selection procedure of evolutionary algorithms. Moreover, HPSO can estimate load and distributed generation output values at each node considering the nonlinear characteristics of distribution systems. Abido developed PSO-based power system stabilizers (PSOPSS) in 2001 [79]. It incorporates an annealing procedure to perform uniform search in the initial stage and local search in the later stage. Additionally, a feasibility-check procedure is imposed to prevent particles from leaving the feasible search space. It is used to search for optimal settings of PSS parameters. Later, in 2002, Al-kazemi and Mohan developed multi-phase PSO (MPPSO) [80]. It evolves multiple groups of particles that change the direction of search in different phases, which helps to explore the search space, enhance population diversity, and prevent premature convergence. MPPSO is employed for training multilayer feedforward neural networks (MFNN). Further, Gaing proposed binary PSO (BPSO) in 2003 [81]. In BPSO, the trajectories are changes in the probability that a coordinate takes a binary value (0 or 1). It is combined with the lambda-iteration method for solving unit commitment (unit scheduling and economic dispatch (ED)) problems.
In 2004 [82], Pang et al. developed a modified PSO, where the position and velocity of the particles are represented by fuzzy matrices and the operators of classical PSO are redefined accordingly. It is used to solve the traveling salesman problem (TSP). Then, in 2005, Esmin et al. introduced hybrid PSO with mutation (HPSOM) [83]. It incorporates the mutation process of GA into PSO, which allows the search to escape from local optima and explore different zones of the search space. HPSOM is applied to the power loss reduction problem. Meissner et al. proposed optimized PSO (OPSO) [84] in 2006. It comprises subswarms and a superswarm: the subswarms find a solution to the given optimization problem, while the superswarm optimizes their parameters. The algorithm is applied to neural network training. Later, in 2007, He and Wang introduced co-evolutionary PSO (CPSO) [85]. It consists of multiple swarms (for searching good solutions) and a single swarm (for evolving suitable penalty factors). Furthermore, CPSO is implemented in parallel and applied to the welded beam design, tension/compression spring design and pressure vessel design problems. Zhang et al. proposed improved PSO (IPSO) in 2008 [86]; it combines PSO with two-point crossover (to redefine the model of the original PSO) and shift mutation operators (to search the neighbourhood when a particle stagnates). Also, a fast matrix-based fitness computation method is devised to improve the algorithm's speed. It is used to solve the large-scale flow shop scheduling problem. Further, in 2009 [87], Meneses et al. presented PSO with random keys (PSORK), in which the position vector information is decoded by random keys (RK) so that positions need not be rounded or truncated. It is employed to solve the nuclear reactor reloading problem. In 2010, Azadani et al. proposed constrained PSO (CPSO) [88].
It initializes and updates the particles under a uniform distribution for faster convergence, and it is applied to the multi-product and multi-area electricity market dispatch problem. In 2011, Kang and He proposed a novel discrete PSO (DPSO) [89]. It utilizes the characteristics of discrete variables to update positions. Moreover, a variable neighbourhood descent algorithm and migration mechanisms are embedded in DPSO to speed up convergence and maintain diversity. It is employed to solve meta-task assignment in heterogeneous computing systems. Later, in 2012, Kar et al. proposed craziness-based PSO (CRPSO) [90]. It introduces a craziness operator that gives each particle a predefined craziness probability in order to maintain diversity. CRPSO is designed to solve the digital finite impulse response (FIR) band-stop filter design problem. In 2013, Lim and Isa developed two-layer PSO with intelligent division of labor (TLPSO-IDL) [91]. It performs the evolutions sequentially on the current and memory swarms. A new learning mechanism is proposed for the current swarm to improve its exploration, while an intelligent division of labor (IDL) module lets the memory swarm evolve adaptively by allocating different tasks to each swarm member. Additionally, an elitist-based perturbation (EBP) module is included to prevent stagnation in local optima. It is applied to the gear train design problem. Then, in 2014, Zhang et al. proposed a novel parameter mechanism for classical PSO based on particle positions [92]. It uses the concepts of overshoot and peak time of a transition process, which provide a new way to analyze particle trajectories. The algorithm is applied to the antenna array pattern synthesis problem. Basu developed modified PSO (MPSO) in 2015 [93].
Gaussian random variables are introduced in the velocity term to improve search efficiency and obtain the global optimum without impairing the convergence speed or the structure of PSO. MPSO is applied to solve non-convex economic dispatch problems.
Then, in 2016, Eddaly et al. [94] proposed hybrid combinatorial PSO (HCPSO). It appends an iterative local search algorithm based on probabilistic perturbation to PSO to enhance solution quality, and it is used for solving the flowshop scheduling problem. In the same year, Zhang et al. [95] proposed adaptive inertia weight-chaos PSO (AIW-CPSO). It introduces an adaptive inertia weight to enhance the local optimization ability, and a logical self-mapping chaotic search is carried out to help PSO jump out of local optima. It is employed for extracting the features of Brillouin scattering spectra. Also in 2016, Ngo et al. proposed extraordinariness PSO (EPSO) [96]. It contains an extraordinary-motion movement strategy in which particles can move toward a target that may be the global best, a local best, or even the worst individual. EPSO is applied to engineering design problems. Later, in 2017, Li et al. developed partitioned and cooperative quantum-behaved PSO (SCQPSO) [97]. It introduces auxiliary swarms and a partitioned search space to enhance population diversity, and cooperative theory is used to improve the global search ability of particles. It is applied to the medical image segmentation problem. In the same year, Phung et al. proposed discrete PSO (DPSO) [98]. It includes three techniques to improve accuracy: deterministic initialization, random mutation (to avoid collapse and keep the balance between exploration and exploitation), and edge exchange (to compare each valid combination of the edge-swapping mechanism). The algorithm solves the inspection path planning (IPP) problem. Qin et al. developed improved orthogonal design PSO (IODPSO) in 2017 [99]. It employs the tent chaotic map for acceleration coefficient adaptation to improve global search capability.
Further, IODPSO is used to solve single-area and multi-area economic load dispatch problems.
Later, in 2018, a direction-aware PSO algorithm with sensitive swarm leader (DAPSO-SSL) was proposed by Mishra et al. [100]. It endows the swarm leader and individual particles with basic human qualities such as awareness, maturity, relationship and leadership, so that particles can utilize previous knowledge of the best and current fittest individuals. DAPSO-SSL is applied to the community detection problem in big-data networks. Also in 2018, Li et al. proposed stochastic gradient PSO (SGPSO) [101]. It combines stochastic gradients with the randomness of particle swarm search to overcome the premature convergence and poor accuracy of standard PSO. It is used to generate feasible entry-trajectory plans for hypersonic glide vehicles. Then, Tian and Shi developed modified PSO (MPSO) in 2018 [102]. It utilizes a logistic map to distribute particles uniformly and thereby improve the initial population quality. A sigmoid-like inertia weight and wavelet mutation are used to achieve better swarm diversity. Additionally, an auxiliary velocity-position update mechanism is applied to the global best particle to guarantee convergence. It is applied to the image segmentation problem. In 2019, Parouha proposed modified time-varying PSO (MTVPSO) [103]. It introduces a linearly decreasing inertia weight and novel acceleration coefficients, which improve the global search capability and population diversity. MTVPSO is applied to nonconvex/nonsmooth economic load dispatch problems. Further, Hosseini et al. [104] proposed hunter-attack fractional-order PSO (HAFPSO) in 2019, where fractional-order derivatives and a hunter-attack strategy are used to accelerate convergence and avoid stagnation, respectively. It is applied to the optimal power amplifier design problem. Early in 2019, Dash and Patra developed mutation-based self-regulating and self-perception PSO (MSRSP-PSO) [105].
It incorporates self-regulation and self-perception behaviour of the global particle and dynamic adaptation in learning; a mutation operator is also applied to the global particle. The algorithm is applied to tracking single as well as multiple objects. Further, non-inertial opposition-based PSO (NOPSO) [106] was proposed by Lanlan et al. in 2020. It has a non-inertial velocity update formula, an opposition-based learning strategy (to accelerate convergence), and an adaptive elite mutation strategy (to avoid trapping in local optima). NOPSO is applied to a deep learning problem. In the same year, novel multi-swarm PSO (NMSPSO) [107] was suggested by Xiong et al. It develops three schemes: a novel information exchange strategy (for information transfer between sub-swarms), a novel learning strategy (to speed up convergence) and a novel mutation strategy (for better exploration). It is used to solve real-world application problems. Then, motion-encoded PSO (MPSO) [108] was proposed early in 2020 by Phung and Ha. The motion-encoded approach preserves important qualities of the swarm, including cognitive and social coherence, so as to obtain better solutions. It is used to solve the problem of optimal search for a moving target using UAVs.
2.5 DE and PSO hybrid variants
Hybridization is one of the main research directions for improving the performance of a single algorithm. Different optimization algorithms have different search behaviors and advantages. To overcome individual shortcomings, such as premature convergence or getting stuck at local optima, hybrid techniques are now favored over individual algorithms. Therefore, in order to enhance the performance of DE and PSO, many hybrid algorithms have been presented in the literature. A systematic survey of hybrid variants of DE and PSO can be found in [38, 39]. Brief reviews of some notable DE and PSO hybrids are summarized as follows.
Hendtlass in 2001 proposed SDEA [109], where each individual follows traditional PSO and, during the search, individuals periodically move from poorer regions to better ones with the help of DE. It is applied to unconstrained global optimization problems. Further, in 2003, Zhang and Xie devised a hybrid algorithm, DEPSO [110]. It has a bell-shaped DE mutation to control population diversity while retaining the self-organized particle swarm dynamics, and it is tested on unconstrained and constrained optimization problems. Then, Talbi and Batouche developed DEPSO [111] in 2004. This framework follows an iterative scheme in which DE is employed on even iterations (to enhance diversity) and PSO on odd iterations (to extricate the swarm from unwanted fluctuations). It is applied to a multimodal image problem. Another hybridization of PSO and DE (DEPSO) was delivered by Hao et al. in 2005 [112], in which the particle position is updated partially through DE (extracting its differential information) and partially by PSO (extracting its memory data), which maintains swarm diversity and enhances local and global search ability. It is applied to unconstrained global optimization problems. Later, in 2008, Niu and Li utilized a parallel mechanism and proposed PSODE [113], in which one population's individuals are upgraded by PSO while the other's evolve by DE; the interaction of the two populations maintains diversity. Its effectiveness is verified on unimodal and multimodal problems. In 2009, Wang and Cai proposed a hybrid multi-swarm particle swarm optimization (HMPSO) [114]; this hybrid model splits the swarm into several sub-swarms, uses PSO as the search engine for each sub-swarm, and improves the personal best of each particle by applying DE. It is applied to constrained optimization problems. Caponio et al.
developed a super-fit memetic differential evolution (SFMDE) [115] in 2009. This framework synergizes DE with PSO (for the super-fit individual) and uses the Nelder-Mead and Rosenbrock algorithms as local searchers to measure the quality of the super-fit individual relative to the others. It is employed for the optimal control design of a direct current (DC) motor drive and the design of a digital filter for image processing. Further, Liu et al. incorporated DE (for its strong search ability) into PSO (to overcome stagnation and speed up convergence) and proposed PSO-DE [116] in 2010. It is used to solve the welded beam design, tension/compression spring design, pressure vessel design, speed reducer design and three-bar truss design problems. Xin et al. proposed DEPSO [117] in 2010. This hybrid model adopts a statistical learning strategy for each individual, which adapts the evolution method according to the relative success ratio of the alternative methods (DE and PSO). It is applied to global numerical optimization problems. In 2011, Pant et al. developed DE-PSO [118]. It utilizes the strengths of both algorithms and executes in alternating phases: the search is initiated by DE, and if the trial vector is better than the corresponding point it is added to the population; otherwise the algorithm enters the PSO phase to generate a candidate solution. DE-PSO is applied to unconstrained global optimization problems. Epitropakis et al. in 2012 [119] framed a hybrid algorithm in which, after each evolution step of PSO, the social and cognitive experience is evolved with DE, which helps to enhance convergence. This hybrid is employed to solve multimodal function problems. Later, in 2013, Nwankwor et al. [120] developed hybrid particle swarm differential evolution (HPSODE). It starts with DE until the trial vector is generated; otherwise PSO is activated to generate a new candidate solution.
HPSODE is applied to optimal well placement problems.
Then, Sahu et al. [121] in 2014 utilized the advantages of DE (diversity maintenance) and PSO (memory mechanism) and proposed DEPSO. This hybrid algorithm is used to optimize the gains of fuzzy PID controllers employed in the control areas. Further, in 2014, HPSO-DE was introduced by Yu et al. [122]. It has an adaptive mutation to move the current population away from local optima and to balance diversity efficiently. The hybrid is used to solve unconstrained global optimization problems. Later, in 2015, Seyedmahmoudian et al. [123] proposed DEPSO, employing DE to add diversity to traditional PSO; the detrimental effects of random coefficients are reduced by running DE in parallel with PSO. It is a reliable, system-independent technique to track the MPP of a PV system under partial shading conditions. Parouha and Das proposed DPD [124] in 2015. It is based on a tri-population scheme (inferior, mid and superior groups), in which DE is executed in the inferior and superior groups while PSO is employed in the mid group. Moreover, elitism and a non-redundant search concept are included in the DPD cycle to maintain diversity and escape local optima effectively. This hybrid is investigated on engineering design problems. In 2016, Tang et al. [125] proposed a hybrid of DE and PSO, namely HNTVPSORBSADE, where a nonlinear time-varying PSO (to update the velocities and positions of particles) and a ranking-based self-adaptive DE (to avoid stagnation) are introduced, which balance exploration and exploitation dynamically. This framework is used for mobile robot global path planning. Further, memory-based DE (MBDE) was proposed by Parouha and Das [126] in 2016. It employs two operators, swarm mutation and swarm crossover (based on the concept of PSO), to direct knowledge in DE and improve solution quality. MBDE is applied to continuous optimization problems.
Then, DE-PSO-DE was proposed in 2016 by Parouha and Das [127], in which the population is divided into three groups (A, B and C) executed in parallel. Additionally, elitism (to retain the best obtained values) and non-redundant search (to improve solution quality) are built into DE-PSO-DE. It is employed for solving economic load dispatch problems. A year later, in 2017, Famelis et al. devised DE-PSO [128], where, to enhance diversity, DE is merged with the velocity-update rule of PSO. It is applied to multimodal optimization problems. Mao et al. in 2018 designed DEMPSO [129], in which DE is applied first to reduce the search space and the resulting population is then used as the initial population of a modified PSO (MPSO) to speed up the convergence rate. DEMPSO is used to compute the numerical solution of the forward kinematics of a 3-RPS parallel manipulator. Tang et al. also proposed SAPSO-mSADE [130] in 2018. It integrates a self-adaptive PSO (SAPSO) to balance the global and local search ability of particles and a modified self-adaptive DE (mSADE) to evolve the personal best positions and reduce potential stagnation. It is applied to the tension/compression spring design and three-bar truss design problems. Early in 2019, Too et al. developed binary particle swarm optimization differential evolution (BPSODE) [131]. It inherits the strengths of binary PSO (BPSO) and binary DE (BDE), executed in sequence. Additionally, a dynamic inertia weight and a dynamic crossover rate are introduced to track the optimal solution and balance diversity. This hybrid is used to tackle feature selection in EMG signal classification. Recently, in 2020, Dash et al. proposed HDEPSO [132]. In HDEPSO, three DE operations (mutation, modified crossover and selection) are fused with the best particles of PSO to enhance global search ability. It is developed to improve the effectiveness of the sharp-edge FIR filter (SEFIRF) design problem.
Since the diversity of quantum PSO (QPSO) declines rapidly, which reveals its inadequacy, Zhao et al. developed an improved QPSO [133] in 2020, in which DE is incorporated to improve diversity and convergence rate. It is employed to solve the economic environmental dispatch (EED) problem of a microgrid.
3 Proposed methodology
In this section, the proposed advanced differential evolution (ADE), advanced particle swarm optimization (APSO) and advanced hybrid DEPSO (AHDEPSO) are described in detail.
3.1 Advanced differential evolution (ADE)
As per earlier extensive studies, the performance of DE mostly depends on the following features:
- (i). mutation strategy: it improves the local search ability and convergence rate.
- (ii). crossover operator: it increases the population diversity.
- (iii). control parameters (F and Cr): they improve the search, i.e., the exploration and exploitation ability.
Moreover, the above-mentioned features are essential for the efficiency of DE because they determine the cooperation mechanism among different individuals. Thus, new DE mutation strategies and crossover operators, as well as adjusted control parameters, will surely help to improve its robustness. Motivated by the above observations, and to avoid these shortcomings, an advanced DE (ADE) is proposed in this paper, in which a modified mutation strategy and crossover rate, as well as an altered selection scheme, are introduced as follows.
where \( {x}_{i,j}^t \): target vector, \( {v}_{i,j}^t \): mutant vector, rand(0, 1): uniformly distributed random number between 0 and 1, bestj: best vector, and F: scaling factor, given as follows.
When F is composed of a series of higher values it benefits the global search, and when F is composed of a series of lower values it benefits the local search.
In Eq. (7), Cr is set as \( 2-\exp\left(\left(\frac{t_{max}-t}{t_{max}-0.6t}\right)^{10}\right) \). It assures individual diversity in the early stage, which improves the global search ability, and later reduces the degree of difference among individuals, which accelerates the convergence rate.
where f(·): fitness function value and p: random value in (0, 1]. In this selection, each pioneer vector gets a chance to survive and shares its observed information with other vectors in the next steps, which enriches the search capabilities. Moreover, it helps to stabilize the essential exploration and exploitation trends, encouraging ADE to converge to better-quality solutions. The pseudocode of the proposed ADE is presented below.
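The ADE cycle described above can be illustrated with a minimal sketch. Only the crossover rate Cr follows the formula given in the text; the best-guided mutation rule, the linearly decreasing F schedule and the 5% survival probability for worse trials below are hypothetical placeholders (the paper's exact equations appear in its figures, not in this extract), so `ade_sketch` should be read as a structural illustration rather than the authors' implementation.

```python
import numpy as np

def ade_sketch(fobj, bounds, pop_size=30, t_max=500, seed=0):
    """Illustrative sketch of the ADE loop of Sect. 3.1 (placeholder
    mutation/F/selection; only Cr follows the formula in the text)."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([fobj(x) for x in pop])
    elite_x, elite_f = pop[np.argmin(fit)].copy(), fit.min()
    for t in range(1, t_max + 1):
        best = pop[np.argmin(fit)]
        # Cr as stated in the text: low early (diversity), near 1 later (convergence)
        cr = 2.0 - np.exp(((t_max - t) / (t_max - 0.6 * t)) ** 10)
        F = 0.9 - 0.5 * t / t_max  # placeholder: larger F early (global), smaller later (local)
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            r1, r2 = rng.choice(idx, 2, replace=False)
            # placeholder best-guided mutation built from the vectors named in the text
            v = pop[i] + rng.random(dim) * (best - pop[i]) + F * (pop[r1] - pop[r2])
            u = np.clip(np.where(rng.random(dim) < cr, v, pop[i]), lo, hi)
            fu = fobj(u)
            # random-natured selection: a worse trial may occasionally survive
            if fu <= fit[i] or rng.random() < 0.05:
                pop[i], fit[i] = u, fu
                if fu < elite_f:
                    elite_x, elite_f = u.copy(), fu
    return elite_x, elite_f
```

An elite solution is tracked separately so that the occasional acceptance of worse trials (the random-natured selection) never loses the best point found.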
3.2 Advanced particle swarm optimization (APSO)
Based on the pros and cons along with existing assessments of PSO, it is essential to strike a good trade-off between global and local search to find an optimal solution. Preferably, PSO needs strong exploration ability (particles can roam the entire search space instead of clustering around the current best solution) in the early phase of the evolution and boosted exploitation capability (particles can search within a local region) in the later phase. In the velocity update equation of PSO, the inertia weight (w) and acceleration coefficients (c1 and c2) are the key factors for satisfying these requirements, based on the following concepts.
- (i). large and small values of w assist exploration and exploitation, respectively.
- (ii). the values of c1 and c2 facilitate exploitation and exploration of the search area based on the following strategies.
c1 (cognitive acceleration coefficient) | c2 (social acceleration coefficient) | tactics
---|---|---
large | small | exploration
slightly large | slightly small | exploitation
slightly small | slightly large | convergence
small | large | jumping out
Considering all of these concerns, namely the advantages, disadvantages and parameter influences of PSO, an advanced particle swarm optimization (APSO) is introduced in this study. It relies on novel gradually varying (decreasing and/or increasing) parameters (w, c1 and c2), stated as follows.
where wi and wf: initial and final values of w; c1i and c1f: initial and final values of c1; c2i and c2f: initial and final values of c2; t and tmax: iteration index and maximum number of iterations. Hence, the velocity and position of the ith particle are updated by the following equations in the proposed APSO.
The pseudocode of the proposed APSO is presented below.
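Since the paper's exact parameter-variation equations appear in its figures rather than in this extract, the sketch below assumes simple linear interpolation between the initial and final values recommended in Sect. 4.1 (wi = 0.4, wf = 0.9, c1: 0.5 to 2.5, c2: 2.5 to 0.5); the function `apso_sketch` and this linear schedule are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def apso_sketch(fobj, bounds, pop_size=30, t_max=500, seed=0):
    """Sketch of APSO (Sect. 3.2) with assumed linear schedules for
    w, c1 and c2 between the recommended initial/final values."""
    rng = np.random.default_rng(seed)
    wi, wf = 0.4, 0.9          # inertia weight: initial -> final
    c1i, c1f = 0.5, 2.5        # cognitive coefficient: initial -> final
    c2i, c2f = 2.5, 0.5        # social coefficient: initial -> final
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = lo + rng.random((pop_size, dim)) * (hi - lo)
    v = np.zeros((pop_size, dim))
    pbest = x.copy()
    pfit = np.array([fobj(p) for p in x])
    g = pbest[np.argmin(pfit)].copy()
    gfit = pfit.min()
    for t in range(1, t_max + 1):
        frac = t / t_max
        w = wi + (wf - wi) * frac      # assumed linear variation
        c1 = c1i + (c1f - c1i) * frac
        c2 = c2i + (c2f - c2i) * frac
        r1 = rng.random((pop_size, dim))
        r2 = rng.random((pop_size, dim))
        # standard PSO velocity/position update with the time-varying parameters
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fit = np.array([fobj(p) for p in x])
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        if pfit.min() < gfit:
            gfit = pfit.min()
            g = pbest[np.argmin(pfit)].copy()
    return g, gfit
```

The personal and global bests are updated greedily, so `gfit` is monotonically non-increasing over the iterations.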
3.3 Advanced hybrid DEPSO (AHDEPSO)
Introductory reviews and results showed that hybrid algorithms improve the performance of DE and PSO because the two have complementary properties. Therefore, an advanced hybrid of the proposed ADE and APSO (AHDEPSO) is proposed to further improve solution quality. Basically, AHDEPSO combines the superior capabilities of the proposed ADE and APSO.
In AHDEPSO, the entire population is sorted according to fitness value and divided into two sub-populations, pop1 (best half) and pop2 (rest half). Since pop1 and pop2 contain the best and remaining halves of the main population, they imply good global and local search capability, respectively. To maintain both capabilities, the proposed ADE (due to its good local search ability) and APSO (due to its good global search capability) are applied to the respective sub-populations (pop1 and pop2). After both sub-populations are evaluated, the better solutions obtained in pop1 (by ADE) and pop2 (by APSO) are named best and gbest, respectively. If best is less than gbest, pop2 is merged with pop1 and the merged population is evaluated by ADE (as it mitigates potential stagnation); otherwise, pop1 is merged with pop2 and the merged population is evaluated by APSO (as it guides better movements). Finally, the optimal solution is reported: if the stopping criterion is met, the algorithm stops; otherwise it returns to the population-sorting step. The whole process continues until the desired optimal solution is obtained. The flowchart of AHDEPSO is demonstrated in Fig. 3 and the pseudocode is described below.
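The sort / split / evolve / compare / merge cycle above can be sketched as follows. The inner `de_step` and `pso_step` are deliberately minimal stand-ins for ADE and APSO (a greedy best-guided DE generation and a pull-toward-gbest move); only the population-management logic follows the description in the text.

```python
import numpy as np

def ahdepso_sketch(fobj, bounds, pop_size=30, t_max=200, seed=0):
    """Structural sketch of the AHDEPSO cycle; the inner ADE/APSO
    updates are simplified stand-ins, not the paper's operators."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)

    def de_step(sub):  # stand-in for one ADE generation (greedy, best-guided)
        out = sub.copy()
        b = sub[np.argmin([fobj(s) for s in sub])]
        for i in range(len(sub)):
            r1, r2 = rng.choice(len(sub), 2, replace=False)
            trial = sub[i] + rng.random() * (b - sub[i]) + 0.5 * (sub[r1] - sub[r2])
            trial = np.clip(trial, lo, hi)
            if fobj(trial) < fobj(sub[i]):
                out[i] = trial
        return out

    def pso_step(sub):  # stand-in for one APSO generation (move toward gbest)
        g = sub[np.argmin([fobj(s) for s in sub])]
        return np.clip(sub + rng.random(sub.shape) * (g - sub), lo, hi)

    for _ in range(t_max):
        pop = pop[np.argsort([fobj(p) for p in pop])]           # sort by fitness
        pop1, pop2 = pop[:pop_size // 2], pop[pop_size // 2:]   # best / rest halves
        pop1, pop2 = de_step(pop1), pso_step(pop2)
        best = min(fobj(p) for p in pop1)
        gbest = min(fobj(p) for p in pop2)
        merged = np.vstack([pop1, pop2])
        # evolve the merged population with whichever half performed better
        pop = de_step(merged) if best < gbest else pso_step(merged)
    fits = [fobj(p) for p in pop]
    return pop[int(np.argmin(fits))], min(fits)
```

Because `de_step` is greedy and `pso_step` leaves the current gbest unchanged, the best fitness in the population never worsens from one cycle to the next in this sketch.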
4 Experimental results and discussions
In this section, the considered optimization problems and their experimental results are discussed.
4.1 Test suite (TS) and real world problems (RWPs)
In order to evaluate the performance of the proposed ADE, APSO and AHDEPSO algorithms, the following unconstrained test suites (TS) and real-world problems (RWPs) are considered:
- (i). TS-1: 23 basic benchmark functions
- (ii). TS-2: IEEE CEC 2017 benchmark functions
- (iii). RWP-1: Gear train design problem
$$ \operatorname{Minimize}\ f(x)={\left\{\frac{1}{6.931}-\frac{T_d{T}_b}{T_a{T}_f}\right\}}^2={\left\{\frac{1}{6.931}-\frac{x_1{x}_2}{x_3{x}_4}\right\}}^2;\mathrm{subject}\ \mathrm{to}:12\le {x}_i\le 60,i=1,2,3,4. $$
(iv).
RWP-2: Frequency modulation sound parameter identification problem
$$ y(t)={a}_1\sin \left({w}_1t\Theta +{a}_2\sin \left({w}_2t\Theta +{a}_3\sin \left({w}_3t\Theta \right)\right)\right) $$
where \( \Theta =\frac{2\pi }{100} \) and −6.4 ≤ ai, wi ≤ 6.35, i = 1, 2, 3. \( \operatorname{Minimize}\ f\left({a}_1,{w}_1,{a}_2,{w}_2,{a}_3,{w}_3\right)={\sum}_{t=0}^{100}{\left(y(t)-{y}_0(t)\right)}^2 \)
- (v). RWP-3: The spread spectrum radar poly-phase code design problem
$$ \operatorname{Minimize}\ f(x)=\operatorname{Max}\left\{{f}_1(X),\dots, {f}_{2m}(X)\right\},X=\left\{\left({x}_1,\dots, {x}_n\right)\in {R}^n\left|0\le {x}_j\le 2\pi \right.,j=1,2,\dots, n\right\}\&m=2n-1 $$
with: \( {f}_{2i-1}(x)={\sum}_{j=1}^n\mathit{\cos}\left({\sum}_{k=\left|2i-j-1\right|+1}^j{x}_k\right)\ i=1,2,\dots, n; \)
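As a concrete example, the RWP-1 (gear train) objective above is straightforward to evaluate; the function name `gear_train` is illustrative, and the sample solution shown is the best-known integer tooth assignment reported in the benchmark literature.

```python
def gear_train(x):
    """RWP-1 objective: squared error between the required gear ratio
    1/6.931 and the ratio Td*Tb/(Ta*Tf) of the four tooth counts x1..x4."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# best-known integer solution reported in the literature
print(gear_train([19, 16, 43, 49]))  # ~ 2.7e-12
```

Even at the optimum the objective is not exactly zero, since no integer tooth counts in [12, 60] reproduce the ratio 1/6.931 exactly.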
The description of TS-1 is listed in Table 1; it consists of three groups: unimodal (f1-f7), multimodal (f8-f13) and fixed-dimension (f14-f23) functions. TS-2 is cited in Table 2; it consists of unimodal (h1-h3), multimodal (h4-h10), hybrid (h11-h20) and composition (h21-h30) functions. The detailed summaries of TS-2 and the three RWPs are given in [134, 135], respectively.
Simulations were conducted on an Intel(R) Core(TM) i5-2350M CPU @ 2.30 GHz, 4.00 GB RAM, Microsoft Windows 10, with C-Free Standard 4.0. An extensive analysis has been carried out to decide the values of the parameters wi, wf, c1i, c1f, c2i and c2f used in the proposed AHDEPSO. For this, the values of (wi, wf), (c1i, c2f) and (c1f, c2i) vary over (0.1-0.9, 0.1-0.9), (0.1-0.9, 0.1-0.9) and (2.1-2.9, 2.1-2.9), respectively, with a step length of 0.1. The success rates (defined below) of a total of 81 combinations of these parameters were checked using the proposed AHDEPSO with a population size of 30, a stopping criterion of 500 iterations and 30 independent runs on test suites TS-1 and TS-2 (30D).
where a run is declared a 'successful run' if |f(x) − f(x∗)| ≤ ε, where f(x) is the known global minimum and f(x∗) is the obtained minimum. In this study, ε is fixed at 0.0001.
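The success-rate criterion above amounts to counting, over the independent runs, how often the obtained minimum falls within ε of the known global minimum; a minimal helper (the name `success_rate` is illustrative) makes this explicit:

```python
def success_rate(run_minima, f_star, eps=1e-4):
    """Percentage of independent runs whose obtained minimum lies within
    eps of the known global minimum f_star (Sect. 4.1 criterion)."""
    hits = sum(1 for f in run_minima if abs(f - f_star) <= eps)
    return 100.0 * hits / len(run_minima)

# three of four runs land within eps = 1e-4 of the optimum 0.0
print(success_rate([0.0, 5e-5, 2e-3, 1e-6], 0.0))  # → 75.0
```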
The success rates of the best 10 combinations of (wi, wf) and (c1i & c2f, c1f & c2i) are presented in Figs. 4 and 5, respectively. From these figures, it is clearly noticeable that the highest success rate is found at (0.4, 0.9) for the inertia weight and (0.5, 2.5) for the acceleration coefficients. Hence, wi = 0.4, wf = 0.9, c1i = 0.5, c1f = 2.5, c2i = 2.5 and c2f = 0.5 are recommended for the proposed AHDEPSO. The overall best values in each table are highlighted in boldface for the corresponding algorithms. In all experiments, for fair comparison, common parameters such as population size, stopping criterion and number of independent runs are set equal to (or the minimum of) those of the comparative algorithms. The results of the comparative algorithms are taken directly from the original references.
4.2 Numerical and graphical analysis
The simulation result analysis on TS-1, TS-2 and RWPs with comparative experiments are presented below.
- i). On TS-1: 23 basic unconstrained benchmark functions
The results produced by the proposed algorithms on TS-1 are compared with traditional algorithms (PSO [3] and DE [4]), DE variants (JADE [136] and SHADE [137]), PSO variants (HEPSO [138] and RPSOLF [139]) and hybrid variants (FAPSO [140] and PSOSCALF [141]). The parameters of all compared and proposed algorithms are listed in Table 3. The comparative experimental results over 30 independent runs, in terms of mean, std. (standard deviation) and ranking of the objective function values, are presented in Table 4.
It should be noted from Table 4 that the mean objective function values of the proposed ADE, APSO and AHDEPSO algorithms are better than or equal to those of the above-listed traditional algorithms, DE variants, PSO variants and hybrid variants. As per the experimental results in Table 4, the comparisons against the non-proposed algorithms are summarized as follows for TS-1. (i) Unimodal functions (f1-f7): the proposed AHDEPSO obtained better results on all seven functions (f1-f7); the suggested ADE and APSO obtained better results for f1, f2, f3 and f6 and marginally similar results for the rest. (ii) Multimodal functions (f8-f13): the proposed AHDEPSO obtained better results for all six functions (f8-f13), and similar results for f8 (vs. DE, JADE and PSOSCALF), f9 (vs. RPSOLF, FAPSO and PSOSCALF) and f11 (vs. RPSOLF and PSOSCALF). The suggested ADE attained better results for f8, f9 and f11 and marginally similar or inferior results for the rest, whereas APSO obtained better results for f9 and slightly inferior results for the rest. (iii) Fixed-dimension functions (f14-f23): the proposed AHDEPSO and ADE exhibit the best performance on all functions, while APSO obtained marginally better or equal results compared to the other algorithms. Moreover, all algorithms are individually ranked ('1' for the best, '2' for the next performer, and so on) in Table 4 based on the mean result values. From this table it is concluded that AHDEPSO, ADE and APSO ranked 1st, 2nd and 3rd, respectively. The average and overall ranks of the proposed algorithms versus the others are also presented in Table 4; these rankings show that the performances of the proposed algorithms are superior. Finally, the proposed ADE, APSO and AHDEPSO produce a smaller std. (often 0.00E+00) for most cases of TS-1, which demonstrates their stability.
Furthermore, the superiority of the proposed algorithms over the other algorithms is statistically validated through a one-tailed t-test (with 98 degrees of freedom (df) at the 5% significance level) and the Wilcoxon Signed Rank (WSR) test (at the 5% significance level). The details of these tests can be found in [142]. The results of the t-test and WSR test on TS-1 are reported in Table 5. From Table 5, it can be seen that the proposed algorithms carry an ‘a’ (significantly better than the other) or ‘a+’ (highly significant with respect to the other) sign in the case of the t-test, and perform better or equally in the case of the WSR test, in most instances. Also, the p values of the proposed algorithms reported in Table 5 are smaller than those of the others, which indicates that the simulations are reliable for the majority of runs.
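The two-sample t-test behind these comparisons can be sketched in pure Python. This is a minimal pooled-variance version under the assumption of equal variances (the run data below are illustrative, not the paper's results):

```python
import math

def pooled_t_statistic(a, b):
    """Two-sample t-statistic with pooled variance.
    df = len(a) + len(b) - 2; when minimizing, a negative t
    favours sample `a` over sample `b`."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Illustrative objective values from repeated runs of two algorithms
proposed = [0.10, 0.12, 0.11, 0.09, 0.10]
other = [0.20, 0.22, 0.19, 0.21, 0.23]
t = pooled_t_statistic(proposed, other)
# |t| is then compared against the one-tailed critical value for the
# chosen df (e.g. t_0.05 ≈ 1.661 at df = 98) to decide significance.
```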
The convergence speed of the proposed and comparative algorithms is compared on 8 typical 30-D TS-1 functions (f1, f5, f6, f7, f8, f9, f10 and f11). The convergence graphs (objective function values vs. iterations) are presented separately in Fig. 6a–h. From these figures it can be concluded that the proposed ADE, APSO and AHDEPSO converge much faster than the other algorithms in all cases. Also, the optimal solutions from a total of 690 runs (30 runs for each TS-1 function with a population size of 30) are illustrated in Fig. 7. It shows that the proposed algorithms score the highest number of optimum solutions. Apart from this, the computational times of the proposed and compared algorithms on each TS-1 function are computed and presented in Fig. 8. From this figure, it can be observed that the proposed algorithms take less time to achieve the best value over the entire TS-1.
As a whole, the above numerical, statistical and graphical analysis shows that the proposed ADE, APSO and AHDEPSO perform very competitively with and/or equally to the other compared algorithms. However, among the three proposed algorithms AHDEPSO is superior, i.e. the ranking order of the proposed algorithms for solving TS-1 is AHDEPSO > ADE > APSO.
ii). On TS-2: IEEE CEC 2017 unconstrained benchmark functions
Further, the results produced by the proposed ADE, APSO and AHDEPSO on TS-2 are compared with DE variants (MPEDE [143] & EFADE [144]), PSO variants (HEPSO [138] & CSPSO [145]) and hybrid variants (HPSODE [120] & PSOJADE [146]). The parameters of all of the above compared and proposed algorithms for TS-2 are listed in Table 6. The comparative experimental results over 30 independent runs, in terms of the mean error, std. (standard deviation) and ranking of the objective function values, are presented in Tables 7, 8 and 9 (for the 10D, 30D & 50D TS-2).
From Tables 7, 8 and 9, it should be noted that the mean values of the proposed ADE, APSO and AHDEPSO algorithms are better than and/or equal to those of all compared algorithms on the test suites. As per the experimental results shown in Table 7, the comparisons are summarized as follows for the 10D TS-2 cases. (i) Unimodal functions (h1-h3): the proposed ADE, APSO and AHDEPSO exhibit the best performance on all three functions (h1-h3), with similar results on h1 (vs. MPEDE, EFADE, HPSODE & PSOJADE), h2 (vs. EFADE) and h3 (vs. MPEDE, EFADE, CSPSO, HPSODE & PSOJADE). (ii) Multimodal functions (h4-h10): the proposed AHDEPSO outperforms on all functions, with equal results on h4 (vs. EFADE & PSOJADE), h5 (vs. HEPSO), h6 (vs. MPEDE, CSPSO, HPSODE & PSOJADE) and h9 (vs. EFADE). The suggested component ADE achieves better results on four functions (h4, h6, h7 and h8), whereas APSO gives the best result on h4; on the other functions they perform similarly or marginally inferior. (iii) Hybrid functions (h11-h20): the proposed AHDEPSO achieves better results on five functions (h13, h14, h17, h19 and h20), similar results on h15 (vs. PSOJADE), and marginally inferior results vs. HEPSO (on h11) and EFADE (on h12, h16 and h20). At the same time, ADE and APSO obtain better results on h20 and marginally similar/inferior results on the rest. (iv) Composition functions (h21-h30): the proposed AHDEPSO exhibits the best performance on six functions (h21, h22, h23, h25, h29 and h30), performs equally with HEPSO (on h29), and is marginally inferior vs. HEPSO (on h26, h27 and h28) and CSPSO (on h24). Meanwhile, ADE and APSO perform marginally better/similar or slightly inferior on a few functions with respect to the other algorithms.
Further, as per the experimental results shown in Table 8, the comparisons are summarized as follows for the 30D TS-2 cases. (i) Unimodal functions (h1-h3): the proposed AHDEPSO exhibits the best performance on all three functions (h1-h3), with similar results vs. EFADE (on h2) and CSPSO (on h3). The suggested ADE shows the best performance on h3 and performs equally or marginally inferior with respect to the other algorithms, whereas APSO performs slightly inferior on all three functions. (ii) Multimodal functions (h4-h10): the proposed AHDEPSO achieves better results on five functions (h5, h6, h7, h8 and h9) and equal results vs. HEPSO (on h4) and PSOJADE (on h6 and h10). The suggested ADE achieves better results on two functions (h6 and h7) and is marginally inferior on the others, whereas APSO achieves similar or marginally inferior results on the other functions. (iii) Hybrid functions (h11-h20): the proposed AHDEPSO achieves better results on six functions (h13, h14, h16, h18, h19 and h20) and is marginally inferior vs. HEPSO (on h11 and h17), PSOJADE (on h12) and EFADE (on h15). At the same time, ADE obtains better results on h20 and APSO is marginally similar/inferior on the rest. (iv) Composition functions (h21-h30): the proposed AHDEPSO achieves the best performance on nine functions (h21, h22, h23, h24, h25, h26, h27, h28 and h29), equal results vs. EFADE, HEPSO, CSPSO and PSOJADE (on h22) and HEPSO (on h23 and h28), and marginally inferior results vs. PSOJADE (on h30). Meanwhile, ADE performs better on three functions (h22, h23 and h28), and APSO achieves a better result on one function (h22) and marginally better/similar or slightly inferior results on a few functions with respect to the other algorithms.
Then, according to the experimental results given in Table 9, the comparisons are summarized as follows for the 50D TS-2 cases. (i) Unimodal functions (h1-h3): the proposed AHDEPSO exhibits the best performance on all three functions (h1-h3), with similar results vs. EFADE (on h2). The suggested ADE achieves the best performance on h3 and gives equal or marginally inferior results with respect to the other algorithms, while APSO performs slightly inferior on all three functions. (ii) Multimodal functions (h4-h10): the proposed AHDEPSO achieves better results on six functions (h4, h6, h7, h8, h9 and h10), equal results vs. PSOJADE (on h6), and marginally inferior results vs. HPSODE (on h5). ADE achieves better results on two functions (h6 and h7), while APSO achieves similar or marginally inferior results on the other functions. (iii) Hybrid functions (h11-h20): the proposed AHDEPSO outperforms on five functions (h16, h17, h18, h19 and h20) and is marginally inferior vs. HEPSO (on h11), PSOJADE (on h12 and h13) and EFADE (on h14 and h15). Meanwhile, ADE obtains better results only on h20, and APSO is marginally similar/inferior on the rest. (iv) Composition functions (h21-h30): the proposed AHDEPSO exhibits the best performance on eight functions (h21, h23, h24, h25, h26, h27, h29 and h30) and is marginally inferior vs. CSPSO (on h22) and EFADE (on h28). Meanwhile, ADE and APSO perform marginally better/similar or slightly inferior on a few functions with respect to the other algorithms.
Moreover, all algorithms are individually ranked in Tables 7, 8 and 9 based on the mean error values. From these tables, it is concluded that AHDEPSO, ADE and APSO rank 1st, 2nd and 5th (for the 10D TS-2) and 1st, 2nd and 3rd (for the 30D & 50D TS-2) respectively. The average and overall ranks of the proposed algorithms vs. the others are also presented in Tables 7, 8 and 9. It is clear from the ranking that the performance of the proposed algorithms is superior to the others. Ultimately, the proposed ADE, APSO and AHDEPSO produce a smaller std. for most of the functions on TS-2, which indicates their stability. Furthermore, the superiority of the proposed ADE, APSO and AHDEPSO over the others is statistically validated through a one-tailed t-test (with 98 degrees of freedom (df) at the 5% significance level) and the WSR test (at the 5% significance level). The results of the t-test and WSR test on TS-2 are reported in Table 10 (10D TS-2), Table 11 (30D TS-2) and Table 12 (50D TS-2). From these tables it can be clearly seen that the proposed algorithms carry an ‘a’ or ‘a+’ sign (in the case of the t-test) and perform better or equally (in the case of the WSR test) in most circumstances. The small p values reported in Tables 10, 11 and 12 likewise indicate reliable results for the majority of runs of the proposed algorithms on TS-2.
The convergence speed of the proposed ADE, APSO and AHDEPSO algorithms relative to the others is analyzed on the 10, 30 & 50D TS-2. For this, one function from each category of TS-2 (h3, h9, h20 & h29) is taken. The convergence graphs for all such functions are presented in Fig. 9a-l for TS-2. From these figures it can be observed that ADE, APSO and AHDEPSO achieve better convergence than the other algorithms. Apart from this, the average error values of the proposed algorithms on the 10, 30 & 50D TS-2 have been analyzed through box plots, presented in Fig. 10a-c. From these figures, it is clearly visible that the average error values of the proposed algorithms are better than those of the others.
Furthermore, to check the performance of the proposed algorithms, the ECD (empirical cumulative distribution) test [142] is applied on the set of 10, 30 & 50D TS-2. The SP/SPbest vs. empirical distribution graph over all TS-2 functions is plotted in Fig. 11a-c. These figures confirm that the proposed algorithms perform better than the other comparative algorithms. Besides, the success rate distributions for the 10, 30 & 50D TS-2 are evaluated and presented in Fig. 12. The graphical representation shows that the success rates of the proposed ADE, APSO and AHDEPSO are competitively higher than those of the others. Moreover, spider charts (where a distribution closer to the circle's center indicates better performance) are employed on the 10, 30 & 50D TS-2 to examine the performance differences among all algorithms more intuitively. These charts can be visualized in Fig. 13a-c. It can be observed from these figures that the proposed algorithms perform competitively on most of the functions. Additionally, the significance level of the proposed vs. the other algorithms for the 10, 30 & 50D TS-2 is evaluated through the Bonferroni–Dunn procedure [147], applied as a post hoc procedure to calculate the critical difference (CD). In this test, solid and dotted lines represent the thresholds for the control algorithm (here AHDEPSO) at the two prevalent significance levels of 0.05 and 0.1. The test indicates that the performance of two algorithms is significantly different if the difference in the average rankings of the methods is larger than the CD. The Bonferroni–Dunn bar chart of the different algorithms on the average rankings (obtained by the Friedman test) for TS-2 is reported in Fig. 14. Here it can be observed that, among all compared algorithms, AHDEPSO significantly outperforms the others at both levels of significance. Also, the average execution times of the proposed algorithms and the others over 30 independent runs for the 10, 30 & 50D TS-2 are reported through box plots in Fig. 15a-c.
From the corresponding figures, it is clearly visible that the average execution time of the proposed algorithms is less than that of the others.
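The critical difference used by the Bonferroni–Dunn procedure follows a standard formula based on the number of algorithms k and the number of benchmark functions N; a minimal sketch is given below. The critical value q_alpha must be read from a Bonferroni–Dunn table for the chosen significance level, so it is passed in as a parameter, and the value used in the example is illustrative:

```python
import math

def critical_difference(k, n_funcs, q_alpha):
    """Bonferroni-Dunn critical difference: two algorithms differ
    significantly if their average Friedman ranks differ by more
    than this value."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n_funcs))

# Example: 9 algorithms compared over 30 functions, with an
# illustrative tabulated critical value q_alpha
cd = critical_difference(k=9, n_funcs=30, q_alpha=2.724)
```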
In general, from all the above result analysis it can be declared that the proposed ADE, APSO and AHDEPSO perform better than and/or equally with the others. However, among the three proposed algorithms, AHDEPSO has the greatest competence.
iii). On RWPs: Real-world problems
The results of the proposed ADE, APSO and AHDEPSO algorithms on the 3 RWPs are compared with traditional algorithms (PSO [3], DE [4] & ABC [13]), DE variants (JADE [136], SaDE [148] & CoDE [149]), PSO variants (SLPSO [150], HCLPSO [151] & MSPSO [152]), and hybrid variants (MBDE [126], DEPSO-2S [135] & DPD [142]). The parameters of all compared and proposed algorithms are listed in Table 13. The comparative experimental results in terms of the best, worst, mean, std. (standard deviation), p values and average ranking of the objective function values are presented in Table 14. From Table 14, the proposed ADE, APSO and AHDEPSO produce better and/or equal results in terms of the best, worst and mean cases on all RWPs.
Additionally, the smaller std. and p values of the proposed algorithms in most cases imply their better stability and reliable results for the majority of runs, respectively. Also, in order to analyze the performance, all algorithms are ranked by their mean values and reported in Table 14. From this table, it can be seen that AHDEPSO and ADE rank 1st on all RWPs, whereas APSO secures 1st, 2nd and 4th rank on RWP-1, RWP-2 and RWP-3 respectively. Hence, based on the ranking results, the proposed algorithms perform better than the others. Moreover, the convergence graphs of all proposed and compared algorithms are plotted in Fig. 16a-c. These figures clearly show that the proposed algorithms converge faster than the others. Hence, the proposed algorithms are computationally efficient.
All in all, from all the above numerical, statistical and graphical result analysis it can be concluded that the proposed ADE, APSO and AHDEPSO perform very competitively with and/or equally to the other compared algorithms. However, among the three proposed algorithms, AHDEPSO has the greatest efficiency.
4.3 Complexity analysis
A brief complexity analysis of the proposed algorithms is given as follows.
i). Algorithm complexity
According to the guidelines of the CEC test suite, the algorithm complexity has been investigated on the 10, 30 & 50D TS-2. First, the time T0 (in seconds) is measured by executing the standard test program.
After this, the time T1 is computed on h18 (a TS-2 function). Further, the time T2 for each algorithm to completely execute the same function over 200,000 evaluations is measured 5 times, and its mean value is denoted as \( {\hat{T}}_2 \). Thereafter, \( \frac{{\hat{T}}_2-{T}_1}{T_0} \) is calculated for each algorithm and reported in Table 15. A complexity chart of the proposed algorithms with the others is also presented in Fig. 17. It can be observed from Table 15 and Fig. 17 that the time complexity of the proposed algorithms is significantly less than that of the other comparative algorithms.
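The T0 measurement in the CEC guidelines times a fixed loop of elementary floating-point operations; a minimal Python sketch follows. The iteration count is a parameter here, and the measured value is machine-dependent:

```python
import math
import time

def measure_t0(iterations=1_000_000):
    """Time the standard CEC complexity-test loop (T0, in seconds):
    a fixed sequence of elementary arithmetic operations repeated
    `iterations` times."""
    start = time.perf_counter()
    for i in range(1, iterations + 1):
        x = 0.55 + i
        x = x + x
        x = x / 2.0
        x = x * x
        x = math.sqrt(x)
        x = math.log(x)
        x = math.exp(x)
        x = x / (x + 2.0)
    return time.perf_counter() - start

t0 = measure_t0()
```

T1 and \( {\hat{T}}_2 \) are measured analogously by timing 200,000 function evaluations alone and the full algorithm run, respectively.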
ii). Time complexity
According to the pseudo-code, AHDEPSO has the following time complexity.
a). Initialization of the np-sized population requires O(np × D) time.
b). Evaluating and sorting the population according to the fitness function values requires O(tmax × np) time.
c). Dividing the population into pop1 and pop2 requires O(tmax × np) time.
d). Evaluating pop1 (by ADE) and pop2 (by APSO) takes O(tmax × \( \frac{np}{2} \) × \( \frac{np}{2} \)) = O(tmax × \( \frac{np^2}{4} \)) time.
e). Merging the populations and implementing the algorithm requires O(tmax × np × np) = O(tmax × \( np^2 \)) time.
Therefore, the total time complexity of AHDEPSO for the maximum number of iterations tmax is \( O\left(\max \left( np\times D,\ {t}_{max}\times np,\ {t}_{max}\times np,\ {t}_{max}\times \frac{np^2}{4},\ {t}_{max}\times {np}^2\right)\right)=O\left({t}_{max}\times {np}^2\right) \).
iii). Space complexity
The space complexity of the proposed AHDEPSO algorithm is the maximum amount of space used by the above algorithm at any point. Thus, the total space complexity of the proposed AHDEPSO algorithm is O(max(np, np, np, \( \frac{np^2}{4} \), \( np^2 \)) × D) = O(\( np^2 \) × D).
5 Conclusion and future work
In this study, a comprehensive survey of various recent traditional algorithms, DE and PSO variants, and their hybrids has been examined along with their fields of application. After this, an advanced DE (ADE, to avoid premature convergence) and PSO (APSO, to avoid stagnation) as well as their hybrid (AHDEPSO, to balance exploration and exploitation) have been proposed for unconstrained optimization problems. A brief summary of these proposed algorithms is given as follows.
(i). The novel mutation strategy, crossover probability and altered selection scheme of ADE provide high and low population diversity at the start and end of the algorithm, respectively.
(ii). The novel gradually varying (decreasing and/or increasing) parameters of APSO balance the exploration and exploitation capabilities and promote particles to search for high-quality solutions.
(iii). AHDEPSO yields guaranteed convergence and diversified solutions due to the different convergence characteristics of ADE and APSO and its multi-population approach, in which the divided population is merged back in a pre-defined manner.
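The divide-and-merge flow of point (iii) can be sketched at a high level as follows. The sub-algorithm steps below are placeholders (a small random perturbation with greedy selection standing in for the actual ADE mutation/crossover and APSO velocity rules described earlier), so this only illustrates the population handling, not the real update equations:

```python
import random

def ahdepso_skeleton(objective, bounds, np_pop=30, t_max=100):
    """High-level flow of the hybrid: split the population, evolve each
    half with a different sub-algorithm, then merge and keep the best."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_pop)]
    for _ in range(t_max):
        pop.sort(key=objective)                      # rank by fitness
        pop1, pop2 = pop[: np_pop // 2], pop[np_pop // 2 :]
        pop1 = ade_step(pop1, objective, bounds)     # placeholder for ADE
        pop2 = apso_step(pop2, objective, bounds)    # placeholder for APSO
        pop = sorted(pop1 + pop2, key=objective)[:np_pop]  # merge halves
    return min(pop, key=objective)

def _perturb(sub_pop, objective, bounds):
    """Placeholder update: small Gaussian move, kept only if it improves
    the objective (greedy selection), clipped to the search bounds."""
    out = []
    for x in sub_pop:
        cand = [min(max(xi + random.gauss(0, 0.1), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        out.append(cand if objective(cand) < objective(x) else x)
    return out

ade_step = apso_step = _perturb

# Minimize the 2-D sphere function on [-5, 5]^2
best = ahdepso_skeleton(lambda v: sum(vi * vi for vi in v), [(-5.0, 5.0)] * 2)
```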
Further, the effectiveness of the proposed algorithms was tested on TS-1 (basic unconstrained benchmark functions) and TS-2 (IEEE CEC 2017 functions) along with 3 RWPs (real-world problems). The numerical, statistical and graphical analyses of the proposed algorithms were compared against traditional DE and PSO, their recent variants and hybrids, and many state-of-the-art algorithms. The comparative results show that the proposed algorithms are robust and effective. Thus, it can be concluded that the proposed algorithms can be treated as a viable alternative in the field of EAs. Moreover, in view of feasibility, superiority and solution optimality, AHDEPSO outperforms the other proposed algorithms. In future, the effectiveness of the proposed algorithms can be tested on more complicated real-world applications, and new EAs will be developed.
References
Simpson AR, Dandy GC, Murphy LJ (1994) Genetic algorithms compared to other techniques for pipe optimization. J Water Resour Plan Manag 20:423–443
Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison- Wesley Publishing Company
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceeding of IEEE international conference on neural networks, pp 1942–1948
Storn R, Price K (1997) Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359
Murase H, Wadano A (1998) Photosynthetic algorithm for machine learning and TSP. IFAC Proceedings 31:19–24
de Castro LN, von Zuben FJ (2000) The clonal selection algorithm with engineering applications. In: Proceedings of the genetic and evolutionary computation conference, Las Vegas, Nevada, USA, pp 36–39
Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68
Eusuff M, Lansey KE (2003) Optimization of water distribution network design using the shuffled frog leaping algorithm. J Water Resour Plan Manag 129(3):210–225
Wedde HF, Farooq M, Zhang Y (2004) BeeHive: an efficient fault-tolerant routing algorithm inspired by honey bee behavior. Springer, Berlin, pp 83–94
Pinto P, Runkler TA, Sousa JM (2005) Wasp swarm optimization of logistic systems, Adaptive and Natural Computing Algorithms, pp 264–267
Du H, Wu X, Zhuang J (2006) Small-world optimization algorithm for function optimization. Advances in Natural Computation, pp 264–273
Mehrabian AR, Lucas C (2006) A novel numerical optimization algorithm inspired from weed colonization. Ecol Inform 1(4):355–366
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony algorithm. J Glob Optim 39(3):459–471
Havens TC, Spain CJ, Salmon NG, Keller JM (2008) Roach infestation optimization. In: Proceedings of the IEEE Swarm Intelligence Symposium, pp 1–7
Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713
Yang XS, Deb S (2009) Cuckoo search via Lévy flights, In: Proceedings of world congress on Nature & Biologically Inspired Computing, Coimbatore, India, pp 210–214
Yang X (2009) Firefly algorithms for multimodal optimization, stochastic algorithms: foundations and applications, vol 5792. Springer, Berlin Heidelberg, pp 169–178
Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) A gravitational search algorithm. Inf Sci 179(13):2232–2248
Yang XS (2010) A new metaheuristic bat-inspired algorithm, In: Proceedings of the fourth international workshop on nature inspired cooperative strategies for optimization (NICSO 2010), Berlin, Heidelberg. 65–74
Rao RV, Savsani VJ, Vakharia DP (2011) Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput Struct 110-111:151–166
Gandomi AH, Alavi AH (2012) Krill herd: a new bio-inspired optimization algorithm. Commun Nonlinear Sci Numer Simul 17(12):4831–4845
Cuevas E, Cienfuegos M, Zaldívar D, Pérez-Cisneros M (2013) A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst Appl 40(16):6374–6384
Bansal JC, Sharma H, Jadon SS, Clerc M (2014) Spider monkey optimization algorithm for numerical optimization. Memetic Computing 6(1):31–47
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Zheng YJ (2015) Water wave optimization: a new nature-inspired metaheuristic. Comput Oper Res 55:1–11
Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete and multi-objective problems. Neural Comput Applic 27(4):1053–1073
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
Pierezan J, Dos Santos Coelho L (2018) Coyote optimization algorithm: a new metaheuristic for global optimization problems, IEEE Congress on Evolutionary Computation, pp 1–8
Shabani A, Asgarian B, Gharebaghi SA, Salido MA, Giret A (2019) A new optimization algorithm based on search and rescue operations. Math Probl Eng 2019:1–23
Marzbali AG (2020) A novel nature-inspired meta-heuristic algorithm for optimization: bear smell search algorithm. Soft Comput:1–33
Neri F, Tirronen V (2010) Recent advances in differential evolution: a survey and experimental analysis. Artif Intell Rev 33(1–2):61–106
Das S, Suganthan PN (2011) Differential evolution: a survey of the state-of-the-art. IEEE Trans Evol Comput 15(1):4–31
Thangaraj R, Pant M, Abraham A, Bouvry P (2011) Particle swarm optimization: hybridization perspectives and experimental illustrations. Appl Math Comput 217(12):5208–5226
Sengupta S, Basak S, Peters RA (2019) Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives. Mach Learn Knowl Extract 1(1):157–191
Das S, Abraham A, Konar A (2008) Particle swarm optimization and differential evolution algorithms: technical analysis, applications and hybridization perspectives. Adv Comput Intell Ind Syst 5:1–38
Xin B, Chen J, Zhang J, Fang H, Peng Z (2012) Hybridizing differential evolution and particle swarm optimization to design powerful optimizers: a review and taxonomy. IEEE Trans Syst Man Cybern 42(5):744–767
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Joshi R, Sanderson AC (1997) Minimal representation multisensor fusion using differential evolution. In: Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA’97. Towards New Computational Principles for Robotics and Automation, pp 266–273
Cheng S, Hwang C (1998) Designing pid controllers with a minimum IAE criterion by a differential evolution algorithm. Chem Eng Commun 170(1):83–115
Lee MH, Han C, Chang KS (1999) Dynamic optimization of a continuous polymer reactor using a modified differential evolution algorithm. Ind Eng Chem Res 38(12):4825–4831
Kyprianou A, Giacomin J, Worden K, Heidrich M, Bocking J (2000) Differential evolution based identification of automotive hydraulic engine mount model parameters. Proc Inst Mech Eng Part D-J Automob Eng 214(3):249–264
Ruzek B, Kvasnicka M (2001) Differential evolution algorithm in the earthquake hypocenter location. Pure Appl Geophys 158(4):667–693
Chen C, Chen D, Cao G (2002) An improved differential evolution algorithm in training and encoding prior knowledge into feedforward networks with application in chemistry. Chemom Intell Lab Syst 64(1):27–43
Ilonen J, Kamarainen J-K, Lampinen J (2003) Differential evolution training algorithm for feed-forward neural networks. Neural Process Lett 17(1):93–105
Kapadi MD, Gudi RD (2004) Optimal control of fed-batch fermentation involving multiple feeds using differential evolution. Process Biochem 39(11):1709–1721
Rane TD, Dewri R, Ghosh S, Chakraborti N, Mitra K (2005) Modeling the recrystallization process using inverse cellular automata and genetic algorithms: studies using differential evolution. J Phase Equilib Diffus 26(4):311–321
Babu BV, Angira R (2006) Modified differential evolution (MDE) for optimization of non-linear chemical processes. Comput Chem Eng 30(6–7):989–1002
Chang CF, Wong JJ, Chiou JP, Su CT (2007) Robust searching hybrid differential evolution method for optimal reactive power planning in large-scale distribution systems. Electr Power Syst Res 77(5–6):430–437
Noman N, Iba H (2008) Differential evolution for economic load dispatch problems. Electr Power Syst Res 78(8):1322–1331
Das S, Konar A (2009) Automatic image pixel clustering with an improved differential evolution. Appl Soft Comput 9(1):226–236
Amjady N, Sharifzadeh H (2010) Solution of non-convex economic dispatch problem considering valve loading effect by a new modified differential evolution algorithm. Int J Electr Power Energy Syst 32(8):893–903
Uyar AS, Turkay B, Keles A (2011) A novel differential evolution application to short-term electrical power generation scheduling. Int J Electr Power Energy Syst 33(6):1236–1242
Dos Santos GS, Luvizotto LGJ, Mariani VC, Coelho L d S (2012) Least squares support vector machines with tuning based on chaotic differential evolution approach applied to the identification of a thermal process. Expert Syst Appl 39(5):4805–4812
Tsai JT, Fang JC, Chou JH (2013) Optimized task scheduling and resource allocation on cloud computing environment using improved differential evolution algorithm. Comput Oper Res 40(12):3045–3055
Baskan O, Ceylan H (2014) Modified differential evolution algorithm for the continuous network design problem. Procedia Soc Behav Sci 111:48–57
Guo SM, Yang CC (2015) Enhancing differential evolution utilizing eigenvector-based crossover operator. IEEE Trans Evol Comput 19(1):31–49
Ayala HVH, dos Santos FM, Mariani VC, Coelho L d S (2015) Image thresholding segmentation based on a novel beta differential evolution approach. Expert Syst Appl 42(4):2136–2142
Chen N, Chen WN, Zhang J (2015) Fast detection of human using differential evolution. Signal Process 110:155–163
Do DTT, Lee S, Lee J (2016) A modified differential evolution algorithm for tensegrity structures. Compos Struct 158:11–19
Sethanan K, Pitakaso R (2016) Differential evolution algorithms for scheduling raw milk transportation. Comput Electron Agric 121:245–259
Basu M (2016) Quasi-oppositional differential evolution for optimal reactive power dispatch. Int J Electr Power Energy Syst 78:29–40
Vivekanandan T, Sriman Narayana Iyengar NC (2017) Optimal feature selection using a modified differential evolution algorithm and its effectiveness for prediction of heart disease. Comput Biol Med 90:125–136
Suresh S, Lal S (2017) Modified differential evolution algorithm for contrast and brightness enhancement of satellite images. Appl Soft Comput 61:622–641
Sakr WS, EL-Sehiemy RA, Azmy AM (2017) Adaptive differential evolution algorithm for efficient reactive power management. Appl Soft Comput 53:336–351
Qiu X, Xu JX, Xu Y, Tan KC (2018) A new differential evolution algorithm for minimax optimization in robust design. IEEE Trans Cybern 48(5):1355–1368
Yuzgec U, Eser M (2018) Chaotic based differential evolution algorithm for optimization of baker’s yeast drying process. Egypt Inform J 19(3):151–163
Buba AT, Lee LS (2018) A differential evolution for simultaneous transit network design and frequency setting problem. Expert Syst Appl 106:277–289
Yang X, Li J, Peng X (2019) An improved differential evolution algorithm for learning high-fidelity quantum controls. Sci Bull 64(19):1402–1408
Awad NH, Ali MZ, Mallipeddi R, Suganthan PN (2019) An efficient Differential Evolution algorithm for stochastic OPF based active-reactive power dispatch problem considering renewable generators. Appl Soft Comput 76:455–458
Prabha S, Yadav R (2019) Differential evolution with biological-based mutation operator. Eng Sci Technol Int J 23(2):253–263
Li S, Gu Q, Gong W, Ning B (2020) An enhanced adaptive differential evolution algorithm for parameter extraction of photovoltaic models. Energy Convers Manag 205:112443
Hu L, Hua W, Lei W, Xiantian Z (2020) A modified Boltzmann annealing differential evolution algorithm for inversion of directional resistivity logging-while-drilling measurements. J Pet Sci Eng 188:106916
Zhenya H, Chengjian W, Luxi Y, Xiqi G, Susu Y, Eberhart RC, Shi Y (1998) Extracting rules from fuzzy neural network by particle swarm optimisation. In: Proceedings of IEEE International Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, pp 74–77
Eberhart RC, Xiaohui H (1999) Human tremor analysis using particle swarm optimization. In: Proceedings of the Congress on Evolutionary Computation-CEC99, pp 1927–1930
Naka S, Genji T, Yura T, Fukuyama Y, Hayashi N (2002) Distribution state estimation considering nonlinear characteristics of practical equipment using hybrid particle swarm optimization. In: Proceedings of International Conference on Power System Technology, PowerCon, pp 1083–1088
Abido AA (2001) Particle swarm optimization for multimachine power system stabilizer design. 2001 Power Engineering Society Summer Meeting, Conference Proceedings, pp 1346–1351
Al-kazemi B, Mohan CK (2002) Training feedforward neural networks using multi-phase particle swarm optimization. In: Proceedings of the 9th International Conference on Neural Information Processing, pp 2615–2619
Gaing ZL (2003) Discrete particle swarm optimization algorithm for unit commitment. IEEE Power Engineering Society General Meeting, pp 418–424
Pang W, Wang K, Zhou C, Dong L (2004) Fuzzy discrete particle swarm optimization for solving traveling salesman problem. In: Proceeding of the Fourth International Conference on Computer and Information Technology
Esmin AAA, Lambert-Torres G, Zambroni de Souza AC (2005) A hybrid particle swarm optimization applied to loss power minimization. IEEE Trans Power Syst 20(2):859–866
Meissner M, Schmuker M, Schneider G (2006) Optimized particle swarm optimization (OPSO) and its application to artificial neural network training. BMC Bioinformatics 7(1):125
He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99
Zhang C, Sun J, Zhu X, Yang Q (2008) An improved particle swarm optimization algorithm for flowshop scheduling problem. Inf Process Lett 108(4):204–209
Meneses AA d M, Machado MD, Schirru R (2009) Particle Swarm Optimization applied to the nuclear reload problem of a Pressurized Water Reactor. Prog Nucl Energy 51(2):319–326
Azadani EN, Hosseinian SH, Moradzadeh B (2010) Generation and reserve dispatch in a competitive market using constrained particle swarm optimization. Int J Electr Power Energy Syst 32(1):79–86
Kang Q, He H (2011) A novel discrete particle swarm optimization algorithm for meta-task assignment in heterogeneous computing systems. Microprocess Microsyst 35(1):10–17
Kar R, Mandal D, Mondal S, Ghoshal SP (2012) Craziness based particle swarm optimization algorithm for FIR band stop filter design. Swarm Evol Comput 7:58–64
Lim WH, Mat Isa NA (2013) Two-layer particle swarm optimization with intelligent division of labor. Eng Appl Artif Intell 26(10):2327–2348
Zhang W, Ma D, Wei JJ, Liang HF (2014) A parameter selection strategy for particle swarm optimization based on particle positions. Expert Syst Appl 41(7):3576–3584
Basu M (2015) Modified particle swarm optimization for nonconvex economic dispatch problems. Int J Electr Power Energy Syst 69:304–312
Eddaly M, Jarboui B, Siarry P (2016) Combinatorial particle swarm optimization for solving blocking flowshop scheduling problem. J Comput Des Eng 3(4):295–311
Zhang Y, Zhao Y, Fu X, Xu J (2016) A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra. Opt Commun 376:56–66
Ngo TT, Sadollah A, Kim JH (2016) A cooperative particle swarm optimizer with stochastic movements for computationally expensive numerical optimization problems. J Comput Sci 13:68–82
Li Y, Bai X, Jiao L, Xue Y (2017) Partitioned-cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Appl Soft Comput 56:345–356
Phung MD, Quach CH, Dinh TH, Ha Q (2017) Enhanced discrete particle swarm optimization path planning for UAV vision-based surface inspection. Autom Constr 81:25–33
Qin Q, Cheng S, Chu X, Lei X, Shi Y (2017) Solving non-convex/non-smooth economic load dispatch problems via an enhanced particle swarm optimization. Appl Soft Comput 59:229–242
Mishra KK, Bisht H, Singh T, Chang V (2018) A direction aware particle swarm optimization with sensitive swarm leader. Big Data Research 14:57–67
Li Z, Hu C, Ding C, Liu G, He B (2018) Stochastic gradient particle swarm optimization based entry trajectory rapid planning for hypersonic glide vehicles. Aerosp Sci Technol 76:176–186
Tian D, Shi Z (2018) MPSO: modified particle swarm optimization and its applications. Swarm Evolut Comput 41:49–68
Parouha RP (2019) Nonconvex/nonsmooth economic load dispatch using modified time-varying particle swarm optimization. Comput Intell 35(4):717–744
Hosseini SA, Hajipour A, Tavakoli H (2019) Design and optimization of a CMOS power amplifier using innovative fractional-order particle swarm optimization. Appl Soft Comput 85:1–10
Dash PP, Patra D (2019) Mutation based self-regulating and self-perception particle swarm optimization for efficient object tracking in a video. Measurement 144:311–327
Lanlan K, Ruey SC, Wenliang C, Yeh C (2020) Non-inertial opposition-based particle swarm optimization and its theoretical analysis for deep learning applications. Appl Soft Comput 88:1–10
Xiong H, Qiu B, Liu J (2020) An improved multi-swarm particle swarm optimizer for optimizing the electric field distribution of multichannel transcranial magnetic stimulation. Artif Intell Med 104:1–14
Phung MD, Ha QP (2020) Motion-encoded particle swarm optimization for moving target search using UAVs. Appl Soft Comput 97:106705
Hendtlass T (2001) A combined swarm differential evolution algorithm for optimization problems. In: Proceedings of the 14th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems. Lect Notes Comput Sci 2070:11–18
Zhang WJ, Xie XF (2003) DEPSO: hybrid particle swarm with differential evolution operator. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Washington DC, USA, pp 3816–3821
Talbi H, Batouche M (2004) Hybrid particle swarm with differential evolution for multimodal image registration. In: Proceedings of the IEEE International Conference on Industrial Technology, vol 3, pp 1567–1573
Hao ZF, Gua G-H, Huang H (2007) A particle swarm optimization algorithm with differential evolution. In: Proceedings of the Sixth International Conference on Machine Learning and Cybernetics, pp 1031–1035
Niu B, Li L (2008) A novel PSO-DE-based hybrid algorithm for global optimization. Lect Notes Comput Sci 5227:156–163
Wang Y, Cai Z (2009) A hybrid multi-swarm particle swarm optimization to solve constrained optimization problems. Front Comput Sci 3:38–52
Caponio A, Neri F, Tirronen V (2009) Superfit control adaption in memetic differential evolution frameworks. Soft Comput 13(8–9):811–831
Liu H, Cai Z, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl Soft Comput 10(2):629–640
Xin B, Chen J, Peng Z, Pan F (2010) An adaptive hybrid optimizer based on particle swarm and differential evolution for global optimization. Sci China Inf Sci 53(5):980–989
Pant M, Thangaraj R, Abraham A (2011) DE-PSO: A new hybrid meta-heuristic for solving global optimization problems. New Math Nat Comput 7(3):363–381
Epitropakis MG, Plagianakos VP, Vrahatis MN (2012) Evolving cognitive and social experience in particle swarm optimization through differential evolution: a hybrid approach. Inf Sci 216:50–92
Nwankwor E, Nagar AK, Reid DC (2012) Hybrid differential evolution and particle swarm optimization for optimal well placement. Comput Geosci 17(2):249–268
Sahu BK, Pati S, Panda S (2014) Hybrid differential evolution particle swarm optimisation optimised fuzzy proportional–integral derivative controller for automatic generation control of interconnected power system. IET Gener Transm Distrib 8(11):1789–1800
Yu X, Cao J, Shan H, Zhu L, Guo J (2014) An adaptive hybrid algorithm based on particle swarm optimization and differential evolution for global optimization. Sci World J 2014:215472
Seyedmahmoudian M, Rahmani R, Mekhilef S, Than Oo AM, Stojcevski A, Soon TK, Ghandhari AS (2015) Simulation and hardware implementation of new maximum power point tracking technique for partially shaded PV system using hybrid DEPSO method. IEEE Trans Sustain Energy 6(3):850–862
Parouha RP, Das KN (2015) An efficient hybrid technique for numerical optimization and applications. Comput Ind Eng 83:193–216
Tang B, Zhu Z, Luo J (2016) Hybridizing particle swarm optimization and differential evolution for the Mobile robot global path planning. Int J Adv Robot Syst 13(3):1–17
Parouha RP, Das KN (2016) A robust memory based hybrid differential evolution for continuous optimization problem. Knowl-Based Syst 103:118–131
Parouha RP, Das KN (2016) DPD: An intelligent parallel hybrid algorithm for economic load dispatch problems with various practical constraints. Expert Syst Appl 63:295–309
Famelis IT, Alexandridis A, Tsitouras C (2017) A highly accurate differential evolution–particle swarm optimization algorithm for the construction of initial value problem solvers. Eng Optim 50(8):1364–1379
Mao B, Xie Z, Wang Y, Handroos H, Wu H (2018) A hybrid strategy of differential evolution and modified particle swarm optimization for numerical solution of a parallel manipulator. Math Probl Eng, pp 1–9
Tang B, Xiang K, Pang M (2018) An integrated particle swarm optimization approach hybridizing a new self-adaptive particle swarm optimization with a modified differential evolution. Neural Comput Appl, pp 1–35
Too J, Abdullah AR, Saad NM (2019) Hybrid binary particle swarm optimization differential evolution-based feature selection for EMG signals classification. Axioms 8(3):1–17
Dash J, Dam B, Swain R (2019) Design and implementation of sharp edge FIR filters using hybrid differential evolution particle swarm optimization. AEU Int J Electron Commun 114:1–61
Zhao X, Zhang Z, Xie Y, Meng J (2020) Economic-environmental dispatch of microgrid based on improved quantum particle swarm optimization. Energy 195:1–39
Awad N, Ali M, Liang J, Qu B, Suganthan P (2016) Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization, Technical Report
El Dor A, Clerc M, Siarry P (2012) Hybridization of differential evolution and particle swarm optimization in a new algorithm DEPSO-2S. Lect Notes Comput Sci 7269:57–65
Zhang J, Sanderson AC (2009) JADE: adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958
Tanabe R, Fukunaga A (2013) Success-history based parameter adaptation for differential evolution. In: IEEE Congress on Evolutionary Computation, pp 71–78
Mahmoodabadi MJ, Mottaghi ZS, Bagheri A (2014) High exploration particle swarm optimization. Inf Sci 273:101–111
Yan B, Zhao Z, Zhou Y, Yuan W, Li J, Wu J, Cheng D (2017) A particle swarm optimization algorithm with random learning mechanism and levy flight for optimization of atomic clusters. Comput Phys Commun 219:79–86
Xia X, Gui L, He G, Xie C, Wei B, Xing Y, Tang Y (2018) A hybrid optimizer based on firefly algorithm and particle swarm optimization algorithm. J Comput Sci 26:488–500
Chegini SN, Bagheri A, Najafi F (2018) A new hybrid PSO based on sine cosine algorithm and levy flight for solving optimization problems. Appl Soft Comput 73:697–726
Das KN, Parouha RP (2015) An ideal tri-population approach for unconstrained optimization and applications. Appl Math Comput 256:666–701
Wu G, Mallipeddi R, Suganthan PN, Wang R, Chen H (2016) Differential evolution with multipopulation based ensemble of mutation strategies. Inf Sci 329:329–345
Mohamed AW, Suganthan PN (2018) Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Comput 22(10):3215–3235
Meng A, Chen Y, Yin H, Chen S (2014) Crisscross optimization algorithm and its application. Knowl-Based Syst 67:218–229
Du S-Y, Liu Z-G (2019) Hybridizing particle swarm optimization with JADE for continuous optimization. Multimed Tools Appl, pp 1–18
Zar JH (1999) Biostatistical analysis. Prentice Hall, Englewood Cliffs
Qin AK, Huang VL, Suganthan PN (2009) Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans Evol Comput 13(2):398–417
Wang Y, Cai ZZ, Zhang QF (2011) Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans Evol Comput 15(1):55–66
Li C, Yang S, Nguyen TT (2012) A self-learning particle swarm optimizer for global optimization problems. IEEE Trans Syst Man Cybern 42(3):627–646
Lynn N, Suganthan P (2015) Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evolut Comput 24:11–24
Xuewen X, Ling G, Hui ZZ (2018) A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful. Appl Soft Comput 67:126–140
Cite this article
Parouha, R.P., Verma, P. A systematic overview of developments in differential evolution and particle swarm optimization with their advanced suggestion. Appl Intell 52, 10448–10492 (2022). https://doi.org/10.1007/s10489-021-02803-7