1 Introduction

Many optimization methods have been developed to solve multifaceted optimization problems. However, conventional optimization methods suffer from certain inherent drawbacks, such as high computational complexity, stagnation in local optima, and dependence on derivative information of the search space [1]. It is also difficult to guarantee an optimal solution during the solving process. To overcome these drawbacks, a class of optimization methods known as metaheuristic algorithms (MAs) has been introduced. The mechanisms underlying MAs are simple and based on natural processes, yet they are very efficient at solving complex global optimization problems. According to their mechanisms, MAs can be categorized into four groups: swarm intelligence algorithms, evolutionary algorithms, physics-based algorithms, and human-behavior-based algorithms. Some recent instances of these algorithms are depicted in Fig. 1 and reviewed briefly as follows.

Fig. 1 Traditional metaheuristics

Genetic algorithm (GA), the foundation for many evolutionary algorithms, was described by David Goldberg in 1989 [2]. It is a search algorithm built on the mechanics of natural genetics and selection, simulating natural evolutionary processes such as selection, mutation, and crossover. Later, in 1995, inspired by the flocking behavior of birds, Eberhart and Kennedy proposed particle swarm optimization (PSO) [3]. PSO has acquired immense popularity among researchers because of its simplicity and effectiveness in a wide range of scientific and industrial applications. In PSO, particles update themselves using an initial velocity; each individual also learns from the others and adapts itself by trying to emulate the behavior of the fittest individuals. Differential evolution (DE), one of the most popular evolutionary metaheuristic algorithms, was introduced by Storn and Price in 1997 [4]. DE works in two phases: initialization and evolution. In the first phase, a population is generated randomly; in the second phase, this population undergoes mutation, crossover, and selection. Owing to advantages such as easy implementation and strong global search capability, DE has become a popular choice among researchers for solving various optimization problems in different sectors. In 1998, the photosynthetic learning algorithm (PLA) was introduced by Murase and Wadano [5]. It utilizes the rules governing the conversion of carbon molecules from one substance to another in the Benson-Calvin cycle (comprising three phases: carboxylation, reduction, and regeneration of the acceptor) and photorespiration reactions. In 2000, de Castro and Zuben developed the clonal selection algorithm (CSA) [6], which is related to the artificial immune system and describes a general learning strategy. The principal immune aspects considered in CSA are: maintenance of the memory cells, selection and cloning of the most stimulated cells, death of non-stimulated cells, affinity maturation, and re-selection of the clones.

With an analogy to the music performance process, Geem et al. [7] devised the harmony search (HS) algorithm. In this algorithm, a new vector is produced from all existing vectors instead of only two (parents) as in the genetic algorithm; HS does not require initial values for the decision variables and has performed well in many combinatorial and continuous problems. In 2003, Eusuff and Lansey proposed the shuffled frog leaping algorithm (SFLA) [8]. It mimics the cooperative behavior that frogs display while searching for food in a swamp and uses memetic evolution in the form of the infection of ideas from one individual to another in a local search. A shuffling technique also permits the exchange of information between local searches to move toward a global optimum. In 2004, the BeeHive algorithm was devised by Wedde et al. [9], inspired by the communicative and evaluative procedures of honey bees. In this algorithm, bee agents pass through network regions called foraging zones, and their information on the network state is delivered to update the local routing tables. In 2005, Pinto et al. proposed wasp swarm optimization (WSO) [10]. It is a bidding algorithm in which wasps take the role of bidders trying to acquire finite resources, and it was designed specifically with reference to logistics system optimization. In 2006, Du et al. used theories of the small-world phenomenon to construct the small-world optimization algorithm (SWOA) [11], in which local shortcut search and a random long-range search operator are employed to solve optimization problems. Also in 2006, Mehrabian and Lucas proposed a general-purpose optimization algorithm inspired by weed colonization, named invasive weed optimization (IWO) [12]. In this algorithm, invasive weeds reproduce quickly by producing seeds and increasing their population. Moreover, their behavior changes with time: as the colony becomes dense, there is less opportunity of life for the individuals with lower fitness. In 2007, Karaboga and Basturk presented the artificial bee colony (ABC) algorithm [13], which contains three groups of bees (employed bees, onlookers, and scouts) in a three-step cycle: employed bees are sent to the food sources and their nectar amounts are measured; onlookers select food sources after the employed bees share their information and the nectar amounts are determined; and lastly, scout bees are determined and directed to possible food sources.

Moving to the year 2008, Havens et al. invented roach infestation optimization (RIO) [14], whose main aim is to examine the effect of adapting PSO with the social behavior of cockroaches. In this algorithm, cockroaches try to find the darkest place; the fitness of a cockroach is proportional to the darkness of its location. They communicate with each other with a predefined probability. At a certain point in time, cockroaches become hungry and leave the darkness to search for food. Another population-based metaheuristic algorithm, biogeography-based optimization (BBO), was proposed by Simon in 2008 [15]. It is inspired by the geographical distribution of biological organisms and includes two main operators: migration (which updates each individual by sharing the characteristics of other individuals) and mutation (which enhances diversity and the chances of finding a good solution). Later, in 2009, Yang and Deb devised the cuckoo search (CS) algorithm [16], in which the brood-parasitic behavior of cuckoo species is simulated along with the Lévy flight behavior of birds and fruit flies. CS follows three rules: (i) each cuckoo lays one egg at a time and leaves it in a randomly chosen nest; (ii) the nests with high-quality eggs survive; (iii) the number of host nests is constant, and an egg can be noticed by the host bird with a probability pa ∈ [0, 1]. Further, in the area of swarm intelligence, Yang introduced the firefly algorithm (FA) in 2009 [17], inspired by the flashing behavior of fireflies. The two main purposes of such flashes are attracting mating partners and warning against predators. The flashing can be formulated as an objective function that needs to be optimized. In the same year, Rashedi et al. proposed the gravitational search algorithm (GSA) [18]. It is based on the law of gravity, in which agents are considered as objects and their performance is measured by their masses. Each agent has four properties (determined by the fitness function value): position, inertial mass, active gravitational mass, and passive gravitational mass. Then, in 2010, the bat algorithm (BA) was proposed by Yang [19]. Its main motivation is the echolocation behavior of bats, which can find their prey and distinguish different kinds of insects even in complete darkness.

Further, Rao et al. introduced teaching-learning-based optimization (TLBO) in 2011 [20]. It is a training-based model in which the population passes through two phases: the teacher phase (learning from the teacher) and the learner phase (learning through interaction between learners). In 2012, the water cycle algorithm (WCA) was introduced by Eskandar et al. [21]. Its elementary concepts are inspired by nature and rely on the observation of the water cycle process. In WCA, raindrops form the initial population; the best raindrop (best individual) is chosen as the sea, good raindrops are taken as rivers, and the remaining raindrops are chosen as streams, which flow into the rivers and the sea. Again in 2012, Gandomi and Alavi devised an algorithm named krill herd (KH) [22], which is based on the herding behavior of krill. Its three main actions are: movement influenced by the presence of other individuals, foraging activity, and random diffusion, as determined by the time-dependent position of the individuals. Later, in 2013, social spider optimization (SSO) was introduced by Cuevas et al. [23], which simulates the cooperative behavior of social spiders. In this algorithm, individual spiders (solutions) are simulated according to the biological laws of the cooperative colony. Males and females are the two kinds of search agents (spiders) in SSO, and each individual is conducted by different evolutionary operators accordingly. Spider monkey optimization (SMO) was introduced by Bansal et al. in 2014 [24], inspired by the intelligent foraging behavior of animals with a fission-fusion social structure. It has two control parameters, the global leader limit and the local leader limit, which help local and global leaders take appropriate decisions. In the same year, Mirjalili et al. proposed the grey wolf optimizer (GWO) [25], which mimics the leadership hierarchy and hunting approach of grey wolves. It employs four types of grey wolves (alpha, beta, delta, and omega) to simulate the leadership hierarchy and three steps (searching for prey, encircling prey, and attacking prey) for hunting. Later, in 2015, inspired by shallow water wave theory, Zheng devised an algorithm called water wave optimization (WWO) [26], in which three operators based on the phenomena of water flow are implemented: propagation (which makes high-fitness waves search small areas while low-fitness waves explore), refraction (which helps waves escape search stagnation), and breaking (which enables an intensive search around a promising area). In the same year, Mirjalili proposed moth-flame optimization (MFO) [27], inspired by the navigation technique (transverse orientation) of moths. In MFO, a mathematical model of the spiral flight path of moths around artificial lights (flames) is developed, in which moths are the candidate solutions and the problem's variables are the positions of the moths in the space.

In 2016, Mirjalili and Lewis proposed the whale optimization algorithm (WOA) [28]. This metaheuristic is inspired by the social behavior of humpback whales and consists of three steps: encircling prey, the bubble-net attacking method, and searching for prey. Later in 2016, a swarm-intelligence-based technique, the dragonfly algorithm (DA), was proposed by Mirjalili [29]. It is inspired by the static and dynamic swarming behaviors of dragonflies and has two phases, exploration and exploitation, which model how dragonflies search for food, navigate, and avoid enemies in a swarm. Further, in 2017, the grasshopper optimization algorithm (GOA) was modeled mathematically by Saremi et al. [30]; it mimics the behavior of grasshopper swarms and is designed to simulate the repulsion forces (which help explore the search space) and attraction forces (which help exploit promising regions) between grasshoppers. Later, in 2018, Pierezan and Dos, inspired by the Canis latrans species, introduced a population-based metaheuristic named the coyote optimization algorithm (COA) [31]. It especially focuses on the social structure of and experience exchange among coyotes, and has two parameters: the number of packs and the number of coyotes per pack. In 2019, search and rescue optimization (SAR) was developed by Shabani et al. [32], inspired by human search and rescue operations; it consists of two phases: a social phase (which selects the search direction based on the position of a clue) and an individual phase (which searches around the best clue). Early in 2020, based on the sense of smell and movement mechanism of bears, Marzbali presented the bear smell search algorithm (BSSA) [33]. It has two mechanisms: an olfactory bulb mechanism connected to the brain, and a mesh mechanism employed to move to the next position.

The primary advantage of these algorithms is their use of the trial-and-error principle in searching for solutions, and they have been successfully applied to solve global optimization problems. Among successful MAs, DE and PSO have been widely recognized for solving complex optimization problems and have received much attention from researchers [34,35,36,37,38,39]. Therefore, DE and PSO are chosen in the present study. However, some shortcomings of DE and PSO limit their application in complex optimization environments. To avoid these shortcomings, many variants and hybridizations of these algorithms have been introduced in the literature.

Although a large number of MAs have been introduced in the literature, as the No-Free-Lunch (NFL) theorem [40] implies, none of them can solve every variety of problem. A method may produce suitable results for some problems but not for others. Thus, there is a need to introduce effective algorithms that solve a wider range of problems. This motivates the present study to present novel variants of DE and PSO along with their hybridization.

Moreover, after an extensive literature review on different variants of DE and PSO and their hybridizations (Section 2, related work), the following points are analyzed and serve as motivation.

  (i). In DE, the mutation and crossover strategies, with their associated control parameters, are utilized to produce the global best solution, which is beneficial for improving convergence behavior. Therefore, identifying the most appropriate strategies and associated parameter values in DE is considered a vital research topic.

  (ii). The performance of PSO greatly depends on its parameters, such as the acceleration coefficients and the inertia weight, which guide particles to the optimum and balance diversity, respectively. Hence, many researchers have tried to modify the control parameters of PSO to achieve better accuracy and higher speed.

  (iii). Hybrid algorithms, which combine the advantages of different algorithms, have aroused the interest of researchers due to their effectiveness on complex optimization problems. Since DE and PSO have complementary properties, their hybrids have gained prominence recently. To the best of our knowledge, finding the best ways to combine DE and PSO is still an open problem.

Motivated by the above observations and our survey of the literature, the following plans of action (major contributions) have been outlined for solving complex unconstrained optimization problems.

  (i). An advanced differential evolution (ADE) is developed, in which a novel mutation strategy and crossover probability, along with a slightly modified selection scheme, are introduced.

  (ii). An advanced particle swarm optimization (APSO) is suggested, which consists of novel gradually varying (decreasing and/or increasing) parameters.

  (iii). An advanced hybrid algorithm (AHDEPSO) is designed by hybridizing the advanced DE and PSO, based on a multi-population approach.

The rest of this paper is organized as follows: Section 2 reviews the related work on different and hybrid variants of DE and PSO. Section 3 describes the proposed algorithms. The proposed algorithms are verified on a wide set of benchmark functions and real-world engineering applications in Section 4. Section 5 concludes this study and outlines future work.

2 Related work

During the past decade, the development of several powerful MAs to solve high-dimensional optimization problems has become a popular study area. Among successful MAs, DE and PSO have been widely recognized for solving complex optimization problems and have received much attention from researchers. The basics of the original DE and PSO are presented as follows.

2.1 Differential evolution (DE)

DE is an evolutionary approach proposed by Storn and Price [4]. The key idea behind DE is to use vector differences to perturb the vector population. In a D-dimensional search space, it randomly initializes a population of np individuals within the lower and upper boundaries (xl, xu). After initialization, DE is conducted by three main operations, defined as follows.

Mutation: for each target vector (\( {x}_{i,j}^t \)) a mutant vector (\( {v}_{i,j}^t \)) at iteration t is generated as follows.

$$ {v}_{i,j}^t={x}_{r_1}^t+F\left({x}_{r_2}^t-{x}_{r_3}^t\right) $$
(1)

where r1, r2, r3 ∈ {1, 2,...,np} are randomly chosen integers with r1 ≠ r2 ≠ r3 ≠ i, and F denotes the scaling factor employed to control the amplification of the differential variation.

Crossover: a trial vector \( \left({u}_{i,j}^t\right) \) is produced by combining the target vector (\( {x}_{i,j}^t \)) and the mutant vector (\( {v}_{i,j}^t \)) as follows.

$$ {u}_{i,j}^t=\left\{\begin{array}{c}\ {v}_{i,j}^t;\mathrm{if}\ \mathit{\operatorname{rand}}\le {C}_r\kern1.5em \\ {}{x}_{i,j\kern0.5em }^t;\mathrm{otherwise}\kern2.75em \end{array}\right. $$
(2)

where i ∈ [1, np], j ∈ [1, D] and rand ∈ U[0, 1] (uniformly distributed random number between 0 & 1), Cr ∈ [0, 1] denotes crossover rate that controls how many components are inherited from the mutant vector.

$$ \mathrm{Selection}:\kern2.75em {x}_{i,j}^{t+1}=\left\{\begin{array}{c}{u}_{i,j}^t;\mathrm{if}\ f\left({u}_{i,j}^t\right)\le f\left({x}_{i,j}^t\right)\\ {}{x}_{i,j}^t;\mathrm{Otherwise}\kern3.25em \end{array}\right. $$
(3)

The mutation, crossover, and selection operators are applied repeatedly to produce offspring until a predefined stopping criterion is met.
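The three operators above (Eqs. 1-3) can be sketched in Python as follows. This is a minimal illustrative sketch of the classical DE/rand/1/bin scheme, not the algorithm proposed in this paper; the `jrand` index that forces at least one component to come from the mutant vector is a common implementation convention not shown in Eq. (2), and the boundary clipping and parameter values are assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, np_=30, F=0.5, Cr=0.9, max_iter=200, seed=0):
    """Minimal DE/rand/1/bin sketch following Eqs. (1)-(3).

    f      : objective function to minimize, f(x) -> float
    bounds : (xl, xu) arrays of lower/upper limits, each of shape (D,)
    """
    rng = np.random.default_rng(seed)
    xl, xu = (np.asarray(b, dtype=float) for b in bounds)
    D = xl.size
    # Random initialization of np_ individuals within [xl, xu]
    pop = xl + rng.random((np_, D)) * (xu - xl)
    fit = np.array([f(x) for x in pop])
    for _ in range(max_iter):
        for i in range(np_):
            # Mutation (Eq. 1): three mutually distinct indices, all != i
            r1, r2, r3 = rng.choice([k for k in range(np_) if k != i], 3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), xl, xu)
            # Binomial crossover (Eq. 2); jrand guarantees one mutant component
            jrand = rng.integers(D)
            mask = rng.random(D) <= Cr
            mask[jrand] = True
            u = np.where(mask, v, pop[i])
            # Greedy selection (Eq. 3): keep the trial vector if no worse
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a simple test such as the 5-dimensional sphere function, this sketch converges to values close to zero within the default budget.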

2.2 Particle swarm optimization (PSO)

PSO was originally proposed by Eberhart and Kennedy in 1995 [3]. It simulates social and/or group behaviors of animals, insects, and humans. In classical PSO, the swarm flies in the D-dimensional search space to seek the global optimum. The ith particle of the swarm has its own position xi = (xi, 1, xi, 2, …, xi, D) and velocity vi = (vi, 1, vi, 2, …, vi, D). During the evolution, each particle tracks its individual best pbesti = (pbesti, 1, pbesti, 2, …, pbesti, D) and the global best gbest = (gbest1, gbest2, …, gbestD); the velocity and position of the ith particle are updated at each iteration as follows.

$$ {v}_{i,j}^{t+1}=w{v}_{i,j}^t+{c}_1{r}_1\left({pbest}_{i,j}-{x}_{i,j}^t\right)+{c}_2{r}_2\ \left({gbest}_j-{x}_{i,j}^t\right) $$
(4)
$$ {x}_{i,j}^{t+1}={x}_{i,j}^t+{v}_{i,j}^{t+1} $$
(5)

where t is the iteration index, \( {v}_{i,j}^t \) is the velocity of the ith particle in the jth dimension at the tth iteration, c1 is the cognitive acceleration coefficient, c2 is the social acceleration coefficient, r1 and r2 are two uniform random numbers in the range [0, 1], and w is the inertia weight.
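Equations (4) and (5) can be sketched as follows. This is a minimal illustrative implementation of classical PSO, not the variant proposed in this paper; the parameter values, zero initial velocities, and boundary clipping are assumptions.

```python
import numpy as np

def pso(f, bounds, n_particles=30, w=0.7, c1=1.5, c2=1.5, max_iter=200, seed=0):
    """Minimal classical-PSO sketch following Eqs. (4)-(5)."""
    rng = np.random.default_rng(seed)
    xl, xu = (np.asarray(b, dtype=float) for b in bounds)
    D = xl.size
    # Random positions within [xl, xu]; velocities start at zero (assumption)
    x = xl + rng.random((n_particles, D)) * (xu - xl)
    v = np.zeros((n_particles, D))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(max_iter):
        r1 = rng.random((n_particles, D))
        r2 = rng.random((n_particles, D))
        # Velocity update (Eq. 4) and position update (Eq. 5)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, xl, xu)
        fx = np.array([f(p) for p in x])
        # Track each particle's personal best and the swarm's global best
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = int(np.argmin(pbest_f))
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

As with the DE sketch, on a simple convex test function such as the sphere, the swarm contracts toward the optimum within the default iteration budget.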

Some deficiencies of DE (low local exploitation ability and loss of diversity) and PSO (easily getting stuck in a local optimal region and a low convergence rate) limit their application to typical optimization problems. Therefore, to avoid the shortcomings of these algorithms, many variants and hybridizations have been introduced in the literature. Some recent variants of DE and PSO, as well as their hybrids, are reviewed as follows and illustrated in Fig. 2.

Fig. 2 DE, PSO and their hybrid variants with applications

2.3 DE variants

DE has shown remarkable performance and has become a powerful optimizer for real-world problems. However, it has a few issues, such as convergence rate and local exploitation ability. To overcome these shortcomings, many robust and effective DE variants have been designed in the literature. A detailed survey of DE variants can be found in [34, 35]; a brief survey of significant DE variants is summarized as follows.

As DE is effective in solving difficult search problems, Joshi and Sanderson in 1997 [41] applied the DE approach to solve the minimal representation problem in multisensor fusion. In 1998, Cheng and Hwang developed a DE algorithm (DEA) [42] that represents continuous parameters by floating-point numbers rather than binary bit-strings; it was applied to the design of an optimal PID controller. Later, in 1999, Lee et al. proposed a modified DE (MDE) [43]; it employs a local search to improve computational efficiency, as well as modified heuristic constraints to reduce the search space size, and was applied to the continuous methyl methacrylate-vinyl acetate (MMA-VA) copolymerization reactor problem. In 2000, Kyprianou et al. [44] employed DE to identify the optimal parameter values of a highly nonlinear dynamic system, the Freudenberg hydromount model. Further, Ruzek and Kvasnicka pointed out the practical applicability of DE to the problem of the kinematic location of the earthquake hypocenter in 2001 [45]; it was found that DE essentially retains its favorable properties over most of the admissible range. Chen et al. proposed an improved differential evolution (IDEP) [46] in 2002. It employs a flip operation (to adjust prior-knowledge-violating networks), while Levenberg-Marquardt descent and a random perturbation strategy are adopted to speed up the convergence of DE and prevent it from being locally trapped; it was applied to modeling chemical curves with an increasing monotonicity constraint in network training. Later, in 2003, Ilonen et al. [47] used DE to train feedforward multilayer perceptron neural networks, since DE places no major restrictions on the error function or the regularization methods. Proceeding to 2004, Kapadi and Gudi [48] analyzed the computational aspects of DE with an augmented Lagrangian, including the dynamic penalty method. It was then applied to fed-batch fermentation processes involving multiple feeds, as it provides a path to obtain a feasible optimal environment in the fermentation broth and avoid inhibition. In 2005, Rane et al. [49] worked on the process of recrystallization using cellular automata (CA), where DE is employed to search for the value of the nucleation rate, providing an acceptable match between the theoretical and experimentally observed values of the recrystallized fraction. Further, Babu and Angira in 2006 developed a modified DE (MDE) [50]. It uses a single array, as compared to two arrays in traditional DE, which reduces the memory and computational effort; it was applied to solve nonlinear chemical engineering problems. The next year, in 2007, Chang et al. proposed a robust searching hybrid DE (RSHDE) [51]. It comprises two schemes, multi-direction search and search space reduction, to enhance the search ability in the initial stages; RSHDE was used to solve the capacitor placement problem in distribution systems.

In 2008, Noman and Iba [52] investigated the potential of DE for solving economic load dispatch (ELD) problems in power systems, where DE satisfies the power balance constraint and other boundary constraints using a reflection mechanism. Further, Das and Konar presented an automatic fuzzy clustering DE (AFDE) in 2009 [53]. It incorporates two novel parameter tuning strategies to escape stagnation and/or premature convergence, and was applied to the fuzzy clustering task in the intensity space of an image. Later, in 2010, Amjady and Sharifzadeh introduced a modified DE (MDE) [54]. This framework owns a new mutation operator and selection mechanism inspired by GA, PSO, and simulated annealing (SA), and was applied to the non-convex economic dispatch problem considering the valve loading effect. Then, in 2011, Uyar et al. [55] proposed a novel way of applying DE to the short-term electrical power generation scheduling problem. This problem is divided into two sub-problems, where DE is applied with binary decision variables in a way that lowers the cost of scheduling power generators while satisfying several operational constraints. In 2012, Santos et al. [56] designed a new chaotic DE optimization approach based on the Ikeda map (CDEK) to tune the crossover rate and mutation factor; it was applied to the identification of a thermal process. Later, in 2013, Tsai et al. presented an improved DE algorithm (IDEA) [57] based on cost and time models for the cloud computing environment. It combines the Taguchi method (to exploit better individuals in the micro space) and DE (which has powerful global exploration capability); IDEA was used to optimize task scheduling and resource allocation in a cloud computing environment. Further, Baskan and Ceylan proposed a modified DE (MODE) [58] in 2014. It develops a mutation strategy consideration rate (MSCR) and a local search operator that enhance the convergence rate of DE without it being trapped in a bad local optimum; the algorithm deals with determining the optimal link capacity expansions for a given road network. Guo and Yang [59] developed a DE utilizing an eigenvector-based crossover operator in 2015. It introduces a rotationally invariant crossover (based on the eigenvectors of the covariance matrix) and a new parameter P (eigenvector ratio) to control the ratio between the binomial and eigenvector-based crossover and to preserve the population diversity, respectively; it was applied to real-world optimization problems. Ayala et al. introduced beta DE (BDE) in 2015 [60]. It applies the beta probability distribution in tuning the F and CR parameters, as it is flexible for modeling data; BDE was employed to select the thresholds for segmenting images. Further, in 2015, Chen et al. [61] employed DE in a human detection approach based on the histogram of oriented gradients (HOG) feature, termed HOG-SVM-DE. Here, DE is applied instead of scanning the detection windows in a sliding fashion, so as to achieve fast and accurate detection.

Further, in 2016, Do et al. devised a modified DE (mDE) [62]. In this framework, best-individual-based mutation and elitist selection techniques are introduced, with modified scale and crossover factors, to improve the exploitation ability and/or convergence speed of DE; it was then applied to the form-finding of tensegrity structures. In the same year, Sethanan and Pitakaso proposed modified DE algorithms [63] that include two additional steps, reincarnation and a survival process, to improve the solution quality; the algorithm was then used to determine the routes for raw milk collection for a dairy factory. Basu developed a quasi-oppositional DE (QODE) [64] in 2016. It adds quasi-oppositional-based learning (QOBL) for population initialization and for generation jumping; QODE was used to solve the reactive power dispatch problem of a power system. Proceeding to 2017, Vivekanandan and Iyengar proposed a modified DE [65] in which the mutation strategy DE/rand/2-wt/exp is employed; the modified-DE-based feature selection was adapted to perform feature selection for cardiovascular disease. Further, Suresh and Lal introduced a modified DE (MDE) [66] in 2017; it uses DE for the exploration phase and cuckoo search (CS) for the exploitation phase, which increases the convergence rate and avoids premature convergence. MDE was employed for enhancing the contrast and brightness of satellite images. Later, Sakr et al. proposed a modified DE algorithm (MDEA) [67] in 2017; it comprises a self-adaptive scaling factor that dynamically adapts global and local searches to eliminate local optima trapping, and was implemented for solving the optimal reactive power management (ORPM) problem.

Then, in 2018, Qiu et al. designed minimax DE (MMDE) [68], where a novel bottom-boosting mechanism is introduced to maintain reliability and identify promising solutions. It also applies a partial-regeneration strategy and a mutation operator (DE/current/1) to provide in-depth exploration of the solution space, and it was applied to the robust optimal design of a two-link robotic manipulator. Continuing in the same year, Yuzgec and Eser proposed another DE variant named chaotic-based DE (CDE) [69]. To maintain diversity in the initial population, it includes four chaotic systems, the Lorenz, Rossler, Chua, and Mackey-Glass functions, for selecting candidates from the population in the mutation and crossover operations; it was used for the optimization of the baker's yeast drying process. Again, in 2018, Buba and Lee [70] proposed a DE approach to optimize the urban transit network design problem (UTNDP). Identical point mutation and uniform route crossover with 0/1 crossover masks are used in this algorithm to increase diversity and handle noisy random vectors. Early in 2019, a new DE variant, direction-averaged DE (daDE), was proposed by Yang et al. [71]. It creates a modified mutation rule that utilizes the information of the current and former individuals together; daDE was employed to solve quantum state and gate preparation problems. Further, in 2019, Awad et al. proposed a new DE algorithm named DEa-AR [72]. It uses an arithmetic recombination crossover and a scaling factor based on the Laplace distribution. Additionally, an archive strategy is incorporated to use the information of inferior individuals to find new good solutions; DEa-AR was proposed to solve contemporary stochastic optimal power flow (OARPD) problems. Later, in 2019, a DE with a biology-based mutation operator (DEHeO) was proposed by Prabha and Yadav [73]. In this algorithm, a mutation operator (a hemostatic operator influenced by the hemostasis biological phenomenon) is introduced; it produces promising solutions and helps enhance diversity in the earlier stages, thereby avoiding stagnation during the later stages. It was applied to real-world optimization problems. Recently, in 2020, Li et al. developed an enhanced adaptive DE (EJADE) [74]. In EJADE, a sorting mechanism is evolved to rationally assign CR values to each individual according to their fitness values. Moreover, a dynamic population reduction strategy is employed to speed up the convergence rate and maintain diversity; it was applied to photovoltaic systems optimization. Further, Hu et al. proposed Boltzmann annealing DE (BADE) [75] in 2020. It introduces an annealing strategy into the DE algorithm that allows exploring more of the search space. Additionally, different strategies are employed at different stages of annealing (high and low temperature) to accelerate convergence; BADE was employed to optimize the inversion problem in directional resistivity logging-while-drilling (DRLWD) measurements.

2.4 PSO variants

PSO has attracted attention for solving many complex optimization problems due to its efficient search ability and simplicity. However, the main drawback of PSO is that it may easily get stuck in a local optimal region. Therefore, accelerating the convergence speed and avoiding local optima are two critical issues in PSO. To overcome these issues, various modifications of PSO have been proposed in the literature. A comprehensive survey on the variants of PSO can be found in [36, 37]; brief reviews of some noteworthy PSO variants are summarized as follows.

Zhenya et al. proposed a modified version of PSO in 1998 [76], where each particle’s uses the best current performance of its neighbours to replace the best previous one. Also to accelerate the search procedure, a not accumulative rate of change replaces to the accumulative one. And it is applied to train the fuzzy neural network problem. Then in 1999, Eberhart and Hu used PSO to analyze the human tremor (essential tremor and Parkinson’s disease) [77]. It is used as to evolve a neural network weights and to evolve the network structure indirectly. Further, Naka and Fukuyama proposed hybrid PSO (HPSO) in 2000 [78]. It replaced the agent position of low evaluation values by high evaluation values using the selection procedure of evolutionary algorithm. Moreover, HPSO can estimate the load and distributed generation output values at each node considering nonlinear characteristics of distribution systems. Abido developed PSO based power system stabilizers (PSOPSS) in 2001 [79]. It incorporated an annealing procedure to make uniform and local search in the initial and later stages respectively. Additionally, feasibility check procedure also imposed in order to prevent the particles not to go outside the feasible search space. It is used as in order to search for optimal settings of PSS parameters. Later in 2002, Al-kazemi and Mohan developed multi-phase PSO (MPPSO) [80]. It evolves multiple groups of particles to change the direction of the search in the different phases that helps to explore the search space, enhancing population diversity, and preventing premature convergence. Also, MPPSO is employed for training multilayer feedforward neural networks (MFNN) problem. Further, Gaing proposed binary PSO (BPSO) in 2003 [81]. In BPSO the trajectories are changes with the probability in which a coordinate will take on binary value (0 or 1). Also, it combined with the lambda-iteration method for solving unit commitment (unit-scheduled and economic dispatch (ED)) problems. 
In 2004, Pang et al. [82] developed a modified PSO in which the position and velocity of the particles are represented by fuzzy matrices and the operators of classical PSO are redefined accordingly. It is used to solve the traveling salesman problem (TSP). Then in 2005, Esmin et al. proposed hybrid PSO with mutation (HPSOM) [83]. It incorporates the mutation process of GA into PSO, which allows the search to escape from local optima and explore different zones of the search space. HPSOM was applied to the power loss reduction problem. Meissner et al. proposed optimized PSO (OPSO) [84] in 2006. It comprises subswarms and a superswarm: the subswarms are used to find a solution to the given optimization problem, while the superswarm is employed to optimize their parameters. This algorithm was applied to the neural network training problem. Later, in 2007, He and Wang proposed co-evolutionary PSO (CPSO) [85]. It consists of multiple swarms (for searching for good solutions) and a single swarm (for evolving suitable penalty factors). Furthermore, CPSO is implemented in parallel and was applied to the welded beam design, tension/compression spring design, and pressure vessel design problems. Zhang et al. proposed improved PSO (IPSO) in 2008 [86]; it combines PSO with two-point crossover (to redefine the model of the original PSO) and shift mutation operators (to search the neighbourhood when a particle stagnates). A fast matrix-based fitness computation method is also devised to improve the algorithm’s speed. It is used to solve the large-scale flow shop scheduling problem. Further, in 2009 [87], Meneses et al. presented PSO with random keys (PSORK), in which the position vector’s information is decoded by random keys (RK) so that positions need not be rounded or truncated. It was employed to solve the nuclear reactor reloading problem. Then, in 2010, Azadani et al. proposed constrained PSO (CPSO) [88]. 
It initializes and updates the particles under a uniform distribution for faster convergence. CPSO was applied to the multi-product and multi-area electricity market dispatch problem. In 2011, Kang and He proposed a novel discrete PSO (DPSO) [89]. It utilizes the characteristics of discrete variables to update the position. Moreover, a variable neighbourhood descent algorithm and migration mechanisms are embedded in DPSO to speed up convergence and maintain diversity. It is employed to solve meta-task assignment in heterogeneous computing systems. Later, in 2012, Kar et al. proposed craziness-based PSO (CRPSO) [90]. It introduces a craziness operator to ensure that each particle has a predefined craziness probability in order to maintain diversity. CRPSO was designed to solve the digital finite impulse response (FIR) band-stop filter design problem. In 2013, Lim and Isa developed the two-layer PSO with intelligent division of labor (TLPSO-IDL) [91]. It performs the evolutions sequentially on the current and memory swarms. A new learning mechanism is proposed for the current swarm to improve its exploration, while an intelligent division of labor (IDL) module is devised for the memory swarm to evolve adaptively by allocating different tasks to each swarm member. Additionally, an elitist-based perturbation (EBP) module is used to prevent stagnation in local optima. It was applied to the gear train design problem. Then, in 2014, Zhang et al. proposed a novel parameter mechanism for classical PSO based on particle positions [92]. It uses the concepts of overshoot and the peak time of a transition process, which provide a new way to analyze particle trajectories. This algorithm was applied to the antenna array pattern synthesis problem. Basu developed modified PSO (MPSO) in 2015 [93]. 
Gaussian random variables are introduced in the velocity term to improve the search efficiency and obtain the global optimum without impairing the convergence speed or the structure of PSO. MPSO was applied to solve non-convex economic dispatch problems.

Then, in 2016, Eddaly et al. [94] proposed hybrid combinatorial PSO (HCPSO). It applies an iterative local search algorithm based on probabilistic perturbation sequentially after PSO to enhance solution quality. This algorithm is used for solving the flowshop scheduling problem. In the same year, Zhang et al. [95] proposed adaptive inertia weight-chaos PSO (AIW-CPSO). It introduces an adaptive inertia weight to enhance the local optimization ability, and a logical self-mapping chaotic search is carried out to help PSO jump out of local optima. It is employed for extracting the features of Brillouin scattering spectra. Also in 2016, Ngo et al. proposed extraordinariness PSO (EPSO) [96]. It contains an extraordinary motion concept (movement strategy) in which particles can move toward a target that may be the global best, a local best, or even the worst individual. EPSO was applied to engineering design problems. Later, in 2017, Li et al. developed partitioned and cooperative quantum-behaved PSO (SCQPSO) [97]. It introduces auxiliary swarms and a partitioned search space to enhance population diversity, and cooperative theory is used to improve the particles’ global search ability. It was applied to the medical image segmentation problem. In the same year, Phung et al. proposed discrete PSO (DPSO) [98]. It includes three techniques to improve accuracy: deterministic initialization, random mutation (to avoid the collapse situation and keep the balance between exploration and exploitation), and edge exchange (to compare each valid combination of the swapping mechanism for edges). This algorithm solved the inspection path planning (IPP) problem. Qin et al. developed improved orthogonal design PSO (IODPSO) in 2017 [99]. It employs the tent chaotic map for acceleration coefficient adaptation to improve global search capability. Further, IODPSO is used to solve single-area and multi-area economic load dispatch problems.

Later, in 2018, a direction-aware PSO algorithm with sensitive swarm leader (DAPSO-SSL) was proposed by Mishra et al. [100]. It incorporates basic human qualities such as awareness, maturity, relationship, and leadership into the swarm leader and individual particles, so that particles can utilize previous knowledge of the best and current fittest individuals. DAPSO-SSL is applied to the community detection problem in big data networks. Also in 2018, Li et al. proposed stochastic gradient PSO (SGPSO) [101]. It combines stochastic gradients with the randomness of particle swarm search to overcome the premature convergence and poor accuracy of standard PSO. It is used to generate feasible entry trajectory plans for hypersonic glide vehicles. Then Tian and Shi developed modified PSO (MPSO) in 2018 [102]. It utilizes a logistic map to distribute the particles uniformly so as to improve the initial population quality, and a sigmoid-like inertia weight and wavelet mutation are used to achieve better swarm diversity. Additionally, an auxiliary velocity-position update mechanism is applied to the global best particle to guarantee convergence. It is applied to the image segmentation problem. In 2019, Parouha proposed modified time-varying PSO (MTVPSO) [103]. It introduces a linearly decreasing inertia weight and novel acceleration coefficients, which improve the global search capability and the diversity of the population. MTVPSO is applied to nonconvex/nonsmooth economic load dispatch problems. Further, Hosseini et al. [104] proposed hunter-attack fractional-order PSO (HAFPSO) in 2019, where fractional-order derivatives and a hunter-attack strategy are used to accelerate convergence and avoid stagnation, respectively. It is applied to the optimum power amplifier design problem. Early in 2019, Dash and Patra developed mutation-based self-regulating and self-perception PSO (MSRSP-PSO) [105]. It incorporates self-regulation and self-perception behaviour of the global particle and dynamic adaptation in learning, and a mutation operator is applied to the global particle. This algorithm is suitably applied to tracking single as well as multiple objects. Further, non-inertial opposition-based PSO (NOPSO) [106] was proposed by Lanlan et al. in 2020. It has a non-inertial velocity update formula, an opposition-based learning strategy (to accelerate convergence), and an adaptive elite mutation strategy (to avoid trapping in local optima). NOPSO is applied to deep learning problems. In the same year, novel multi-swarm PSO (NMSPSO) [107] was suggested by Xiong et al. It develops three schemes: a novel information exchange strategy (for information transfer between sub-swarms), a novel learning strategy (to speed up convergence), and a novel mutation strategy (for better exploration). It is used to solve real-world application problems. Then, motion-encoded PSO (MPSO) [108] was proposed early in 2020 by Phung and Ha. The motion-encoded approach preserves important qualities of the swarm, including cognitive and social coherence, so as to obtain better solutions. It is used to solve the problem of optimal search for a moving target using UAVs.

2.5 DE and PSO hybrid variants

A hybrid strategy is one of the main research directions for improving the performance of a single algorithm. Different optimization algorithms have different search behaviors and advantages. To overcome individual shortcomings, such as premature convergence or stagnation at local optima, hybrid techniques are now favored over individual algorithms. Therefore, in order to enhance the performance of DE and PSO, many hybrid algorithms have been presented in the literature. A systematic survey on hybrid variants of DE and PSO can be found in [38, 39]. Likewise, a brief review of some notable hybrids of DE and PSO is summarized as follows.

Hendtlass in 2001 proposed SDEA [109], where each individual follows traditional PSO and, during the search, individuals move from poorer regions to better regions with the help of DE from time to time. It was applied to unconstrained global optimization problems. Further, in 2003, Zhang and Xie devised a hybrid algorithm, DEPSO [110]. It has a bell-shaped DE mutation to control population diversity and retain the self-organized particle swarm dynamics. It was tested on unconstrained and constrained optimization problems. Then Talbi and Batouche developed DEPSO [111] in 2004. This framework follows an alternating iterative scheme: DE is employed on even iterations (to enhance diversity) and PSO is applied on odd iterations (to extricate the swarm from unwanted fluctuations). It was applied to a multimodal image problem. Another hybridization of PSO and DE (DEPSO) was delivered by Hao et al. in 2005 [112], in which the position of a particle is updated partially through DE (extracting its differential information) and partially by PSO (extracting its memory data), which maintains the swarm’s diversity and enhances local and global search ability. It was applied to unconstrained global optimization problems. Later, in 2008, Niu and Li utilized a parallel mechanism and proposed PSODE [113], in which one population’s individuals are updated by PSO while the other’s evolve by DE; the interaction between the two populations helps maintain diversity. Its effectiveness was checked on unimodal and multimodal problems. In 2009, Wang and Cai proposed a hybrid multi-swarm particle swarm optimization (HMPSO) [114]. This hybrid model splits the swarm into several sub-swarms, uses PSO as the search engine for each sub-swarm, and improves the personal best of each particle by applying DE. It was used to solve constrained optimization problems. Caponio et al. developed a super-fit memetic differential evolution (SFMDE) [115] in 2009. This framework synergizes DE with PSO (to create a super-fit individual) and uses the Nelder-Mead and Rosenbrock algorithms as local searchers, measuring the quality of the super-fit individual relative to the other individuals. It was employed for the optimal control design of a direct current (DC) motor drive and the design of a digital filter for image processing. Further, Liu et al. incorporated DE (for its strong search ability) into PSO (to overcome stagnation and speed up convergence) and proposed PSO-DE [116] in 2010. It was used to solve the welded beam design, tension/compression spring design, pressure vessel design, speed reducer design, and three-bar truss design problems. Xin et al. proposed DEPSO [117] in 2010. This hybrid model adopts a statistical learning strategy for each individual, which leads to the adaptation of evolution methods according to the relative success ratio of the alternative methods (DE and PSO). It was applied to global numerical optimization problems. In 2011, Pant et al. developed DE-PSO [118]. It utilizes the strengths of both algorithms (DE and PSO) and executes through alternating phases: it starts with DE and, if the trial vector is better than the corresponding point, it is added to the population; otherwise, the PSO phase is entered to generate a candidate solution. DE-PSO was applied to unconstrained global optimization problems. Epitropakis et al. in 2012 [119] framed a hybrid algorithm in which, after each evolution step of PSO, the social and cognitive experience is evolved with DE, which helps enhance convergence. This hybrid was employed on multimodal function problems. Later, in 2013, Nwankwor et al. [120] developed hybrid particle swarm differential evolution (HPSODE). It starts with DE until the trial vector is generated; otherwise, PSO is activated to generate a new candidate solution. HPSODE was applied to optimal well placement problems.

Then, Sahu et al. [121] in 2014 utilized the advantages of DE (maintaining diversity) and PSO (memory mechanism) and proposed DEPSO. This hybrid algorithm is used to optimize the gains of the fuzzy PID controllers employed in the control areas. Further, in 2014, HPSO-DE was initiated by Yu et al. [122]. It has an adaptive mutation to move the current population away from local optima and balance diversity efficiently. This hybrid algorithm was used to solve unconstrained global optimization problems. Later, in 2015, Seyedmahmoudian et al. [123] proposed DEPSO by employing DE, which adds diversity to traditional PSO; the detrimental effects of the random coefficients are reduced by running DE in parallel with PSO. It is a reliable and system-independent technique for tracking the MPP of a PV system under partial shading conditions. Parouha and Das proposed DPD [124] in 2015. It is based on a tri-population scheme (inferior, mid, and superior groups), in which DE is executed on the inferior and superior groups, while PSO is employed on the mid group. Moreover, elitism and a non-redundant search concept are included in the DPD cycle to maintain diversity and escape local optima effectively. This hybrid was investigated on engineering design problems. In 2016, Tang et al. [125] proposed a hybrid of DE and PSO, namely HNTVPSORBSADE, where a nonlinear time-varying PSO (to update the velocities and positions of particles) and a ranking-based self-adaptive DE (to avoid stagnation) are introduced, which balance exploration and exploitation dynamically. This framework is used for solving the mobile robot global path planning problem. Further, memory-based DE (MBDE) was proposed by Parouha and Das [126] in 2016. It employs two operators, swarm mutation and swarm crossover (based on the concept of PSO), to direct knowledge in DE and improve solution quality. MBDE was applied to continuous optimization problems. Then, DE-PSO-DE was proposed in 2016 by Parouha and Das [127], in which the population is divided into three groups (A, B, and C) executed in a parallel manner. Additionally, elitism (to retain the best obtained values) and non-redundant search (to improve solution quality) are involved in DE-PSO-DE. It is employed for solving economic load dispatch problems. A year later, in 2017, Famelis et al. devised DE-PSO [128], in which, to enhance diversity, DE is merged with the velocity-update rule of PSO. It was applied to multimodal optimization problems. Mao et al. in 2018 designed DEMPSO [129], in which DE is applied first to shrink the search space and the resulting population is then used as the initial population of a modified PSO (MPSO) to speed up the convergence rate. DEMPSO was used to obtain the numerical solution of the forward kinematics of a 3-RPS parallel manipulator. Tang et al. also proposed SAPSO–mSADE [130] in 2018. It integrates self-adaptive PSO (SAPSO), to balance the global and local search ability of particles, and modified self-adaptive DE (mSADE), to evolve the personal best positions and reduce potential stagnation. It was applied to the tension/compression spring design and three-bar truss design problems. Early in 2019, Too et al. developed binary particle swarm optimization differential evolution (BPSODE) [131]. It inherits the strengths of binary PSO (BPSO) and binary DE (BDE), which are executed in sequence. Additionally, a dynamic inertia weight and a dynamic crossover rate are introduced to track the optimal solution and balance diversity. This hybrid algorithm is used to tackle feature selection problems in EMG signal classification. Recently, in 2020, Dash et al. proposed HDEPSO [132]. In HDEPSO, three DE operations (mutation, modified crossover, and selection) are fused with the best particles of PSO to enhance global search ability. It was developed to solve the sharp edge FIR filter (SEFIRF) design problem. Since the diversity of quantum PSO (QPSO) declines rapidly, which shows its inadequacy, Zhao et al. developed an improved QPSO [133] in 2020, in which DE is introduced to improve its diversity and convergence rate. It is employed to solve the economic environmental dispatch (EED) problem of a microgrid.

3 Proposed methodology

In this section, the proposed advanced differential evolution (ADE), advanced particle swarm optimization (APSO), and advanced hybrid DEPSO (AHDEPSO) are described in detail.

3.1 Advanced differential evolution (ADE)

As per earlier extensive studies, the performance of DE mostly depends on the following features.

  1. (i).

    mutation strategy: as it improves the local search ability and convergence rate.

  2. (ii).

    crossover operator: since it increases the population diversity.

  3. (iii).

    control parameters (F and Cr): as they govern the search and/or exploration and exploitation ability.

Moreover, the above-mentioned features are essential for the efficiency of DE because they determine the cooperation mechanism among different individuals. Thus, new DE mutation strategies and crossover operators, as well as adjusted control parameters, will surely help to improve its robustness. Motivated by the above observations, and to avoid these shortcomings, advanced DE (ADE) is proposed in this paper, in which a modified mutation strategy and crossover rate, as well as an altered selection scheme, are introduced as follows.

$$ \mathbf{Mutation}:{v}_{i,j}^t={x}_{i,j}^t+F\times \mathit{\operatorname{rand}}\left(0,1\right)\times \left({best}_j-{x}_{i,j}^t\right) $$
(6)

where \( {x}_{i,j}^t \): target vector, \( {v}_{i,j}^t \): mutant vector, rand(0, 1): uniformly distributed random number between 0 and 1, bestj: best vector, and F: scaling factor, given as follows.

$$ F={F}_{min}+\left({F}_{max}-{F}_{min}\right)\left(\frac{t_{max}-t}{t_{max}}\right),{F}_{min}\epsilon\ \left[\mathrm{0.1,0.5}\right],{F}_{max}=1. $$

When F takes higher values it benefits the global search, and when F takes lower values it benefits the local search.

$$ \mathbf{Crossover}:{u}_{i,j}^t=\left\{\begin{array}{c}\ {v}_{i,j}^t;\mathrm{if}\ \mathit{\operatorname{rand}}\left(0,1\right)\le {C}_r\ \left(\mathrm{crossover}\ \mathrm{rate}\right)\ \\ {}\kern0.5em {x}_{i,j\kern0.5em }^t;\kern1.5em \mathrm{otherwise}\kern10.5em \end{array}\right. $$
(7)

In Eq. (7), Cr is set as \( 2-\mathit{\exp}\left({\left(\frac{t_{max}-t}{t_{max}-0.6t}\right)}^{10}\right) \). This assures individual diversity in the early stage, which improves global search ability, and reduces the degree of difference among individuals in the later stage, which accelerates the convergence rate.

$$ \mathbf{Selection}:{x}_{i,j}^{t+1}=\left\{\ \begin{array}{c}{x}_{i,j}^t;\kern0.5em \mathrm{if}\ f\left({u}_{i,j}^t\right)>f\left({x}_{i,j}^t\right)\ \mathrm{and}\ \mathit{\operatorname{rand}}\ \left(0,1\right)<p\\ {}{u}_{i,j}^t\kern0.5em ;\mathrm{otherwise}\kern13.75em \end{array}\right. $$
(8)

where f(·): fitness function value and p: random value in (0, 1]. In this selection, each pioneer vector gets a chance to survive and share its observed information with other vectors in the next steps, which enriches the searching capabilities. Moreover, it is advantageous for stabilizing the essential exploration and exploitation trends, encouraging ADE to converge to better-quality solutions. The pseudocode of the proposed ADE is presented below.

figure a
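To make the operators concrete, the ADE generation described by Eqs. (6)-(8) can be sketched in Python as follows. This is a minimal illustrative implementation, not the authors' code: the function name `ade`, the per-element random factors, the default Fmin = 0.3, and the bound clipping are our assumptions.

```python
import numpy as np

def ade(f, bounds, pop_size=30, t_max=500, f_min=0.3, p=0.5, seed=0):
    """Sketch of ADE: best-guided mutation (Eq. 6), time-varying F and Cr
    (Eq. 7), and the probabilistic selection of Eq. (8)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T           # bounds: list of (low, high) per dimension
    x = rng.uniform(lo, hi, (pop_size, lo.size))   # initial population
    fit = np.apply_along_axis(f, 1, x)
    for t in range(t_max):
        F = f_min + (1.0 - f_min) * (t_max - t) / t_max            # linearly decreasing F
        Cr = 2 - np.exp(((t_max - t) / (t_max - 0.6 * t)) ** 10)   # Cr schedule of Eq. (7)
        best = x[np.argmin(fit)]
        # Mutation (Eq. 6): move each target vector toward the current best
        v = x + F * rng.uniform(0, 1, x.shape) * (best - x)
        # Crossover (Eq. 7): dimension-wise binomial crossover
        u = np.clip(np.where(rng.uniform(0, 1, x.shape) <= Cr, v, x), lo, hi)
        fu = np.apply_along_axis(f, 1, u)
        # Selection (Eq. 8): a worse trial may still survive with probability 1 - p
        keep_old = (fu > fit) & (rng.uniform(0, 1, pop_size) < p)
        x = np.where(keep_old[:, None], x, u)
        fit = np.where(keep_old, fit, fu)
    i = np.argmin(fit)
    return x[i], fit[i]
```

Note that the best individual is never lost: its mutant equals itself in Eq. (6), so the incumbent optimum can only improve across generations.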

3.2 Advanced particle swarm optimization (APSO)

Based on the pros and cons along with existing assessments of PSO, it is essential to strike a good trade-off between global and local search to find an optimal solution. Preferably, PSO needs strong exploration ability (particles can roam the entire search space instead of clustering around the current best solution) in the early phase of the evolution and boosted exploitation capability (particles can search within a local region) in the later phase. In the velocity update equation of PSO, the inertia weight (w) and acceleration coefficients (c1 and c2) are the important factors for satisfying this requirement, based on the following concepts.

  1. (i).

    large and small values of w assist exploration and exploitation, respectively.

  2. (ii).

    the values of c1 and c2 facilitate exploitation and exploration of the search space based on the following strategies.

c1 (cognitive acceleration coefficient)   c2 (social acceleration coefficient)   tactics
large                                     small                                  exploration
slightly large                            slightly small                         exploitation
slightly small                            slightly large                         convergence
small                                     large                                  jumping out

Considering all of these concerns, namely the advantages, disadvantages, and parameter influences of PSO, an advanced particle swarm optimization (APSO) is introduced in this study. It relies on novel, gradually varying (decreasing and/or increasing) parameters (w, c1, and c2), stated as follows.

$$ w=\left(\frac{w_f-{w}_i}{1-{t}_{max}}\right)\left(t-{t}_{max}\right)+{w}_i;{C}_1=\left(\frac{C_{1f}-{C}_{1i}}{1-{t}_{max}}\right)\left(t-{t}_{max}\right)+{C}_{1i}\ \mathrm{and}\kern0.5em {C}_2=\left(\frac{C_{2f}-{C}_{2i}}{1-{t}_{max}}\right)\left(t-{t}_{max}\right)+{C}_{2i} $$
(9)

where wi and wf: initial and final values of w; c1i and c1f: initial and final values of c1; c2i and c2f: initial and final values of c2; t and tmax: iteration index and maximum number of iterations. Hence, the velocity and position of the ith particle are updated in the proposed APSO by the following equations.

$$ {v}_{i,j}^{t+1}=\left(\left(\frac{w_f-{w}_i}{1-{t}_{max}}\right)\left(t-{t}_{max}\right)+{w}_i\kern0.75em \right){v}_{i,j}^t+\left(\left(\frac{C_{1f}-{C}_{1i}}{1-{t}_{max}}\right)\left(t-{t}_{max}\right)+{C}_{1i}\right){r}_1\left({p}_{best\ i,j}^t-{x}_{i,j}^t\right)+\left(\left(\frac{C_{2f}-{C}_{2i}}{1-{t}_{max}}\right)\left(t-{t}_{max}\right)+{C}_{2i}\right){r}_2\left({g}_{best j}^t-{x}_{i,j}^t\right) $$
(10)
$$ {x}_{i,j}^{t+1}={x}_{i,j}^t+{v}_{i,j}^{t+1}\kern1em $$
(11)

The pseudocode of the proposed APSO is presented below.

figure b
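As an illustration, a minimal Python sketch of APSO with the schedules of Eq. (9) and the updates of Eqs. (10)-(11) follows. This is our interpretation, not the authors' code; observe that, by Eq. (9), each parameter equals its "final" value at t = 1 and its "initial" value at t = tmax, so with the recommended settings w effectively decreases from 0.9 to 0.4. The function name and the position clipping are assumptions.

```python
import numpy as np

def apso(f, bounds, pop_size=30, t_max=500,
         w_i=0.4, w_f=0.9, c1_i=0.5, c1_f=2.5, c2_i=2.5, c2_f=0.5, seed=0):
    """Sketch of APSO: gradually varying w, c1, c2 (Eq. 9) with the
    velocity/position updates of Eqs. (10)-(11)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (pop_size, lo.size))
    v = np.zeros_like(x)
    pbest, pfit = x.copy(), np.apply_along_axis(f, 1, x)
    g = np.argmin(pfit)
    gbest, gfit = pbest[g].copy(), pfit[g]
    for t in range(1, t_max + 1):
        # Eq. (9): linear ramp between final (t = 1) and initial (t = t_max) values
        ramp = (t - t_max) / (1 - t_max)
        w = (w_f - w_i) * ramp + w_i
        c1 = (c1_f - c1_i) * ramp + c1_i
        c2 = (c2_f - c2_i) * ramp + c2_i
        r1, r2 = rng.uniform(size=(2, pop_size, x.shape[1]))
        # Eqs. (10)-(11): velocity and position updates
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = np.apply_along_axis(f, 1, x)
        improved = fit < pfit
        pbest[improved], pfit[improved] = x[improved], fit[improved]
        g = np.argmin(pfit)
        if pfit[g] < gfit:
            gbest, gfit = pbest[g].copy(), pfit[g]
    return gbest, gfit
```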

3.3 Advanced hybrid DEPSO (AHDEPSO)

Introductory reviews and results show that hybrid algorithms improve the performances of DE and PSO because the two have complementary properties. Therefore, an advanced hybrid algorithm of the proposed ADE and APSO (AHDEPSO) is proposed to further improve solution quality. Basically, AHDEPSO is based on combining the superior capabilities of the proposed ADE and APSO.

In AHDEPSO, the entire population is sorted according to the fitness function value and divided into two sub-populations, pop1 (best half) and pop2 (rest half). Since pop1 and pop2 contain the best and remaining halves of the main population, they call for good local and global search capability, respectively. To maintain both capabilities, the proposed ADE (due to its good local search ability) is applied to pop1 and APSO (because of its good global search capability) to pop2. After evaluating both sub-populations, the better solutions obtained in pop1 (by ADE) and pop2 (by APSO) are named best and gbest, respectively. If best is less than gbest, pop2 is merged with pop1 and the merged population is evolved by ADE (as it mitigates potential stagnation); otherwise, pop1 is merged with pop2 and the merged population is evolved by APSO (as it is established to guide better movements). Finally, the optimal solution is reported if the stopping criterion is met; otherwise, the process returns to the sorting step. This whole process continues until the desired optimal solution is obtained. The flowchart of AHDEPSO is demonstrated in Fig. 3 and the pseudocode is described below.

Fig. 3
figure 3

Flowchart of AHDEPSO

figure c
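The control flow above can be summarized in a short Python sketch. For self-containment, `ade_step` and `apso_step` are passed in as generic one-generation update functions; `toy_step` below is a simple greedy stand-in used only for illustration (a complete implementation would use the full ADE/APSO of Sections 3.1-3.2 and carry PSO velocities across generations). All names are illustrative assumptions.

```python
import numpy as np

def ahdepso(f, bounds, ade_step, apso_step, pop_size=30, t_max=100, seed=0):
    """Control-flow sketch of AHDEPSO: sort, split into best/rest halves,
    evolve them with ADE/APSO, then evolve the merged population with
    whichever performed better this generation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    for _ in range(t_max):
        order = np.argsort(np.apply_along_axis(f, 1, pop))
        pop1, pop2 = pop[order[:pop_size // 2]], pop[order[pop_size // 2:]]
        pop1 = ade_step(f, pop1, lo, hi)    # best half: local refinement (ADE)
        pop2 = apso_step(f, pop2, lo, hi)   # rest half: global exploration (APSO)
        best = np.apply_along_axis(f, 1, pop1).min()
        gbest = np.apply_along_axis(f, 1, pop2).min()
        merged = np.vstack((pop1, pop2))
        step = ade_step if best < gbest else apso_step
        pop = step(f, merged, lo, hi)       # evolve merged population with the winner
    fit = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fit)], fit.min()

def toy_step(f, pop, lo, hi):
    """Illustrative stand-in for one generation: greedy move toward the best."""
    r = np.random.default_rng(1)
    fit = np.apply_along_axis(f, 1, pop)
    best = pop[np.argmin(fit)]
    trial = np.clip(pop + 0.7 * r.uniform(0, 1, pop.shape) * (best - pop), lo, hi)
    tfit = np.apply_along_axis(f, 1, trial)
    return np.where((tfit < fit)[:, None], trial, pop)
```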

4 Experimental results and discussions

In this section, the considered optimization problems and their experimental results are discussed as follows.

4.1 Test suite (TS) and real world problems (RWPs)

In order to evaluate the performance of the proposed ADE, APSO and AHDEPSO algorithms, the following unconstrained test suite (TS) and real world problems (RWPs) are considered.

  1. (i).

    TS-1: 23 basic benchmark functions

  2. (ii).

    TS-2: IEEE CEC 2017

  3. (iii).

    RWP-1: Gear train design problem

    $$ \operatorname{Minimize}\ f(x)={\left\{\frac{1}{6.931}-\frac{T_d{T}_b}{T_a{T}_f}\right\}}^2={\left\{\frac{1}{6.931}-\frac{x_1{x}_2}{x_3{x}_4}\right\}}^2;\mathrm{subject}\ \mathrm{to}:12\le {x}_i\le 600,i=1,2,3,4. $$
  4. (iv).

    RWP-2: Frequency modulation sound parameter identification problem

    $$ y(t)={a}_1\sin \left({w}_1t\mathrm{\varTheta}+{a}_2\sin \left({w}_2t\mathrm{\varTheta}+{a}_3\sin \left({w}_3t\mathrm{\varTheta}\right)\right)\right) $$
$$ {y}_0(t)=1.0\sin \left((5.0)t\mathrm{\varTheta}-(1.5)\sin \left((4.8)t\mathrm{\varTheta}+(2.0)\sin \left((4.9)t\mathrm{\varTheta}\right)\right)\right) $$

where: \( \mathrm{\varTheta}=\frac{2\pi }{100} \), and −6.4 ≤ ai, wi ≤ 6.35, i = 1, 2, 3. \( \operatorname{Minimize}\ f\left({a}_1,{w}_1,{a}_2,{w}_2,{a}_3,{w}_3\right)={\sum}_{t=0}^{100}{\left(y(t)-{y}_0(t)\right)}^2 \)

  1. (v).

    RWP-3: The spread spectrum radar poly-phase code design problem

    $$ \operatorname{Minimize}\ f(x)=\operatorname{Max}\left\{{f}_1(X),\dots, {f}_{2m}(X)\right\},X=\left\{\left({x}_1,\dots, {x}_n\right)\in {R}^n\left|0\le {x}_j\le 2\pi \right.,j=1,2,\dots, n\right\}\&m=2n-1 $$

with: \( {f}_{2i-1}(x)={\sum}_{j=1}^n\mathit{\cos}\left({\sum}_{k=\left|2i-j-1\right|+1}^j{x}_k\right)\ i=1,2,\dots, n; \)

$$ {f}_{2i}(x)=0.5+{\sum}_{j=i+1}^n\mathit{\cos}\left({\sum}_{k=\left|2i-j\right|+1}^j{x}_k\right)\ i=1,2,\dots, n-1;{f}_{m+i}(x)=-{f}_i(x),i=1,2,\dots, m. $$

The description of TS-1 is listed in Table 1, which consists of three groups: unimodal (f1-f7), multimodal (f8-f13) and fixed-dimension (f14-f23) functions. TS-2 is cited in Table 2 and consists of unimodal (h1-h3), multimodal (h4-h10), hybrid (h11-h20) and composition (h21-h30) functions. Moreover, the detailed summaries of TS-2 and the 3 RWPs are given in [134, 135] respectively.
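For reference, the RWP-1 and RWP-2 objectives can be coded directly from the formulas above. This is an illustrative sketch: the function names are ours, RWP-2 follows the standard formulation with w3 in the innermost sine, and the well-known candidate x = (19, 16, 43, 49) is used only to sanity-check the gear train objective.

```python
import math

def gear_train(x):
    """RWP-1: squared error between the gear ratio x1*x2/(x3*x4) and 1/6.931."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

def fm_objective(p, theta=2.0 * math.pi / 100.0):
    """RWP-2: squared error between the candidate FM sound y(t) and the target y0(t)."""
    a1, w1, a2, w2, a3, w3 = p
    err = 0.0
    for t in range(101):   # t = 0, 1, ..., 100
        y = a1 * math.sin(w1 * t * theta
                          + a2 * math.sin(w2 * t * theta
                                          + a3 * math.sin(w3 * t * theta)))
        y0 = 1.0 * math.sin(5.0 * t * theta
                            - 1.5 * math.sin(4.8 * t * theta
                                             + 2.0 * math.sin(4.9 * t * theta)))
        err += (y - y0) ** 2
    return err
```

By construction, fm_objective attains zero at (a1, w1, a2, w2, a3, w3) = (1.0, 5.0, -1.5, 4.8, 2.0, 4.9), since the candidate then reproduces y0 exactly.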

Table 1 Test Suite (TS)-1
Table 2 Test suite (TS)-2 (IEEE CEC2017 unconstrained benchmark functions)

Simulations were conducted on an Intel (R) Core (TM) i5-2350M CPU @ 2.30 GHz, RAM: 4.00 GB, Operating System: Microsoft Windows 10, C-Free Standard 4.0. An extensive analysis has been carried out to decide the values of the parameters wi, wf, c1i, c1f, c2i and c2f used in the proposed AHDEPSO. For this, the values of (wi, wf), (c1i, c2f) and (c1f, c2i) vary over (0.1-0.9, 0.1-0.9), (0.1-0.9, 0.1-0.9) and (2.1-2.9, 2.1-2.9) respectively, with a step length of 0.1. The success rate (defined below) of a total of 81 combinations of these parameters is checked using the proposed AHDEPSO with population size (30), stopping criterion (500 iterations) and independent runs (30) on test suites TS-1 and TS-2 (30D).

$$ \mathrm{Success}\ \mathrm{rate}\ \left(\mathrm{SR}\right)=\frac{\mathrm{Number}\ \mathrm{of}\ \mathrm{successful}\ \mathrm{runs}}{\mathrm{Total}\ \mathrm{number}\ \mathrm{of}\ \mathrm{runs}} $$

where a run is declared a ‘successful run’ if |f(x) − f(x*)| ≤ ∈, where f(x*) is the known global minimum and f(x) is the obtained minimum. In this study, ∈ is fixed at 0.0001.
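The success-rate computation can be sketched in a few lines; the function and argument names below are illustrative assumptions.

```python
def success_rate(run_minima, f_star, eps=1e-4):
    """SR = (number of runs with |f_hat - f_star| <= eps) / (total number of runs)."""
    hits = sum(1 for f_hat in run_minima if abs(f_hat - f_star) <= eps)
    return hits / len(run_minima)
```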

The success rates of the best 10 combinations of (wi, wf) and (c1i & c2f, c1f & c2i) are presented in Figs. 4 and 5 respectively. From these figures, it is clearly noticeable that the highest success rate is found at (0.4, 0.9) for the inertia weight and (0.5, 2.5) for the acceleration coefficients. Hence, wi = 0.4, wf = 0.9, c1i = 0.5, c1f = 2.5, c2i = 2.5 and c2f = 0.5 are recommended for the proposed AHDEPSO. The overall best values in each table are highlighted in boldface for the corresponding algorithms. In all experiments, for fair comparison, common parameters such as population size, stopping criterion and number of independent runs are set equal to, or at most the minimum of, those of the comparative algorithms. The results of the comparative algorithms are taken directly from the original references. The simulation result analysis on TS-1, TS-2 and RWPs with comparative experiments is presented below.

Fig. 4
figure 4

Influence of different inertia weights of proposed AHDEPSO on test suite (TS)-1 and TS-2

Fig. 5
figure 5

Influence of different acceleration coefficients of proposed AHDEPSO on test suite (TS)-1 and TS-2

4.2 Numerical and graphical analysis

The simulation result analysis on TS-1, TS-2 and RWPs with comparative experiments are presented below.

  1. i).

    On TS-1: 23 basic unconstrained benchmark functions

The results produced by the proposed algorithms on TS-1 are compared with traditional algorithms (PSO [3] & DE [4]), DE variants (JADE [136] & SHADE [137]), PSO variants (HEPSO [138] & RPSOLF [139]) and hybrid variants (FAPSO [140] & PSOSCALF [141]). The parameters of all the above compared and proposed algorithms are listed in Table 3. The comparative experimental results, in terms of mean, std. (standard deviation) and ranking of the objective function values over 30 independent runs, are presented in Table 4.

Table 3 Parameter setting for TS-1
Table 4 Simulation results on TS-1

It should be noted from Table 4 that the mean objective function values of the proposed ADE, APSO and AHDEPSO algorithms are better than and/or equal to those of the above-listed traditional algorithms, DE variants, PSO variants and hybrid variants. As per the experimental results shown in Table 4, the comparison against the non-proposed algorithms is summarized as follows for the TS-1 cases. (i) Unimodal functions (f1-f7): the proposed AHDEPSO obtained better results on all seven functions (f1-f7); the suggested ADE and APSO obtained better results for f1, f2, f3 and f6 and marginally similar results for the remaining functions. (ii) Multimodal functions (f8-f13): the proposed AHDEPSO obtained better results for all six functions (f8-f13), with similar results for f8 (vs. DE, JADE and PSOSCALF), f9 (vs. RPSOLF, FAPSO & PSOSCALF) and f11 (vs. RPSOLF & PSOSCALF). The suggested ADE attained better results for f8, f9 and f11 and marginally similar/inferior results for the rest, whereas APSO obtained better results for f9 and slightly inferior results for the rest. (iii) Fixed-dimension functions (f14-f23): the proposed AHDEPSO and ADE exhibit the best performance on all functions, while APSO obtained marginally better or equal results compared to the other algorithms. Moreover, all algorithms are individually ranked (as ‘1’ for the best, ‘2’ for the subsequent performer, and so on) in Table 4 based on the mean result values. From this table, it is concluded that AHDEPSO, ADE and APSO ranked 1st, 2nd and 3rd respectively. Also, the average and overall ranks of the proposed algorithms vs. the others are presented in Table 4. It is clear from the ranking that the performances of the proposed algorithms are superior to the others. Eventually, the proposed ADE, APSO and AHDEPSO produce a smaller std. (often 0.00E+00) for most of the cases on TS-1, which demonstrates their stability. 
Furthermore, the superiority of the proposed algorithms over the other algorithms is statistically validated through a one-tailed t-test (with 98 degrees of freedom (df) at the 5% significance level) and the Wilcoxon signed-rank (WSR) test (at the 5% significance level). The details of these tests can be found in [142]. The results of the t-test and WSR test on TS-1 are reported in Table 5. From Table 5, it can be seen that the proposed algorithms carry either an 'a' (significantly better than the other) or an 'a+' (highly significant against the other) sign in the t-test and perform better or equally in the WSR test in most cases. Also, the p values reported in Table 5 for the proposed algorithms are smaller than those of the others, which indicates that the simulations are reliable for the majority of runs.
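The one-tailed two-sample t-test used above can be sketched in a few lines of stdlib Python. The run results below are synthetic, and sample sizes of 50 runs per algorithm are assumed only to reproduce the stated 98 degrees of freedom; this is an illustrative sketch, not the paper's test harness.

```python
import math
import random

def one_tailed_t(sample_a, sample_b):
    """Pooled two-sample t statistic testing whether sample_a (a proposed
    algorithm's run results) has a lower mean than sample_b (a compared
    algorithm). Returns (t, degrees of freedom)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    # pooled variance estimate
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

random.seed(1)
runs_proposed = [random.gauss(0.10, 0.02) for _ in range(50)]  # synthetic
runs_other = [random.gauss(0.25, 0.05) for _ in range(50)]     # synthetic
t, df = one_tailed_t(runs_proposed, runs_other)
print(df)       # 98 degrees of freedom, as in the paper's setup
print(t < 0)    # negative t: the proposed algorithm's mean is smaller
```

A one-tailed decision then compares t against the critical value for df = 98 at the 5% level; the 'a'/'a+' labels in Table 5 summarize that comparison.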

Table 5 Statistical comparisons of proposed Vs other algorithms for TS-1

The convergence speed of the proposed and compared algorithms is compared over eight (f1, f5, f6, f7, f8, f9, f10 and f11) typical 30-D TS-1 functions. The convergence graphs (objective function value versus iteration) are presented separately in Fig. 6a-h. From these figures it can be concluded that the proposed ADE, APSO and AHDEPSO converge much faster than the other algorithms in all cases. Also, the optimal solutions of a total of 690 runs (30 runs for each TS-1 function with a population size of 30) are illustrated in Fig. 7, which shows that the proposed algorithms achieve the highest number of optimal solutions. Apart from this, the computational time of the proposed and compared algorithms on each TS-1 function is computed and presented in Fig. 8. From this figure, it can be observed that the proposed algorithms take less time to achieve the best value over the entire TS-1.
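A convergence graph of the kind shown in Fig. 6 plots the best-so-far objective value per iteration. The sketch below records that curve for a plain random-search stand-in (the actual ADE/APSO/AHDEPSO operators are not reproduced here), using the sphere function commonly labeled f1 in basic benchmark suites.

```python
import random

def track_convergence(objective, dim, pop_size, iterations, seed=0):
    """Record the best-so-far objective value after each iteration for a
    simple random-search baseline; real convergence graphs plot this same
    curve for every compared algorithm."""
    rng = random.Random(seed)
    best = float("inf")
    best_curve = []
    for _ in range(iterations):
        for _ in range(pop_size):
            x = [rng.uniform(-100, 100) for _ in range(dim)]
            best = min(best, objective(x))  # keep the best solution so far
        best_curve.append(best)
    return best_curve

sphere = lambda x: sum(v * v for v in x)  # sphere function (f1-style)
curve = track_convergence(sphere, dim=30, pop_size=30, iterations=50)
print(all(a >= b for a, b in zip(curve, curve[1:])))  # True: non-increasing
```

Plotting `curve` against the iteration index for each algorithm yields exactly the objective-value-versus-iteration graphs compared in Fig. 6a-h.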

Fig. 6
figure 6

a-h Convergence graph of compared and proposed algorithms for TS-1

Fig. 7
figure 7

Comparison of algorithms in finding the global optimal solution out of 690 runs

Fig. 8
figure 8

Processing times of algorithms for TS-1

Overall, the above numerical, statistical and graphical analysis shows that the proposed ADE, APSO and AHDEPSO perform very competitively with, or equally to, the other compared algorithms. However, among the three proposed algorithms AHDEPSO is superior, i.e., the ranking order of the proposed algorithms for solving TS-1 is AHDEPSO > ADE > APSO.

  1. ii).

    On TS-2: IEEE CEC 2017 unconstrained benchmark functions

Further, the results produced by the proposed ADE, APSO and AHDEPSO on TS-2 are compared with DE variants (MPEDE [143] and EFADE [144]), PSO variants (HEPSO [138] and CSPSO [145]) and hybrid variants (HPSODE [120] and PSOJADE [146]). The parameter settings of all compared and proposed algorithms for TS-2 are listed in Table 6. The comparative experimental results over 30 independent runs, in terms of mean error, standard deviation (std.) and ranking of the objective function values, are presented in Tables 7, 8 and 9 (for the 10D, 30D and 50D TS-2, respectively).

Table 6 Parameter settings for TS-2
Table 7 Simulation results on TS-2 (10D)
Table 8 Simulation results on TS-2 (30D)
Table 9 Simulation results on TS-2 (50D)

From Tables 7, 8 and 9, it should be noted that the mean values of the proposed ADE, APSO and AHDEPSO algorithms are better than or equal to those of all compared algorithms on the test suites. Based on the experimental results shown in Table 7, the comparisons for the 10D TS-2 cases are summarized as follows. (i) Unimodal functions (h1-h3): the proposed ADE, APSO and AHDEPSO exhibit the best performance on all three functions (h1-h3), with similar results for h1 (against MPEDE, EFADE, HPSODE and PSOJADE), h2 (against EFADE) and h3 (against MPEDE, EFADE, CSPSO, HPSODE and PSOJADE). (ii) Multimodal functions (h4-h10): the proposed AHDEPSO outperformed on all functions, performing equally for h4 (against EFADE and PSOJADE), h5 (against HEPSO), h6 (against MPEDE, CSPSO, HPSODE and PSOJADE) and h9 (against EFADE). The component ADE achieved better results on four functions (h4, h6, h7 and h8), whereas APSO gave the best result for h4; on the remaining functions they perform similarly or marginally worse. (iii) Hybrid functions (h11-h20): the proposed AHDEPSO achieved better results on five functions (h13, h14, h17, h19 and h20), similar results for h15 (against PSOJADE), and marginally inferior results against HEPSO (for h11) and EFADE (for h12, h16 and h20). At the same time, ADE and APSO obtained better results for h20 and marginally similar or inferior results for the remaining functions. (iv) Composition functions (h21-h30): the proposed AHDEPSO exhibits the best performance on six functions (h21, h22, h23, h25, h29 and h30), performs equally with HEPSO (for h29), and is marginally inferior to HEPSO (for h26, h27 and h28) and CSPSO (for h24). Meanwhile, ADE and APSO perform marginally better, similarly, or slightly worse than the other algorithms on a few functions.

Further, based on the experimental results shown in Table 8, the comparisons for the 30D TS-2 cases are summarized as follows. (i) Unimodal functions (h1-h3): the proposed AHDEPSO exhibits the best performance on all three functions (h1-h3), with similar results against EFADE (for h2) and CSPSO (for h3). ADE shows the best performance on h3 and performs equally or marginally worse than the other algorithms elsewhere, whereas APSO performs slightly worse than the other algorithms on all three functions. (ii) Multimodal functions (h4-h10): the proposed AHDEPSO achieved better results on five functions (h5, h6, h7, h8 and h9) and equal results against HEPSO (for h4) and PSOJADE (for h6 and h10). ADE achieved better results on two functions (h6 and h7) and is marginally inferior elsewhere, while APSO achieved similar or marginally inferior results on the other functions. (iii) Hybrid functions (h11-h20): the proposed AHDEPSO achieved better results on six functions (h13, h14, h16, h18, h19 and h20) and is marginally inferior to HEPSO (for h11 and h17), PSOJADE (for h12) and EFADE (for h15). At the same time, ADE obtained better results for h20, and APSO is marginally similar or inferior for the rest. (iv) Composition functions (h21-h30): the proposed AHDEPSO achieved the best performance on nine functions (h21-h29), performing equally with EFADE, HEPSO, CSPSO and PSOJADE (for h22) and with HEPSO (for h23 and h28), and marginally worse than PSOJADE (for h30). Meanwhile, ADE performs better on three functions (h22, h23 and h28), APSO achieves a better result on one function (h22), and both perform marginally better, similarly, or slightly worse than the other algorithms on a few functions.

Then, according to the experimental results given in Table 9, the comparisons for the 50D TS-2 cases are summarized as follows. (i) Unimodal functions (h1-h3): the proposed AHDEPSO exhibits the best performance on all three functions (h1-h3), with a similar result against EFADE (for h2). ADE achieves the best performance on h3 and gives equal or marginally inferior results against the other algorithms, while APSO performs slightly worse than the other algorithms on all three functions. (ii) Multimodal functions (h4-h10): the proposed AHDEPSO achieved better results on six functions (h4, h6, h7, h8, h9 and h10), equal results against PSOJADE (for h6), and marginally inferior results against HPSODE (for h5). ADE achieved better results on two functions (h6 and h7), while APSO achieved similar or marginally inferior results on the other functions. (iii) Hybrid functions (h11-h20): the proposed AHDEPSO outperformed on five functions (h16, h17, h18, h19 and h20) and is marginally inferior to HEPSO (for h11), PSOJADE (for h12 and h13) and EFADE (for h14 and h15). Meanwhile, ADE obtained better results only for h20, and APSO is marginally similar or inferior for the rest. (iv) Composition functions (h21-h30): the proposed AHDEPSO exhibits the best performance on eight functions (h21, h23, h24, h25, h26, h27, h29 and h30) and is marginally inferior to CSPSO (for h22) and EFADE (for h28). Meanwhile, ADE and APSO perform marginally better, similarly, or slightly worse than the other algorithms on a few functions.

Moreover, all algorithms are individually ranked in Tables 7, 8 and 9 based on the mean error values. From these tables, it is concluded that AHDEPSO, ADE and APSO rank 1st, 2nd and 5th (for 10D TS-2) and 1st, 2nd and 3rd (for 30D and 50D TS-2), respectively. The average and overall ranks of the proposed algorithms versus the others are also presented in Tables 7, 8 and 9; from these rankings it is clear that the performance of the proposed algorithms is superior. Ultimately, the proposed ADE, APSO and AHDEPSO produce a smaller std. for most of the TS-2 functions, which indicates their stability. Furthermore, the superiority of the proposed ADE, APSO and AHDEPSO is statistically validated over the others through a one-tailed t-test (with 98 degrees of freedom (df) at the 5% significance level) and the WSR test (at the 5% significance level). The results of the t-test and WSR test on TS-2 are reported in Table 10 (10D TS-2), Table 11 (30D TS-2) and Table 12 (50D TS-2). From these tables it can be clearly seen that the proposed algorithms carry either an 'a' or an 'a+' sign in the t-test and perform better or equally in the WSR test in most circumstances. Likewise, the small p values reported in Tables 10, 11 and 12 indicate reliable results for the majority of runs of the proposed algorithms on TS-2.
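The per-function ranking aggregated in Tables 7-9 can be sketched as follows. The mean-error values below are hypothetical, and ties are broken by the order in which algorithms are listed rather than by shared average ranks.

```python
def rank_algorithms(mean_errors):
    """Rank algorithms on each function (1 = lowest mean error) and
    aggregate into an average rank, as done per table. `mean_errors`
    maps algorithm name -> list of mean errors, one per function."""
    names = list(mean_errors)
    n_funcs = len(next(iter(mean_errors.values())))
    totals = {name: 0.0 for name in names}
    for j in range(n_funcs):
        # sort algorithms by mean error on function j; best gets rank 1
        order = sorted(names, key=lambda n: mean_errors[n][j])
        for pos, name in enumerate(order, start=1):
            totals[name] += pos
    return {name: totals[name] / n_funcs for name in names}

# hypothetical mean errors on three benchmark functions
means = {"AHDEPSO": [1e-8, 2e-4, 0.3],
         "ADE":     [1e-6, 5e-4, 0.4],
         "APSO":    [1e-5, 9e-4, 0.9]}
avg = rank_algorithms(means)
print(avg)  # {'AHDEPSO': 1.0, 'ADE': 2.0, 'APSO': 3.0}
```

The overall rank reported in the tables is then obtained by ordering the algorithms by this average rank.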

Table 10 Statistical comparisons of proposed Vs other algorithms for TS-2 (10D)
Table 11 Statistical comparisons of proposed Vs other algorithms for TS-2 (30D)
Table 12 Statistical comparisons of proposed Vs other algorithms for TS-2 (50D)

The convergence speed of the proposed ADE, APSO and AHDEPSO algorithms relative to the others is analyzed on the 10, 30 and 50D TS-2. For this purpose, one function from each category of TS-2 (h3, h9, h20 and h29) is taken. The convergence graphs for all such functions are shown in Fig. 9a-l for TS-2. It can be observed from these figures that ADE, APSO and AHDEPSO converge better than the other algorithms. Apart from this, the average error values of the proposed algorithms for the 10, 30 and 50D TS-2 have been analyzed through box plots, presented in Fig. 10a-c. From these figures, it is clearly visible that the average error values of the proposed algorithms are better than those of the others.

Fig. 9
figure 9

a-l Convergence curves for TS-2 (10D, 30D & 50D)

Fig. 10
figure 10

a-c Average function error values of TS-2 (10D, 30D & 50D)

Furthermore, to check the performance of the proposed algorithms, the ECD (empirical cumulative distribution) test [142] is applied to the 10, 30 and 50D TS-2, and the SP/SPbest versus empirical distribution graph is plotted in Fig. 11a-c over all TS-2 functions. These figures confirm that the proposed algorithms outperform the other compared algorithms. Besides, the success rate distributions for the 10, 30 and 50D TS-2 are evaluated and presented in Fig. 12; the graphical representation shows that the proposed ADE, APSO and AHDEPSO are competitively higher than the others. Moreover, spider charts (in which a distribution closer to the circle's center indicates better performance) are employed on the 10, 30 and 50D TS-2 to examine the performance differences among all algorithms more intuitively. These charts are visualized in Fig. 13a-c, from which it can be observed that the proposed algorithms perform competitively on most of the functions. Additionally, the significance of the proposed algorithms against the others for the 10, 30 and 50D TS-2 is evaluated through the Bonferroni-Dunn procedure [147], applied as a post hoc test to calculate the critical difference (CD). In this test, solid and dotted lines represent the thresholds for the control algorithm (here AHDEPSO) at the two prevalent significance levels of 0.05 and 0.1; the performance of two algorithms is significantly different if the difference in their average rankings is larger than the CD. The Bonferroni-Dunn bar chart of the different algorithms on the average rankings (obtained by the Friedman test) for TS-2 is reported in Fig. 14, where it can be observed that, among all compared algorithms, AHDEPSO significantly outperforms the others at both significance levels. Also, the average execution time over 30 independent runs for the 10, 30 and 50D TS-2 is reported in Fig. 15a-c through box plots, from which it is clearly visible that the average execution time of the proposed algorithms is less than that of the others.
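The critical difference used by the Bonferroni-Dunn procedure has a simple closed form, CD = q_alpha * sqrt(k(k+1)/(6N)) for k algorithms compared over N functions. The q value below is illustrative and should be read from a Bonferroni-Dunn critical-value table for the actual number of compared algorithms; the k and N values are likewise only an example.

```python
import math

def critical_difference(q_alpha, k, n):
    """Bonferroni-Dunn critical difference: two algorithms differ
    significantly if their Friedman average ranks differ by more than
    this value (k algorithms compared over n benchmark functions)."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# illustrative values: 9 algorithms, 30 CEC 2017 functions,
# q for the 0.05 level taken from a Bonferroni-Dunn table
cd = critical_difference(q_alpha=2.724, k=9, n=30)
print(round(cd, 3))
```

In Fig. 14 this CD, added to the control algorithm's average rank, gives the solid (0.05) and dotted (0.1) threshold lines.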

Fig. 11
figure 11

a-c Empirical distribution of normalized success performance on TS-2 (10D, 30D & 50D)

Fig. 12
figure 12

Success performance of the algorithms over TS-2 (10D, 30D & 50D)

Fig. 13
figure 13

a-c Spider performance charts of different algorithms on TS-2 (10D, 30D & 50D)

Fig. 14
figure 14

Bonferroni Dunn bar chart of different algorithms on average rankings obtained by Friedman test for TS-2 (10D, 30D & 50D)

Fig. 15
figure 15

Average running time of different algorithms for TS-2 (10D, 30D & 50D)

In general, from all the above result analysis it can be stated that the proposed ADE, APSO and AHDEPSO perform better than or equally to the others. However, among the three proposed algorithms, AHDEPSO shows the greatest competence.

  1. iii).

    On RWPs: Real world problems

The results of the proposed ADE, APSO and AHDEPSO algorithms on the 3 RWPs are compared with traditional algorithms (PSO [3], DE [4] and ABC [13]), DE variants (JADE [136], SaDE [148] and CoDE [149]), PSO variants (SLPSO [150], HCLPSO [151] and MSPSO [152]), and hybrid variants (MBDE [126], DEPSO-2S [135] and DPD [142]). The parameter settings of all compared and proposed algorithms are listed in Table 13. The comparative experimental results in terms of best, worst, mean, standard deviation (std.), p values and average ranking of the objective function values are presented in Table 14. From Table 14, the proposed ADE, APSO and AHDEPSO produce better or equal results in terms of the best, worst and mean cases on all RWPs.

Table 13 Parameter setting for real world problems
Table 14 Simulation results on real world problems

Additionally, the smaller std. and p values of the proposed algorithms in most cases imply better stability and reliable results for the majority of runs, respectively. Also, to analyze their performance, all algorithms are ranked by mean value in Table 14. From this table, it can be seen that AHDEPSO and ADE rank 1st on all RWPs, whereas APSO secured 1st, 2nd and 4th rank on RWP-1, RWP-2 and RWP-3, respectively. Hence, based on the ranking results, the proposed algorithms perform better than the others. Moreover, the convergence graphs of all proposed and compared algorithms are plotted in Fig. 16a-c, in which it can be clearly seen that the proposed algorithms converge faster than the others. Hence, the proposed algorithms are computationally efficient.

Fig. 16
figure 16

a-c Convergence graph for real world problems

All in all, from the above numerical, statistical and graphical analysis it can be stated that the proposed ADE, APSO and AHDEPSO perform very competitively with, or equally to, the other compared algorithms. However, among the three proposed algorithms, AHDEPSO has the greatest efficiency.

4.3 Complexity analysis

The complexity of the proposed algorithms is analyzed as follows.

  1. i).

    Algorithm complexity

According to the guidelines of the CEC test suite, the algorithm complexity has been investigated on the 10, 30 and 50D TS-2. First, the time T0 (in seconds) is measured by executing the following program.

figure d

After this, the time T1 is computed on h18 (a TS-2 function). Further, the time T2 for each algorithm to completely execute the same function over 200,000 evaluations is measured five times, and its mean value, denoted \( {\hat{T}}_2 \), is stored. Thereafter, \( \frac{{\hat{T}}_2-{T}_1}{T_0} \) is calculated for each algorithm and reported in Table 15. The complexity chart of the proposed algorithms against the others is also presented in Fig. 17. It can be observed from Table 15 and Fig. 17 that the time complexity of the proposed algorithms is significantly less than that of the other compared algorithms.
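The \( \frac{{\hat{T}}_2-{T}_1}{T_0} \) measurement can be sketched as below. The T0 loop here uses only a subset of the arithmetic operations prescribed by the CEC guidelines, and the timed workloads are dummy stand-ins for the benchmark function evaluations (T1) and the full algorithm run (T2); the point is the timing protocol, not the workloads.

```python
import time

def t0_benchmark():
    """T0: a reference loop of simple arithmetic operations (a reduced
    stand-in for the CEC reference program)."""
    start = time.perf_counter()
    x = 0.55
    for _ in range(1_000_000):
        x += x; x /= 2; x *= x
        x = x ** 0.5
    return time.perf_counter() - start

def timed(fn, repeats=1):
    """Average wall-clock time of fn over `repeats` executions; used for
    T1 (bare function evaluations) and T2 (full algorithm execution)."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / repeats

T0 = t0_benchmark()
# dummy workloads: T2's stand-in does roughly twice the work of T1's
T1 = timed(lambda: [sum(i * i for i in range(200)) for _ in range(2000)])
T2_hat = timed(lambda: [sum(i * i for i in range(200)) for _ in range(4000)],
               repeats=5)  # mean of 5 runs, as in the protocol
complexity = (T2_hat - T1) / T0
print(complexity > 0)
```

Each algorithm's reported complexity in Table 15 is this ratio: algorithm overhead beyond bare function evaluations, normalized by the machine's T0.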

  1. ii).

    Time complexity

Table 15 Algorithm complexity results on TS-2
Fig. 17
figure 17

Algorithm complexity on TS-2 (10D, 30D & 50D)

According to the pseudo-code, AHDEPSO has the following time complexity.

  1. a).

    Initialization of np-population requires O(np.D) time.

  2. b).

Evaluating and sorting the population according to fitness function values needs O(tmax × np) time.

  3. c).

Dividing the population into pop1 and pop2 requires O(tmax × np) time.

  4. d).

    Evaluation of pop1 (by ADE) and pop2 (by APSO) takes O(tmax × \( \frac{np}{2} \)×\( \frac{np}{2} \)) = O(tmax × \( \frac{np^2}{4} \)) time.

  5. e).

    Merging pop and implementation of algorithm requires O(tmax × np × np)= O(tmax × np2) time.

Therefore, the total time complexity of AHDEPSO for maximum number of iterations is

$$ O(np.D)+O\left({t}_{max}\times np\right)+O\left({t}_{max}\times np\right)+O\left({t}_{max}\times \frac{np^2}{4}\right)+O\left({t}_{max}\times {np}^2\right)=O\left({t}_{max}\times {np}^2\times D\right) $$
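The steps a)-e) above correspond to the following structural skeleton of the hybrid loop. The ADE and APSO update operators are replaced by hypothetical random perturbations, since only the sort/split/evolve/merge structure (not the paper's actual operators) is being illustrated.

```python
import random

def ahdepso_skeleton(objective, dim, np_, t_max, seed=0):
    """Structural sketch of the AHDEPSO loop from steps a)-e); the real
    ADE and APSO update rules are stubbed with random perturbations."""
    rng = random.Random(seed)
    # step a): initialize an np-sized population of D-dimensional vectors
    pop = [[rng.uniform(-100, 100) for _ in range(dim)] for _ in range(np_)]
    for _ in range(t_max):
        pop.sort(key=objective)              # step b): evaluate and sort
        half = np_ // 2
        pop1, pop2 = pop[:half], pop[half:]  # step c): divide population
        # step d): evolve each half (stand-ins for ADE and APSO updates)
        pop1 = [[v + rng.gauss(0, 0.1) for v in x] for x in pop1]
        pop2 = [[v + rng.gauss(0, 0.5) for v in x] for x in pop2]
        pop = pop1 + pop2                    # step e): merge sub-populations
    return min(pop, key=objective)

best = ahdepso_skeleton(lambda x: sum(v * v for v in x),
                        dim=5, np_=20, t_max=50)
print(len(best))  # 5: a D-dimensional solution vector
```

The per-iteration sort, split, evolve and merge operations in this skeleton are the terms summed in the total-complexity expression above.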
  1. iii).

    Space complexity

The space complexity of the proposed AHDEPSO algorithm is the maximum amount of space used by the above procedures. Thus, the total space complexity of AHDEPSO is O(max(np, np, np, \( \frac{np^2}{4} \), np2) × D) = O(np2 × D).

5 Conclusion with future works

In this study, a comprehensive survey of various recent traditional algorithms, DE and PSO variants, and their hybrids has been presented along with their fields of application. Following this, an advanced DE (ADE, to avoid premature convergence), an advanced PSO (APSO, to avoid stagnation), and their hybrid (AHDEPSO, to balance exploration and exploitation) have been proposed for unconstrained optimization problems. A brief summary of these proposed algorithms is given as follows.

  1. (i).

The novel mutation strategy, crossover probability and altered selection scheme of ADE provide high population diversity at the start of the algorithm and low diversity at the end.

  2. (ii).

The novel gradually varying (decreasing and/or increasing) parameters of APSO balance the exploration and exploitation capabilities well and encourage particles to search for high-quality solutions.

  3. (iii).

AHDEPSO yields guaranteed convergence and diversified solutions owing to the different convergence characteristics of ADE and APSO, its multi-population approach, and the merging of the divided sub-populations in a pre-defined manner.
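The gradually varying parameters summarized in (ii) can be illustrated with a simple linear schedule. The 0.9-to-0.4 inertia-weight range below is a common PSO convention used purely as an example; APSO's actual schedules are defined in the paper and may differ.

```python
def linear_schedule(start, end, t, t_max):
    """Gradually varying parameter value at iteration t, moving linearly
    from `start` (iteration 0) to `end` (iteration t_max). A decreasing
    schedule favors exploration early and exploitation late."""
    return start + (end - start) * t / t_max

t_max = 100
# e.g. an inertia weight decreasing from 0.9 to 0.4 over the run
w = [linear_schedule(0.9, 0.4, t, t_max) for t in range(t_max + 1)]
print(w[0], w[-1])  # 0.9 0.4
```

An increasing parameter (e.g. a social-learning coefficient) uses the same function with `start < end`.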

Further, the effectiveness of the proposed algorithms was tested on TS-1 (basic unconstrained benchmark functions) and TS-2 (IEEE CEC 2017 functions) along with 3 RWPs (real-world problems). The numerical, statistical and graphical analyses of the proposed algorithms were compared against traditional DE and PSO, their recent variants and hybrids, and many state-of-the-art algorithms. The comparative results show that the proposed algorithms are robust and effective. Thus, it can be concluded that the proposed algorithms can be treated as viable alternatives in the field of EAs. Moreover, in view of feasibility, superiority and solution optimality, AHDEPSO outperformed the other proposed algorithms. In future work, the effectiveness of the proposed algorithms can be tested on more complicated real-world applications, and new EAs will be developed.