1 Introduction

The Differential Evolution (DE) algorithm, proposed by Storn and Price [1, 2], is well known for its exploration capability, governed by three control parameters: the mutation scale factor (F), the crossover rate (Cr) and the population size (NP). In recent years, various research works have showcased the capability of DE in solving diverse optimization problems such as image segmentation [3, 4], pattern recognition [5, 6], and function optimization [7, 8]. All these papers give ranges of recommended values for the control parameters. The values of the control parameters determine the efficiency of the Differential evolution algorithm in finding the optimal solution for a given problem. However, choosing the right parameter values by trial and error is often a time-consuming approach. Tuning these control parameters is simple but problem-specific, and the tuned parameters vary during the evolution process [9].

Several variants of the Differential evolution algorithm have been proposed in the literature, highlighting the influence of these parameters. Liu et al. [10] proposed the fuzzy adaptive differential evolution algorithm (fADE), which introduces new adaptive mutation and crossover parameters based on fuzzy logic. Another novel version of the Differential evolution algorithm, JADE, was proposed by Zhang et al. [11]; it introduces a new mutation strategy with adaptive control parameters and an optional external archive. Wang et al. [12] proposed another variant of the Differential evolution algorithm, named IMSaDE, by designing a new adaptive mutation strategy that improves the “DE/rand/2” mutation strategy. Alswaitti et al. [13] proposed a novel variance-based crossover strategy for improving the convergence rate. Ramadas et al. [14] proposed a new mutation strategy, named FSDE, which adds two parameters for controlling mutation, viz. a variable and a constant parameter. However, the problem of finding the right control parameter values still exists, and in most cases it still depends on past experience.

The particle swarm optimization (PSO) algorithm is also considered an important and effective evolutionary algorithm. Proposed by Kennedy and Eberhart [15], it relies on the individual best solution, pbest, and the global best solution, gbest, to guide the search for global optima. The algorithm is known for its fast convergence speed, few initialization parameters and ease of implementation in complex optimization problems [16,17,18]. However, the main demerit of PSO is its tendency to fall into local optima during the early evolution stage, compared with other evolutionary algorithms, and its lack of population diversity, which often leads to premature convergence during the later evolution stage [19, 20].

To overcome the demerits of both DE and PSO, efforts have been made to maintain a better balance between the exploration and exploitation capabilities of the two algorithms by hybridizing them; this is a growing area of research whose objective is to mitigate the weaknesses of the individual algorithms. To balance the exploration and exploitation capabilities of DE and PSO, a hybrid variant named HCPSODE was proposed by Lin et al. [21], which introduces a chaotic map with a larger Lyapunov exponent and a new nonlinear scheme for reducing the inertia weight. Wang et al. [22] proposed another DE and PSO hybridization strategy that evades the suboptimal solutions of previous approaches; a collective mutation strategy is developed to maintain diversity and convergence speed. Wang et al. [23] introduced another hybridization approach to maintain DE's global exploration and local exploitation, in which the DE mutation and crossover strategy is improved by adding PSO mutation with no additional resources required. Pérez-González et al. [24] proposed a greenhouse model based on PSO and DE. Ahmadianfar et al. [25] proposed another hybrid variant of DE and PSO with a multi-strategy scheme that helps in exploring the local and global search capabilities. Dash et al. [26] proposed a hybrid DEPSO for designing optimal FIR filters, in which the DE control parameters are combined with fine-tuned PSO parameters. Even though these ensemble approaches improve the exploration and exploitation capabilities of both algorithms, they require more computational resources, need fine-tuning of complex parameters and may lead to premature convergence.

In this study, an effort has been made to solve the problems discussed above by maintaining a better balance between the exploration and exploitation capabilities of both algorithms through an alternative hybridized variant of Differential evolution and PSO. In our proposed approach, a sigmoid-based selection probability decides between the two mutation strategies. During the early phase of evolution, the \(DE/rand/2\) mutation strategy is selected with high probability, which enlarges the search space, helps in finding more encouraging solutions and thus avoids premature convergence. During the subsequent phase of the evolution process, the value of the sigmoid function reduces as the number of iterations increases, so there is a greater probability of applying the PSO mutation strategy; the sigmoid function thus helps in improving the precision and convergence speed. Although the DE strategy alone can also perform mutation and contributes to convergence speed and precision, the sigmoid function ensures high precision and convergence speed in our proposed algorithm. In addition, parameters are archived and compared in each iteration during the evaluation process so that they are tracked and updated. This mutual cooperation between the self-adaptive mutation approach and the parameter archival strategy enhances the performance of our proposed hybrid approach.

The proposed approach differs from past efforts: previous approaches use the original \(DE/rand/1\) mutation strategy with complex rules added to several features of the hybrid structure, resulting in complex hybrid schemes, whereas our approach uses the \(DE/rand/2\) strategy with a sigmoid function to improve convergence speed and accuracy. This makes our algorithm simpler, easier to implement and easily extensible. Another difference from previous work lies in how the mutation strategy is chosen. In previous approaches, there is strong randomness in the evolution process, which results in a lack of requisite guidance. The proposed DEPSO approach, by contrast, offers better guidance: the \(DE/rand/2\) mutation strategy is used in the early stage to expand and explore all regions, and PSO is used in the later stage to search the promising regions found during the initial stages, thus improving the local exploitation capability.

The proposed approach is tested on 50- and 25-dimensional test functions and compared with conventional DE, PSO and a recently proposed variant of the Differential evolution algorithm, the Forced Strategy Differential Evolution used for data clustering (FSDE) [14]. We have also compared our approach with another hybrid variant of differential evolution and PSO (HDEPSO) [26]. Further, we have compared our algorithm with three well-known variants: Self-Adapting Control Parameters in Differential Evolution (jDE) [31], the self-adaptive differential evolution algorithm (SaDE) [27] and JADE, Adaptive Differential Evolution with Optional External Archive [11]. Similar to our work, all these variants develop a new mutation strategy based on tuning the control parameters. We have chosen 15 benchmark test functions to evaluate the performance of our proposed approach, and have performed Friedman's test to validate its statistical performance.

This paper is organized as follows: related work is reviewed in Sect. 2; DE and PSO are briefly described in Sects. 3 and 4 respectively; our proposed method is detailed in Sect. 5; Sect. 6 presents the experimental settings, results and discussion; finally, Sect. 7 concludes the paper.

2 Related work

DE is a well-known evolutionary algorithm and has attracted many researchers across different domains due to its simplicity and efficiency. The values of the control parameters determine the efficiency of the differential evolution algorithm in finding the optimal solution for a given problem, and various modifications of classical DE have been proposed to improve its efficiency. Two different schemes of population initialization were proposed by Ali et al. [28]. A cluster-based population initialization approach was proposed by Poikolainen et al. [29], in which three successive pre-processing steps were introduced to improve the efficiency of DE. Another population initialization approach was proposed by Sun et al. [30], based on a novel fluctuant population strategy in which the population size is adjusted during each run.

Researchers have also worked on the mutation strategy of the differential evolution algorithm to improve its efficiency. Wang et al. [12] proposed a variant of the Differential evolution algorithm, named IMSaDE, by designing a new adaptive mutation strategy that improves the “DE/rand/2” mutation strategy. Brest et al. [31] proposed Self-Adapting Control Parameters in Differential Evolution (jDE) as a new variant of DE. Qin et al. [27] proposed the self-adaptive differential evolution algorithm (SaDE), in which four different DE mutation strategies (\(DE/rand/1\), \(DE/rand-to-best/2\), \(DE/rand/2\) and \(DE/current-to-rand/1\)) form a candidate pool and a strategy is selected for each individual according to its success rate in generating better offspring. Zhang et al. [11] proposed JADE, Adaptive Differential Evolution with Optional External Archive, with the novel mutation strategy “DE/current-to-pbest”; in this approach the parameters are adjusted by archival and adaptive strategies. Shao et al. [32] proposed an improved utilization strategy for population information, in which the best and worst vectors generated during the evolution phase are calculated and then utilized during crossover and mutation to replace the parent vector, improving the performance of DE. Annepu et al. [33] proposed a self-adaptive approach to improve the convergence speed based on the mutation factor and crossover probability. Another approach was proposed by Zhang et al. [34], in which a novel inter-vehicle mutation strategy, a novel single-point crossover and a route-sensitive selection approach are used to improve efficiency.

Hybridizations of evolutionary algorithms are superior to standalone algorithms, as they can overcome the weaknesses of the individual algorithms without losing their advantages [35]. Sun et al. [36] hybridized DE with the estimation of distribution algorithm (EDA), in which the global parameter values of EDA and DE are utilized to produce optimized solutions. Wang et al. [37] proposed a hybrid approach based on DE and the Nelder–Mead (NM) simplex search, combining the local search ability of NM with the evolutionary search of DE to obtain robust and effective results. Guo et al. [38] proposed an enhanced self-adaptive differential evolution (ESADE), in which DE is hybridized with simulated annealing during the selection operation; this hybridization improves the global search ability of DE, and the experimental results show improved performance of ESADE over the compared algorithms. Keshk et al. [39] proposed another variant of DE combined with a hidden Markov model (HMM), in which the HMM is used to compress the information of the given population and the model is utilized to adjust the DE parameters; the comparison results show that the DE-HMM hybrid performs better than the compared algorithms.

There are many hybrid variants that combine DE with PSO. Tian et al. [40] proposed a hybrid of DE and PSO that utilizes the mutation and crossover operators of DE and presents a novel mutation strategy replacing PSO's velocity and position update; a random neighboring individual is also selected for crossover, and the operator selection is fixed for the whole evolutionary cycle. Wang et al. [22] proposed a hybrid of DE and PSO that avoids the suboptimal solutions of previous approaches: sub-populations are generated from the initial population by both DE and PSO, and the best particles constitute the initial population for the DE operation. Since two sub-populations are generated in each iteration, the space complexity of this algorithm increases. Dash et al. [26] presented an approach combining DE with PSO (HDEPSO) in which the best features of both algorithms are utilized; however, this approach requires more computational resources, and a specific number of parameters must be fine-tuned, which may increase the average execution time of the algorithm.

3 Differential evolution algorithm (DE)

Storn and Price [1, 2] first developed the Differential evolution algorithm, which has been utilized for solving various continuous and discrete optimization problems in different domains. As DE is a good optimization technique, this work is also inspired by their basic approach. In the rest of this section, the basic terminology and operators of DE are described.

3.1 Initialization

Let the population of size \(NP\) consist of candidate solutions \({X}_{id}^{I}\), where \(i=1, 2,\dots, NP\) indexes the individual in generation \(I\) and \(d\) indexes the dimension. The Differential evolution algorithm mainly depends on the mutation, crossover and selection operators, which are discussed in the following subsections; an illustrative initialization is sketched below.
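As a concrete starting point, the following minimal sketch (in Python; our illustration, not the authors' code) initializes such a population uniformly at random within assumed scalar box constraints \([X_{min}, X_{max}]\):

```python
import numpy as np

# Uniform random initialization of an NP x D population inside the box
# [x_min, x_max]; scalar bounds are an assumption made for simplicity.
def initialize_population(np_size, dim, x_min, x_max, seed=None):
    rng = np.random.default_rng(seed)
    return x_min + rng.random((np_size, dim)) * (x_max - x_min)
```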

3.2 Mutation

Mutation is the operator that distinguishes DE from other evolutionary algorithms. It generates the mutant (donor) vector \({V}_{id}^{I}\), which is later used to produce offspring; to generate \({V}_{id}^{I}\), mutation is applied with respect to each individual vector. In DE, a mutation scheme is commonly denoted \(DE/x/y/z\), where x is the base vector, y is the number of difference vectors and z is the crossover arrangement. The DE mutation approaches are implemented as below.

$$\begin{gathered} DE/rand/1 : \hfill \\ \quad V_{id}^{I + 1} = X_{r1d}^{I} + F.\left( {X_{r2d}^{I} - X_{r3d}^{I} } \right) \hfill \\ DE/best/1: \hfill \\ \quad V_{id}^{I + 1} = X_{best}^{I} + F.\left( {X_{r1d}^{I} - X_{r2d}^{I} } \right) \hfill \\ DE/current to best /2: \hfill \\ \quad V_{id}^{I + 1} = X_{r1d}^{I} + F.\left( {X_{best}^{I} - X_{r1d}^{I} } \right) + F.\left( {X_{r1d}^{I} - X_{r2d}^{I} } \right) \hfill \\ DE/best/2: \hfill \\ \quad V_{id}^{I + 1} = X_{best}^{I} + F.\left( {X_{r1d}^{I} - X_{r2d}^{I} } \right) + F.\left( {X_{r3d}^{I} - X_{r4d}^{I} } \right) \hfill \\ DE/rand/2: \hfill \\ \quad V_{id}^{I + 1} = X_{r1d}^{I} + F.\left( {X_{r2d}^{I} - X_{r3d}^{I} } \right) + F.\left( {X_{r4d}^{I} - X_{r5d}^{I} } \right) \hfill \\ \end{gathered}$$
(1)

where F is a positive constant known as the scaling factor, in the range [0, 2]; the mutant vector is \({V}_{id}^{I}\); and \({X}_{r1d}^{I}, {X}_{r2d}^{I}, {X}_{r3d}^{I}, {X}_{r4d}^{I}, {X}_{r5d}^{I}\) are vectors selected by mutually exclusive random indices. The random indices \({r}_{1}, {r}_{2}, {r}_{3}, {r}_{4}, {r}_{5}\) are chosen such that \({r}_{1}\ne {r}_{2}\ne {r}_{3}\ne {r}_{4}\ne {r}_{5}\ne i\). The best individual of generation \(I\) is denoted \({X}_{best}^{I}\).
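To make the notation concrete, here is a minimal sketch (our illustration, not the paper's code) of the \(DE/rand/2\) rule from Eq. (1); it assumes \(NP \ge 6\) so that the five indices and \(i\) can all be distinct:

```python
import numpy as np

# DE/rand/2 mutation from Eq. (1): a random base vector plus two scaled
# difference vectors, with all five indices distinct from each other and i.
def mutate_rand_2(pop, i, F, rng):
    candidates = [j for j in range(pop.shape[0]) if j != i]
    r1, r2, r3, r4, r5 = rng.choice(candidates, size=5, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
```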

3.3 Crossover

After mutation, the mutant vector \({V}_{id}^{I}\) and the parent vector \({X}_{id}^{I}\) are combined to produce the offspring, also known as the trial vector \({S}_{id}^{I}\). This process is called crossover and is implemented as below.

$$S_{id}^{I} = \left\{ \begin{gathered} V_{id}^{I} \,if \,rand \le Cr \hfill \\ X_{id}^{I} \,otherwise \hfill \\ \end{gathered} \right.$$
(2)

where \(Cr\) is the crossover rate, with value in the range [0, 1], and \(rand\) is a random number in the range [0, 1]. This mechanism ensures that the trial vector \({S}_{id}^{I}\) obtains at least one component from the mutant vector, which helps avoid stagnation during the evolutionary process. During the evolutionary process, some elements of the trial vector may deviate from the feasible solution space due to the impact of mutation; infeasible components are therefore reset as follows.

$$S_{id}^{I} = \left\{ \begin{gathered} X_{min} + rand\left( {0,1} \right)*\left( {X_{max} - X_{min} } \right)\,if\,S_{id}^{I} \notin \left[ {X_{min} ,{ }X_{max} } \right] \hfill \\ S_{id}^{I} \,otherwise \hfill \\ \end{gathered} \right.$$
(3)
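A compact sketch combining Eqs. (2) and (3) (our illustration, assuming scalar box bounds) could look as follows:

```python
import numpy as np

# Binomial crossover (Eq. 2) followed by the boundary reset of Eq. (3):
# out-of-box components are re-drawn uniformly inside [x_min, x_max].
# (The classical j_rand safeguard that forces at least one mutant component
# is omitted here, matching Eq. (2) as written.)
def crossover(x, v, Cr, x_min, x_max, rng):
    mask = rng.random(x.shape) <= Cr       # take mutant component where rand <= Cr
    s = np.where(mask, v, x)
    out = (s < x_min) | (s > x_max)        # infeasible components
    s[out] = x_min + rng.random(np.count_nonzero(out)) * (x_max - x_min)
    return s
```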

3.4 Selection

Selection determines which of the parent and offspring survives into the next generation; it also determines which individuals take part in the next mutation process. DE employs a greedy selection approach to evaluate the trial vectors, implemented using the equation below.

$$X_{id}^{I + 1} = \left\{ \begin{gathered} S_{id}^{I} \,if \,fun\left( {S_{id}^{I} } \right) \le fun\left( {X_{id}^{I} } \right) \hfill \\ X_{id}^{I} \,otherwise \hfill \\ \end{gathered} \right.$$
(4)

From the above equation, if the fitness value of the trial vector \({S}_{id}^{I}\) is less than or equal to that of \({X}_{id}^{I}\), then \({S}_{id}^{I}\) replaces \({X}_{id}^{I}\). The above steps are repeated until a stopping criterion is met.
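For minimization, Eq. (4) amounts to the following one-line comparison (an illustrative sketch, with \(fun\) as the objective function):

```python
# Greedy selection from Eq. (4): keep the trial vector only if it does not
# worsen the objective (minimization assumed).
def select(x, s, fun):
    return s if fun(s) <= fun(x) else x
```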

4 Particle swarm optimization (PSO)

Kennedy and Eberhart [15] first proposed PSO in 1995. PSO is a stochastic search process based on a velocity-position scheme. In PSO, each individual particle can be represented as a point in a \(D\)-dimensional search space. The position of the ith particle is denoted as

$$X_{i} = X_{i1} ,X_{i2} \ldots ,X_{iD}$$
(5)

and its velocity as:

$$Y_{i} = Y_{i1} ,Y_{i2} , \ldots Y_{iD}$$
(6)

In the evolution process, each particle records its individual best position, known as \(pbest\), and the global best position, known as \(gbest\), and accordingly updates its velocity and position according to the equations below.

$$Y_{id}^{t + 1} = Y_{id}^{t} + c_{1} *r_{1} *\left( {pbest - X_{id}^{t} } \right) + c_{2} *r_{2} *\left( {gbest - X_{id}^{t} } \right)$$
(7)
$$X_{id}^{t + 1} = X_{id}^{t} + Y_{id}^{t + 1}$$
(8)

where \({c}_{1}\) and \({c}_{2}\) are the learning coefficients, and \({r}_{1}\) and \({r}_{2}\) are uniform random numbers in the range [0, 1]. \(t\) and \(t+1\) denote successive iterations of the search process, and \(d\in D\) is the dimension index in the search space. The velocity \({Y}_{id}^{t}\) is kept within the velocity limits, \({Y}_{id}^{t} \in \left[-{Y}_{max}, {Y}_{max}\right]\).
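A minimal sketch of Eqs. (7) and (8) with velocity clamping (our illustration; the argument names are placeholders, not the paper's settings):

```python
import numpy as np

# PSO velocity/position update from Eqs. (7)-(8), with the new velocity
# clamped to [-y_max, y_max] before the position is moved.
def pso_update(x, y, pbest, gbest, c1, c2, y_max, rng):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    y_new = y + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    y_new = np.clip(y_new, -y_max, y_max)
    return x + y_new, y_new
```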

5 The proposed algorithms

We introduce a novel hybridization approach utilizing the merits of both the Differential evolution (DE) and particle swarm optimization (PSO) algorithms. Accuracy and efficiency in the optimization process can thus be improved by hybridizing the local search ability of PSO with the global search ability of DE. Combining the benefits of both algorithms, a new mutation strategy is proposed, and the technique is named DEPSO. We use binary particle swarm optimization (BPSO), in which the velocity computed from \({X}_{id}\), \(pbest\) and \(gbest\) is updated within the range [0, 1].

In our approach the main operator is DE, and PSO is used to improve the search capability; this speeds up the search and increases the accuracy of our algorithm. The method uses the DE mutation strategy in the early stage to expand and explore all regions, and PSO in the later stage to search the regions found during the initial stage, thus avoiding local optima and improving the local exploitation capability. As an outcome, the procedure automatically balances global and local search.

In our approach, a single selection probability function chooses between the DE and PSO mutation strategies:

$$sig = \frac{1}{{1 + 2e^{{ - \left( {I_{max} /I} \right)^{\sigma } }} }}$$
(9)

In this equation, \({I}_{max}\) is the maximum number of generations and \(I\) is the current generation of the evolution process. \(\sigma\) is a positive constant set during the evaluation process; the details of the sigmoid function and the chosen values are determined experimentally in Sect. 6.1.1. The value of the sigmoid function is large during the initial phase of evolution, so the probability that rand() falls below it is also large. In this case the DE mutation strategy is applied, which enlarges the search space, helps in finding more encouraging solutions and thus evades premature convergence. During the subsequent phase of the evolution process, the value of the sigmoid function reduces as the number of iterations increases, so there is a greater probability of applying the PSO mutation strategy; the sigmoid function thereby helps in improving precision and convergence speed. Although the DE strategy alone can also perform mutation and contributes to convergence speed and accuracy, the sigmoid function ensures high precision and convergence speed in our proposed algorithm.
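Numerically, Eq. (9) behaves as follows (an illustrative sketch; \(\sigma = 1.4\) is the value chosen in Sect. 6.1.1): for small \(I\) the exponent is strongly negative, so \(sig \approx 1\) and \(DE/rand/2\) is almost always selected, while at \(I = I_{max}\) the value falls to \(1/(1 + 2e^{-1}) \approx 0.58\), giving the PSO update a substantial share of the moves.

```python
import math

# Selection probability of Eq. (9); returns the probability that the
# DE/rand/2 strategy (rather than the PSO update) is applied at generation i.
def strategy_probability(i, i_max, sigma=1.4):
    return 1.0 / (1.0 + 2.0 * math.exp(-((i_max / i) ** sigma)))

# e.g. strategy_probability(1, 1000) -> ~1.0
#      strategy_probability(1000, 1000) -> ~0.576
```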

The proposed method starts by choosing and initializing the parameters of both DE and PSO. Control parameters such as F, Cr and the inertia weight are randomly generated, and the sigmoid function is calculated. Next, the mutation operation is performed based on the sigmoid function: the \(DE/rand/2\) mutation strategy is used when the random value does not exceed the sigmoid value, and the PSO mutation strategy is used otherwise. In the next step, crossover is applied and the corresponding trial vector is chosen based on the crossover rate. Finally, selection is applied, the global best and individual best are selected, and the corresponding control parameters are updated. Selecting the mutation strategy via the sigmoid function speeds up convergence while maintaining population diversity. Pseudo code for our proposed algorithm is presented below.

Step 1: Set NP, the maximum number of iterations and the generation counter. Randomly generate the population.

Step 2: Randomly generate control parameters such as F, Cr and the inertia weight.

Step 3: While the number of iterations has not reached the maximum

For i = 1:NP

% Compute the sigmoid function using Eq. (9)

\(sig = \frac{1}{1+2e^{-\left({I}_{max}/I\right)^{\sigma}}}\)

Step 4: % Mutation

Choose a random number rand in [0, 1]

If rand \(\le\) sig

% \(DE/rand/2\) mutation strategy

\(V_{id}^{I + 1} = X_{r1d}^{I} + F.\left( {X_{r2d}^{I} - X_{r3d}^{I} } \right) + F.\left( {X_{r4d}^{I} - X_{r5d}^{I} } \right)\)

Else

% PSO mutation strategy

\(Y_{id}^{t + 1} = Y_{id}^{t} + c_{1} *r_{1} *\left( {pbest - X_{id}^{t} } \right) + c_{2} *r_{2} *\left( {gbest - X_{id}^{t} } \right)\)

End If

Step 5: % Crossover

For i1 = 1:Ds

\(S_{id}^{I} = \left\{ \begin{gathered} V_{id}^{I} \,if \,rand \le Cr \hfill \\ X_{id}^{I} \,otherwise \hfill \\ \end{gathered} \right.\)

End For

Step 6: % Selection

\(X_{id}^{I + 1} = \left\{ \begin{gathered} S_{id}^{I} \,if \,fun\left( {S_{id}^{I} } \right) \le fun\left( {X_{id}^{I} } \right) \hfill \\ X_{id}^{I} \,otherwise \hfill \\ \end{gathered} \right.\)

End For

End While

Step 7: Repeat Steps 3 to 6 until a stopping condition is met.
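The pseudocode above translates into a compact loop. The sketch below (Python; our illustration under stated assumptions, not the authors' MATLAB code) reuses the helper sketches given in Sects. 3 and 4 (mutate_rand_2, crossover, pso_update, strategy_probability); the pbest/gbest bookkeeping and the c1, c2 and y_max values are our assumptions, while F = 0.6, Cr = 0.8 and \(\sigma = 1.4\) follow the experimental settings.

```python
import numpy as np

# Compact DEPSO sketch: minimization over scalar box bounds [x_min, x_max].
def depso(fun, dim, x_min, x_max, np_size=100, i_max=1000,
          F=0.6, Cr=0.8, sigma=1.4, c1=2.0, c2=2.0, y_max=2.0, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: uniform initialization (cf. the initialization sketch)
    pop = x_min + rng.random((np_size, dim)) * (x_max - x_min)
    vel = np.zeros((np_size, dim))
    fit = np.array([fun(x) for x in pop])
    pbest, pbest_fit = pop.copy(), fit.copy()
    gbest = pop[fit.argmin()].copy()

    for it in range(1, i_max + 1):                        # Step 3
        sig = strategy_probability(it, i_max, sigma)      # Eq. (9)
        for i in range(np_size):
            if rng.random() <= sig:                       # Step 4: DE/rand/2
                v = mutate_rand_2(pop, i, F, rng)
            else:                                         # PSO mutation
                v, vel[i] = pso_update(pop[i], vel[i], pbest[i], gbest,
                                       c1, c2, y_max, rng)
            s = crossover(pop[i], v, Cr, x_min, x_max, rng)  # Step 5
            fs = fun(s)
            if fs <= fit[i]:                              # Step 6: greedy selection
                pop[i], fit[i] = s, fs
                if fs <= pbest_fit[i]:
                    pbest[i], pbest_fit[i] = s.copy(), fs
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest, pbest_fit.min()
```

Under these assumptions, a call such as depso(lambda x: float(np.sum(x**2)), dim=25, x_min=-100, x_max=100) would minimize the Sphere benchmark.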

6 Experimental results

6.1 Experimental settings

To evaluate the performance of our proposed approach, 10 standard benchmark functions listed in the Appendix are utilized in our experimentation. A further 11 standard benchmark functions are used to compare our proposed algorithm with four well-known variants of the Differential evolution algorithm. The experiments are conducted using MATLAB on a system with a 2.11 GHz Intel® Core™ i7-8650U processor and 16 GB of RAM. The parameter values used to produce the results are shown in Table 1.

Table 1 Different experimental parameters with their setting values

We performed the experiment with our proposed technique and compared the results with the classical DE [1], PSO [15], ABC [41], FSDE [14] and HDEPSO [26] algorithms. The experiments use dimensions of 25 and 50 and a population size of 100. Since the algorithms are stochastic in nature, we set the maximum number of evaluations to 5,000,000 and the maximum number of iterations to 1000. FSDE uses a value-to-reach (VTR) of 1.e−015. The scaling factor and crossover rate are chosen as 0.6 and 0.8 respectively, the velocity limits are [−2, 2] and the inertia weight is 0.99. Setting the right parameters for the evolution stage is very important, as performance depends on these control parameters; however, choosing the right parameters is a difficult process. In our approach, values are adjusted in real time by monitoring the evolutionary process of each individual: if any individual stagnates during evolution, its values are readjusted. In the literature there is no guideline for setting the control parameter values; instead, random selection of values by trial and error is considered the better option [42].

6.1.1 Selection of mutation strategy

The selection probability for the mutation strategy in Eq. (9), and the constant used in it, are studied experimentally. The positive value of \(\sigma\) plays an important role in calculating the sigmoid function. Figure 1 shows the probability of choosing the DE and PSO strategies; to better understand the impact of \(\sigma\), the evolution process is divided into an initial and a final stage. We observe that as \(\sigma\) increases, the chance of selecting the DE mutation strategy increases and the probability of selecting the PSO mutation strategy becomes smaller. From the figure, during the initial stage of evolution the chance of choosing the DE mutation strategy increases with \(\sigma \ge 1.3\); this ensures population diversity and helps our proposed algorithm search the maximum region. As DEPSO should also improve the local search, and thus convergence speed and accuracy, the participation of PSO in the mutation operation should increase; from the figure, this requires \(\sigma \le 1.5\) in the final stage of evolution. Therefore, to balance population diversity and convergence speed, we chose \(\sigma = 1.4\) in our experiments.

Fig. 1 Probability of selection of mutation strategy

6.2 Result analysis

The proposed algorithm is evaluated on 10 benchmark test functions, and the results are compared with classical DE, PSO, ABC, the recently proposed Differential evolution variant FSDE and the hybrid HDEPSO. The results are tabulated in Tables 2, 3, 4 and 5. To validate the results, we also conducted Friedman's statistical test on our proposed algorithm DEPSO; the ranks and the Friedman's test results are shown in Table 6.

We performed our experiment on the 25-dimensional test functions; the results are shown in Tables 2 and 4. Comparing the best values of all the algorithms, our proposed algorithm gives better results on 7 test functions, whereas FSDE gives better results on only 3. The mean and standard deviation results are captured in Table 4, together with the Wilcoxon rank sum test comparison performed on the obtained results. We also performed Friedman's test to confirm the validity of the results. Compared with DE, DEPSO performs better on 9 functions and worse on 1. Compared with PSO, our proposed algorithm gives better results on 7 test functions, worse on 2 and similar on 1. Compared with ABC, our algorithm gives better results on 8 test functions and worse on 2. Compared with FSDE, our proposed algorithm gives better results on 7 test functions, worse on 2 and similar on 1. Compared with HDEPSO, our approach gives better results on all the test functions. From these results we can see that our proposed algorithm performs better, which may be due to the effective hybridization of the exploration and exploitation of DE and PSO: our algorithm balances global exploration and local exploitation during the early and later stages of evolution, and therefore shows higher convergence speed and robustness.

Similarly, all the algorithms are tested on the 50-dimensional functions, with the best values shown in Table 3. As shown there, our proposed algorithm gives better results on 7 test functions, PSO on 2 and FSDE on only 1. The means and standard deviations are shown in Table 5. We applied the Wilcoxon rank sum test to the obtained results; the comparison is indicated with +, − and \(\approx\), denoting that DE-PSO performs worse, better and similarly respectively. We also applied Friedman's non-parametric test, with results shown in Table 6, to demonstrate the validity of our results. Compared with DE, DEPSO gives better results on 8 functions and worse on 2. Compared with PSO, it gives better results on 7 functions, worse on 1 and similar on 2. Compared with ABC, our algorithm gives better results on 11 functions, worse on 3 and similar on 1. Compared with FSDE, our proposed algorithm gives better results on 7 functions, worse on 1 and similar on 2. As with the 25D test functions, our proposed algorithm therefore outperforms the other algorithms on both the 50- and 25-dimensional test functions. Compared with HDEPSO, our proposed algorithm gives better results on all the test functions. FSDE gives slightly better results on 3 benchmark functions at 25D and on only 1 at 50D; this is due to its weighted mutation strategy, which however converges slowly on the rest. Our proposed solution gives better results on 7 test functions at 25D and on 9 at 50D. This is due to the sigmoid function we added, whose position updates also help increase the velocity and, in turn, the search performance, and to the hybrid DE-PSO mutation strategy, which helps the algorithm achieve population diversity as well as improved convergence speed.

Figures 2 and 3 show the best-value comparison graphs plotted for DE, PSO, ABC, FSDE, HDEPSO and DEPSO on the benchmark functions of dimensions 25 and 50; the Y-axis represents the best cost and the X-axis the number of iterations. The graphs also show that our algorithm's curves are efficient on all the benchmark functions: the proposed algorithm performs well and is superior to the compared algorithms on all the benchmark problems in terms of convergence rate. The figures further suggest that DEPSO has a good capability of escaping from local optima. The plotted curves show that the function values of DEPSO decrease rapidly as the number of iterations increases, and that the mutation operator accelerates the search and helps the algorithm obtain optimal or near-optimal solutions.

We also performed the non-parametric Friedman test on our proposed approach DEPSO to validate our results. The results, shown in Table 6, indicate that our proposed algorithm performs statistically better than the compared algorithms.

The classical Differential evolution algorithm keeps its parameters constant throughout evolution; however, dynamic per-individual parameters can achieve optimized performance at each stage. Every optimization problem has different attributes, so it is difficult to fix the best parameter values in advance; a strategy based on dynamic selection of parameters therefore improves the performance of the differential evolution algorithm. In our proposed approach, the values for each individual can be adjusted based on its real-time evolutionary status. An individual's evolution is considered stagnant if it fails to produce better results in successive generations, in which case its values are readjusted with random values using the following strategy.

$$F_{i,I + 1} = \left\{ \begin{gathered} F_{i,I} \,if \,ES_{i} < ES_{max} \hfill \\ F_{min} + \left( {F_{max} - F_{min} } \right)*rand\left( {1,D} \right)\,otherwise \hfill \\ \end{gathered} \right.$$
(10)
$$Cr_{i,I + 1} = \left\{ \begin{gathered} Cr_{i,I} \,if \,ES_{i} < ES_{max} \hfill \\ Cr_{min} + \left( {Cr_{max} - Cr_{min} } \right)*rand\left( {1,D} \right)\,otherwise \hfill \\ \end{gathered} \right.$$
(11)

where \(F_{i,I}\) is the scaling factor and \(Cr_{i,I}\) is the crossover rate of \(X_{id}^{I}\) for generation \(I\). The values of \(F_{min}\) and \(F_{max}\) are 0.5 and 1.0, and the values of \(Cr_{min}\) and \(Cr_{max}\) are 0.8 and 1.0 respectively. \(rand(1, D)\) generates a vector of D uniform random values, where D is the problem dimension. \(ES_{i}\) is the temporary evolutionary-status value of the individual and \(ES_{max}\) is the maximum (best) value obtained in that generation. To save time, the best value is updated only in case of success: if a competitor is better than the best one found so far, the new best value replaces the temporary value.
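A sketch of this reset rule (Eqs. (10)-(11)); reading \(ES_i \ge ES_{max}\) as the stagnation trigger is our interpretation of the text, and the bounds are those stated above:

```python
import numpy as np

# Stagnation-triggered parameter reset of Eqs. (10)-(11): parameters are kept
# while ES_i < ES_max and re-drawn uniformly per dimension otherwise.
def reset_parameters(F_i, Cr_i, es_i, es_max, dim, rng,
                     F_min=0.5, F_max=1.0, Cr_min=0.8, Cr_max=1.0):
    if es_i < es_max:
        return F_i, Cr_i                                   # keep current parameters
    F_new = F_min + (F_max - F_min) * rng.random(dim)      # Eq. (10)
    Cr_new = Cr_min + (Cr_max - Cr_min) * rng.random(dim)  # Eq. (11)
    return F_new, Cr_new
```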

To handle the problems of population diversity and premature convergence, our proposed strategy uses the mutation operation to generate new offspring, and each offspring then undergoes crossover with its parent particle, as mentioned in Eq. (3), to generate new offspring with updated positions. During the iteration process, each new particle updates its personal best by comparison with all the other newly created offspring.

6.2.1 Comparison with PSO, jDE, SaDE and JADE

We further compared our DEPSO algorithm with canonical PSO [43], jDE [31], SaDE [27] and JADE [11]. For an impartial comparison, the parameter settings for our proposed algorithm are the same as in Table 1, and the settings for the compared algorithms are the same as in their original papers. For this experiment we used 11 well-known benchmark functions, the same as those used in the original papers of these variants. The results are calculated over 50 independent runs for D = 30 and 100 and tabulated in Tables 7 and 8.

For the 30D problems, JADE performs better than jDE, SaDE and PSO. jDE and SaDE obtain only 2 best solutions, owing to their slow convergence. PSO performs worst and does not give the best or second-best solution on any of the benchmark functions; this is due to its premature convergence issue. JADE gives better results because of its high reliability, fast convergence and the diversity improvement of its mutation strategy. Our proposed algorithm DEPSO, however, gives better results than the other algorithms, obtaining the best solution on 9 benchmark functions. This is because our proposed algorithm maintains good diversity through its adaptive, sigmoid-based mutation strategy.

Similarly, our proposed DEPSO gives better results on the 100D benchmark functions than jDE, SaDE, JADE and PSO, owing to the sigmoid-based mutation strategy that provides more diversity and high convergence speed. None of the other algorithms except JADE gives a better solution than DEPSO; JADE with archive gives a better solution on only 2 benchmark functions when compared with our proposed algorithm. JADE also gives better results than jDE, SaDE and PSO, due to its adaptive parameter control mutation strategy, which introduces population diversity.

To summarize, our proposed approach shows better results than the compared algorithms in terms of both dimensionality and convergence speed. The main reasons are that the PSO strategy used for alternate mutation and the adaptive selection of the DE parameters give our approach more exploration ability, while the adaptive crossover and scaling parameter selection and the sigmoid function are utilized to transform the velocity of PSO and to manage the population diversity and convergence speed of our proposed approach (Table 9).

6.2.2 Comparison on high dimensional datasets

We also evaluated the performance of our proposed algorithm on high-dimensional problems (D = 500, 1000) [44]. For this experiment we used \(NP = 5000\), 5000 iterations and 8 test functions including unimodal and multimodal functions. The means and standard deviations are summarized in Table 9. The experimental outcome indicates that our proposed algorithm outperforms classical DE and PSO, again due to the cooperation strategy between DE and PSO, which works efficiently on high-dimensional problems.

We also measured the average run time of each algorithm to characterize the computational cost of our proposed approach. Table 10 shows the average run time of DE, PSO, ABC, FSDE, HDEPSO and our proposed DEPSO on the 10 benchmark functions. From the table we can see that our approach has a better run time than the other approaches. The various experiments, including those on higher-dimensional data, also confirm that our approach is computationally faster and effective.

Table 2 Performance values achieved by DE, PSO, ABC, FSDE, HDEPSO and DEPSO on 10 test functions with 25D
Table 3 Performance values achieved by DE, PSO, ABC, FSDE, HDEPSO and DEPSO on 10 test functions with 50D
Table 4 Mean and SD for DE, PSO, ABC, FSDE, HDEPSO and DEPSO on 10 functions on 25 D
Table 5 Mean and SD for DE, PSO, ABC, FSDE, HDEPSO and DEPSO on 10 functions on 50 D
Table 6 Test Statistics using Friedman's test
Table 7 Mean and SD for PSO, SaDE, jDE, JADE and DEPSO on 11 functions on 30 D
Table 8 Mean and SD for PSO, SaDE, jDE, JADE and DEPSO on 11 functions on 100 D
Table 9 Mean and SD for PSO, DE and DEPSO on unimodal and multimodal functions with D = 500 and 1000
Table 10 Average execution time on benchmark functions
Fig. 2 Best value comparison graph of Sphere, Rosenbrock, Ackley, Griewank, Quartic, Schwefel_2_21, Michalewicz, Rastrigin, Zakharov and Powell on 25D

Fig. 3 Best value comparison graph of Sphere, Rosenbrock, Ackley, Griewank, Quartic, Schwefel_2_21, Michalewicz, Rastrigin, Zakharov and Powell on 50D

7 Conclusion

The Differential evolution algorithm depends on its mutation and crossover approach. To improve on this, a hybrid variant of DE and PSO is proposed in this paper. Our proposed algorithm initially uses the DE mutation strategy to improve the exploration capability, and in the later stage the mutation approach of PSO is applied; this helps increase the convergence speed of our proposed approach. We also use a dynamic parameter selection approach, which helps DE-PSO handle multiple optimization problems effectively.

The proposed DE-PSO algorithm is compared with the conventional DE, PSO, ABC, FSDE and HDEPSO algorithms on 25- and 50-dimensional test functions; the results show that our proposed algorithm gives better results than the other algorithms. We further compared our DEPSO algorithm with canonical PSO, jDE, SaDE and JADE on 30- and 100-dimensional test functions; our proposed approach shows better results in terms of both dimensionality and convergence speed. To further establish its robustness and effectiveness, we also evaluated the performance of our proposed algorithm on high-dimensional problems (D = 500, 1000). We performed Friedman's test to statistically validate the performance of our proposed algorithm, and measured the average run time of each algorithm to characterize its computational cost.

In future work, we plan to apply this algorithm to feature selection to improve the accuracy of clustering algorithms. Applying this algorithm to motion estimation for video coding applications is another area of exploration.