1 Introduction

Differential evolution (DE) [17] is a powerful evolutionary algorithm that is also easy to implement. DE incorporates two control parameters, the scaling factor (F) and the crossover rate (CR), and its performance depends on the choice of these two parameters; consequently, many authors have studied, and continue to study, DE to obtain suitable values of F and CR. Several of these studies are discussed below:

SPDE [25] is based on self-adaptation of the F and CR values, where the values of the control parameters are generated from a Gaussian distribution N(0, 1). In jDE [26], self-adaptation of the F and CR values is considered, where the value of F is generated within the range [0.1, 1.0] with probability τ1, and CR is generated within the range [0, 1] with probability τ2. In JADE [27], the value of F is generated by a Cauchy distribution, whereas the value of CR is generated using a normal distribution. In SaDE [28], the value of F is drawn from a normal distribution with mean 0.5 and standard deviation 0.3, denoted by N(0.5, 0.3), and the CR value is drawn from a Gaussian distribution; these F and CR values are applied to each target vector. EPSDE [29] is based on an ensemble of multiple mutation strategies with multiple control-parameter settings used during different stages of evolution. CoDE [30] is based on the combination of three different trial vector generation strategies associated with three different parameter settings of F and CR. In MPEDE [37], a multi-population-based ensemble of multiple strategies (i.e., “rand/1”, “current-to-rand/1”, and “current-to-pbest/1”) was proposed.

Recently, researchers have also studied several hybrid algorithms that combine DE with other algorithms. Examples of these hybrid algorithms include DE-PSO [19], a combination of DE [17] and PSO [6, 15]; DESQI [18], a combination of DE and QA; BBDE [20], a combination of bare bones PSO and differential evolution; DE/BBO [21], a combination of DE and BBO [24]; GA-DE [22], a combination of GA [8] and DE; ABC-DE [23], a combination of ABC [1] and DE; and CADE [38], a combination of the cultural algorithm (CA) and DE. Additional improved hybrid algorithms can be found in [53,54,55]. In this work, a novel hybrid algorithm, hDEBSA, is proposed by combining DE with a more recently proposed evolutionary algorithm, the backtracking search algorithm (BSA) [4, 39]. BSA is a population-based nature-inspired optimization technique that uses mutation, crossover, and selection operators in each generation to move the individuals towards the global optimum. Many researchers have improved the performance of BSA through different self-adaptive strategy designs [21, 39, 42] and hybridization [43,44,45,46,47], and these versions of BSA are widely used to solve complex global optimization problems from a variety of fields, such as antenna array synthesis [48], power systems optimization [49], truss structures [50], urban traffic networks [51], and surface wave analysis [52].

The aforementioned facts have motivated us to work further on DE and BSA, where we attempt to find a new hybrid algorithm that combines the features of DE and BSA. The primary contributions of this study are summarized as follows:

  i) A hybrid algorithm, hDEBSA, is proposed, which uses the components of DE and BSA;

  ii) A self-adaptation scheme for the control parameters is used in hDEBSA to improve the performance as well as the convergence rate of the proposed algorithm;

  iii) The proposed hDEBSA is applied to solve four real-world optimization problems.

The remainder of the paper is organized as follows: Section 2 discusses the two components of hDEBSA, i.e., the basic DE and BSA. The proposed hDEBSA is presented in Section 3. Section 4 presents the performance evaluation on twenty-eight CEC 2013 test functions. Section 5 presents the formulation of four real-world optimization problems, the results, and a discussion of these optimization problems. Finally, Section 6 summarizes the contribution of this study.

2 The basic DE and BSA algorithms

A brief description of the basic DE and BSA is given in the following sub-sections:

2.1 Differential evolution algorithm

Differential evolution is a population-based evolutionary algorithm that incorporates two important algorithm-specific control parameters. One is a weighting coefficient or scaling factor (F), and the other is the crossover rate (CR). The scaling factor (F) is used to generate new trial solutions during the optimization process. The crossover rate (CR) determines how much of a trial solution is adopted into the current solution. It has been found that the performance of DE depends on proper values of F and CR [2], and that varying the values of F and CR during the optimization process can improve its performance [3]. In the DE algorithm, the mutation operator is applied to the current population to produce mutant vectors, the crossover operator produces the final form of the trial population, and the selection operator chooses between the trial population and the target population to update the current population. Through repeated cycles of the mutation, crossover, and selection operators, DE attempts to improve the population. A detailed description of the DE algorithm is given in [17].
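For concreteness, the following is a minimal sketch of one generation of the classic DE/rand/1/bin scheme (mutation, binomial crossover, greedy selection) for a minimization problem; it is illustrative only, and the function and variable names are ours rather than a prescription of the exact configuration used in hDEBSA.

```python
import numpy as np

def de_generation(pop, fitness, fobj, F=0.5, CR=0.9, bounds=(-100.0, 100.0)):
    """One DE/rand/1/bin generation: mutation, binomial crossover, greedy selection."""
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i
        r1, r2, r3 = np.random.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), bounds[0], bounds[1])
        # Binomial crossover: take v_j with probability CR, forcing at least one component
        mask = np.random.rand(D) < CR
        mask[np.random.randint(D)] = True
        trial = np.where(mask, v, pop[i])
        # Greedy selection between the trial and the target vector
        f_trial = fobj(trial)
        if f_trial <= fitness[i]:
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```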

2.2 Backtracking search optimization algorithm

BSA is also a population-based stochastic evolutionary algorithm and incorporates two algorithm-specific control parameters, i.e., the scaling factor (F) and the mix rate (M). BSA uses a historical population to determine the search direction. The initial historical population is generated uniformly at random within the search space. BSA employs a one-direction mutation strategy, which differs from that of other evolutionary algorithms. During the production of the trial population 'Mutant', the parameter F controls the amplitude of the search-direction matrix. Once the mutation operation has ended, a crossover process is used to produce the final form of the trial population. The crossover strategy consists of two steps. First, a binary matrix (map) of size NP×D (where NP is the population size and D is the dimension of the optimization problem) is generated, which indicates which elements of the trial population 'T' are to be manipulated using the corresponding elements of the current population 'P'. Second, using this map, the relevant dimensions of each mutant individual are updated to form the final trial population. A detailed description of BSA is given in [4].
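As an illustration of the operators just described, the sketch below implements one BSA generation loosely following the original description in [4]; the historical-population update rule, the clipping-based boundary handling, and all names are our assumptions, not the exact code used in this paper.

```python
import numpy as np

def bsa_generation(pop, old_pop, fitness, fobj, mixrate=1.0, bounds=(-100.0, 100.0)):
    """One BSA generation: historical population, one-direction mutation, map-based crossover."""
    NP, D = pop.shape
    # Selection-I: occasionally redefine the historical population, then shuffle it
    if np.random.rand() < np.random.rand():
        old_pop = pop.copy()
    old_pop = old_pop[np.random.permutation(NP)]
    # Mutation: F controls the amplitude of the search direction (old_pop - pop)
    F = 3.0 * np.random.randn()
    # Crossover: the binary map marks which dimensions of each trial individual are mutated
    map_ = np.zeros((NP, D), dtype=bool)
    if np.random.rand() < np.random.rand():
        for i in range(NP):
            n_dims = int(np.ceil(mixrate * np.random.rand() * D))
            map_[i, np.random.permutation(D)[:n_dims]] = True
    else:
        map_[np.arange(NP), np.random.randint(0, D, NP)] = True
    trial = np.clip(pop + map_ * F * (old_pop - pop), bounds[0], bounds[1])
    # Selection-II: greedy replacement of the current population
    f_trial = np.apply_along_axis(fobj, 1, trial)
    improved = f_trial <= fitness
    pop = np.where(improved[:, None], trial, pop)
    fitness = np.where(improved, f_trial, fitness)
    return pop, old_pop, fitness
```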

3 Proposed hDEBSA algorithm

The combination of one meta-heuristic optimization technique with another optimization technique, or with components of another technique, is called a hybrid meta-heuristic optimization technique. An efficient hybrid algorithm enables more efficient behaviour and greater flexibility when dealing with real-world and large-scale optimization problems.

However, the performance of any algorithm depends on the choice of proper values for its control parameters. DE incorporates two control parameters, F and CR, and the performance of DE depends on the proper choice of these two control parameters. Likewise, the performance of BSA depends on F: a lower value of F permits a fine search in small steps but slows convergence, whereas a larger value of F speeds up the search but makes it coarser.

Considering these facts, in this work a hybrid algorithm, called hDEBSA, is proposed by combining the two algorithms DE and BSA, with the control parameters handled on a self-adaptive basis. In hDEBSA, the components of the DE algorithm are executed first, followed by the components of BSA. When executing the DE components, the worse-ranked individuals are preferentially updated according to the probability p_i, which can be calculated by (1).

$$ p_{i}=\frac{r_{i}}{NP} $$
(1)

where NP is the population size, and r_i is the rank of the i-th individual when the population is sorted from the worst fitness to the best. It may be noted that (1) is similar to the selection probability in DE with ranking-based mutation operators [7, 11]. The selection strategy can be defined as follows:

$$ I_{s}=i, \quad \text{if}\ rand\left( 0, 1 \right)>p_{i}, \quad i=1,2,\ldots,NP $$
(2)

where I_s is the index of a selected individual, which is then optimized by DE.
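A minimal sketch of this selection rule, assuming a minimization problem, is given below; the sorting convention (rank 1 for the worst fitness) follows the text above, and the helper name is ours.

```python
import numpy as np

def ranking_selection(fitness):
    """Return the indices I_s selected by eqs. (1)-(2); worst-ranked individuals
    get the smallest p_i and are therefore selected with the highest probability."""
    NP = len(fitness)
    order = np.argsort(-fitness)          # indices sorted from worst (largest) to best fitness
    ranks = np.empty(NP, dtype=int)
    ranks[order] = np.arange(1, NP + 1)   # rank 1 = worst, rank NP = best
    p = ranks / NP                        # eq. (1): p_i = r_i / NP
    return np.where(np.random.rand(NP) > p)[0]   # eq. (2)
```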

A detailed description of self-adaptive-based control parameters setting and the proposed hDEBSA algorithm is given below:

3.1 Scaling factor/ weighting coefficient (F) of the DE algorithm

In the DE algorithm, the scaling factor (F) is used to produce the new set of trial vectors. It has been found that values of F smaller than 0.4 or larger than 1.0 are only occasionally effective [17]. Several researchers have also observed that a larger value of F makes it easier to escape a local optimum [12, 13]. Gämperle et al. [12] found that F = 0.6 or 0.5 may be a proper initial value, whereas Rönkkönen et al. [13] suggested F = 0.9. According to Rönkkönen et al. [13], the value of F typically lies in the range 0.4–0.95. By varying the value of F during the optimization process, one can improve the performance of the DE algorithm [3, 17]. Thus, the modification of the scaling factor is defined by (3); for clarity, the variable is denoted F_DE instead of F.

$$ F_{DE} =F^{\max}_{DE} -(F^{\max}_{DE} -F^{\min}_{DE} )\ast \frac{f^{\max} _{i} -f^{\min}_{i}} {f^{\max}_{0} -f^{\min}_{0}} $$
(3)

where \(F^{\min }_{DE} =0.4\) and \(F^{\max }_{DE} =0.95\); \(f^{\max }_{0} \) and \(f^{\min }_{0} \) are the maximum and minimum fitness values of the initial population, respectively; \(f^{\max }_{i} \) and \(f^{\min }_{i} \) are the maximum and minimum fitness values of the population in the i-th iteration, respectively.

3.2 Crossover rate (CR) of the DE algorithm

In DE, the crossover rate (CR) is used to produce the final form of the trial vector set. It determines how much of a trial solution is adopted into the current solution and, together with the chosen DE scheme, is one of the crucial ideas behind generating trial vectors in DE [9, 16]. Researchers have verified that a large CR speeds up convergence but reduces the local search ability [12,13,14]. A value of CR = 0.1 is a proper initial choice, whereas CR = 0.9 or 1.0 can improve the convergence speed [17]. A proper value of CR can lie between 0.3 and 0.9 [12]. When CR = 1, the number of possible trial vectors may be reduced dramatically, which may lead to stagnation [10, 13]. By varying the value of CR during the optimization process, one can improve the performance of DE [3]. The modification of the crossover rate is defined by (4); instead of CR, it is written as CR_DE.

$$ CR_{DE} =CR^{\max}_{DE} -(CR^{\max}_{DE} -CR^{\min}_{DE} )\ast \frac{f^{\max}_{i} -f^{\min}_{i}} {f^{\max}_{0} -f^{\min}_{0}} $$
(4)

where \(CR^{\min }_{DE} =0.3\) and \(CR^{\max }_{DE} =0.9\); \(f^{\max }_{0} \) and \(f^{\min }_{0} \) are the maximum and minimum fitness values of the initial population, respectively; \(f^{\max }_{i} \) and \(f^{\min }_{i} \) are the maximum and minimum fitness values of the population in the i-th iteration, respectively. In addition, the value of the other BSA control parameter, i.e., the mix rate, is taken as

$$ mixrate_{BSA} =0.3\ast (1+rand(0,1)) $$
(5)
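The three self-adaptive quantities in (3)-(5) can be computed as in the sketch below; the guard against a zero denominator (when all individuals in the initial population have identical fitness) is our addition, and the function name is ours.

```python
import numpy as np

def adaptive_parameters(fit_init, fit_current,
                        F_min=0.4, F_max=0.95, CR_min=0.3, CR_max=0.9):
    """Self-adaptive F_DE, CR_DE and mixrate_BSA following eqs. (3)-(5)."""
    denom = fit_init.max() - fit_init.min()
    ratio = (fit_current.max() - fit_current.min()) / denom if denom > 0 else 0.0
    F_DE = F_max - (F_max - F_min) * ratio        # eq. (3)
    CR_DE = CR_max - (CR_max - CR_min) * ratio    # eq. (4)
    mixrate_BSA = 0.3 * (1.0 + np.random.rand())  # eq. (5)
    return F_DE, CR_DE, mixrate_BSA
```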

The pseudocode of the proposed hDEBSA algorithm is presented in Fig. 1.

Fig. 1 Pseudocode of the proposed hDEBSA algorithm
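Since the figure itself cannot be reproduced here, the following high-level sketch only reflects our reading of the flow described in this section (a DE phase restricted to the individuals selected by (2), followed by a BSA phase on the whole population, with the control parameters of (3)-(5) recomputed each iteration); the authoritative ordering and operator details are those of Fig. 1. The sketch reuses the helper functions de_generation, bsa_generation, ranking_selection, and adaptive_parameters defined in the earlier sketches.

```python
import numpy as np

def hdebsa(fobj, D=30, NP=50, max_iter=1000, bounds=(-100.0, 100.0)):
    """High-level sketch of the hDEBSA loop as described in Section 3 (not Fig. 1 verbatim)."""
    lo, hi = bounds
    pop = lo + (hi - lo) * np.random.rand(NP, D)
    old_pop = lo + (hi - lo) * np.random.rand(NP, D)      # BSA historical population
    fitness = np.apply_along_axis(fobj, 1, pop)
    fit_init = fitness.copy()
    for _ in range(max_iter):
        F_DE, CR_DE, mixrate = adaptive_parameters(fit_init, fitness)   # eqs. (3)-(5)
        # DE phase, applied to the individuals selected by eq. (2)
        sel = ranking_selection(fitness)
        if len(sel) >= 4:                                  # DE/rand/1 needs >= 4 individuals
            pop[sel], fitness[sel] = de_generation(pop[sel], fitness[sel], fobj,
                                                   F=F_DE, CR=CR_DE, bounds=bounds)
        # BSA phase on the whole population
        pop, old_pop, fitness = bsa_generation(pop, old_pop, fitness, fobj,
                                               mixrate=mixrate, bounds=bounds)
    best = np.argmin(fitness)
    return pop[best], fitness[best]
```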

4 Performance evaluation on the CEC 2013 test functions

To validate the performance of hDEBSA, twenty-eight benchmark functions from the CEC 2013 special session on real-parameter optimization [5] are considered. These test functions comprise three types: (i) unimodal functions F1–F5, (ii) basic multimodal functions F6–F20, and (iii) composition functions F21–F28. A detailed description of these twenty-eight test functions can be found in [5]. For this study, the dimension (D) of each test function is set to 30 and 50. Each algorithm is run 30 times with 3000 (for D = 30) and 5000 (for D = 50) fitness evaluations (FEs) and a population size of 50. The search range of each test function is [−100, 100]. The performance results are reported as the mean and standard deviation for each test function. For the statistical analysis, the Friedman rank test is used to obtain the overall rank of all the algorithms based on the mean results, with the Bonferroni–Dunn approach as a post hoc procedure.
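The statistical analysis in the paper was carried out in SPSS; purely as an illustration of the procedure, the sketch below computes the Friedman statistic and the average ranks with SciPy on a placeholder results matrix (the random data shown here is not the paper's).

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Placeholder mean-result matrix: rows = test functions F1-F28, columns = compared algorithms.
means = np.random.rand(28, 6)

# Friedman test over the 28 functions (each column holds one algorithm's mean results)
stat, p_value = friedmanchisquare(*means.T)

# Average Friedman rank per algorithm (lower is better), as reported in Tables 3, 6, 9 and 12
avg_ranks = rankdata(means, axis=1).mean(axis=0)
print(stat, p_value, avg_ranks)
```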

Table 1 shows the performance results of PSO [15], DE [17], ABC [1], BSA [4], ABSA [31], and hDEBSA on the twenty-eight benchmark functions of the CEC 2013 special session on real-parameter optimization with dimension 30. Table 2 shows the number of test functions on which the performance of hDEBSA is better than, worse than, or similar to that of the compared algorithms. From Table 2, it can be seen that hDEBSA performs better than PSO on twenty-three test functions, DE on twenty-three test functions, ABC on eighteen test functions, BSA on twenty-three test functions, and ABSA on twenty-four test functions. Table 3 shows the ranks obtained by the Friedman rank test with respect to the mean performance of all algorithms over the test functions. From Table 3, it is clear that the rank of hDEBSA is the lowest. Therefore, it can be claimed that hDEBSA performs better than the compared algorithms. Several of the convergence graphs of hDEBSA are shown in Fig. 2.

Table 1 Performance results of PSO, DE, ABC, BSA, ABSA, and hDEBSA on the CEC 2013 test functions with dimension (D) 30 after 3000 FEs, over 30 runs with a population size of 50
Table 2 Number of CEC 2013 benchmark functions (dimension 30) on which the mean performance of hDEBSA is better than, worse than, or equal to that of the compared algorithms (PSO, DE, ABC, BSA, ABSA)
Table 3 Ranks obtained by Friedman’s test, with Bonferroni–Dunn’s procedure as a post hoc procedure, on the mean performance on the CEC 2013 test functions with dimension 30. The Friedman test was conducted using the SPSS software
Fig. 2 Convergence graphs of F1, F2, F3, F5, F12, and F13

Table 4 shows the performance results of CoDE [30], DE/rand/2/bin [18], CLPSO [32], CPSO-H [33], FI-PS [34], DE-PSO [19], and hDEBSA on the twenty-eight benchmark functions of the CEC 2013 special session on real-parameter optimization with dimension 30. Table 5 shows the number of test functions on which the performance of hDEBSA is better than, worse than, or similar to that of the compared algorithms. From Table 5, it is seen that hDEBSA performs better than CoDE on twenty-five test functions, DE/rand/2/bin on twenty-six test functions, CLPSO on twenty-three test functions, CPSO-H on seventeen test functions, FI-PS on twenty-two test functions, and DE-PSO on eighteen test functions. Table 6 shows the ranks obtained by the Friedman rank test with respect to the mean performance of all algorithms over the test functions. From Table 6, it is clear that the rank of hDEBSA is the lowest among the compared algorithms. Thus, it can be claimed that hDEBSA performs better than the other algorithms.

Table 4 Performance results of CoDE, DE/rand/2/bin, CLPSO, CPSO-H, FI-PS, DE-PSO, and hDEBSA on the CEC 2013 test functions with dimension (D) 30 after 3000 FEs, over 30 runs with a population size of 50
Table 5 Number of CEC 2013 benchmark functions (dimension 30) on which the mean performance of hDEBSA is better than, worse than, or equal to that of the compared algorithms (CoDE, DE/rand/2/bin, CLPSO, CPSO-H, FI-PS, DE-PSO)
Table 6 Ranks obtained by Friedman’s test, with Bonferroni–Dunn’s procedure as a post hoc procedure, for CoDE, DE/rand/2/bin, CLPSO, CPSO-H, FI-PS, DE-PSO, and hDEBSA with respect to the mean performance on the CEC 2013 test functions with dimension 30

Table 7 shows the performance results of CoDE [30], EPSDE [29], DE/rand/2/bin [18], CLPSO [32], CPSO-H [33], FI-PS [34], and hDEBSA on the twenty-eight benchmark functions of the CEC 2013 special session on real-parameter optimization with dimension 50. Table 8 shows the number of test functions on which the performance of hDEBSA is better than, worse than, or similar to that of the other algorithms. From Table 8, it is seen that hDEBSA performs better than CoDE on twenty-six test functions, EPSDE on fourteen test functions, DE/rand/2/bin on twenty-six test functions, CLPSO on twenty-five test functions, CPSO-H on nineteen test functions, and FI-PS on twenty-four test functions. Table 9 shows the ranks obtained by the Friedman rank test with respect to the mean performance of all algorithms over the test functions, from which it is observed that the rank of hDEBSA is the lowest among the compared algorithms. Hence, it can be said that the performance of hDEBSA is better than that of the other algorithms.

Table 7 Performance results of CoDE, EPSDE, jDE, CLPSO, CPSO-H, FI-PS, and hDEBSA on the CEC 2013 test functions with dimension (D) 50 after 5000 FEs, over 30 runs with a population size of 50
Table 8 Number of CEC 2013 benchmark functions (dimension 50) on which the mean performance of hDEBSA is better than, worse than, or equal to that of the compared algorithms (CoDE, EPSDE, DE/rand/2/bin, CLPSO, CPSO-H, FI-PS)
Table 9 Ranks obtained by Friedman’s test, with Bonferroni–Dunn’s procedure as a post hoc procedure, for CoDE, EPSDE, DE/rand/2/bin, CLPSO, CPSO-H, FI-PS, DE-PSO, and hDEBSA with respect to the mean performance on the CEC 2013 test functions with dimension 50

Table 10 compares the performance results obtained by IBSA [38], MPEDE [39], I-SOS [40], SOS-ABF1 [41], SOS-ABF2 [41], SOS-ABF1&2 [41], and hDEBSA on the twenty-eight CEC 2013 test functions with dimension 50. Table 11 shows the number of test functions on which the mean performance of hDEBSA is better than, worse than, or similar to that of the other algorithms. From Table 11, it can be observed that the performance of hDEBSA is better than IBSA on nineteen test functions, MPEDE on twenty-four test functions, I-SOS on twenty-three test functions, SOS-ABF1 on fourteen test functions, SOS-ABF2 on fifteen test functions, and SOS-ABF1&2 on nineteen test functions. Table 12 shows the ranks obtained by the Friedman rank test with respect to the mean performances, where the rank of hDEBSA is again the lowest. Hence, it may be concluded that the performance of hDEBSA is better than that of the other algorithms.

Table 10 Performance results of IBSA, MPEDE, I-SOS, SOS-ABF1, SOS-ABF2, SOS-ABF1&2, and hDEBSA on the CEC 2013 test functions with dimension (D) 50 after 5000 FEs, over 30 runs with a population size of 50
Table 11 Number of CEC 2013 benchmark functions (dimension 50) on which the mean performance of hDEBSA is better than, worse than, or equal to that of the compared algorithms (IBSA, MPEDE, I-SOS, SOS-ABF1, SOS-ABF2, SOS-ABF1&2)
Table 12 Ranks obtained by Friedman’s test, with Bonferroni–Dunn’s procedure as a post hoc procedure, for IBSA, MPEDE, I-SOS, SOS-ABF1, SOS-ABF2, SOS-ABF1&2, and hDEBSA with respect to the mean performance on the CEC 2013 test functions with dimension 50

5 Formulation of the real-world optimization problems

In this section, the formulation of four real-world problems and the performance results of these four optimization problems are discussed.

5.1 Problem formulation

To apply hDEBSA to real-world optimization problems, two problems, namely the Gas Transmission Compressor Design problem and the Optimal Capacity of Gas Production Facilities problem, are taken from [35], and another two problems, the Frequency Modulation Sounds Parameter Identification problem and the Spread Spectrum Radar Polyphase Code Design problem, are taken from [36]. The formulation of these real-world optimization problems is presented below:

P1. Gas transmission compressor design problem

$$\begin{array}{@{}rcl@{}} Min f\left( x \right)&=&8.61\times {10}^{5}\times x_{1}^{\frac{1}{2}}\times x_{2}\times x_{3}^{\frac{-2}{3}}\times \left( {x_{2}^{2}}-1 \right)^{-\frac{1}{2}}\\&&+3.69\times {10}^{4}\times x_{3}+7.72\times {10}^{8}\times x_{1}^{-1}\\&&\times x_{2}^{0.219}-765.43\times {10}^{6}\times x_{1}^{-1} \end{array} $$
(6)

Such that 10 ≤ x1 ≤ 55, 1.1 ≤ x2 ≤ 2, 10 ≤ x3 ≤ 40;
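For reference, eq. (6) can be evaluated as in the following sketch; the function name is ours.

```python
import numpy as np

def gas_transmission_compressor(x):
    """Objective of problem P1, eq. (6); x = [x1, x2, x3] within the bounds given above."""
    x1, x2, x3 = x
    return (8.61e5 * np.sqrt(x1) * x2 * x3 ** (-2.0 / 3.0) * (x2 ** 2 - 1.0) ** (-0.5)
            + 3.69e4 * x3
            + 7.72e8 * x2 ** 0.219 / x1
            - 765.43e6 / x1)
```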

P2. Optimal capacity of gas production facilities

$$\begin{array}{@{}rcl@{}} Min f\left( x \right)\!&=&\!61.8+5.72\times x_{1}\times 0.2623\\&&\!\times \left[ \left( 40-x_{1} \right)\times \ln \left( \frac{x_{2}}{200} \right) \right]^{-0.85}+0.087\\&&\!\times \left( 40\,-\,x_{1} \right)\times \ln \left( \frac{x_{2}}{200} \right)\,+\,700.23\times x_{2}^{-0.75} \end{array} $$
(7)

Such that x1 ≥ 17.5, x2 ≥ 200, 17.5 ≤ x1 ≤ 40, 300 ≤ x2 ≤ 600;
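Similarly, eq. (7) can be evaluated as below; the expression is transcribed exactly as printed above (including the product 5.72 × x1 × 0.2623), so any typographical issue in (7) would carry over, and the function name is ours.

```python
import numpy as np

def gas_production_capacity(x):
    """Objective of problem P2, transcribed directly from eq. (7); x = [x1, x2]."""
    x1, x2 = x
    term = (40.0 - x1) * np.log(x2 / 200.0)   # positive within the stated bounds
    return (61.8 + 5.72 * x1 * 0.2623 * term ** (-0.85)
            + 0.087 * term
            + 700.23 * x2 ** (-0.75))
```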

P3. Frequency modulation sounds parameter identification problem:

In modern sound systems, frequency-modulated (FM) sound wave synthesis plays an important role in generating the target sound in an FM synthesizer. This optimization problem has six parameters, i.e., six dimensions, given by X = {a1, ω1, a2, ω2, a3, ω3}. The objective is to determine the optimum values of these parameters such that the generated sound is as similar as possible to the target sound. The estimated sound and target sound waves are given by:

$$ y(t)=a_{1} \sin (\omega_{1} t\theta +a_{2} \sin (\omega_{2} t\theta +a_{3} \sin (\omega_{3} t\theta ))) $$
(8)
$$\begin{array}{@{}rcl@{}} y_{0} (t)&=&1.0\times \sin (5.0\times t\theta +1.5\times \sin (4.8\times t\theta\\ &&+2.0\sin (4.9\times t\theta ))) \end{array} $$
(9)

respectively, where θ = 2π/100 and each parameter is bounded to the range [−6.4, 6.35]. The fitness function, i.e., the objective function for this optimization problem, is given by

$$ f(\overrightarrow X )=\sum\limits_{t=0}^{100} {(y(t)-y_{0} (t))^{2}} $$
(10)

The optimum value, i.e., the minimum value of the frequency modulation sounds parameter identification optimization problem, is f(X) = 0.
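A direct transcription of eqs. (8)-(10) is sketched below, summing over the 101 samples t = 0, 1, ..., 100; the function name is ours.

```python
import numpy as np

def fm_fitness(x):
    """Fitness of problem P3, eqs. (8)-(10); x = [a1, w1, a2, w2, a3, w3]."""
    a1, w1, a2, w2, a3, w3 = x
    theta = 2.0 * np.pi / 100.0
    t = np.arange(101)
    y = a1 * np.sin(w1 * t * theta + a2 * np.sin(w2 * t * theta + a3 * np.sin(w3 * t * theta)))
    y0 = 1.0 * np.sin(5.0 * t * theta + 1.5 * np.sin(4.8 * t * theta + 2.0 * np.sin(4.9 * t * theta)))
    return np.sum((y - y0) ** 2)   # eq. (10); the global minimum is 0
```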

P4. Spread spectrum radar polyphase code design problem:

The pulse compression technique is widely used in radar system design. Polyphase compression codes are convenient and easier to implement with digital processing techniques. This optimization problem is a min–max global optimization problem in continuous variables with numerous local optima. Based on the properties of the aperiodic auto-correlation function and the assumption of coherent radar pulse processing in the receiver, the min–max model can be defined as

$$ \text{Global} \min f(X)=\max \left\{ \varphi_{1} (X),\varphi_{2} (X),\ldots ,\varphi_{2m} (X) \right\} $$
(11)

where $X=\{(x_{1}, x_{2}, \ldots, x_{D})\in R^{D} \mid 0\le x_{j} \le 2\pi,\ j=1,2,\ldots,D\}$ and $m=2D-1$, with

$$ \varphi_{2i-1} (X)=\sum\limits_{j=i}^{D} \cos \left( \sum\limits_{k=\left| {2i-j-1} \right|+1}^{j} X_{k} \right), \quad i=1,2,\ldots,D $$
(12)
$$ \varphi_{2i} (X)=0.5+\sum\limits_{j=i+1}^{D} \cos \left( \sum\limits_{k=\left| {2i-j} \right|+1}^{j} X_{k} \right), \quad i=1,2,\ldots,D-1 $$
(13)
$$ \varphi_{m+i} (X)=-\varphi_{i} (X), \quad i=1,2,\ldots,m $$
(14)

Here, the objective is to minimize the modulus of the largest among the samples of the auto-correlation function φ, which is related to the complex envelope of the compressed radar pulse at the optimal receiver output.
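A zero-indexed transcription of eqs. (11)-(14) is sketched below; the prefix-sum helper is our implementation detail and assumes the 1-based indices used in the formulas.

```python
import numpy as np

def radar_polyphase_fitness(x):
    """Fitness of problem P4, eqs. (11)-(14); x has D components, each in [0, 2*pi]."""
    D = len(x)
    m = 2 * D - 1
    csum = np.concatenate(([0.0], np.cumsum(x)))   # csum[j] = x_1 + ... + x_j (1-based j)

    def seg(lo, j):                                 # sum_{k=lo}^{j} x_k with 1-based indices
        return csum[j] - csum[lo - 1]

    phi = np.zeros(2 * m)
    for i in range(1, D + 1):                       # eq. (12): phi_{2i-1}
        phi[2 * i - 2] = sum(np.cos(seg(abs(2 * i - j - 1) + 1, j)) for j in range(i, D + 1))
    for i in range(1, D):                           # eq. (13): phi_{2i}
        phi[2 * i - 1] = 0.5 + sum(np.cos(seg(abs(2 * i - j) + 1, j)) for j in range(i + 1, D + 1))
    phi[m:] = -phi[:m]                              # eq. (14)
    return phi.max()                                # eq. (11)
```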

5.2 Result and discussion of the real-world problems

To analyze the performance of hDEBSA on the four real-world problems, each algorithm was run 30 times with 5000 fitness evaluations and a population size of 50. Table 13 shows the performance results of Beightler and Phillips [35], DE [17], BSA [4], and hDEBSA on the first two real-world problems (P1 and P2). From this table, it is seen that the performances of DE, BSA, and hDEBSA are the same, and the performance of hDEBSA is better than that of Beightler and Phillips [35]. Table 14 shows the performance results of DE, BSA, and hDEBSA on real-world problems P3 and P4. From this table, it is seen that for P3 the performance of hDEBSA is better than that of BSA, and for P4 the performance of hDEBSA is better than that of both DE and BSA.

Table 13 Performance results on real-world optimization problems P1 and P2
Table 14 Best, mean, and SD results on real-world optimization problems P3 and P4, compared with other state-of-the-art algorithms

6 Conclusion

In this paper, a hybrid algorithm, hDEBSA, is presented that combines two popular optimization techniques, DE and BSA. In hDEBSA, self-adaptation schemes for the control parameters are suggested, in which the values of the control parameters vary automatically during the optimization process. For validation, the proposed hDEBSA is applied to twenty-eight CEC 2013 test functions, two industrial engineering design problems, and two further real-world optimization problems. The obtained results were compared with several standard algorithms (PSO, DE, ABC, BSA, and ABSA), several improved variants of DE (CoDE, EPSDE, DE/rand/2/bin), several improved variants of PSO (CLPSO, CPSO-H, FI-PS), and the hybrid algorithm DE-PSO. The comparison, in terms of both numerical results and statistical analysis, shows that the proposed method is superior to the aforementioned algorithms. Hence, the proposed method may be recommended for solving optimization problems in different branches of the humanities, science, and engineering.