1 Introduction

In the 1990s, the genetic algorithm (Goldberg 1989; Preen and Smith 2019) entered a booming period of rapid development, not only in theoretical research but also in applied research, where it began to be widely used. The application fields of the genetic algorithm expanded quickly, and its learning ability was also enhanced markedly. Application research introduced many new theories and methods and developed rapidly, adding new vitality to research on the genetic algorithm.

In 1991, D. Whitley presented a neighborhood-based crossover operator and validated it on the TSP problem (Arram and Ayob 2019). Seront and Bersini (1996) put forward a simplex crossover operation by combining the genetic algorithm with the simplex method: a candidate is generated from two parents and one additional individual. A comparison of results showed that this three-individual crossover operator performs better than the point crossover and uniform crossover operators. Quite a few domestic researchers have also continuously put forward new crossover operators to optimize the genetic algorithm. In 1995, Kennedy and Eberhart presented a novel optimization technique named particle swarm optimization (PSO) (Kennedy and Eberhart 1995; Ibrahim and Tawhid 2019). Like the traditional evolutionary algorithm (EA), PSO is a reliable and effective global optimization algorithm, and it is one of the most deeply studied and widely used. The essential contribution of PSO lies in the effective design of the velocity and position update equations based on the particle's position information: the current position, the best previous position, and the best position discovered. In 1997, Storn and Price presented another optimization technique, differential evolution (DE) (Storn and Price 1997; Antoniouk et al. 2019). First, a differential vector is generated by choosing two individuals from the parent population and applying the differential operation; second, another selected individual is combined with the differential vector to generate a trial individual; then the parent and the corresponding trial individual are crossed to generate a new offspring individual. Like PSO, DE and its variants are reliable and effective global optimization algorithms and have been applied successfully to real-world and benchmark problems.

There are numerous improved algorithms for the SGA, PSO, and DE algorithms, but analyzing them together reveals that their genetic operators share some general schemes. First, a genetic operator is an arithmetic equation combining objects/individuals, equation coefficients, and so on. Second, for the same objects or individuals, the arithmetic equation can have many variants obtained by transforming operators, individual positions, and equation coefficients. Third, for many evolutionary algorithms, the main difference lies essentially in the difference between their genetic operator equations.

Some special evolutionary algorithms have excellent equation discovery and modeling capabilities. Typical methods are Genetic Programming (GP) (Koza 1992; Rodriguez-Coayahuitl et al. 2019), Gene Expression Programming (GEP) (Ferreira 2001; Xiao et al. 2019), and Multi Expression Programming (MEP) (Oltean and Grosan 2004). In other words, GP, GEP, and MEP can generate arithmetic equations automatically for given input and output. Taking GEP as an example, for arithmetic equation generation the input could be the objects/individuals, equation coefficients, and arithmetic operators, and the output is a genetic operator for the given function optimization problem. If this idea can be realized, we think this paper has the following significance for research on evolutionary algorithms.

  1.

    The algorithm can be designed by the machine automatically. Traditional algorithm design has a typical limitation: its intrinsic nature is manual design, and the design process rests on the designers' repetitive and complex conception and experimentation, where tentativeness co-exists with uncertainty. Thus, the efficiency of algorithm design can be neither guaranteed nor predicted. Machine-based design may not be as clever as manual design, but it can give designers inspiration; some results obtained by the machine may be unimaginable to humans and beyond expectations.

  2.

    The algorithm can be designed by the machine more adaptively. Generally, much research relies on statistical analysis of data to derive new differential algorithms, and most existing learning strategies are not adaptable enough to apply to every problem function. It is therefore important to generate different optimal operators for different given problems. Letting the machine generate the proper algorithm for each problem has great theoretical and practical value.

In this paper, drawing on the powerful modeling ability of GEP, a framework for designing genetic operators automatically is presented based on the DE algorithm skeleton. The main contributions are summarized as follows.

  1.

    This paper puts forward a basic concept and method for designing algorithms with an algorithm, which realizes the automatic design of algorithms to a certain extent.

  2.

    According to the characteristics of genetic operators in DE algorithms, and with the help of the GEP algorithm, a framework that can automatically generate genetic operators is constructed and validated on classical function optimization problems.

  3.

    Compared with the classical, effective, and hybrid SaDE algorithm (Qin et al. 2009), the proposed framework obtains competitive results.

  4.

    To be clear, this paper does not aim at designing high-performance algorithms but at designing a framework that can design algorithms automatically. We believe this kind of research and exploration is still in its infancy but has promising research prospects. We expect that this automatic framework could be integrated with any high-performance algorithm to achieve stronger problem-solving ability.

The rest of this paper is organized as follows. The following section reviews related work, describing and discussing several classical and similar frameworks. Section 3 gives the process of automatically designing genetic operators based on GEP and presents the corresponding framework. In Sect. 4, the experimental verification of the algorithms is presented. In Sect. 5, conclusions and discussions are given.

2 Related work

Generally speaking, the automatic design of algorithms has attracted extensive attention in the evolutionary computation community. The main focus of automatic algorithm design is the selection of genetic operators and the influence of operator selection on algorithm performance. For example, Zhang and Sanderson (2009) presented JADE, a new differential evolution (DE) algorithm that improves optimization performance through a newly presented variation strategy. Brest and Maučec (2011) argued that the selection of operators is critical for DE performance, so they put forward two DE variants with two adaptive strategy selection technologies, namely probability matching and adaptive pursuit. Based on the typical multi-objective algorithm MOEA/D, Lin et al. (2017) presented an adaptive control strategy that can apply multiple DE strategies adaptively at different evolutionary stages. Motivated by the no-free-lunch theory, Mallipeddi et al. (2010) proposed an ensemble approach in which each mutation operator has its related population during the different stages of the problem-solving process. Jiang et al. (2014) and Jiang and Fan (2014) presented a novel evolutionary algorithm based on the automatic design of genetic operators, which can simultaneously find solutions in problem space and generate genetic operators automatically in operator space. Based on the simple genetic algorithm, Diosan and Oltean (2009) presented a model to generate evolutionary algorithms by evolutionary means, finding the optimal algorithm structure and variable parameter values to solve problems more efficiently. SL-GEP was presented by Zhong et al. (2015). In SL-GEP, each chromosome in the population is composed of a main program and a set of automatically defined function (ADF) modules, and the defined functions are expressed by the gene expression of GEP. In terms of mechanism, each ADF has functional units that can solve sub-problems, which are combined into the main program to solve specific problems. Mahanipour and Nezamabadi-Pour (2019) presented a new gravitational search algorithm-based technique, called gravitational search programming (GSP), to create computer programs automatically. Nyathi and Pillay (2018) used the genetic algorithm (GA) and grammatical evolution (GE) to design GP-based classification algorithms automatically; in their experiments, the automatically designed classifiers outperformed manually designed classifiers. In view of the importance of the mutation operator, Hong et al. (2018) used GP to generate operators automatically, and their results show that the proposed method outperforms existing human-designed operators. Woodward and Swan (2012) presented a meta-learning method to explore mutation operators in a constrained design space. Contreras-Bolton and Parada (2015) presented a combination mechanism for the crossover and mutation operators, which improves searching ability on the traveling salesman problem.

3 Designing Genetic Operators Automatically Based on GEP and DE

As mentioned before, the framework presented in this paper is based on the GEP and DE algorithms, so this section first reviews them. Then, based on GEP and DE, a detailed framework for designing genetic operators automatically is given. In addition, a program for verification is given to test the performance of the newly found genetic operators.

3.1 GEP

As a linear expression programming method, GEP can be regarded as an extended algorithm that combines the advantages of GA and GP. Compared with GA, GEP utilizes a new kind of encoding and decoding method to express and transform individuals, and essentially uses a hierarchical searching method to describe the problem. Furthermore, compared with classical GP, GEP differs in that it uses a linearized genome to describe individuals, and this genome can avoid the combinatorial explosion of the GP method.

Generally, a GEP chromosome consists of two parts, a head (H) and a tail (T). Genes in H are selected from the operator set and the terminal set, whereas genes in T can only be selected from the terminal set (Ferreira 2001). Assuming h is the size of H, t is the size of T, and all operations in the operator set require at most n parameters, then t is given by:

$$\begin{aligned} t=h*\left( n-1 \right) +1 \end{aligned}$$
(1)
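As a small, concrete illustration of Eq. (1), the following sketch (the helper name is illustrative, not the authors' code) builds a random head-tail chromosome whose tail length follows the formula:

```python
import random

# Illustrative sketch: construct a random GEP chromosome whose tail
# length follows Eq. (1), t = h * (n - 1) + 1.
def random_chromosome(h, operators, terminals, n=2):
    t = h * (n - 1) + 1                                  # Eq. (1)
    head = [random.choice(operators + terminals) for _ in range(h)]
    tail = [random.choice(terminals) for _ in range(t)]  # terminals only
    return head + tail
```

With h = 10 and binary operators (n = 2) the tail has 11 genes, giving 21 genes in total, which matches the length of the sample chromosome shown next.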

According to the rules mentioned above, a sample chromosome can be constructed as follows.

$$\begin{aligned} + {Q} - / {b} * {a \,a\, Q\, b\, a\, a\, b\, a\, a\, b\, b\, a\, a\, a\, b} \end{aligned}$$

Like the phenotype of GP, the phenotype of GEP uses a tree structure to describe the expression. According to the specific reading rules, the expression tree (ET) can be constructed; thus, the above chromosome can be transformed into the ET shown in Fig. 1.

Fig. 1
figure 1

An example of a GEP expression tree

Traversing this expression tree in in-order, the following equation is obtained.

$$\begin{aligned} {{Q}({a}/{a})+({b}-({Q}({a})*({a}+{b})))} \end{aligned}$$

GEP genetic operators include mutation, transposition of insertion sequence elements, and recombination. More details about the GEP genetic operators can be found in Ferreira (2001).
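To make the genotype-to-phenotype mapping concrete, here is a minimal sketch of Karva decoding and evaluation (the function names are illustrative; 'Q' denotes the square root, following Ferreira's notation; genes beyond the open reading frame are simply never visited, as in GEP):

```python
import math
from collections import deque

ARITY = {'+': 2, '-': 2, '*': 2, '/': 2, 'Q': 1}  # terminals have arity 0

def decode(genes):
    """Assign children breadth-first (level by level) to build the ET."""
    nodes = [[g] for g in genes]          # each node: [symbol, *children]
    queue = deque([nodes[0]])
    i = 1
    while queue:
        node = queue.popleft()
        for _ in range(ARITY.get(node[0], 0)):
            child = nodes[i]
            i += 1
            node.append(child)
            queue.append(child)
    return nodes[0]

def evaluate(node, env):
    """Recursively evaluate the ET, reading terminal values from env."""
    sym = node[0]
    if sym not in ARITY:
        return env[sym]
    ops = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
           '*': lambda x, y: x * y, '/': lambda x, y: x / y,
           'Q': lambda x: math.sqrt(x)}
    return ops[sym](*(evaluate(c, env) for c in node[1:]))
```

For example, the gene string `+a*bb` decodes to a + b*b, so with a = 2 and b = 3 it evaluates to 11.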

3.2 DE

The idea of DE originates from the genetic algorithm. As a robust and efficient evolutionary algorithm, DE has been widely used in parallel computing, multi-objective optimization, constrained optimization, and numerous application areas. Assume N is the size of the population, D is the dimension, and t is the generation; the population can be denoted \(X_i\left( t \right) , i=1,2,\cdots ,N\). The new vector \(X_i\left( t+1 \right)\) is generated by:

$$\begin{aligned} X_i\left( t+1 \right) =X_{r_1}\left( t \right) +F\left( X_{r_2}\left( t \right) -X_{r_3}\left( t \right) \right) \end{aligned}$$
(2)

where \(r_1,r_2,r_3\in \left[ 1,N \right]\) and \(r_1\ne r_2\ne r_3\ne i\). F is the scale factor, usually with \(F\in \left[ 0,1 \right]\).
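Equation (2), the classical DE/rand/1 mutation, can be sketched directly (the function name is illustrative):

```python
import random

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1 mutation, Eq. (2): X_r1 + F * (X_r2 - X_r3)."""
    # choose three mutually distinct indices, all different from i
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]
```

Note that when the population has collapsed to a single point, the difference term vanishes and the donor equals the base vector, which is why population diversity matters for DE.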

The core part of DE is its differential strategy, which determines its optimization ability and efficiency. Early work on DE proposed many different differential strategies [9,10]. Some typical differential strategies are shown below.

$$\begin{aligned}&X_1+F*\left( X_2+X_3-X_4-X_5 \right) \end{aligned}$$
(3)
$$\begin{aligned}&X_b+F*\left( X_2-X_3 \right) \end{aligned}$$
(4)
$$\begin{aligned}&X_b+F*\left( X_2+X_3-X_4-X_5 \right) \end{aligned}$$
(5)
$$\begin{aligned}&X_1+F_1*\left( X_b-X_1 \right) +F_2*\left( X_2-X_3 \right) \end{aligned}$$
(6)
$$\begin{aligned}&X_i+F_1*\left( X_1-X_i \right) +F_2*\left( X_2-X_3 \right) \end{aligned}$$
(7)
$$\begin{aligned}&X_i+F_1*\left( X_b-X_i \right) +F_2*\left( X_2-X_3 \right) \end{aligned}$$
(8)

where \(X_1,X_2,X_3,X_4,X_5\) represent random individuals, \(X_b\) is the best individual, \(X_i\) is the current individual, and \(F_1\), \(F_2\) are scaling factors. The first and fourth differential strategies are currently the most widely used and among the most successful. As a matter of fact, on many different types of problems a fixed differential strategy (Liang et al. 2019) has no particular advantage in seeking the optimal population or the optimal rate of convergence. When the problem changes, the corresponding differential strategy should also change, so as to better adapt to the problem (Jiang et al. 2020). In this work, the GEP evolutionary modeling method is adopted to find the optimal differential strategy for each problem. GEP evolutionary modeling (Jiang and Fan 2014) can automatically design different expressions suitable for different problems; these expressions are then used in the differential evolution algorithm to search for the best individual and thereby solve the problem with an approximate solution.
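The strategies in Eqs. (3)-(8) differ only in which vectors and coefficients enter a single linear combination, so they can be written as interchangeable callables that a DE loop swaps freely. The following sketch (helper names are illustrative, not from the paper) shows three of them:

```python
def lin(*pairs):
    """Linear combination sum(c * v) of (coefficient, vector) pairs."""
    dim = len(pairs[0][1])
    return [sum(c * v[d] for c, v in pairs) for d in range(dim)]

def rand2(X1, X2, X3, X4, X5, F):
    """Eq. (3): X1 + F * (X2 + X3 - X4 - X5)."""
    return lin((1, X1), (F, X2), (F, X3), (-F, X4), (-F, X5))

def best1(Xb, X2, X3, F):
    """Eq. (4): Xb + F * (X2 - X3)."""
    return lin((1, Xb), (F, X2), (-F, X3))

def current_to_best(Xi, Xb, X2, X3, F1, F2):
    """Eq. (8): Xi + F1 * (Xb - Xi) + F2 * (X2 - X3)."""
    return lin((1, Xi), (F1, Xb), (-F1, Xi), (F2, X2), (-F2, X3))
```

This uniform shape is exactly what makes the strategies a natural search space for an automatic design method.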

3.3 General Scheme of SGA, PSO and DE

If the individuals in the population are regarded as particles, then the genetic operators in SGA, PSO, and DE above can be regarded as the particle's position update equation (\(\mu\)), expressed as \(\mu _{GA}\), \(\mu _{PSO}\) and \(\mu _{DE}\). Through the above analysis, we find that the chromosome \(\kappa\) of the GEP method can be mapped (the mapping defined as \(\zeta\)) to a location update equation \(\mu\) (\(\mu =\zeta \left( \kappa \right)\)), where the terminal set T is determined by the specific individuals in the population, constants, random numbers, etc., and the operator set O is determined by arithmetic operators such as '+', '-', '*', etc.

According to the characteristics of chromosomes, the longer the length (\(l\left( \kappa \right)\)) of a chromosome, the more genotypes and phenotypes there are, and hence the more particle update equations it can be mapped to. Suppose there are n chromosomes \(\left( \kappa _1,\cdots ,\kappa _n \right) , n\rightarrow \infty\), and select one chromosome \(\kappa _i, i\in \left\{ 1,\cdots ,n \right\}\). When \(l\left( \kappa _i \right) \geqslant L\), the following relation holds.

$$\begin{aligned} \left\{ \mu _{GA},\mu _{PSO},\mu _{DE} \right\} \subset \left\{ \mu _1=\zeta \left( \kappa _1 \right) ,\cdots ,\mu _n=\zeta \left( \kappa _n \right) \right\} \end{aligned}$$
(9)

The location update equations of GA, PSO, and DE can be converted to the corresponding chromosomes by the inverse \(\zeta ^{-1}\) of \(\zeta\). Thus,

$$\begin{aligned} \kappa _{GA}= & {} \zeta ^{-1}\left( \mu _{GA} \right) \end{aligned}$$
(10)
$$\begin{aligned} \kappa _{PSO}= & {} \zeta ^{-1}\left( \mu _{PSO} \right) \end{aligned}$$
(11)
$$\begin{aligned} \kappa _{DE}= & {} \zeta ^{-1}\left( \mu _{DE} \right) \end{aligned}$$
(12)

Assuming all of the genes in the chromosomes \(\kappa _{GA}\), \(\kappa _{PSO}\) and \(\kappa _{DE}\) are effective genes (for the definition of an effective gene, see Jiang et al. (2006)), then,

$$\begin{aligned} L=\min \left( l\left( \kappa _{GA} \right) ,l\left( \kappa _{PSO} \right) ,l\left( \kappa _{DE} \right) \right) \end{aligned}$$
(13)

According to the analysis above, one obvious advantage of the chromosome \(\kappa\) is its strong representation capability: \(\mu _{GA}\), \(\mu _{PSO}\) and \(\mu _{DE}\) can each be seen as a special case of \(\zeta \left( \kappa \right)\). Suppose a chromosome has length L. (Taking GEP as an example, a traditional GEP chromosome is constructed from a head unit and a tail unit. Genes in the head are selected randomly from T and O, while genes in the tail are selected only from the terminal set T. With head length h and tail length t, the total length is \(L=h+t=h+h\left( n-1 \right) +1=hn+1\); the size of set T is \(\alpha\) and that of O is \(\beta\).) The number of possible location update equations (\({\mathbb {N}}\)) can then be expressed as follows.

$$\begin{aligned} {\mathbb {N}}=\prod _{j=1}^h{\left( \alpha +\beta \right) }\prod _{k=h+1}^{hn+1}{\alpha } \end{aligned}$$
(14)

Suppose h=10, n=2, \(\alpha\)=5 and \(\beta\)=3; then \({\mathbb {N}}=8^{10}\times 5^{11}\)=52428800000000000.
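Under the standard GEP convention that tail genes are drawn from the terminal set, this count is a simple product, which the following sketch (illustrative helper name) computes:

```python
def num_update_equations(h, n, alpha, beta):
    """Head positions choose from T union O (alpha + beta symbols each);
    tail positions choose from the terminal set T (alpha symbols each)."""
    tail = h * (n - 1) + 1  # tail length, Eq. (1)
    return (alpha + beta) ** h * alpha ** tail
```

Even for this tiny symbol alphabet the search space of update equations is astronomically large, which motivates searching it with an evolutionary method rather than by hand.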

Since chromosomes have such representation ability in automatic programming methods, can these methods automatically construct the genetic operators of evolutionary algorithms, generating operators in the process of solving problems instead of using pre-defined ones? Generally, evolutionary algorithms have self-organizing, self-adapting, and self-adjusting characteristics, yet the genetic operator of an evolutionary algorithm is always fixed in advance. An algorithm composed of predetermined operators may be particularly effective on some problems but not show the same degree of superiority on others; this phenomenon is explained by the No Free Lunch (NFL) theory (Wolpert and Macready 1997). So we can make a hypothesis: if an algorithm's genetic operators also had self-organizing, self-adapting, and self-regulating capacities, meaning the genetic operators automatically adjust to the problem during the problem-solving process, could the NFL limitation seemingly be circumvented? Because the chromosome in automatic programming methods has a strong ability to express equations, a chromosome mapped into a particle update equation may yield an equation that is inconceivable, or that could not be designed manually. It can be said that using automatic programming methods to evolve genetic operators realizes the automation of evolutionary algorithms to some extent. This is quite a bold and novel concept. Of course, the GP, GE, GEP, and MEP methods can all be used to construct genetic operators; in this paper, the GEP method is selected as the algorithmic carrier for automatic genetic operator generation.

3.4 The framework for designing genetic operators automatically based on GEP (DGOA)

In this paper, the effectiveness of the automatic framework is first verified using a set of single-objective optimization problems. To achieve this goal, the framework is divided into two modules: the genetic operators designing module and the function optimization module.

The two modules have different responsibilities. In the genetic operators designing module, the duty of the GEP-based method is to find proper genetic operators automatically. Meanwhile, in the function optimization module, the operators generated by the genetic operators designing module are used to manipulate the individuals for function optimization, in order to find the global optimal solution of the problem.

Obviously, in the automatic design of genetic operators, the genetic operators are not defined by the designer before solving the problem; in fact, they are searched for and designed in the process of solving it. For this reason, the two modules run at the same time and interact with each other to realize both genetic operator discovery and problem solution discovery (see Fig. 2).

Fig. 2
figure 2

The general diagram of DGOA

The interaction between the two modules can be described as follows: in the genetic operators designing module, the GEP method is used to generate a variety of genetic operators for the differential evolution algorithm; these are then applied in the DE algorithm, the DE algorithm is used to evaluate the efficiency of each genetic operator, and the result is fed back to the GEP algorithm.

First, the genetic operators are initialized; then the selection, mutation, transposition, and recombination operators are used to generate a new genetic operator, the generated genetic operator is applied to the DE algorithm, and the optimal value of the corresponding problem is calculated. If the optimal value does not meet the desired accuracy, the process of selection, mutation, transposition, and recombination continues to adapt a new genetic operator, until the calculated optimal value meets the expectation, at which point the loop terminates. In this way, the GEP evolutionary modeling method is suitable for generating differential expressions for different problems (Chen et al. 2017; Song et al. 2020; Jiang et al. 2020).
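The interaction loop above can be sketched as follows. This is a heavily simplified, hypothetical sketch (all names are illustrative, not the authors' code): to keep it short, the operator-design module is shown as sampling from a small pool of candidate update equations rather than evolving them with full GEP mutation, transposition, and recombination, and the target is a sphere function:

```python
import random

def sphere(x):
    """Illustrative target function to minimize."""
    return sum(v * v for v in x)

def random_operator(F=0.5):
    """Stand-in for the GEP module: propose one candidate update equation."""
    pool = [
        lambda Xi, Xj, Xk: [a + F * (b - c) for a, b, c in zip(Xi, Xj, Xk)],
        lambda Xi, Xj, Xk: [F * (a + b + c) for a, b, c in zip(Xi, Xj, Xk)],
        lambda Xi, Xj, Xk: [b + F * (a - c) for a, b, c in zip(Xi, Xj, Xk)],
    ]
    return random.choice(pool)

def de_run(op, dim=5, pop_size=10, gens=30):
    """Function optimization module: score one operator by a short DE run."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            j, k, l = random.sample([x for x in range(pop_size) if x != i], 3)
            trial = op(pop[j], pop[k], pop[l])
            if sphere(trial) < sphere(pop[i]):  # greedy DE selection
                pop[i] = trial
    return min(sphere(x) for x in pop)

def dgoa(candidates=10, target=1e-3):
    """Outer loop: keep proposing operators until one reaches the target."""
    best_op, best_val = None, float("inf")
    for _ in range(candidates):
        op = random_operator()
        val = de_run(op)
        if val < best_val:
            best_op, best_val = op, val
        if best_val <= target:  # expected accuracy reached: stop
            break
    return best_op, best_val
```

The feedback described in the text is the score returned by `de_run`: it is the fitness of the candidate operator, and it drives which operators survive.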

Because the genetic operators produced by the genetic operators designing module are unpredictable, the differential operators may be good or bad. Among these unforeseen operators, some need to be avoided, such as the following:

$$\begin{aligned}&X_i-F*X_j \end{aligned}$$
(15)
$$\begin{aligned}&X_i-X_i \end{aligned}$$
(16)

In Formula 15, the operator contains only two individuals, and the diversity of two individuals is smaller than that of three. Therefore, the DGOA algorithm needs to avoid generating such operators (Fig. 3).

Fig. 3
figure 3

The framework of DGOA

From Formula 16, it is clear that the result of this operator is constantly zero. No matter which individual values participate in the mutation, applying this formula means the DE algorithm cannot produce new variation and thus falls into local search.

To solve the problems mentioned above, a detection method is presented in this paper to effectively avoid the generation of invalid operators: according to the dimension D of the problem, M different test vectors \(T_i, i\in \left[ 1,M \right]\) are generated randomly. Substituting the M test vectors into a genetic operator G yields M new individuals \(T_{i}^{'}\), which are then compared to check whether they are invariant. If they are invariant, the differential strategy is invalid, cannot cause changes, and should not be applied to the DE algorithm; otherwise, it is effective.
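A minimal sketch of this invariance check follows (the helper name is illustrative, and for brevity a single test vector stands in for all of the operator's individual arguments): if every output is identical across the M distinct random inputs, the candidate is a constant operator such as Formula 16 and is rejected.

```python
import random

def is_effective(G, dim, M=10):
    """Reject constant operators: push M random test vectors through a
    candidate operator G and check whether the outputs vary at all."""
    tests = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(M)]
    outputs = [G(t) for t in tests]
    return any(o != outputs[0] for o in outputs[1:])
```

For example, the operator of Formula 16 maps every input to the zero vector, so all M outputs coincide and the check fails, while any operator whose output depends on its input passes.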

3.5 Program for verification

To verify the performance of DGOA, two programs for verification are designed. The first is very similar to the classical DE algorithm; the only difference is the genetic operator. For the second, we choose the SaDE algorithm (Qin et al. 2009), a classical and effective hybrid algorithm for complex function optimization. Two genetic operators generated by the DGOA algorithm are selected and added to SaDE to further verify performance. Since the DE algorithm is familiar and classical, this paper focuses on the second program.

As stated in the previous section, the GEP model is used to evolve genetic operators automatically. This section discusses the combination of genetic operators evolved by GEP, which greatly reduces the chance of the DE algorithm being trapped in a local optimum it cannot escape. The main steps of the hybrid evolutionary algorithm based on DE are largely standard: in the course of the DE algorithm, ordinary DE uses one genetic operator to update the population iteratively, whereas the hybrid evolutionary algorithm uses two genetic operators.

Since two candidate learning strategies have been selected, it is assumed that one strategy is selected for each individual in the current population; the probability of applying one learning strategy is p1 and the probability of applying the other is p2, where \(p2 =1-p1\). In the initialization phase, \(p1= p2 =0.5\). Then, in the process of solving the problem, p1 and p2 are not fixed but dynamically adjusted. After all newly generated trial candidates are evaluated, the trial candidates generated by strategy one that successfully enter the next generation are recorded; their total number is marked ns1, and the number of successful trial candidates generated by strategy two is ns2. The failures of the two strategies are also recorded: the number of discarded trial candidates generated by strategy one is nf1, and the number generated by strategy two is nf2. The probabilities p1 and p2 are updated as follows:

$$\begin{aligned} p1= & {} \frac{ns1*\left( ns2+nf2 \right) }{ns2*\left( ns1+nf1 \right) +ns1*\left( ns2+nf2 \right) } \end{aligned}$$
(17)
$$\begin{aligned} p2= & {} 1-p1 \end{aligned}$$
(18)
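The update in Eqs. (17)-(18) can be implemented directly (the function name is illustrative):

```python
def update_probs(ns1, nf1, ns2, nf2):
    """Eqs. (17)-(18): reweight the two strategies by their success records."""
    num = ns1 * (ns2 + nf2)
    p1 = num / (ns2 * (ns1 + nf1) + num)
    return p1, 1.0 - p1
```

Equal records (ns1 = ns2 and nf1 = nf2) keep the strategies balanced at p1 = p2 = 0.5, while a higher success rate for strategy one pushes p1 up.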

Finally, generations are produced using the two candidate learning strategies applied according to the probabilities p1 and p2, comprehensive experiments are conducted, and the performance is compared with SaDE.

The framework of the hybrid evolutionary algorithm based on DE

figure a

4 Experimental Verification

4.1 Problem Description

The function minimization problem is a classical differential evolution task: given a set of functions and their domains, compute the minimum of each function within its domain. Table 1 lists the eight functions:

Table 1 The test example of function to minimize

4.2 The operators generated by DGOA

Table 2 shows the parameter settings for DGOA. For the genetic operators designing module, \(T=\left\{ X_i,X_j,X_k,F \right\}\) and the operator set is \(\left\{ +,-,* \right\}\). To simplify the program design, only three individuals and one scaling factor are selected in this paper, and the genetic operators generated by GEP can only contain the three operators in the operator set. In fact, the possible difference operators composed from T and the operator set are varied enough to satisfy most problem-solving needs. Of course, ever more numerous and complex operators can be imagined under different selections of the terminal and operator sets. Other parameters of the DE algorithm are set according to the classical DE algorithm.

Table 2 The parameters setting in the proposed DGOA

In this experiment, DGOA automatically evolved difference operators for the different function optimization problems; the optimum operators obtained at different generations are shown in Table 3.

Table 3 The optimum operator obtained in the different generation for different function optimization problem

Due to the time limits of DGOA, the maximum generation of the program is set to 120, so if the required accuracy has not been reached by generation 120, the iteration stops. The experiments show that only F7 failed to reach the expectation within the maximum generation; for the F1–F6 and F8 problems, the DE algorithm approached the required accuracy within fewer iterations, at which point it jumps out of the DGOA cycle. It can therefore be concluded that DGOA is close to DE in its ability to find the minimum, which means the genetic operators generated by DGOA are effective and reasonable (Fig. 4). The best operators generated for the different problems are shown below.

$$\begin{aligned}&\text {Strategy}1: X_i+X_i*X_j*F*F*X_j+X_i*X_i \end{aligned}$$
(19)
$$\begin{aligned}&\text {Strategy}2: F*\left( X_i+X_j+X_k \right) \end{aligned}$$
(20)
$$\begin{aligned}&\text {Strategy}3: X_i*X_k*X_k \end{aligned}$$
(21)
$$\begin{aligned}&\text {Strategy}4: X_j+X_j*X_j*X_j+F*X_j*X_j \end{aligned}$$
(22)
$$\begin{aligned}&\text {Strategy}5: X_i+X_i*X_i*X_j+F*X_i+X_i+X_i*X_i \end{aligned}$$
(23)
$$\begin{aligned}&\text {Strategy}6: F*X_j*X_j-X_j*X_j*X_k \end{aligned}$$
(24)
$$\begin{aligned}&\text {Strategy}7: X_j+F*X_i-F*X_k+F \end{aligned}$$
(25)
$$\begin{aligned}&\text {Strategy}8: F*X_k*X_k*X_k \end{aligned}$$
(26)
Fig. 4
figure 4

The convergence speed between Classical DE and the operators generated by the DGOA

4.3 DE based on the outcome of GEP

This subsection addresses the structure of the general genetic operator of the DE algorithm. We selected eight candidate learning strategies, namely the eight best-performing genetic operators generated by the GEP evolution of the proposed evolutionary algorithm, and applied them respectively to the eight problem functions. By comparing the performance of all genetic operators on these eight functions, several genetic operators (two are enough in our research) are selected as candidate operators to be used in the hybrid evolutionary algorithm.

After the experiment, eight appropriate difference strategies were selected from the experimental results. The following tables give the mean results of each function based on the DE algorithm using the eight selected genetic operators, where Max_FEs=100000 and 25 independent runs were executed (Tables 4, 5, 6, 7).

Table 4 For function 2
Table 5 For function 5
Table 6 For function 8

From the line chart above, it is obvious that difference strategy 8 converges to the expected value much earlier than the classical differential strategy of the DE algorithm. For function 8, therefore, the differential strategy automatically designed based on GEP performed much better.

From the experimental results, the seventh function optimization did not exactly reach the expected value, but its performance is almost the same as the classic DE differential strategy. All the other functions reached the expected value in the best case. Moreover, for those functions the median fully meets the expectation of each function, and even in the worst case only one value failed to meet it. This shows that the GEP-based DE evolutionary algorithm is effective for the function minimization problem.

Compared with the classic genetic operators used in the DE algorithm, the differential strategies automatically designed based on GEP perform better.

4.4 The comparative experiment with SaDE

Fig. 5
figure 5

The convergence speed between the SaDE and AdhDE

Two genetic operators generated by the DGOA algorithm are selected and added to SaDE to further verify performance. Through this experiment (Fig. 5), we want to verify that although DGOA generates genetic operators independently for specific problems, these operators have a certain broad adaptability. Here, we present an Adaptive and hybrid DE (AdhDE) based on selected genetic operators designed by DGOA; the selected operators are \(X_k*F*F-X_k+X_i+X_k+X_k-X_k-X_j\) and \(X_k+F*X_k-X_j+X_i*F+F*F\). In SaDE, the selected operators are \(X_i+F*\left( X_j-X_k \right)\) and \(X_i+F*\left( X_b-X_i \right) +F*\left( X_2-X_3 \right)\). In the following figures, AdhDE stands for the improved method with automatically designed hybrid differential strategies based on DE. The line chart below shows the mean values obtained by AdhDE and SaDE; Max_FEs=100000 and 25 independent runs were executed.

Observing the experimental results, it can be found that the mixed operators automatically generated by the GEP algorithm mostly obtain better minimum values than SaDE and get closer to the optimum. From the line chart, we can see that while approaching the minimum, the hybrid operator algorithm converges more quickly than traditional SaDE on most functions.

This shows that the hybrid evolutionary algorithm based on DE is effective for the function minimization problem. Although occasional results do not meet expectations, the hybrid evolutionary algorithm based on DE is, overall, stable, reasonable, and effective.

Table 7 The results obtained by SaDE and by the mixed operators automatically generated by the GEP algorithm (AdhDE)

5 Conclusion

This work explores GEP evolutionary modeling and the performance of the generated genetic operators when applied in the DE algorithm, and then compares the performance with the SaDE algorithm; that is, it compares the hybrid operators generated by GEP evolution with the classic SaDE operators. Our hybrid-genetic-operator evolutionary algorithm converges better than SaDE on most of the optimization functions and yields a more effective evolutionary algorithm. The work is therefore divided into three modules: automatic design of differential strategies by the evolutionary algorithm, selection of candidate differential strategies based on the DE algorithm, and the hybrid evolutionary algorithm based on DE, compared against SaDE.

In the automatic design module, differential strategies for the DE algorithm are generated automatically by the GEP model. Each differential strategy is generated automatically for the corresponding problem, so researchers no longer need to analyze data manually to derive it. Each differential strategy is then applied to the DE algorithm; fitness values are generated from the approximate solutions of the problem, and through the principle of natural selection, good individuals are kept and the approximate solution is continuously optimized and analyzed.

The hybrid evolutionary algorithm based on DE mixes the automatically designed genetic operators, and the minimum value of each function is calculated and validated using the generated differential strategies. The verification shows that the differential strategies generated by GEP evolutionary modeling converge better, and that GEP evolutionary modeling can generate effective and reasonable differential strategies. The DE algorithm based on GEP modeling is also effective for solving most of the functions, and the differential strategies are, to a certain degree, better adapted to the corresponding problems.