
1 Introduction

Coal has long been the main energy source in China, accounting for more than 60% of energy consumption, and it is expected to remain so until 2050. Intelligent coal mining is a new stage in the development of fully mechanized coal mining technology, and an inevitable requirement of the technological revolution and upgrading of the coal industry [1]. Three-dimensional modeling of coal seams at the fully mechanized mining face is an important foundation for coal enterprises to realize “intelligent management and transparent mining”.

Researchers often use kriging interpolation to estimate values in unknown regions of space when building 3D models. To address the tendency of kriging interpolation to overfit or underfit when fitting the variogram, this paper proposes a Differential Evolution algorithm with Adaptive Population size Reduction (APRDE) to optimize the kriging interpolation algorithm. Experiments verify that the proposed adaptive differential evolution algorithm achieves higher solution accuracy, faster convergence, and better stability.

2 The APRDE Algorithm

The Differential Evolution (DE) algorithm [2] is an effective heuristic search algorithm that can be used to solve parameter optimization problems. This paper improves the DE algorithm through a modified mutation strategy, a parameter adaptive adjustment mechanism, and a population reduction strategy.

2.1 Mutation Strategy

The mutation process and the crossover process are the core parts of the differential evolution algorithm, and the mutation formula is:

$$ \begin{array}{*{20}c} {v_{i} = x_{i} + F\left( {x_{j} - x_{r} } \right)} \\ \end{array} $$
(1)

where \(v_{i}\) is the mutated individual and \(x_{i} ,x_{j} ,x_{r}\) are randomly chosen individuals from the current population with \(i \ne j \ne r\). To enrich population diversity and improve the local exploitation ability of the algorithm during mutation, this paper adopts an ordered mutation strategy [3], with the mutation equation:

$$ v_{i} = x_{i} + F(x_{best} - x_{i} ) + F(x_{middle} - x_{worst} ) $$
(2)

Three individuals randomly selected from the current population are sorted by fitness value to obtain \(x_{best} ,x_{middle} ,x_{worst}\). Using the current vector as the base vector prevents the algorithm from falling into a local optimum or stagnating. Combining the base vector with two ordered difference vectors enriches population diversity and steers the direction of mutation gradually toward the optimal solution.
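The ordered mutation of Eq. (2) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the fitness array and minimization convention are assumptions.

```python
import numpy as np

def ordered_mutation(pop, fitness, i, F):
    """Ordered mutation (Eq. 2): sort three random distinct individuals
    by fitness into best/middle/worst, then combine the current vector
    x_i with two ordered difference vectors.

    pop     : (NP, D) array of individuals
    fitness : (NP,) array, lower is better (minimization assumed)
    i       : index of the current (base) individual
    F       : mutation factor
    """
    # pick three indices distinct from each other and from i
    candidates = [k for k in range(len(pop)) if k != i]
    a, b, c = np.random.choice(candidates, 3, replace=False)
    # order the three picks by fitness (best = smallest fitness)
    best, middle, worst = sorted((a, b, c), key=lambda k: fitness[k])
    return pop[i] + F * (pop[best] - pop[i]) + F * (pop[middle] - pop[worst])
```

Because the base vector is \(x_{i}\) itself and one difference vector points toward \(x_{best}\), the mutant stays anchored at the current individual while being pulled toward better regions.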

2.2 Parameter Adaptive Adjustment

During mutation, the mutation factor F controls the magnitude of the perturbation applied to the base vector. To balance the global exploration and local exploitation abilities of the algorithm, this paper uses the hyperbolic tangent curve over [−4, 4] [4] to control the mutation factor, with the following update equation:

$$ \begin{array}{*{20}c} {F = \frac{{F_{max} + F_{min} }}{2} + \frac{{\tanh \left( { - 4 + 8\frac{{G_{max} - G}}{{G_{max} }}} \right)\left( {F_{max} - F_{min} } \right)}}{2}} \\ \end{array} $$
(3)

where \(F_{\min }\) and \(F_{\max }\) are the minimum and maximum values of the mutation factor, \(G_{max}\) is the maximum evolutionary generation, and G is the current evolutionary generation. The hyperbolic tangent curve changes very little at its two ends, and between them the mutation factor varies approximately linearly between the maximum and minimum values, striking a balance between global exploration and local exploitation. In addition, when the best individual stagnates, a mutation factor drawn from a normal distribution is used to enhance the diversity of the mutant vectors and help the algorithm jump out of a local optimum. The mutation factor is updated as follows:

$$ F = \left\{ {\begin{array}{*{20}c} {randn(0.5,0.1),} & {\left| {x_{best,g} - x_{best,g + 1} } \right| < 10^{ - 8} } \\ {{\text{Eq}}.\;(3),} & {otherwise} \\ \end{array} } \right. $$
(4)
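Eqs. (3) and (4) can be combined into one helper. This is a sketch under assumed defaults: the bounds \(F_{min}=0.1\), \(F_{max}=0.9\) are illustrative choices, and the stagnation test (Eq. 4's condition on successive best individuals) is passed in as a flag.

```python
import numpy as np

def mutation_factor(G, G_max, F_min=0.1, F_max=0.9, stagnated=False):
    """Adaptive mutation factor.

    Normally follows the tanh schedule of Eq. (3): the argument
    -4 + 8*(G_max - G)/G_max sweeps from 4 down to -4, so F decays
    from near F_max to near F_min, almost linearly in the middle.
    If the best individual has stagnated (|x_best,g - x_best,g+1| < 1e-8,
    Eq. 4), F is resampled from N(0.5, 0.1) instead.
    """
    if stagnated:
        return np.random.normal(0.5, 0.1)
    t = np.tanh(-4.0 + 8.0 * (G_max - G) / G_max)
    return (F_max + F_min) / 2.0 + t * (F_max - F_min) / 2.0
```

At mid-evolution (G = G_max/2) the tanh argument is 0, so F sits exactly at the midpoint (F_max + F_min)/2.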

The crossover process of the differential evolution algorithm is as follows:

$$ \begin{array}{*{20}c} {u_{i,j} = \left\{ {\begin{array}{*{20}c} {v_{i,j} ,} & {\begin{array}{*{20}c} {rand\left( {0,1} \right) < CR} & {or} & {j = rand\left( {1,D} \right)} \\ \end{array} } \\ {x_{i,j} ,} & {otherwise} \\ \end{array} } \right.} \\ \end{array} $$
(5)

The crossover process operates on each variable in an individual, where D denotes the number of variables. Variables from the mutated individual are exchanged with those of the initial individual according to the conditions above, yielding the trial individual. To adapt the crossover process to the evolution, this paper decreases the crossover factor linearly over the evolutionary process, with the change equation:

$$ CR = CR_{max} - \frac{{G\left( {CR_{max} - CR_{min} } \right)}}{{G_{max} }} $$
(6)

where \([CR_{min} ,\,CR_{max} ]\) is the range of the crossover factor. The adaptive differential evolution algorithm proposed in this paper adaptively adjusts the mutation and crossover factors during evolution, balancing global exploration and local exploitation to some extent in the search process.
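The binomial crossover of Eq. (5) together with the linear CR schedule of Eq. (6) can be sketched as below; the bounds \(CR_{min}=0.1\), \(CR_{max}=0.9\) are assumed defaults, not values from the paper.

```python
import numpy as np

def crossover_rate(G, G_max, CR_min=0.1, CR_max=0.9):
    """Linearly decreasing crossover factor (Eq. 6)."""
    return CR_max - G * (CR_max - CR_min) / G_max

def binomial_crossover(x, v, CR):
    """Binomial crossover (Eq. 5): take the mutant component v_j when
    rand(0,1) < CR, or at one forced index j_rand (which guarantees the
    trial vector inherits at least one mutant gene); otherwise keep the
    parent component x_j."""
    D = len(x)
    j_rand = np.random.randint(D)
    mask = np.random.rand(D) < CR
    mask[j_rand] = True
    return np.where(mask, v, x)
```

With CR = 0 the trial vector still differs from the parent in exactly one component (the forced index), which is why the `j_rand` condition appears in Eq. (5).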

2.3 Population Reduction Strategy

Reducing the population during the evolution of the differential evolution algorithm can effectively retain useful individual information, avoid wasting computational resources, and improve convergence speed. This paper proposes a nonlinear population reduction strategy that controls the reduction of the population size according to the hyperbolic tangent curve over [−2.5, 1.5]. Given a predetermined maximum evolutionary generation \(G_{\max }\), the reduction function varies with the current evolutionary generation as its independent variable. The reduction equation is as follows:

$$ \begin{array}{*{20}c} {NP = \frac{{NP_{max} + NP_{min} }}{2} + \frac{{\tanh \left( { - 2.5 + 4 \cdot \frac{{G_{max} - G}}{{G_{max} }}} \right)\left( {NP_{max} - NP_{min} } \right)}}{2}} \\ \end{array} $$
(7)

In the early stage of evolution the population is large, which ensures population diversity and improves the global search ability of the algorithm. As evolution proceeds, the best individuals found move closer and closer to the optimal solution; to save computational resources and speed up convergence, individuals far from the optimum are removed. In the later stage of evolution, the population iterates around the optimal solution; to improve local search ability, the population size is kept small and a careful local search is performed to ensure that the optimum in that region is found.
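The schedule of Eq. (7) and the removal of individuals far from the optimum can be sketched as follows. The bounds \(NP_{min}=20\), \(NP_{max}=100\) and the "keep the best" removal rule are assumptions for illustration.

```python
import numpy as np

def population_size(G, G_max, NP_min=20, NP_max=100):
    """Nonlinear population-size schedule (Eq. 7): the tanh argument
    -2.5 + 4*(G_max - G)/G_max sweeps from 1.5 down to -2.5, so the
    size decays from near NP_max to near NP_min, flattening out in
    the late stage."""
    t = np.tanh(-2.5 + 4.0 * (G_max - G) / G_max)
    return int(round((NP_max + NP_min) / 2.0 + t * (NP_max - NP_min) / 2.0))

def reduce_population(pop, fitness, new_size):
    """Remove individuals far from the optimum by keeping the new_size
    members with the best (smallest) fitness."""
    keep = np.argsort(fitness)[:new_size]
    return pop[keep], fitness[keep]
```

Called once per generation, `population_size` gives the target size and `reduce_population` truncates the sorted population to it.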

3 Numerical Experiment and Analysis

The experiments in this paper use a 64-bit Windows 10 operating system. The processor is an Intel(R) Core (TM) i5-5200U CPU @ 2.20 GHz with an Intel(R) HD Graphics 5500 GPU. Python 3.5.2 is selected as the experimental code language, and the experiment is run in PyCharm software to complete the experimental process.

3.1 Experiments Setup

In this paper, five comparison algorithms are chosen: DE [2], LSHADE (Linear Success-History based Adaptive DE) [5], AGDE (Adaptive Guided Differential Evolution) [6], AMODE (a DE algorithm based on an Adaptive Mutation Operator) [7], and ASVDE (a modified DE algorithm based on Adaptive Secondary Variation) [8]. The algorithms are compared on eight typical benchmark functions from CEC2014 [9]: unimodal f1 and f2, simple multimodal f6 and f12, hybrid f17 and f22, and composite f24 and f27, as shown in Table 1, where D is the dimensionality of the problem. The search space of these benchmark functions is [−100, 100]; they have many local optima, and their function values are greater than 0. The fitness function is therefore defined as f = f(x) − f(x*), where f(x) is the function value found by the algorithm and f(x*) is the known optimal value of the function. The closer f is to zero, the closer the algorithm's result is to the global optimum. The parameter settings for the comparison experiments are shown in Table 2.
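The fitness definition f = f(x) − f(x*) amounts to measuring the gap to the known optimum. A minimal sketch, using a toy sphere function (with optimum f(x*) = 0 at the origin) as a stand-in for the CEC2014 benchmarks, which are not reproduced here:

```python
def error_fitness(f, x, f_star):
    """Fitness used for comparison (Sect. 3.1): the gap between the
    value the algorithm reaches, f(x), and the known optimal value
    f(x*) of the benchmark. Zero means the global optimum was hit."""
    return f(x) - f_star

# Toy stand-in benchmark: sphere function, known optimum f(x*) = 0.
def sphere(x):
    return sum(xi * xi for xi in x)
```

Since the benchmark values are greater than 0 and f(x*) is known, this gap is non-negative and directly comparable across algorithms.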

Table 1. Some benchmark functions of CEC2014
Table 2. Parameter settings of comparison algorithms

3.2 Comparison of Solution Accuracy and Stability

With the settings in Table 2, the six comparison algorithms were each run 21 times on the benchmark functions with dimension D = 30; the average (avg) and standard deviation (std) of the fitness function values were calculated and recorded in Table 3, with the optimal values in bold.

Table 3. Results of comparison algorithms on benchmark functions for D = 30

As shown in Table 3, the mean values obtained by the APRDE algorithm on the eight functions are 5.51E+02, 0.00E+00, 2.65E-01, 6.00E-01, 2.24E+02, 2.84E+01, 2.22E+02 and 3.20E+02, closer to the optimal solutions than those of the other algorithms; APRDE thus also performs well on high-dimensional optimization problems. The APRDE algorithm reaches the solution closest to the global optimum on 88% of the functions. The standard deviations obtained by APRDE range from 0.00E+00 to 3.85E+05 and are the smallest among the compared algorithms, so the algorithm has the best stability.

3.3 Comparison of Convergence Speed

When D = 10, the convergence curves of the six compared algorithms on each function are shown in Fig. 1. From Figs. 1-a and 1-b, all algorithms show a monotonically decreasing trend on the unimodal functions, and their results lie near the optimal solution. Compared with the other algorithms, the APRDE algorithm converges fastest, is the first to find the global optimum, and has the highest solution accuracy. On the simple multimodal functions f3 and f4, the fitness values solved by the APRDE algorithm keep decreasing during the iterations; on f3 the algorithm converges faster and is the first to find the global optimum, and on f4 the solution found is closer to the global optimum. On these four functions, compared with the ASVDE, LSHADE and AGDE algorithms, the APRDE algorithm not only finds the global optimum but also converges faster and more efficiently. The hybrid and composite functions f5 ~ f8 have multiple local optima with larger local optimal values. The comparison results show that the convergence curve of the APRDE algorithm keeps decreasing and its results are smaller and closer to the global optimum.

Fig. 1. The convergence curves of the comparison algorithms on eight benchmark functions.

4 Conclusion

When solving optimization problems, the differential evolution algorithm tends to converge prematurely and fall into a local optimum or a stagnation state. In this paper, we propose a Differential Evolution algorithm with an Adaptive Population size Reduction (APRDE) strategy to remedy these shortcomings of the traditional differential evolution algorithm. APRDE introduces a parameter adaptive adjustment mechanism to balance global exploration and local exploitation during the search, and adopts an ordered mutation strategy to enrich population diversity and improve the convergence speed and solution accuracy of the algorithm. During evolution, a nonlinear population reduction strategy is applied to save computational cost while improving the quality of the results. Compared with the other five optimization algorithms, the APRDE algorithm proposed in this paper converges quickly, obtains the solution closest to the optimum on 88% of the tested functions, and has relatively high stability.