1 Introduction

Optimization problems arise frequently in scientific research, and many engineering problems ultimately reduce to optimization problems. Such a problem can be expressed mathematically: it asks how to choose specific factors (variables) under given constraints so that an objective reaches its optimal value. Optimization algorithms solve these problems by formulating the objective function as an optimization model and searching for its optimum. For complex optimization problems that are non-linear, high-dimensional and global, traditional optimization algorithms can hardly meet the need, whereas intelligent optimization algorithms inspired by bionics, such as the Genetic Algorithm (GA), the Particle Swarm Optimization (PSO) algorithm and the Grasshopper Optimization Algorithm (GOA), handle such problems better and have attracted the attention of many scholars. Because of their high efficiency and strong convergence, more and more scholars have applied these algorithms to their respective fields and achieved good results [1,2,3].

The Genetic Algorithm (GA) is a meta-heuristic algorithm proposed by Professor Holland [4] in 1975. Its principle is based on Darwin's evolutionary theory of survival of the fittest. GA treats all individuals in a population as variable objects and represents each gene sequence in binary code. Through the genetic operations of selection, crossover and mutation, the algorithm searches for the optimal value within the range of the coded variables, retaining the excellent individuals and eliminating the poor ones to form a new population; the optimal solution is obtained after repeated iterations. However, the convergence efficiency of GA is low, and it tends to converge prematurely.

Particle Swarm Optimization (PSO) is another meta-heuristic algorithm, proposed by Kennedy et al. [5]. It simulates the predation behaviour of a flock of birds: each candidate solution of the optimization problem is treated as a bird in the search space, called a "particle". All particles search within the range of the variables, and the fitness value computed from the optimized function measures the distance from the current position to the food. The algorithm finds the global optimum by updating each particle's historical best value and the best value of the whole population. The original PSO algorithm suffers from slow convergence speed and low convergence precision. To balance the local and global search abilities of the original PSO, Shi et al. [6] proposed an improved algorithm with an inertia weight ω that adjusts convergence and convergence speed dynamically, known as the standard PSO algorithm (for convenience, PSO refers to the standard PSO algorithm in this paper). However, two problems remain: the algorithm easily falls into local optima and has poor convergence precision in high-dimensional optimization, and its convergence efficiency is low in the late iterations. For this reason, Li et al. [7] proposed an efficient improved particle swarm optimization strategy that divides the whole population into several sub-groups for division of labor and information exchange, improving both the local and the global search ability of the algorithm.
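For concreteness, the following Python sketch shows one standard PSO velocity and position update with an inertia weight, as described above. The parameter defaults (e.g. c1 = c2 = 2.0) and the velocity bounds are illustrative assumptions, not values taken from [5, 6].

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, v_bounds=(-1.0, 1.0)):
    """One standard PSO update with inertia weight w.

    x, v, pbest: arrays of shape (n_particles, dim); gbest: array of shape (dim,).
    """
    n, dim = x.shape
    r1 = np.random.rand(n, dim)           # uniform random numbers in [0, 1]
    r2 = np.random.rand(n, dim)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, v_bounds[0], v_bounds[1])   # keep velocities inside the bounds
    return x + v, v
```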
The Grasshopper Optimization Algorithm (GOA) is a newer meta-heuristic algorithm proposed by Saremi et al. [8] in 2017. Its basic idea is based on the regularity of grasshopper swarm activities and a model of swarm intelligence; the influencing factors are the wind direction, gravity, the effect of the other grasshoppers in the population, and the best position reached by the current population. However, GOA not only falls easily into local optima but also has high design complexity and is time-consuming, so it needs to be improved. There are also other meta-heuristics such as the Grey Wolf Optimization algorithm [9] and the Whale Optimization Algorithm [10].

Building on the work of Li, this paper improves PSO. The improved algorithm (GCPSO) varies the inertia weight and adds a Gaussian perturbation strategy based on greedy thought so that the particles maintain strong vitality during the evolution process. The algorithm is evaluated on benchmark functions and compared with other intelligent optimization algorithms. Experiments show that GCPSO achieves a significant improvement in convergence speed, convergence accuracy and stability.

2 GCPSO Algorithm

A co-evolutionary algorithm establishes two or more populations and sets up competition or cooperation between them [11]. Each population improves its performance through its own iterative strategy and through interaction, so as to achieve population-wide optimization. The traditional particle swarm optimization algorithm uses a single-group iterative strategy; when dealing with high-dimensional complex functions it converges slowly and easily falls into local optima, so satisfactory results cannot be obtained. This paper draws on the division-of-labor idea of the co-evolutionary algorithm, combines it with a Gaussian perturbation strategy based on greedy thought [12], and proposes an algorithm with cooperative division of labor based on greedy disturbance (GCPSO). The algorithm effectively compensates for these defects. GCPSO is described as follows:

The whole particle swarm is divided into three subgroups, S1, S2 and S3, each with a different iterative strategy. The subgroup S1 adopts the traditional standard PSO iterative strategy; the subgroup S2 adopts a strategy that gradually strengthens the global search; the subgroup S3 uses only the "social experience" part, which considers information sharing and cooperation between particles. Let \( \text{x}_{\text{i}} \) denote the coordinate position of particle \( \text{i} \) in the particle swarm and \( \text{v}_{\text{i}} \) the corresponding velocity; \( \text{c}_{1} \) and \( \text{c}_{2} \) are constants called learning factors, and \( \text{r}_{1} \) and \( \text{r}_{2} \) are uniform random numbers in [0, 1]. \( \text{pbest}_{\text{i}} \) is the individual historical optimum of particle \( \text{i} \) and \( \text{gbest} \) is the global best value of the particle swarm. Let \( \text{t} \) be the current number of iterations and \( \text{T} \) the maximum number of iterations. The iteration formula for each subgroup is then as follows:

$$ \text{Population}\,\text{S1}:\quad \text{v}_{\text{i}} = \omega_{1} \text{v}_{\text{i}} + \text{c}_{1} \text{r}_{1} (\text{pbest}_{\text{i}} - \text{x}_{\text{i}} ) + \text{c}_{2} \text{r}_{2} (\text{gbest} - \text{x}_{\text{i}} ) $$
(1)
$$ \text{Where,}\quad \omega_{\text{1}} = \omega_{{\text{1max}}} - \text{t} * \frac{{\omega_{{\text{1max}}} - \omega_{{\text{1min}}} }}{\text{T}} $$
(2)
$$ \text{Population}\,\text{S2}:\quad \text{v}_{\text{i}} = \omega_{2} \text{v}_{\text{i}} + \text{c}_{1} \text{r}_{1} (\text{pbest}_{\text{i}} - \text{x}_{\text{i}} ) + \text{c}_{2} \text{r}_{2} (\text{gbest} - \text{x}_{\text{i}} ) $$
(3)
$$ \text{Where,}\quad \omega_{2} = \omega_{{\text{2min}}} + \text{t} * \frac{{\omega_{{\text{2max}}} - \omega_{{\text{2min}}} }}{\text{T}} $$
(4)
$$ \text{Population}\,\text{S3}:\quad \text{v}_{\text{i}} = \omega_{3} \text{v}_{\text{i}} + \text{c}_{2} \text{r}_{2} (\text{gbest} - \text{x}_{\text{i}} ) $$
(5)
$$ \text{Where,}\quad \omega_{3} = \frac{{\omega_{1} + \omega_{2} }}{2} $$
(6)

Here \( \omega_{1} \), \( \omega_{2} \) and \( \omega_{3} \) are the iterative weights, and \( \omega_{{\text{1max}}} \), \( \omega_{{\text{1min}}} \) and \( \omega_{{\text{2max}}} \), \( \omega_{{\text{2min}}} \) are the maximum and minimum values of the corresponding iterative weights. A larger iterative weight gives better exploration ability and global convergence, while a smaller iterative weight gives stronger local convergence in the later stage, which yields more accurate results. At the same time, to prevent the particles from crossing the boundary, the bounds \( \text{v}_{{\text{min}}} \) and \( \text{v}_{{\text{max}}} \) are imposed on the velocity term above.

In each of the above subgroups, the coordinate position \( \text{x}_{\text{i}} \) of particle \( \text{i} \) is updated by the formula:

$$ \text{x}_{\text{i}} = \text{x}_{\text{i}} + \text{v}_{\text{i}} $$
(7)
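To make Eqs. (1)–(7) concrete, the following sketch implements the three sub-group updates in Python. The array layout and the way particles are assigned to subgroups are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gcpso_velocity_update(x, v, pbest, gbest, t, T, c1, c2,
                          w1_max, w1_min, w2_max, w2_min,
                          v_min, v_max, groups):
    """Apply Eqs. (1)-(7). 'groups' holds 1, 2 or 3 for each particle,
    marking which subgroup (S1, S2, S3) it belongs to."""
    n, dim = x.shape
    r1 = np.random.rand(n, dim)
    r2 = np.random.rand(n, dim)

    w1 = w1_max - t * (w1_max - w1_min) / T      # Eq. (2): linearly decreasing
    w2 = w2_min + t * (w2_max - w2_min) / T      # Eq. (4): linearly increasing
    w3 = (w1 + w2) / 2.0                         # Eq. (6)

    cognitive = c1 * r1 * (pbest - x)
    social = c2 * r2 * (gbest - x)

    v_new = np.empty_like(v)
    s1, s2, s3 = groups == 1, groups == 2, groups == 3
    v_new[s1] = w1 * v[s1] + cognitive[s1] + social[s1]   # Eq. (1)
    v_new[s2] = w2 * v[s2] + cognitive[s2] + social[s2]   # Eq. (3)
    v_new[s3] = w3 * v[s3] + social[s3]                   # Eq. (5): "social experience" only

    v_new = np.clip(v_new, v_min, v_max)                  # velocity bounds
    return x + v_new, v_new                               # Eq. (7)
```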

To further prevent the algorithm from falling into a local optimum, a Gaussian perturbation is added to the global best position of the particles:

$$ \text{gbest} = \text{gbest} * (\text{c} + \text{gau}) $$
(8)

Where \( \text{gau} \) represents white Gaussian noise, and \( \text{c} \) represents the interference factor, which is a constant.

To increase the convergence rate, we add the idea of the greedy strategy and iterate the Gaussian perturbation multiple times:

(9)

Where M represents the maximum number of iterations of the perturbation process, \( \text{f} \) represents the function to be optimized, and \( \text{gbest}\_\text{fit} = \text{f}(\text{gbest}) \).
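Since the formula numbered (9) is not reproduced here, the following sketch only illustrates our reading of the greedy perturbation described in the text: the perturbation of Eq. (8) is applied M times, and a perturbed position is accepted only when it improves the fitness (minimization is assumed). The noise standard deviation sigma and the default value of c are assumptions, as the text does not specify them.

```python
import numpy as np

def greedy_gaussian_perturbation(gbest, gbest_fit, f, M, c=1.0, sigma=0.1):
    """Perturb gbest M times (Eq. (8)) and greedily keep only improvements."""
    for _ in range(M):
        gau = np.random.normal(0.0, sigma, size=gbest.shape)  # white Gaussian noise
        candidate = gbest * (c + gau)                          # Eq. (8)
        candidate_fit = f(candidate)
        if candidate_fit < gbest_fit:                          # greedy acceptance
            gbest, gbest_fit = candidate, candidate_fit
    return gbest, gbest_fit
```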

The working principle of GCPSO is to divide the whole swarm into several subgroups and assign a different evolution strategy to each. The subgroups exchange information by sharing the global best information \( \text{gbest} \), which completes the group collaboration and accelerates convergence, and the Gaussian perturbation with greedy thought is added to prevent local optima and achieve fast, accurate convergence. In GCPSO, the subgroup S1 iterates according to the standard PSO, and its inertia weight \( \omega_{1} \) decreases linearly, so the search gradually evolves from strong global convergence in the early stage to strong local convergence in the later stage, producing accurate convergence results. The inertia weight \( \omega_{2} \) of the subgroup S2 increases linearly to improve the global search capability of the whole particle swarm and to avoid the local convergence of S1 in the later stage of the algorithm. The subgroup S3 contains only the "social experience" part; that is, it searches only near the current optimal position so that it can quickly converge to it.

At the same time, to improve the convergence rate and further prevent the algorithm from falling into a local optimum, the disturbance based on the greedy strategy is added. GCPSO thus improves the efficiency and accuracy of optimization through the cooperative divide-and-conquer strategy and the greedy disturbance. The pseudocode of GCPSO is given in Table 1.

Table 1. The procedure of GCPSO.
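Since the pseudocode of Table 1 is not shown here, the sketch below outlines one possible GCPSO main loop that combines the sub-group updates and the greedy perturbation sketched above. The parameter defaults, the position bounds and the helper names gcpso_velocity_update and greedy_gaussian_perturbation follow the earlier sketches; this is a reading of the procedure described in the text, not the authors' code.

```python
import numpy as np

def gcpso(f, dim, n_per_group=100, T=1000, M=10,
          x_bounds=(-100.0, 100.0), v_bounds=(-1.0, 1.0),
          c1=2.0, c2=2.0, w1=(0.9, 0.4), w2=(0.9, 0.4), c=1.0):
    """Minimal GCPSO main loop (assumed parameter values for illustration)."""
    n = 3 * n_per_group
    groups = np.repeat([1, 2, 3], n_per_group)       # subgroups S1, S2, S3
    x = np.random.uniform(*x_bounds, size=(n, dim))
    v = np.random.uniform(*v_bounds, size=(n, dim))
    pbest = x.copy()
    pbest_fit = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_fit)].copy()
    gbest_fit = pbest_fit.min()

    for t in range(T):
        # sub-group velocity/position updates, Eqs. (1)-(7)
        x, v = gcpso_velocity_update(x, v, pbest, gbest, t, T, c1, c2,
                                     w1[0], w1[1], w2[0], w2[1],
                                     v_bounds[0], v_bounds[1], groups)
        x = np.clip(x, *x_bounds)

        # update personal and global bests
        fit = np.apply_along_axis(f, 1, x)
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        if pbest_fit.min() < gbest_fit:
            gbest = pbest[np.argmin(pbest_fit)].copy()
            gbest_fit = pbest_fit.min()

        # greedy Gaussian perturbation of gbest, Eqs. (8)-(9)
        gbest, gbest_fit = greedy_gaussian_perturbation(gbest, gbest_fit, f, M, c)

    return gbest, gbest_fit
```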

3 Experiments and Analysis

In this section, we focus on the performance of the improved particle swarm optimization algorithm in global optimization. Eleven classical benchmark functions [13, 14], listed in Table 2, are used to test the algorithm. The functions \( \text{f}_{1} - \text{f}_{6} \) are unimodal and the functions \( \text{f}_{7} - \text{f}_{11} \) are multimodal. The expressions and parameters of the functions are shown in Table 2: Dim is the dimension of the function, Range is the range of values of each variable, \( \text{f}_{{\text{min}}} \) is the minimum value of the function, and D is the dimension of the function \( \text{f}_{7} \).

Table 2. Description of benchmark functions.
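The exact definitions belong to Table 2 and are not reproduced here. Purely to illustrate the two function classes, the snippet below defines the Sphere function (a typical unimodal benchmark) and the Rastrigin function (a typical multimodal benchmark); whether these particular functions appear in Table 2 is an assumption on our part.

```python
import numpy as np

def sphere(x):
    """Typical unimodal benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Typical multimodal benchmark: minimum 0 at the origin, many local optima."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
```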

The improved algorithm, GCPSO, is compared with PSO, GA and GOA in the function optimization experiments. To compare the performance of the algorithms quantitatively, the maximum number of iterations is set to 1000. The experimental parameters of each algorithm are given in Table 3.

Table 3. Parameter settings.

3.1 Quality Analysis of Solutions

Table 4 shows the performance of GCPSO, PSO, GA and GOA on the benchmark functions. F denotes the benchmark function, ave denotes the average optimal value found, std denotes the standard deviation of the function values, and tim denotes the average running time of the algorithm on the function. The search space is 30-dimensional and the population size is set to 300; in GCPSO each subpopulation contains 100 particles. Each test function was run 30 times independently to generate these statistics.
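As a hedged illustration of how statistics of this kind can be collected, the snippet below runs an optimizer 30 times on one benchmark and reports the mean best value, its standard deviation and the mean running time. The helper names follow the sketches above; this setup is ours, not the authors'.

```python
import time
import numpy as np

def evaluate(optimizer, f, dim=30, runs=30):
    """Collect ave, std and tim over independent runs of one optimizer."""
    best_values, run_times = [], []
    for _ in range(runs):
        start = time.perf_counter()
        _, best_fit = optimizer(f, dim)
        run_times.append(time.perf_counter() - start)
        best_values.append(best_fit)
    return np.mean(best_values), np.std(best_values), np.mean(run_times)

# Example usage (assumed names): ave, std, tim = evaluate(gcpso, rastrigin)
```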

Table 4. Comparison of optimization results.

From Table 4 we can see that GCPSO runs somewhat faster than the other algorithms, with the exception of PSO, but its mean function values are closer to the theoretical optima than those of PSO. This indicates that GCPSO has advantages in solving high-dimensional function problems: it effectively remedies the poor convergence and the local optima of PSO in the later iterations and improves the accuracy of the solution. At the same time, GCPSO has a lower standard deviation than PSO, which indicates that it improves the stability of the original algorithm. Comparing GCPSO with GA, GCPSO gives better mean, standard deviation and running time on all functions except for a slight difference on function \( \text{f}_{11} \), which shows that GCPSO is much better than GA in convergence, convergence accuracy, optimization speed and robustness. Compared with GOA, GCPSO also leads in the three statistics on all benchmark functions except \( \text{f}_{7} \): its average running time is shorter, indicating faster optimization; its mean function values are closer to the theoretical optima, indicating better global convergence and convergence accuracy; and its standard deviation is lower, indicating higher robustness than GOA.

3.2 Convergence Analysis of the Algorithm

Figure 1 shows the convergence curves of the function values against the number of iterations on the benchmark functions for the different optimization algorithms. The convergence curves for the functions \( \text{f}_{1} - \text{f}_{11} \) are arranged from left to right and top to bottom. In the lower right corner of the figure, an enlarged view of the region in the yellow frame is given for clarity.

Fig. 1. Convergence curves.

It can be seen from Fig. 1 that, except for the function \( \text{f}_{7} \), GCPSO reaches the smallest final convergence values on the benchmark functions, which indicates that GCPSO has the best global convergence and high convergence precision, whereas the other algorithms fall into local convergence on these functions, resulting in poorer convergence and lower accuracy. Observing the convergence curves, GCPSO reaches its minimum within fewer than 50 iterations on most functions, i.e., its convergence rate is fast, followed by PSO. This shows that GCPSO not only improves the global search ability and convergence efficiency of PSO but also finds the optimal value more quickly. Compared with the other algorithms, GCPSO converges to the optimal value stably and quickly, has a stronger global search ability, and yields better convergence results.
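As an illustration of how such convergence curves can be produced, the snippet below plots the best-so-far fitness against the iteration number for several algorithms. It assumes the main loop is modified to record the global best fitness after every iteration, which is not shown in the sketches above.

```python
import matplotlib.pyplot as plt

def plot_convergence(histories, labels):
    """Plot best-so-far fitness versus iteration for several algorithms.

    histories: list of 1-D sequences of the best fitness recorded per iteration.
    """
    for history, label in zip(histories, labels):
        plt.semilogy(history, label=label)   # log scale makes small values visible
    plt.xlabel("Iteration")
    plt.ylabel("Best fitness")
    plt.legend()
    plt.show()
```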

In conclusion, the proposed GCPSO algorithm has high convergence efficiency, strong global convergence, high convergence accuracy and good robustness.

4 Conclusion

The GCPSO algorithm proposed in this paper borrows the divide-and-conquer strategy of cooperative thinking to make full use of the advantages of group division and cooperation, and it combines this with a perturbation based on the greedy strategy, which improves both the convergence efficiency and the convergence accuracy of the algorithm. Experiments on 11 benchmark functions show that GCPSO has clear advantages in accuracy, speed and stability compared with the other algorithms. Future research will focus on simplifying the initial parameters and on more complex high-dimensional optimization problems to enhance the universality of the algorithm.