
1 Introduction

In nature, every creature must possess its own survival ability in order to survive. Under natural selection, only survival abilities that are well adapted to the environment are preserved. By observing the abilities of various animals in nature, people can draw inspiration from them and devise new ideas for solving practical optimization problems. This is the origin of swarm intelligence optimization algorithms. A swarm intelligence optimization algorithm is an evolutionary algorithm based on random search. It mainly simulates the group behavior of insects, beasts, birds and fish, whose principal activities are foraging and avoiding predators: individuals search for food and constantly exchange information about it, so that the group can find more food faster while evading attacks. Studying these swarm behaviors and abstracting them into an algorithm yields a swarm intelligence algorithm.

The swarm intelligence optimization algorithm is an approximate optimization algorithm that has aroused great interest among researchers in computer science, engineering, medicine and other fields [1, 2]. This is because such algorithms can solve, in a relatively short time, various complex optimization problems that otherwise cannot be solved quickly. At the same time, swarm intelligence optimization algorithms are simple, flexible and robust [3].

The Dragonfly Algorithm (DA) is a swarm intelligence optimization algorithm proposed by Seyedali Mirjalili in 2015. Dragonflies are a relatively primitive group of insects with comparatively few species, about 5000 worldwide. They are carnivorous, preying on a variety of agricultural, forestry and animal husbandry pests such as flies, mosquitoes, leafhoppers, horseflies and small butterflies and moths, which makes them an important class of natural enemy insects beneficial to humans. The main inspiration of DA comes from the static and dynamic swarming behaviors of dragonflies in nature, that is, foraging swarms and migratory swarms. The algorithm is simple in principle, easy to understand and implement, and has strong search ability, so it has been widely studied and discussed by scholars in many countries. DA has been applied successfully to various optimization problems in image processing, medicine, computer science, engineering and other areas, where it has shown excellent performance. However, the algorithm still has some common defects, such as low accuracy, premature convergence, and insufficient global search ability.

In view of the above shortcomings, this paper improves the Dragonfly algorithm and proposes an improved dragonfly algorithm based on a mixed strategy (Mixed Strategy improved Dragonfly Algorithm, MSDA). At the start of the algorithm, a Sobol sequence is used to initialize the population so that the initial solutions are distributed more uniformly in the search space, giving the algorithm a better initial population and improving the quality of the initial solutions. Then, the balance between global exploration and local exploitation is improved by a nonlinearly decreasing inertia weight, which lets the algorithm better adapt to the convergence process. Cauchy mutation is used to increase the diversity of the population, improve the global search ability of the algorithm, and enlarge the search space. Finally, individual positions are adjusted by a random learning strategy, which enhances population diversity and effectively improves the global optimization performance of the algorithm. Tests on 8 benchmark functions show that, compared with the original Dragonfly algorithm and several other algorithms, the proposed algorithm has better global exploration and local exploitation ability and achieves a clear improvement in performance.

2 Related Work

The inspiration for the dragonfly algorithm is the behavior of dragonflies in nature, and the algorithm is a simulation of this behavior, which dragonflies use to ensure their own survival. Dragonflies exhibit social behavior among individuals, but unlike the hierarchical societies of insects such as bees and ants, each individual dragonfly carries out the activities needed for its own survival. Its biggest survival problems are finding food and avoiding natural enemies. Under these conditions, individual dragonflies follow the behavior of the group they live in, keeping moving while searching for food and avoiding predators.

In the dragonfly optimization algorithm, individual dragonfly behavior is described by five kinds of behavior:

1) Separation behavior: the behavior of avoiding collisions between adjacent individuals;

2) Alignment behavior: the behavior of keeping the velocity consistent with adjacent individuals;

3) Cohesion behavior: the behavior of moving toward the average position of adjacent individuals;

4) Foraging behavior: the behavior of being attracted toward a food source;

5) Enemy avoidance behavior: the behavior of moving away from natural enemies.

Each behavior has its corresponding weight.

Static group behavior mainly includes foraging and enemy avoidance. In static group behavior the step size varies, and dragonflies move locally in small groups. Dynamic group behavior occurs when large numbers of dragonflies come together to migrate long distances in the same direction; it mainly involves the three behaviors of separation, alignment and cohesion, and flying in a common direction is its major feature. Each dragonfly in the swarm corresponds to a solution in the search space (Fig. 1).

Fig. 1. Static and dynamic swarming behavior of dragonflies

These five behaviors are the most basic and important. Each is essential: if one is missing, dragonflies cannot survive. It is with this unique survivability that dragonflies have endured nature's long sifting. Like most swarm intelligence optimization algorithms, the dragonfly optimization algorithm imitates the behavior of individual dragonflies in nature and describes the five behaviors above mathematically, as follows:

  • Separation

Separation behavior, also known as collision avoidance behavior, represents the degree of separation and refers to the avoidance behavior between adjacent individuals. Its mathematical expression is as follows:

$$ S_{i} = - \sum\limits_{j = 1}^{N} {(X - X_{j} )} $$
(1)

Here X represents the position of the current individual, and Xj represents the position of the j-th adjacent individual. N represents the number of adjacent individuals.

  • Alignment

Alignment behavior represents the degree of alignment, i.e., the tendency of individuals to keep the same velocity as their neighbors. Its mathematical expression is as follows:

$$ A_{i} = \frac{{\sum\limits_{j = 1}^{N} {V_{j} } }}{N} $$
(2)

Among them, Vj represents the velocity of the j-th adjacent individual.

  • Cohesion

Cohesion behavior represents the degree of cohesion and refers to the tendency of individuals to move toward the center of the neighboring population. Its mathematical expression is as follows:

$$ C_{i} = \frac{{\sum\limits_{j = 1}^{N} {X_{j} } }}{N} - X $$
(3)

where X is the position of the current individual, N is the total number of adjacent individuals, and Xj represents the position of the j-th adjacent individual.

  • Foraging

Foraging behavior represents the attraction of food: dragonflies are attracted by food and move toward it. Its mathematical expression is as follows:

$$ F_{i} = X^{ + } - X $$
(4)

where X is the current location of the individual and X+ is the location of the food. The location of the food is selected from the currently found optimal solution.

  • Enemy avoidance

Enemy avoidance behavior represents the repulsion of enemies: when threatened by an enemy, a dragonfly moves away from it. Its mathematical expression is as follows:

$$ E_{i} = X^{ - } + X $$
(5)

where X is the current location of the individual and X− is the location of the enemy. The location of the enemy is selected from the worst solution found so far.
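To make Eqs. (1)–(5) concrete, the following minimal Python sketch computes the five behavior vectors for one dragonfly from its neighbors; the function and array names are illustrative only and are not part of the original paper.

```python
import numpy as np

def behaviour_vectors(pos, neigh_pos, neigh_vel, food_pos, enemy_pos):
    """Compute S, A, C, F, E of Eqs. (1)-(5) for a single dragonfly.

    pos       : (d,)   current position X
    neigh_pos : (N, d) positions of the N adjacent individuals
    neigh_vel : (N, d) velocities of the N adjacent individuals
    food_pos  : (d,)   best solution found so far (X+)
    enemy_pos : (d,)   worst solution found so far (X-)
    """
    S = -np.sum(pos - neigh_pos, axis=0)   # Eq. (1): separation
    A = np.mean(neigh_vel, axis=0)         # Eq. (2): alignment
    C = np.mean(neigh_pos, axis=0) - pos   # Eq. (3): cohesion
    F = food_pos - pos                     # Eq. (4): foraging
    E = enemy_pos + pos                    # Eq. (5): enemy avoidance
    return S, A, C, F, E
```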

The movement of a dragonfly is composed of the above five behaviors, and the step vector of the next generation of dragonflies is updated as follows:

$$ \Delta X_{t + 1} = (sS_{i} + aA_{i} + cC_{i} + fF_{i} + eE_{i} ) + \omega \Delta X_{t} $$
(6)

Among them, s, a, c, f, and e represent the weights of separation, alignment, cohesion, foraging, and enemy avoidance, respectively; ω represents the inertia weight; and S, A, C, F, and E represent the degrees of the above five behaviors. Here t is the current iteration number. The position of the dragonfly individual in the next generation is given by the following formula:

$$ X_{t + 1} = X_{t} + \Delta X_{t + 1} $$
(7)

To judge whether there are adjacent individuals around a dragonfly, a circle of radius r is drawn around it as the dragonfly's search radius; individuals inside the circle are considered adjacent. As the number of iterations increases, the search radius is also updated, according to the following formula:

$$ r = \frac{a - b}{4} + 2(a - b)\frac{t}{{t_{\max } }} $$
(8)

Among them, r is the search radius of the dragonfly, t is the current number of iterations, tmax is the maximum number of iterations, and a and b are the upper and lower limits of the search range, respectively. If there are no adjacent individuals around the dragonfly, it performs the Levy flight random walk:

$$ X_{t + 1} = X_{t} + {\text{Levy}}(d) \times X_{t} $$
(9)

where t represents the current iteration number and d represents the dimension of the position vector. Levy flight is a random walk strategy in which each step can move in any direction and by any length; many creatures in nature show similar patterns in their movements. The Levy function is:

$$ {\text{Levy}}(x) = 0.01 \times \frac{{r_{1} \times \sigma }}{{\left| {r_{2} } \right|^{1/\beta } }} $$
(10)

where r1 and r2 are random numbers in the range [0, 1], β is a constant (equal to 1.5 in the original dragonfly algorithm), and σ is given by:

$$ \sigma = \left( {\frac{{\Gamma (1 + \beta ) \times \sin (\frac{\pi \beta }{2})}}{{\Gamma (\frac{1 + \beta }{2}) \times \beta \times 2^{(\beta - 1)/2} }}} \right)^{1/\beta } $$
(11)
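As an illustration of Eqs. (9)–(11), the Levy flight step can be sketched in Python as follows. Following the text, r1 and r2 are drawn uniformly from [0, 1]; the function names are illustrative.

```python
import math
import numpy as np

def levy(d, beta=1.5):
    """Levy step of Eqs. (10)-(11) for a d-dimensional position vector."""
    sigma = ((math.gamma(1 + beta) * math.sin(math.pi * beta / 2)) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r1 = np.random.rand(d)   # random numbers in [0, 1]
    r2 = np.random.rand(d)
    return 0.01 * r1 * sigma / np.abs(r2) ** (1 / beta)

def levy_flight(x):
    """Position update of Eq. (9), used when a dragonfly has no adjacent individuals."""
    return x + levy(x.size) * x
```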

3 Mixed Strategy Improved Dragonfly Algorithm

3.1 Initialization Population Based on Sobol Sequence

In many swarm intelligence optimization algorithms, the initial distribution of the population affects the subsequent performance of the algorithm, and the Dragonfly algorithm is no exception. The original DA generates the initial population randomly. However, a randomly generated population is distributed relatively unevenly and cannot cover the search space well, which harms the later performance of the algorithm. To improve this, this paper discards the original random initialization and instead initializes the population with a Sobol sequence. The Sobol sequence is a low-discrepancy quasi-random sequence with good distribution uniformity: it generates non-repetitive points that spread relatively evenly over the probability space. The figure below compares the spatial distribution of the original random initialization of DA with that of points generated by the Sobol sequence for a population size of N = 500. It can be seen that the population obtained from the Sobol sequence is distributed more uniformly and covers the solution space more completely (Fig. 2).

Fig. 2. Randomly generated sequence and Sobol-generated sequence
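A minimal sketch of Sobol-based initialization is given below. It uses SciPy's quasi-Monte Carlo module as an assumed tooling choice; the paper itself does not specify an implementation.

```python
import numpy as np
from scipy.stats import qmc

def init_population_sobol(n, dim, lb, ub, seed=None):
    """Draw n quasi-random points from a Sobol sequence and scale them to [lb, ub]."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(n)          # low-discrepancy points in the unit hypercube
    return qmc.scale(unit, lb, ub)    # map each dimension to its search range

# Example: a population of N = 500 individuals in 2 dimensions on the unit square
population = init_population_sobol(500, 2, [0.0, 0.0], [1.0, 1.0])
```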

3.2 Nonlinearly Decreasing Inertia Weights

The inertia weight simulates inertia in real-world physics, enabling an object to maintain its original speed and keep moving. In a swarm intelligence optimization algorithm, the inertia weight should change over time so that the algorithm adapts to the current state of the search. The original DA uses a linearly decreasing inertia weight, calculated as follows:

$$ w = 0.9 - \frac{{(0.9 - 0.4)t}}{{t_{\max } }} $$
(12)

In this formula, tmax is the maximum number of iterations, t is the current iteration number, and 0.9 and 0.4 represent the upper and lower bounds of the inertia weight, respectively.

Because the original DA uses a linearly decreasing inertia weight while the search process of the algorithm is not linear, the inertia weight decreases too fast in the later stage of the search, and the exploration and exploitation abilities of the algorithm are not fully utilized. Therefore, this paper replaces the linear decrease of the original DA with a nonlinear one, expressed by the following formula:

$$ w = f_{c} - (f_{c} - h) \times \left( {2 \times \frac{t}{{t_{\max } }} - \left( {\frac{t}{{t_{\max } }}} \right)^{2} } \right) $$
(13)

where fc and h are the upper and lower limits of the inertia weight, fc = 0.9 and h = 0.4, tmax is the maximum number of iterations, and t is the current iteration number. The curve of the above formula is shown in the following figure (Fig. 3):

Fig. 3. Modified inertia weight curve
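A small Python sketch of both weight schedules is given below. The nonlinear schedule implements Eq. (13) as reconstructed above (the equation was garbled in the source), with fc = 0.9 and h = 0.4 as the upper and lower limits; treat the exact form as an assumption.

```python
def linear_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of the original DA, Eq. (12)."""
    return w_max - (w_max - w_min) * t / t_max

def nonlinear_weight(t, t_max, fc=0.9, h=0.4):
    """Nonlinearly decreasing inertia weight, Eq. (13) as reconstructed here.

    Decreases from fc at t = 0 to h at t = t_max, falling more slowly in the
    late stage of the search than at the beginning.
    """
    x = t / t_max
    return fc - (fc - h) * (2 * x - x ** 2)
```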

3.3 Random Learning Strategy

To enrich population information and increase information sharing between individuals in the population, this paper adds a random learning strategy to the original DA. An individual x is selected randomly from the population together with a different individual xp, the better of the two is determined by comparing their fitness values, and the position of the individual is adjusted by learning from this comparison:

$$ x_{new} = \begin{cases} x + rand(0,1) \times (x - x_{p} ), & f(x_{p} ) < f(x) \\ x + rand(0,1) \times (x_{p} - x), & f(x_{p} ) \ge f(x) \end{cases} $$
(14)

The learning factor rand(0,1) is a random number between 0 and 1 and represents the learning difference between individuals. After learning, if the new individual xnew is better than the original individual, it is accepted and replaces the original individual; otherwise it is rejected. This strategy increases information sharing between individuals and the diversity of the population, and can effectively improve the global search ability.
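A minimal Python sketch of the random learning strategy follows. It reads Eq. (14) with a scalar learning factor and uses the improve-or-reject acceptance described above; the function names and the minimisation assumption are illustrative.

```python
import numpy as np

def random_learning(pop, fitness, obj):
    """Apply the random learning strategy of Eq. (14) to every individual (minimisation)."""
    n = pop.shape[0]
    for i in range(n):
        j = np.random.choice([k for k in range(n) if k != i])  # a different individual x_p
        x, xp = pop[i], pop[j]
        r = np.random.rand()                      # learning factor rand(0, 1)
        if fitness[j] < fitness[i]:               # f(x_p) <  f(x)
            x_new = x + r * (x - xp)
        else:                                     # f(x_p) >= f(x)
            x_new = x + r * (xp - x)
        f_new = obj(x_new)
        if f_new < fitness[i]:                    # accept only if the new individual is better
            pop[i], fitness[i] = x_new, f_new
    return pop, fitness
```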

3.4 Cauchy Mutation Strategy

Cauchy mutation is used to increase the diversity of the population, improve the global search ability of the algorithm, and enlarge the search space. The Cauchy distribution has a low peak at the origin but long tails at both ends, so it can generate large disturbances to the current individual, making it easier for the algorithm to jump out of a local optimum and enhancing its global search ability. In this paper, the following Cauchy mutation formula is used to update the position of the current optimal individual:

$$ x_{newbest} = x_{best} + x_{best} \times Cauchy(0,1) $$
(15)

where xnewbest is the new value obtained after the current optimal value is disturbed by Cauchy mutation. Cauchy(0,1) is the Cauchy operator, and the standard Cauchy distribution function formula is as follows:

$$ f(x) = \frac{1}{{\pi \times (x^{2} + 1)}}, \quad x \in ( - \infty , + \infty ) $$
(16)

Compared with other distributions, the Cauchy distribution has a lower peak, which shortens the time an individual spends lingering in the search space and thereby speeds up the convergence of the algorithm. Its long tails can generate random numbers far from the origin, producing strong disturbances that allow the perturbed individual to escape local traps quickly.
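The perturbation of Eq. (15) can be sketched in Python as below. NumPy's standard Cauchy sampler is used for the Cauchy(0, 1) operator; keeping the mutated individual only when it improves the fitness is an assumption, since the paper does not state an acceptance rule for this step.

```python
import numpy as np

def cauchy_mutation(x_best, obj):
    """Perturb the current best individual with Eq. (15) and keep the better of the two."""
    x_new = x_best + x_best * np.random.standard_cauchy(x_best.shape)  # Cauchy(0, 1) disturbance
    return x_new if obj(x_new) < obj(x_best) else x_best
```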

3.5 Algorithm Implementation

After the above improvements, the algorithm flow of this paper is as follows:

Step 1: Initialize the algorithm parameters: dragonfly population size N, maximum number of iterations MIT, problem dimension D, neighborhood radius r, and step vector \(\Delta x\);

Step 2: Use the Sobol sequence to generate the initial population, and set t = 1;

Step 3: Calculate the fitness value of each individual;

Step 4: Adjust the individual positions through the random learning strategy of Eq. (14);

Step 5: Update the behavior weights s, a, c, f, e;

Step 6: Use the nonlinear decreasing strategy of Eq. (13) to update the inertia weight;

Step 7: Calculate S, A, C, F, E;

Step 8: Update the neighborhood radius r; if an individual has adjacent individuals, update its step vector and position with Eqs. (6) and (7), otherwise update its position with the Levy flight of Eqs. (9) and (10);

Step 9: Perturb individuals using the Cauchy mutation strategy;

Step 10: Increase the number of iterations by one. If the maximum number of iterations is reached, output the optimal solution; otherwise, return to Step 3.

The above algorithm flow can be represented by the following chart (Fig. 4):

Fig. 4. MSDA algorithm flow chart
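Putting the pieces together, the ten steps above can be summarised in the minimal Python sketch below. It reuses the helper functions sketched in the previous subsections; the behavior-weight values, bound handling, and scalar search bounds are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def msda(obj, dim, lb, ub, n=30, mit=500):
    """Illustrative MSDA skeleton following Steps 1-10 (minimisation, scalar bounds lb/ub)."""
    pop = init_population_sobol(n, dim, [lb] * dim, [ub] * dim)   # Step 2: Sobol initialisation
    dx = np.zeros((n, dim))                                       # Step 1: step vectors
    fit = np.array([obj(x) for x in pop])                         # Step 3: fitness values
    best, best_fit = pop[fit.argmin()].copy(), fit.min()
    for t in range(1, mit + 1):
        pop, fit = random_learning(pop, fit, obj)                 # Step 4: Eq. (14)
        s, a, c, f, e = 0.1, 0.1, 0.7, 1.0, 1.0                   # Step 5: behaviour weights (assumed)
        w = nonlinear_weight(t, mit)                              # Step 6: Eq. (13)
        food, enemy = pop[fit.argmin()], pop[fit.argmax()]        # X+ and X-
        r = (ub - lb) / 4 + 2 * (ub - lb) * t / mit               # Step 8: radius, Eq. (8)
        for i in range(n):
            neigh = [j for j in range(n)
                     if j != i and np.linalg.norm(pop[j] - pop[i]) <= r]
            if neigh:                                             # Step 7 + Eqs. (6), (7)
                S, A, C, F, E = behaviour_vectors(pop[i], pop[neigh], dx[neigh], food, enemy)
                dx[i] = s * S + a * A + c * C + f * F + e * E + w * dx[i]
                pop[i] = pop[i] + dx[i]
            else:                                                 # Eq. (9): Levy flight
                pop[i] = levy_flight(pop[i])
                dx[i] = 0.0
            pop[i] = np.clip(pop[i], lb, ub)                      # keep individuals in the search range
        fit = np.array([obj(x) for x in pop])
        ib = fit.argmin()
        pop[ib] = cauchy_mutation(pop[ib], obj)                   # Step 9: Cauchy mutation of the best
        fit[ib] = obj(pop[ib])
        if fit[ib] < best_fit:                                    # track the global best
            best, best_fit = pop[ib].copy(), fit[ib]
    return best, best_fit                                         # Step 10: output the optimal solution
```

Under these assumptions, a call such as `msda(lambda x: np.sum(x ** 2), dim=30, lb=-100, ub=100)` would minimise the Sphere benchmark function.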

4 Experiment

In the experiments, the original Dragonfly Algorithm (DA), the Grasshopper Optimization Algorithm (GOA), the Particle Swarm Optimization algorithm (PSO), and the proposed algorithm (MSDA) are compared. The settings of the comparison experiments are as follows: population size N = 30, maximum number of iterations MIT = 500, and dimension 30 for all algorithms.

In order to verify the improvement effect of the MSDA algorithm, four standard unimodal test functions and four standard multimodal test functions are used for the verification tests. The specific functions and their attributes are shown in the following table (Table 1):

Table 1. List of test functions

The experiments were run on 64-bit Windows 7 with an Intel Core i5-3230M CPU @ 2.60 GHz and 8 GB of installed memory. The simulation software used for the tests is MATLAB R2016b.

To average out the randomness of the experiments, each test function is run 10 times independently, and the optimal solution, worst solution, mean and standard deviation are compared to evaluate the algorithms. The experimental results are shown in the following table (Table 2):

Table 2. Experimental results of the compared algorithms on the test functions

To sum up, the MSDA algorithm obtains better results than the other comparison algorithms on both the unimodal and multimodal test functions, and can obtain better solutions under the same conditions.

5 Conclusion

DA has been successfully used in several fields, which proves its strength. Compared with other optimization algorithms, it has few parameters, is simple to control and easy to extend. However, it still has shortcomings, such as slow convergence, limited search precision, and a tendency to fall into local optima. To address these problems, this paper mixes several improvement strategies on the basis of the original dragonfly algorithm and proposes the mixed-strategy improved dragonfly algorithm MSDA. First, the population is initialized with a Sobol sequence, giving the algorithm a better initial population and improving the quality of the initial solutions. Secondly, the inertia weight is improved by adopting a nonlinearly decreasing inertia weight, which lets the algorithm better adapt to the convergence process. Then Cauchy mutation is used to increase the diversity of the population, improve the global search ability of the algorithm, and enlarge the search space. Finally, a random learning strategy is added, which enhances population diversity and effectively improves the global optimization performance of the algorithm. Simulation experiments show that its performance is clearly improved compared with the original DA algorithm. The next research direction is to combine the MSDA algorithm with optimization problems in other domains.