
1 Introduction

Many engineering problems have been proven to be NP-complete or NP-hard. In recent years, intelligent heuristic optimization algorithms have become increasingly attractive because they can search for near-optimal solutions to such problems [1]. However, due to the particularity and complexity of the various problems, each algorithm shows its own advantages and disadvantages. The key challenge is the tradeoff between time performance and optimization quality: if an algorithm is designed to find the optimal solution within a limited time, it has to compromise on optimization effectiveness, and vice versa. One promising approach is to adopt the ideas of different algorithms and combine them into a better hybrid solution.

Particle swarm optimization (PSO) and ant colony optimization (ACO) are two important algorithms for finding optimal solutions to NP problems. PSO was proposed as an evolutionary computation method based on swarm intelligence [2]. Like the genetic algorithm, it is an iterative optimization method. PSO adopts the concepts of population and evolution, derived from research on the foraging behavior of bird flocks [3]. It operates mainly on individual fitness values. The algorithm has a simple design, is easy to implement, and offers strong global search ability with few parameters to tune. However, it also has an obvious disadvantage: although its initial global search is fast, it easily falls into local optima.

ACO is a different type of intelligent optimization algorithm. It was derived from the study of the path-finding behavior of ants searching for food [4]. It also adopts the concepts of population and evolution and is based on iterative optimization, using a positive feedback mechanism. The algorithm converges to the optimal solution through continuously updated pheromone. However, it has a slow convergence speed due to the lack of pheromone at the beginning [5].

PSO and ACO are popular algorithms that have been applied to many problems, either individually or jointly [6,7,8,9,10]. Many studies have focused on the performance and convergence of the two algorithms [11,12,13,14], showing that both PSO and ACO have their advantages and disadvantages. In this paper, our design adopts concepts from both to obtain optimal solutions. PSO performs well in the early search for optimal solutions, so it is used as the initial stage. When premature convergence emerges, ACO takes over to complete the rest of the optimization. The key is how to identify premature convergence. Our design focuses on the aggregation of the particles, which tells our approach when to switch from PSO to ACO.

This paper is organized as follows. Section 2 provides the PSO-based optimization. Section 3 describes the ACO-based optimization. Section 4 depicts the hybrid algorithm. The experiments and result analysis are discussed in Sect. 5. Finally, we give the conclusions in Sect. 6.

2 Basic Principle of PSO and Its Optimization

PSO simulates birds' foraging behavior. Imagine that a flock of birds searches an area randomly for a single piece of food. None of the birds knows the location of the food, but each knows the distance from its current position to the food. The simplest and most effective strategy is to search the area around the bird nearest to the food. PSO was inspired by this idea and is used to solve optimization problems. Each bird in the search space represents a candidate solution and can be considered an idealized particle without mass or volume. Each particle has a fitness value determined by the optimization function and a velocity that determines the direction and distance of its flight. In an S-dimensional target search space, m particles form a swarm, where the current position of particle i is represented as \( X_{i} = (x_{i1}, x_{i2}, \ldots, x_{iS}) \), its current flight velocity as \( V_{i} = (v_{i1}, v_{i2}, \ldots, v_{iS}) \), and its individual best position as \( P_{i} = (p_{i1}, p_{i2}, \ldots, p_{iS}) \), denoted \( P_{pbest} \) (the position with the best fitness value found by that particle, i.e., its individual extreme value). The best position experienced by all particles so far is \( P_{gbest} \) (the global extreme value). In each iteration, every particle updates itself by tracking these two extreme values. Particle i in the S-dimensional space updates its velocity and position according to the following computation:

$$ v_{is} \left( {t + 1} \right) = w *v_{is} \left( t \right) + c_{1 } r_{1} \left( {P_{pbest} \left( t \right) - x_{is} \left( t \right)} \right) + c_{2} r_{2} \left( {P_{gbest} \left( t \right) - x_{is} \left( t \right)} \right) $$
(1)
$$ x_{is} \left( {t + 1} \right) = x_{is} \left( t \right) + v_{is} \left( {t + 1} \right) $$
(2)

Here, \( i \in [1, m] \) and \( s \in [1, S] \). The inertia weight w is a non-negative number that controls the influence of the previous velocity on the current velocity and has a large effect on balancing the global and local search ability of the algorithm: when w is small, the previous velocity has little effect and local search dominates; when w is large, the previous velocity has great influence and global search dominates. \( c_{1} \) and \( c_{2} \) are non-negative learning factors. \( r_{1} \) and \( r_{2} \) are independent pseudo-random numbers uniformly distributed on [0, 1]. \( v_{is} \in [ - v_{max} ,v_{max} ] \), where \( v_{max} \) is a constant. During updating, the velocity of a particle in each dimension is bounded by \( v_{max} \), and the coordinates in each dimension are also restricted to the permitted range. At the same time, \( P_{pbest} \) and \( P_{gbest} \) are continuously updated during the iterations. The final output of the algorithm is \( P_{gbest} \), the optimal solution found.
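To make the update rule concrete, the following is a minimal Python sketch of Eqs. (1) and (2) applied to a single particle. The default parameter values (w, c1, c2, v_max) are illustrative assumptions, not the settings used in the experiments below.

```python
import random

def pso_update(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0, v_max=4.0):
    """One PSO step for a single particle following Eqs. (1) and (2).

    x, v      -- current position and velocity (lists of length S)
    p_best    -- best position found by this particle (P_pbest)
    g_best    -- best position found by the whole swarm (P_gbest)
    w, c1, c2 -- inertia weight and learning factors
    v_max     -- velocity bound applied per dimension
    """
    new_x, new_v = [], []
    for s in range(len(x)):
        r1, r2 = random.random(), random.random()       # r1, r2 ~ U[0, 1]
        vs = (w * v[s]
              + c1 * r1 * (p_best[s] - x[s])
              + c2 * r2 * (g_best[s] - x[s]))            # Eq. (1)
        vs = max(-v_max, min(v_max, vs))                  # clamp to [-v_max, v_max]
        new_v.append(vs)
        new_x.append(x[s] + vs)                           # Eq. (2)
    return new_x, new_v
```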

The standard particle swarm algorithm completes the search for the optimal solution through the individual extremum and the global extremum. Its operations are simple and convergence is fast. However, as the number of iterations increases, the particles become similar and the population converges, which may lead to a local optimal solution. In some variants, this way of tracking the particles' positions is replaced by other methods: the optimal solution is searched through the crossover of the individual extremum and the global extremum, or through the mutation of the particles.

3 Principle of ACO Algorithm and Its Model

3.1 Optimization Principle of ACO

Ants have the ability to find the shortest path from their nest to a food source without any cues. They can avoid obstacles according to the terrain and adapt by searching for a new path as an alternative choice. The essence of this phenomenon is that ants release a special secretion called pheromone. Pheromone evaporates gradually over time, and the amount remaining reflects the length of the path. Ants then choose their paths according to the concentration of the remaining pheromone. If a path has a high probability of being chosen, more ants will choose it and release more pheromone on it, so the path keeps more pheromone. This forms a positive feedback mechanism. Through this mechanism, the ants can finally find the shortest path from nest to food source. In particular, when an obstacle lies between the ants and the food source, the ants can not only go around the obstacle but also find the shortest path after a period of positive feedback through the changes of the pheromone on different paths.

3.2 Elitist Ant System (EAS)

EAS is an ACO variant with relatively good global optimization ability [15]. Assume that path e(i, j) has pheromone concentration τ_ij(t) at time t. At the initial moment, all paths have the same pheromone. Ant k (k = 1, 2, 3, …, m) determines its direction during the movement according to the concentration of the pheromone on each path. \( p_{ij}^{k} (t) \) represents the probability that ant k moves from position i to position j at time t. This probability is obtained as follows:

$$ p_{ij}^{k} = \left\{ \begin{aligned} \frac{{\tau_{ij}^{\alpha } \left( t \right) *\eta_{ij}^{\beta } \left( t \right)}}{{\mathop \sum \nolimits_{{s \in allowed_{k} }} \tau_{is}^{\alpha } \left( t \right) *\eta_{is}^{\beta } \left( t \right)}} ,\quad \;j \in allowed_{k} \hfill \\ 0,\quad \quad \quad \quad \quad \quad \quad \quad others \hfill \\ \end{aligned} \right. $$
(3)

In (3), \( allowed_{k} = \{ 0, 1, \ldots , n - 1\} - tabu_{k} \) denotes the positions that ant k can choose in the next step, where tabu_k (k = 1, 2, 3, …, m) is the taboo list that records the positions ant k has already visited. \( \eta_{ij } \) represents the visibility of path e(i, j), which is generally set as \( \eta_{ij} = \frac{1}{{d_{ij} }} \), where d_ij is the distance between position i and position j. α represents the relative importance of the pheromone trail; β indicates the relative importance of visibility; ρ represents the persistence of the trail, and 1 − ρ is the evaporation rate of the pheromone. After the ants complete a cycle, the amount of pheromone on each path is adjusted as follows:

$$ \tau_{ij} \left( {t + n} \right) = \rho \tau_{ij} \left( {\text{t}} \right) + \sum\nolimits_{k = 1}^{m} {\Delta \tau_{ij}^{k} ({\text{t}})} $$
(4)
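As an illustration of Eqs. (3) and (4), the sketch below computes the transition probabilities for one ant, picks the next position by roulette-wheel selection, and applies the pheromone update. The function names and the Δτ accumulation argument are illustrative assumptions; tau, eta and delta_tau are assumed to be n × n matrices.

```python
import random

def transition_probabilities(i, allowed, tau, eta, alpha=1.0, beta=2.0):
    """Probability of moving from position i to each j in `allowed`, Eq. (3)."""
    weights = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_next(i, allowed, tau, eta, alpha=1.0, beta=2.0):
    """Roulette-wheel selection of the next position (allowed must be non-empty)."""
    probs = transition_probabilities(i, allowed, tau, eta, alpha, beta)
    r, acc = random.random(), 0.0
    for j, p in probs.items():
        acc += p
        if r <= acc:
            return j
    return j  # fallback in case of floating-point rounding

def update_pheromone(tau, delta_tau, rho=0.9):
    """Pheromone update after one cycle, Eq. (4): tau <- rho*tau + sum_k delta_tau^k."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] = rho * tau[i][j] + delta_tau[i][j]
    return tau
```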

3.3 Max-Min Ant System

MMAS (Max-Min Ant System) was proposed as a general-purpose ant colony algorithm [16]. It improves the ant colony algorithm in the following ways. First, only the ant on the shortest path may update the pheromone after an iteration; the update method is the same as in EAS. Second, the pheromone concentration is restricted to the range [τ_min, τ_max] to improve efficiency: any value beyond this range is forced back to τ_min or τ_max. Third, the initial pheromone value on each path is set to τ_max for a broader search. Furthermore, a smaller evaporation coefficient is used in order to explore more paths.
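A minimal sketch of the MMAS-style bounds follows: every pheromone value outside [τ_min, τ_max] is clipped back into the range. The bound values used here are placeholders, not values prescribed by the original MMAS paper.

```python
def clamp_pheromone(tau, tau_min=0.01, tau_max=10.0):
    """Force every pheromone value into [tau_min, tau_max], as MMAS prescribes."""
    for i in range(len(tau)):
        for j in range(len(tau[i])):
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
    return tau
```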

4 HOA: Hybrid Optimization Algorithm

4.1 HOA Architecture

The basic idea of HOA is to inherit the advantages of PSO and ACO without their defects. The process of HOA has two stages. The first stage uses PSO for its efficiency, global search ability and fast convergence. When premature convergence emerges, ACO replaces PSO. PSO outputs basic path information, which serves as the initial parameters for ACO; this means ACO obtains a better initialization than the original one. In the second stage, the ant colony algorithm exploits its positive feedback. The overall framework is shown in Fig. 1.

Fig. 1. HOA framework

4.2 Premature Phenomena and Early Maturity of PSO

Like other heuristic optimization algorithms, PSO is relatively simple: it has no operations such as selection, crossover, or mutation, which makes it converge faster. However, if one particle finds a current optimal node, the other particles will quickly move closer to it during the process. If this node is only a local optimum, the particles fall into the local optimal solution without being able to escape, and the search cannot be restarted. The output is then a local optimal solution; this is called the premature convergence phenomenon.

Existing work has shown that the particles in PSO will aggregate whether the algorithm reaches premature convergence or global convergence. The particles aggregate at one or several special positions, which are determined by the problem itself and the choice of the fitness function. In our design, the heuristic starts from the normal distribution, and the fitness variance is adopted as the criterion for detecting premature convergence.

If the number of particles in the swarm is n, F_i represents the fitness of the i-th particle, and F_avg represents the average fitness of the swarm, then the fitness variance \( \sigma^{2} \) is obtained as follows:

$$ \sigma^{2} = \sum\nolimits_{i = 1}^{n} {\left( {\frac{{F_{i} - F_{avg} }}{F}} \right)}^{2} $$
(5)

In (5), F is a normalization factor that limits the magnitude of the variance \( \sigma^{2} \). F is obtained as follows:

$$ F = \left\{ {\begin{array}{*{20}l} {\hbox{max} \left| {F_{i} - F_{avg} } \right|,} \hfill & {\hbox{max} \left| {F_{i} - F_{avg} } \right| > 1,\;{\text{i}} \in \left[ {1,{\text{n}}} \right]} \hfill \\ {1,} \hfill & {others} \hfill \\ \end{array} } \right. $$
(6)

The variance measures the degree of aggregation of the particles in the swarm: the smaller \( \sigma^{2} \) is, the more strongly the particles have aggregated. If the algorithm has not met the termination condition but the aggregation is too strong, the particle swarm algorithm has fallen into the so-called premature phenomenon. Therefore, when \( \sigma^{2} \le h \) (with h a given constant), the algorithm continues the process using ACO.
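The following sketch computes the fitness variance of Eqs. (5) and (6) and tests it against the threshold h; the threshold value is whatever the caller supplies.

```python
def fitness_variance(fitness):
    """Fitness variance sigma^2 of Eqs. (5) and (6) for a list of particle fitness values."""
    n = len(fitness)
    f_avg = sum(fitness) / n
    max_dev = max(abs(f - f_avg) for f in fitness)
    F = max_dev if max_dev > 1 else 1.0               # normalization factor, Eq. (6)
    return sum(((f - f_avg) / F) ** 2 for f in fitness)

def is_premature(fitness, h):
    """Switch condition of HOA: sigma^2 <= h means the swarm has aggregated too much."""
    return fitness_variance(fitness) <= h
```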

5 Experiments and Results Analysis

5.1 Implementation

In order to overcome the problems that PSO easily falls into local optima and ACO lacks initial pheromone, HOA is designed to take advantage of both PSO and ACO. The steps of HOA are as follows. First, PSO is chosen as the initial algorithm, with its parameters set as follows: the total number of particles is 100, and the parameter h, used to judge whether PSO has fallen into a local optimal solution, is set to the constant 15. When PSO falls into a local optimal solution, its output is recorded as bestPath. ACO is then initialized with 5 times bestPath as the initial pheromone concentration and started with 4 ants. Finally, ACO produces the output.
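To summarize the control flow, here is a high-level sketch of the two-stage process described above. The callbacks `pso_step`, `premature`, `pso_best_path`, `init_aco`, and `aco_solve` are hypothetical stand-ins for the PSO and ACO routines (not functions defined in this paper); only the constants mirror the settings quoted in the text.

```python
def hoa(pso_step, premature, pso_best_path, init_aco, aco_solve,
        h=15, pheromone_scale=5.0, max_iter=1000):
    """Two-stage HOA control flow: PSO until sigma^2 <= h, then ACO seeded by bestPath."""
    for _ in range(max_iter):
        fitness = pso_step()                        # one PSO iteration, Eqs. (1)-(2)
        if premature(fitness, h):                   # fitness-variance check, Eqs. (5)-(6)
            break
    best_path = pso_best_path()                     # bestPath recorded from the PSO stage
    tau0 = init_aco(best_path, pheromone_scale)     # 5 x bestPath as initial pheromone
    return aco_solve(tau0)                          # ACO completes the optimization
```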

5.2 Experimental Results and Analysis

TSP (Travelling Salesman Problem) is an important combinatorial optimization problem that has been proved to be NP-complete. In order to verify HOA, TSP is adopted as the target problem.
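For reference, candidate solutions in this setting are closed tours, and their quality is the total tour length. A minimal evaluation function, assuming Euclidean coordinates for the cities (the paper's actual instance may instead use a distance matrix), might look like this:

```python
import math

def tour_length(tour, coords):
    """Total length of a closed TSP tour given (x, y) coordinates for each city."""
    total = 0.0
    for k in range(len(tour)):
        x1, y1 = coords[tour[k]]
        x2, y2 = coords[tour[(k + 1) % len(tour)]]   # wrap around to the start city
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```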

Taking the city layout as an example, the theoretical optimal value is 423.7406. The parameters of the PSO algorithm are set as follows: the total number of particles is 100; the number of cities is 30; the parameter h, used to judge whether PSO has fallen into a local optimal solution, is set to the constant 15. The parameters of ACO are set as follows: α and β are set to random values in [1, 2] (here α = β = 2 as usual); ρ = 0.9 (usually chosen from [0, 1]); the number of ants is 4.

The experimental results are shown in Table 1. PSO still contributes most of the iterations in HOA. PSO compacts the search space into an intermediate one and provides enough initial information for ACO. It also means that PSO may fall into a local optimal solution during the process; at that moment, premature convergence is detected and PSO is switched to ACO. ACO only contributes relatively few iterations and can complete the whole algorithm within a limited number of iterations. When h is suitable, HOA can complete the process quickly.

Table 1. Experimental results of HOA

Table 2 shows the experimental results of HOA for the length of the shortest paths and the number of iterations. HOA needs fewer iterations. Furthermore, HOA can be run repeatedly and still avoid the local optimal solution trap, with higher accuracy. The experimental results also show that this algorithm can complete the processing within 30 s, while the average execution time of the hybrid particle swarm algorithm exceeds 100 s. In a word, HOA has better optimization performance in terms of accuracy and efficiency.

Table 2. Experimental results of PSO

Figure 2 compares the average values of HOA and the hybrid particle swarm algorithm over 20 runs. While PSO is running, HOA and the hybrid PSO have almost the same curve, which means they have almost the same convergence speed. However, towards the end of the convergence, HOA is faster, and ACO provides a more accurate solution.

Fig. 2. Comparison of the iterative process of PSO and HOA

6 Conclusions

This paper presents the HOA algorithm based on the particle swarm algorithm and the ant colony algorithm. The algorithm is designed to take advantage of both PSO and ACO while avoiding their disadvantages. PSO improves the convergence speed and provides the input to ACO for further processing. ACO is used to avoid premature convergence, with the switching time determined by the fitness variance; ACO thus improves the accuracy of the processing. Our algorithm is verified through experiments, with TSP as the target problem. The experimental results show that our algorithm provides faster convergence speed and higher accuracy.