1 Introduction

With the increasing demand for optimization in many areas of daily life, ever more powerful optimization algorithms are being proposed. Computational Intelligence provides computational methodologies and approaches to address complex real-world optimization problems, especially those affected by noise and uncertainty. The standard approach to such problems often begins by designing an objective function that models the problem's objectives while incorporating any constraints [1]. An optimization algorithm targeting a specific problem may be of little use for other categories of optimization tasks. The idea driving the development of a new optimization algorithm is that the heuristic does not necessarily have to be completely problem-dependent; instead, general optimization techniques can be developed that are applicable to a broad class of different problems [2]. Therefore, the aim of this paper is to propose a powerful optimization algorithm for a broad class of problems rather than a problem-specific approach for a single concrete use.

Particle Swarm Optimization (PSO) is a powerful evolutionary computation algorithm introduced in 1995 [3]. Since its inception, many researchers have expanded on the original idea of the canonical PSO. Representative changes to the PSO evolution equation range from parameter adjustments to alterations of the particle topology. As the original PSO algorithm was inspired by the behavior of bird flocks, particle neighborhoods were at first defined geometrically for the simulation. However, computing Euclidean neighborhoods was time consuming, and the optimization results obtained with them were not good enough either, so the Euclidean neighborhood model was abandoned. Neighborhood topologies unrelated to particle locations proved useful in particle evolution instead. Particles use their historical best (lbest) and topology best (gbest) for evolution, without the crossover operations used in Genetic Algorithms. The moving velocity is used to balance the exploitation and exploration of a particle.

Parameter adjustments brought significant improvements to the canonical PSO: [4] proposed a new optimizer using particle swarm theory and examined how changes in the paradigm affect the number of iterations required to meet an error criterion. The inertia weight and the constriction coefficient of the velocity were subsequently proposed and analyzed for their importance to convergence [5-7]. Particle neighborhood topologies also play important roles in many complex, especially multimodal, optimization problems. Mendes et al. [8] analyzed the effect of different neighborhood topologies and consequently proposed the Fully-Informed Particle Swarm (FIPS) variant, hinting that simpler may be better. Other variants solve shifted or rotated benchmark functions by using a dynamic multi-swarm or a dynamic topology [9, 10]. Bratton and Kennedy [11] defined a standard PSO as a baseline for performance testing of improvements among newly proposed PSO variants; topology, bound constraints, particle initialization and population size were all analyzed in that paper. Some recently proposed PSO variants, such as Dynamic Neighborhood Learning based PSO (DNLPSO) [12] and Social Learning PSO (SLPSO) [13], use a dynamic topology to avoid premature convergence, aiming at better performance on multimodal functions.

With the development of optimization research, many new optimization algorithms have also been proposed in recent decades, such as Cat Swarm Optimization [14], Bat Algorithm [15], and the Ebb-tide-fish algorithm [16, 17]. Some of the newly proposed algorithms are contrasted with the canonical PSO algorithm, and they may well outperform it, but such comparisons usually mean little because so much improvement has already been made on the classical algorithms. That is why a standard PSO form was proposed in [11] as a baseline for performance comparison. This paper therefore gives a comparative view of some state-of-the-art PSO variants against the newly proposed algorithm, and the contributions and tradeoffs are also analyzed.

Optimizing vehicle navigation in a city is an urgent problem nowadays, given the increasing number of vehicles on the road. The optimization usually aims at better satisfaction of individual drivers as well as better throughput of the whole city traffic network. Vehicle localization and the acquisition of traffic conditions play important roles in such routing schemes, and GPS-based localization is commonly chosen for many applications. As GPS-based approaches still have some weak points, other localization techniques have been proposed for robustness and critical use, such as dead reckoning, cellular localization, and image/video localization in vehicular ad-hoc networks to overcome GPS limitations [18]. Many applications of wireless sensor networks use sensor nodes for localization; these different approaches for localization and navigation can be found in [19-21]. As a wireless sensor network is a convenient way to grasp the traffic conditions of a certain area [22], the sensed traffic conditions can be used as a key element in vehicle navigation, and this is also how they are used in the model proposed in this paper.

The rest of the paper is organized as follows. Section 2 reviews earlier work on swarm-based algorithms. Section 3 discusses the details of the newly proposed Monkey King Evolution algorithm. Section 4 presents the benchmark functions and the verification of the MKE algorithm. Section 5 shows the application of the MKE algorithm to a microscopic vehicle navigation problem in a wireless sensor network environment. Section 6 gives the conclusion.

2 Related works

PSO is a simple and powerful algorithm for optimization problems, and many researchers have built on this technique, proposing variants that improve its performance either across a large domain of optimization tasks or for a specific problem. Equation 1 presents the Inertia Weighted Particle Swarm Optimization (IWPSO) variant; a smaller inertia weight often results in faster convergence when the variant is able to find the global optimum.

$$\begin{aligned} {\left\{ \begin{array}{ll} v_{i}^{t+1}\leftarrow \omega *v_{i}^{t}+c_1*(X_{fb}-X_{i}^{t})+c_2*(X_{gb}-X_{i}^{t}),\\ X_{i}^{t+1}\leftarrow X_{i}^{t} + v_{i}^{t+1}. \end{array}\right. } \end{aligned}$$
(1)

A smaller value usually means that the variant pays more attention to exploitation, while a larger one favors exploration. Equation 2 defines the constriction coefficient variant; the inertia weighted variant can be considered a special form of it. The pseudo code of the Constriction Coefficient Particle Swarm Optimization (CCPSO) variant is shown in Algorithm 1.

$$\begin{aligned} {\left\{ \begin{array}{ll} v_{i}^{t+1}\leftarrow \chi *(v_{i}^{t}+c_1*(X_{fb}-X_{i}^{t})+c_2*(X_{gb}-X_{i}^{t})),\\ X_{i}^{t+1}\leftarrow X_{i}^{t} + v_{i}^{t+1}. \end{array}\right. } \end{aligned}$$
(2)
Algorithm 1 Pseudo code of the CCPSO variant
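As a rough guide to the CCPSO loop of Eq. 2, the sketch below implements it in Python on an illustrative sphere objective; the objective function, bounds, iteration budget and velocity clamp are assumptions rather than the authors' exact settings, and replacing the outer \(\chi\) with an inertia weight \(\omega\) applied only to the previous velocity recovers the IWPSO update of Eq. 1.

```python
import numpy as np

def sphere(x):
    """Illustrative objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return np.sum(x ** 2)

def ccpso(obj, dim=10, pop_size=100, iters=1000, lo=-100.0, hi=100.0,
          chi=0.7298, c1=2.05, c2=2.05, seed=0):
    """Sketch of the constriction-coefficient PSO update of Eq. 2.

    The update follows the equation as printed; the canonical CCPSO
    additionally multiplies the two attraction terms by uniform random
    numbers in [0, 1].
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (pop_size, dim))      # positions
    v = np.zeros((pop_size, dim))                 # velocities
    pbest = x.copy()                              # historical bests X_fb
    pbest_f = np.array([obj(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_f)].copy()      # global best X_gb

    for _ in range(iters):
        v = chi * (v + c1 * (pbest - x) + c2 * (gbest - x))
        v = np.clip(v, lo, hi)                    # velocity clamped to the dimension range
        x = np.clip(x + v, lo, hi)
        f = np.array([obj(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best_x, best_f = ccpso(sphere)
print(best_f)
```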

FIPS uses a specific neighborhood topology, such as a ring topology, a four-cluster topology, or a Von Neumann topology, rather than the population's global best for evolution. Equation 3 shows the evolution equation of FIPS. FIPS tends to find optima in few iterations, but it depends heavily on the particle topology. Other newly proposed PSO variants use different topologies for evolution: Comprehensive Learning Particle Swarm Optimization (CLPSO) uses randomly generated neighborhood relations to supervise the learning process, and particles learn from worse solutions in the neighborhood under a certain learning probability to avoid premature convergence. SLPSO also uses a dynamic neighborhood, and particles learn from any better particle in the sorted neighborhood.

$$\begin{aligned} {\left\{ \begin{array}{ll} v_{i}^{t+1} = \displaystyle {\chi *\left( v_{i}^{t}+ \frac{1}{K_{i}}\sum _{n=1}^{K_{i}}c*(X_{nb_{n}} -X_{i}^{t})\right) ,}\\ X_{i}^{t+1} = X_{i}^{t}+ v_{i}^{t+1}. \end{array}\right. } \end{aligned}$$
(3)
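To make the neighborhood averaging of Eq. 3 concrete, the following is a minimal sketch of one FIPS velocity update under a ring topology; the neighborhood size (each particle informed by itself and its two ring neighbors), the use of the neighbors' personal bests as \(X_{nb}\), and the fixed coefficient c are assumptions, and canonical FIPS additionally draws a random coefficient per neighbor.

```python
import numpy as np

def fips_velocity(v, x, pbest, chi=0.7298, c=4.1):
    """One FIPS velocity update (Eq. 3) with a ring topology (sketch).

    v, x, pbest: arrays of shape (pop_size, dim); each particle is informed
    by its own personal best and those of its two ring neighbors (K_i = 3).
    """
    ps = len(x)
    v_new = np.empty_like(v)
    for i in range(ps):
        nbrs = [(i - 1) % ps, i, (i + 1) % ps]       # ring neighborhood of particle i
        pull = np.mean([c * (pbest[n] - x[i]) for n in nbrs], axis=0)
        v_new[i] = chi * (v[i] + pull)
    return v_new
```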

3 The Monkey King Evolution algorithm

3.1 Overview of ebb-tide-fish algorithm

The ebb-tide-fish algorithm is a recently proposed meta-heuristic mimicking the foraging behavior of fish [16, 17]. It examines the different velocity initialization approaches of the PSO algorithm and adopts a velocity-free evolution scheme. Particles in the ebb-tide-fish algorithm are divided into two categories: particles in the first category search the local area around their own locations, while particles in the second category search towards the global best particle in the population. The category label of each particle changes dynamically as its search behavior changes. The ebb-tide-fish algorithm performs very well on low-dimensional optimization problems. Equation 4 shows its evolution equation.

$$\begin{aligned} X_{etf,G}^i= & {} \{x_{1},x_{2},\ldots ,x_{j},\ldots ,x_{D}\}, \nonumber \\ x_{j}= & {} x_{j}\pm r*rand()*x_{j},j \in D,\nonumber \\ X_{i,G+1}= & {} X_{i,pbest} + F*rand()*(X_{gbest} - X_{i,G}). \end{aligned}$$
(4)

The pseudo code of the ebb-tide-fish algorithm is shown in Algorithm 2. A deeper analysis of the ebb-tide-fish algorithm on high-dimensional multimodal benchmark functions shows that it may converge prematurely on some complex optimization problems. Its local search behavior should be changed to avoid premature convergence when tackling these tough problems, and this is the reason why we propose an enhanced version of it, the Monkey King Evolution algorithm.

Algorithm 2 Pseudo code of the ebb-tide-fish algorithm
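The sketch below gives one possible rendering of a single ebb-tide-fish generation following the two search modes of Eq. 4; the rule for switching particles between modes and the values of r and F are illustrative assumptions.

```python
import numpy as np

def etf_step(x, pbest, gbest, explore_mask, r=0.1, F=0.7, rng=None):
    """One ebb-tide-fish generation following Eq. 4 (sketch).

    x, pbest: (pop_size, dim) arrays; gbest: (dim,) array.
    explore_mask[i] is True when particle i searches locally around its own
    position, False when it moves towards the global best; how the mask is
    updated between generations is assumed to happen elsewhere.
    """
    rng = rng or np.random.default_rng()
    pop_size, dim = x.shape
    new_x = x.copy()

    # local search mode: x_j <- x_j +/- r * rand() * x_j on every dimension
    signs = rng.choice([-1.0, 1.0], size=(pop_size, dim))
    local = x + signs * r * rng.random((pop_size, dim)) * x
    new_x[explore_mask] = local[explore_mask]

    # global search mode: X_{i,G+1} = X_{i,pbest} + F * rand() * (X_gbest - X_{i,G})
    towards = pbest + F * rng.random((pop_size, 1)) * (gbest - x)
    new_x[~explore_mask] = towards[~explore_mask]
    return new_x
```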

3.2 Details of Monkey King Evolution algorithm

“Journey to the West” is one of the four great classical novels of Chinese literature; it relates the adventures of the priest Sanzang as he travels west in search of Buddhist sutras with his three disciples. The Monkey King is the most powerful of the three disciples, and he can transform into many small monkeys to survey the surroundings of his current location. All these small monkeys report back to the Monkey King after their own exploration, and the Monkey King can then grasp where the best of these locations is. The exploration behavior of the small monkeys is implemented by Eq. 5, where \(x_{i,G+1}\) denotes the ith particle in the \((G+1)\)th generation, \(x_{i,pbest}\) denotes the ith particle's historical best, and \(x_{r1}\) and \(x_{r2}\) denote two particles randomly selected from the population. \(x_{r1}\) and \(x_{r2}\) may be the same particle, which makes the operator similar to, yet different from, the difference operator used in DE [1].

$$\begin{aligned} x_{i,G+1} =&x_{i,pbest} + FC*(x_{r1} - x_{r2}). \end{aligned}$$
(5)
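In vector form, Eq. 5 can be realized for the whole population in a few lines; treating \(x_{r1}\) and \(x_{r2}\) as current positions drawn independently (so they may coincide, as noted above) is the interpretation assumed in this sketch.

```python
import numpy as np

def explore(x, pbest, FC=0.7, rng=None):
    """Small-monkey exploration of Eq. 5: x_{i,G+1} = x_{i,pbest} + FC*(x_{r1} - x_{r2})."""
    rng = rng or np.random.default_rng()
    ps = len(x)
    r1 = rng.integers(ps, size=ps)   # indices drawn independently,
    r2 = rng.integers(ps, size=ps)   # so r1[i] may equal r2[i]
    return pbest + FC * (x[r1] - x[r2])
```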

Figure 1 shows the exploitation behavior of the Monkey King, which is implemented through the exploration of the small monkeys. The origin of the coordinate system in Fig. 1 denotes the Monkey King's location. The arrows in the figure show the moving directions and step sizes of the small monkeys: they move from the origin to the points indicated by the arrows, and the Monkey King can then grasp which of these locations is the best after their exploration. There are several schemes to designate Monkey King particles in the population. In the first scheme, Monkey King particles are randomly generated at a certain rate of the population, and the remaining particles are labeled as common particles. In the second scheme, the global best particle is designated as the Monkey King particle, since the Monkey King in the mythological novel “Journey to the West” is the most powerful disciple. In the last scheme, all particles are equal and all of them are labeled as Monkey King particles. They evolve according to Eq. 6, written in matrix form, where \(\widehat{X}\) denotes the matrix of all particles in the population and \(\widehat{X}_{gbest}\) denotes the global best matrix. Both matrices have ps rows, where ps is the population size; the ith row of \(\widehat{X}\) is the coordinate of the ith particle.

$$\begin{aligned} {\left\{ \begin{array}{ll} \widehat{X}_{diff} = (\widehat{X}_{r1}-\widehat{X}_{r2}) \\ \widehat{X}_{gbest,G+1}=\widehat{X}_{gbest,G}+FC*\widehat{X}_{diff} \\ \widehat{X}_{G+1} = M\bigotimes \widehat{X}_{G}+Bias \\ Bias = \overline{M}\bigotimes \widehat{X}_{gbest,G+1}. \\ \end{array}\right. } \end{aligned}$$
(6)

In the global best matrix \(\widehat{X}_{gbest}\), all rows are identical and equal to \(X_{gbest}\), the coordinate of the particle that has found the best solution in the population. The two matrices are shown in Eq. 7.

$$\begin{aligned} \widehat{X} = \left[ \begin{array}{c} X_{1}\\ X_{2}\\ \cdots \\ X_{ps} \end{array} \right] ,\quad \widehat{X}_{gbest,G} = \left[ \begin{array}{c} X_{gbest,G}\\ X_{gbest,G}\\ \cdots \\ X_{gbest,G} \end{array} \right] . \end{aligned}$$
(7)

\(\widehat{X}_{diff}\) denotes the difference matrix, computed as the difference of two row-permuted copies of \(\widehat{X}\); in other words, \(\widehat{X}_{r1}\) and \(\widehat{X}_{r2}\) are generated by randomly permuting the row vectors of \(\widehat{X}\). They are shown in Eq. 8.

$$\begin{aligned} \widehat{X}_{r1} = \left[ \begin{array}{c} X_{j1}\\ X_{j2}\\ \cdots \\ X_{jps} \end{array} \right] ,\quad \widehat{X}_{r2} = \left[ \begin{array}{c} X_{k1}\\ X_{k2}\\ \cdots \\ X_{kps} \end{array} \right] . \end{aligned}$$
(8)
Fig. 1 The illustration of the Monkey King particle's exploitation vector

FC is a constant that can be considered the fluctuation coefficient of the difference matrix. \(\widehat{X}_{G}\) denotes the Gth generation of \(\widehat{X}\). M is the selection matrix, and \(\overline{M}\) denotes its element-wise binary complement. The selection matrix is generated in two consecutive steps from an extended lower-triangular matrix \(M_{tmp}\), as shown in Eq. 9. The first step is to randomly permute the elements within each row of \(M_{tmp}\), handling each row separately; the second step is to randomly permute the row vectors, leaving the elements within each row unchanged.

$$\begin{aligned} M_{tmp}= \left[ \begin{array}{cccc} 1 & & &\\ 1 & 1 & &\\ & & \cdots &\\ 1 & 1 &\cdots &1\\ 1 & & &\\ 1 & 1 & &\\ & & \cdots &\\ 1 & 1 &\cdots &1\\ 1 & & &\\ 1 & 1 & & \end{array} \right] \sim \left[ \begin{array}{cccc} 1 & 1 & &\\ & \cdots & &\\ 1 & 1 &\cdots & 1 \\ & \cdots & &\\ & & 1&\\ & \cdots & &\\ 1 & \cdots & 1&\\ & \cdots & &\\ & 1 & &\\ 1 & &\cdots & 1 \end{array} \right] = M. \end{aligned}$$
(9)
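Putting Eqs. 6-9 together, the sketch below builds \(M_{tmp}\) by cycling through the rows of a lower-triangular pattern until it has ps rows, applies the two permutation steps to obtain M, and performs one matrix-form generation; fitness evaluation and the book-keeping of the global best are left outside the sketch, and the cyclic stacking of the triangular rows is our reading of the \(M_{tmp}\) shown in Eq. 9.

```python
import numpy as np

def selection_matrix(ps, dim, rng):
    """Selection matrix M of Eq. 9.

    M_tmp stacks the rows of a dim x dim lower-triangular all-ones pattern
    cyclically until it has ps rows; the ones of each row are then scattered
    to random columns (step 1) and the row order is shuffled (step 2).
    """
    M = np.zeros((ps, dim))
    for i in range(ps):
        M[i, : (i % dim) + 1] = 1.0   # row i of the lower-triangular pattern
        rng.shuffle(M[i])             # step 1: permute elements within the row
    rng.shuffle(M)                    # step 2: permute the rows themselves
    return M

def mke_generation(X, gbest, FC=0.7, rng=None):
    """One Monkey King Evolution generation in the matrix form of Eq. 6 (sketch)."""
    rng = rng or np.random.default_rng()
    ps, dim = X.shape
    X_diff = X[rng.permutation(ps)] - X[rng.permutation(ps)]   # Eq. 8
    Xg_next = np.tile(gbest, (ps, 1)) + FC * X_diff            # Eqs. 6-7
    M = selection_matrix(ps, dim, rng)
    return M * X + (1.0 - M) * Xg_next                         # keep or replace each component
```

Each component of a particle is either kept (where its row of M holds a 1) or replaced by the corresponding component of the perturbed global best (where it holds a 0), which is how exploitation around \(X_{gbest}\) is mixed with retention of the current positions.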

Each particle in the MKE algorithm has only one mode, which means that every particle in the population is treated equally; exploration and exploitation are implemented jointly by the exploitation vector and the selection matrix M. In the ebb-tide-fish algorithm, by contrast, particles have two modes during evolution: one for exploitation and the other for exploration. The exploitation of the ebb-tide-fish algorithm is implemented by Eq. 10. As a first enhancement of the ebb-tide-fish algorithm, the difference vector was introduced, changing the evolution equation to Eq. 11.

$$\begin{aligned} X_{etf,G}^i =&\{x_{1},x_{2},\ldots ,x_{j},\ldots ,x_{D}\}, \nonumber \\ x_{j} =&x_{j}\pm r*rand()*x_{j},j \in D. \end{aligned}$$
(10)
$$\begin{aligned} X_{etf,G}^i =&\{x_{1},x_{2},\ldots ,x_{j},\ldots ,x_{D}\}, \nonumber \\ x_{j} =&x_{j} + FC*(x_{r1} - x_{r2}). \end{aligned}$$
(11)

We then found that the two-mode particle evolution consumed much time while improving performance little, so we replaced the two modes with the selection matrix M, which finally led to the Monkey King Evolution algorithm. The distributions of particles over 50 generations, without fitness-based selection, for a population of size 20 in the search domain \([-5,+5]\) on each dimension, are shown in Fig. 2 for both the ebb-tide-fish algorithm and the Monkey King Evolution algorithm.

Fig. 2 Empirical particle positions within 50 generations. Particles undergo no selection according to their fitness values; the range of each dimension is \([-5,+5]\) and the population size is 20. a Ebb-tide-fish algorithm, b Monkey King Evolution algorithm

4 Benchmark functions and algorithm verification

Table 1 Search domain and minimum of CEC2013 benchmark functions

Predicting the performance of an optimization algorithm is difficult, as the available theory is limited; benchmark functions therefore play an important role in analyzing these algorithms. For verifying algorithms intended for a broad class of different problems, carefully selected benchmark functions make more sense than problem-specific functions. In this section the CEC2013 test suite for real-parameter optimization is used to verify the proposed MKE algorithm. The benchmark functions in this test suite are sophisticated functions that reflect many real-world applications. The 28 benchmarks can be split into several groups, such as separable functions, functions with low conditioning, unimodal functions with high conditioning, multimodal functions with global structure, and multimodal functions with weak global structure, to examine different characteristics of optimization algorithms. All these benchmark functions are rotated and shifted, with the global optima located at the same coordinates. The names, domains and optima are listed in Table 1, while the detailed function equations are listed in Table 2.

Table 2 CEC2013 test suite for real-parameter optimization benchmarks

We compare IWPSO [5], CCPSO [6], FIPS [24], CLPSO [10], and SLPSO [13] in our experiments on 10-D optimization. Each algorithm uses its recommended parameter settings, and the population size of every compared algorithm is set to the constant \(ps = 100\). Previous work shows that an empirically good constant \(\omega \) performs better than an \(\omega \) that decreases with the iterations within the range [0.4, 0.9], so we use the constant \(\omega = 0.5\) for IWPSO, and \(\chi = 0.7298\), \(c_{1}=2.05\), \(c_{2}=2.05\) for CCPSO. The maximum velocity is constrained to the full range of each dimension, as this setting is empirically the best. The fluctuation coefficient FC of the MKE algorithm is set to the constant value \(FC = 0.7\).
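For reference, the settings stated above can be collected in a small configuration block; only the values given in this section are included, and the acceleration coefficients of IWPSO are omitted because they are not specified here.

```python
# Parameter settings used in the 10-D experiments (as stated in this section).
EXPERIMENT = {
    "dimension": 10,
    "population_size": 100,
    "IWPSO": {"omega": 0.5},
    "CCPSO": {"chi": 0.7298, "c1": 2.05, "c2": 2.05},
    "MKE": {"FC": 0.7},
    "velocity_clamp": "full range of each dimension",
}
```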

We run each benchmark function of the CEC2013 test suite 20 times, with the number of function evaluations equal to 1,000,000 (10,000 iterations) per run. The best values and the mean/standard-deviation values of the fitness errors (\(f_{target}-f_{optima}\)) for 10-D optimization are listed in Tables 3 and 4, respectively. From Table 3 we can see that the proposed MKE algorithm finds 8 optima out of the 28 benchmarks, and its success rate in finding the global optimum is much higher than that of any of the contrasted algorithms. The proposed MKE algorithm performs very well on unimodal functions such as \(f_1\)-\(f_5\), and it also performs very well on many multimodal benchmark functions such as \(f_6\), \(f_9\), \(f_{11}\), \(f_{17}\), \(f_{21}\), \(f_{24}\), and \(f_{28}\).

Table 3 Comparison results of best values in 20 runs with the same population size (100) and number of function evaluations
Table 4 Comparison results of mean/standard deviation of 20 runs with the same population size and number of function evaluations

The overall performance of our proposed MKE algorithm is much better than that of the other contrasted state-of-the-art PSO variants. Since Table 4 shows that many of the contrasted algorithms can find the global optima of functions \(f_1\) and \(f_5\), we plot the convergence curves of these algorithms in Figs. 3 and 4. The average number of iterations the contrasted algorithms need to find the optima of these two benchmark functions is shown in Table 5.

Fig. 3 Convergence curves of the different algorithms on function \(f_1\)

Fig. 4 Convergence curves of the different algorithms on function \(f_5\)

5 Vehicle navigation under wireless sensor network environment

With the development of micro-electronic technology, wireless sensor networks have been widely used in many applications. Intelligent transportation systems under a wireless sensor network environment show tremendous advantages in collecting vehicle information and in vehicle localization and tracking. As more and more vehicles travel on the roads nowadays, the desire for a better travel path drives the development of vehicle navigation techniques. In this paper, the proposed Monkey King Evolution algorithm is utilized to tackle the vehicle navigation problem, aiming at less travel time and an overall better throughput of a microscopic city traffic network under a wireless sensor network environment. The SUMO platform [25, 26] is used in our simulation, and Fig. 5 shows a small region of the real traffic network of Shenzhen city on the SUMO platform. A grid network on the SUMO platform is also used in our simulation, as shown in Fig. 6.

Table 5 Iteration needed to find the global optima over different algorithms on function \(f_1\) and \(f_5\)

To achieve efficient and robust navigation, end-device sensors are placed at the intersections of the road network to grasp traffic information. The road distance between two intersections, the number of lanes, the maximum velocity restriction, the moving direction, and the safe distance between two vehicles are all collected and used in traffic navigation. In the simulation, ten thousand cars are randomly dispersed on the \(8\times 8\) grid network shown in Fig. 6, and the distance between two neighboring intersection nodes is 2 km. Congestion occurs in two predefined situations: either there are more than two cars within one safe distance on a single lane of the network, or there are more than 89 cars in the interval between two neighboring intersection nodes. In our simulation, the maximum velocity of the urban area is set to 72 km/h for easy calculation, and the safe distance is set to 70 m. Four different techniques are compared for navigation: the Dijkstra algorithm, the A star algorithm, the ebb-tide-fish algorithm [17] and the Monkey King Evolution algorithm proposed in this paper.
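The two congestion conditions can be written as a small predicate, as sketched below; how the per-lane car counts within one safe-distance window are obtained from the sensor nodes is assumed to happen elsewhere.

```python
SAFE_DISTANCE_M = 70        # safe distance between two vehicles
MAX_CARS_PER_EDGE = 89      # more cars than this between two intersections means congestion

def edge_congested(cars_on_edge, cars_per_safe_window_by_lane):
    """Congestion rule of the simulation (sketch of the two predefined cases).

    cars_on_edge: total number of cars between two neighboring intersections.
    cars_per_safe_window_by_lane: for each lane, the largest number of cars
    found inside one safe-distance window (assumed to be reported by the
    sensor nodes at the intersections).
    """
    if cars_on_edge > MAX_CARS_PER_EDGE:
        return True
    return any(count > 2 for count in cars_per_safe_window_by_lane)
```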

In the simulation there are 64 intersection nodes, and the navigation can be transformed into a 64-D optimization problem that minimizes the travel time of a specified vehicle under real-time traffic conditions. The path from the start to the destination is encoded as a sequence such as \(Path_{i}: \{x_{1},\ldots ,5,\ldots ,x_{k},\ldots ,59,\ldots ,x_{64}\}, x_{k} \in [1,64]\). The travel-time fitness function is given in Eq. 12, where \(g(\omega _{i,j})\) denotes the weight function of the edge between intersection nodes i and j. \(T_{i,j}\) is a function of the traffic density obtained from the sensor nodes of the wireless sensor network and represents the time consumed when traveling the interval between intersection nodes i and j, while \(T_{delay}\) denotes the time delay for crossing an intersection. The weight function \(g(\omega _{i,j})\) is calculated with regard to the traffic density, and its value is stored in the adjacency matrix of the nodes. If there is no directly connected edge between two intersection nodes, the weight is set to \(\omega _{i,j} = \infty \), and if an edge of the solution candidate is not part of the navigation, the corresponding weight \(\omega _{i,j}\) is set to 0. Figure 7 shows one navigation run of the grid network simulation: the yellow point is the start, the pink point is the destination, the red points denote congestion, and the green points denote the available navigation path.

$$\begin{aligned} f(x_{1},x_{2},\ldots ,x_{64}) = \displaystyle {\sum _{i=1,j=i+1}^{63}{(g(\omega _{i,j})*T_{i,j}+T_{delay}).}} \end{aligned}$$
(12)
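A direct transcription of Eq. 12 for one candidate path might look as follows; treating missing edges as infeasible (infinite fitness) follows the \(\omega _{i,j} = \infty \) convention above, while the value of \(T_{delay}\) is left as a parameter since it is not given, and node ids are assumed 0-based in this sketch.

```python
import numpy as np

def travel_time(path, weight, T, T_delay):
    """Fitness of Eq. 12 for one candidate node sequence (sketch).

    path:   sequence of intersection node ids, e.g. [0, 5, ..., 59].
    weight: 64 x 64 matrix of g(w_ij); np.inf marks node pairs with no
            direct edge, 0 marks edges that are not part of the navigation.
    T:      64 x 64 matrix of sensed travel times T_ij between nodes.
    """
    total = 0.0
    for i, j in zip(path[:-1], path[1:]):
        if np.isinf(weight[i][j]):
            return np.inf                 # infeasible candidate: no edge i -> j
        total += weight[i][j] * T[i][j] + T_delay
    return total
```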
Fig. 5 Small region of real traffic networks in Shenzhen city on SUMO platform

Fig. 6 Grid network used in our simulation for vehicle navigation

Fig. 7 One navigation of randomly generated start and destination in the grid network simulation

In the Monkey King Evolution based navigation, each particle is initialized as a randomly generated traversal sequence of the nodes in the simulation, and after the fitness values are calculated we can locate the temporary global best. The Hamming distance [27] in Eq. 13 is used in the candidate updating scheme: the distance between two solution candidates is calculated as in Eq. 13, where \(\oplus \) denotes the XOR operation, and \(x_{j}\) and \(x_{gbest_j}\) denote the jth components of the solution candidate \(X_{i}\) and of the global best candidate, respectively. Equation 6 can then be rewritten in terms of the Hamming distance as illustrated in Eq. 14, where \(\widehat{X}_{diff}\) denotes the Hamming distance between \(\widehat{X}_{r1}\) and \(\widehat{X}_{r2}\), and \(\widehat{X}_{gbest,G+1}\) is a generated coordinate whose Hamming distance \(d(\widehat{X}_{gbest,G+1}, \widehat{X}_{gbest,G})\) equals \(FC*\widehat{X}_{diff}\).

Table 6 The comparison of average fuel consumption and time consumption for 200 times navigation
Table 7 The comparison of average fuel consumption and time consumption for 2000 times navigation
$$\begin{aligned} d(X_{gbest}, X_{i}) = \displaystyle {\sum _{j=1}^{D} x_{gbest_j}\oplus x_j} \end{aligned}$$
(13)
$$\begin{aligned} {\left\{ \begin{array}{ll} \widehat{X}_{diff} = d(X_{r1},X_{r2})\\ d(\widehat{X}_{gbest,G+1}, \widehat{X}_{gbest,G}) = FC*\widehat{X}_{diff}\\ \widehat{X}_{G+1} = M\otimes \widehat{X}_{G}+Bias\\ Bias = \overline{M}\otimes \widehat{X}_{gbest,G+1}.\\ \end{array}\right. } \end{aligned}$$
(14)
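One plausible reading of Eq. 13 for node-id sequences treats the per-component XOR as a mismatch indicator, which gives the usual Hamming distance for symbol sequences; a bitwise XOR of the integer ids would give a different quantity, so this interpretation is an assumption.

```python
def hamming(a, b):
    """Hamming distance of Eq. 13: number of positions where the two sequences differ."""
    return sum(1 for u, v in zip(a, b) if u != v)

# example with two candidate node sequences of equal length
print(hamming([1, 5, 9, 12], [1, 7, 9, 13]))   # -> 2
```

Under this reading, the update of Eq. 14 generates a new candidate whose Hamming distance from the current global best equals \(FC*\widehat{X}_{diff}\), which can be implemented by changing roughly that many node positions.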

In our simulation we conduct two experiments: in the first we run 200 navigations and calculate the average travel time, and in the second we run 2000. The comparison results are shown in Tables 6 and 7, respectively. Both the Dijkstra algorithm and the A star algorithm find the shortest paths in the two experiments, and they consume the same travel time to complete the journey from the start to the destination. There are cases in which congestion lies on the shortest path, so the least-travel-time navigation may not follow the shortest path. From the two tables we can also see that the travel times of the ebb-tide-fish algorithm and the Monkey King Evolution algorithm are the same in experiment one (results shown in Table 6), as the least-travel-time path can be found by both algorithms within the predefined time span. In the second experiment, the Monkey King Evolution algorithm performs better, as the ebb-tide-fish algorithm cannot find the least-travel-time path within the restricted time.

6 Conclusion

In this paper we propose the MKE algorithm, an enhanced version of the previously proposed ebb-tide-fish algorithm, for global optimization. The newly proposed algorithm is a powerful technique for tackling optimization problems, and the CEC2013 test suite of real-parameter optimization benchmarks is used to verify its characteristics. Comparisons are made between state-of-the-art PSO variants (including IWPSO, CCPSO, FIPS, CLPSO, and SLPSO) and the proposed algorithm, and the experimental results show that our algorithm performs very well not only on unimodal but also on multimodal optimization functions. An application of the MKE algorithm to vehicle navigation under a wireless sensor network environment is also presented, and the experimental results show that it performs well on this application too.