Abstract
The PSOGSA is a novel hybrid optimization algorithm that combines the strengths of both particle swarm optimization (PSO) and the gravitational search algorithm (GSA). It has been shown that this algorithm outperforms both PSO and GSA in terms of improved exploration and exploitation. The original version of this algorithm is well suited for problems with continuous search spaces. Some problems, however, have binary parameters. This paper proposes a binary version of the hybrid PSOGSA, called BPSOGSA, to solve these kinds of optimization problems. The paper also considers the integration of adaptive values to further balance the exploration and exploitation of BPSOGSA. In order to evaluate the efficiency of the proposed binary algorithm, 22 benchmark functions are employed, divided into three groups: unimodal, multimodal, and composite. The experimental results confirm the better performance of BPSOGSA compared with the binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), and the genetic algorithm (GA) in terms of avoiding local minima and convergence rate.
1 Introduction
Recently, nature-inspired stochastic optimization techniques have received much attention. Such techniques mostly mimic the social or individual behavior of groups of animals or natural phenomena. They start the optimization process by creating a set of random solutions and improve them as candidate solutions for a particular problem. Due to their superior performance compared to mathematical optimization approaches, stochastic optimization methods have found applications in many different fields. It has been logically proven by the well-known No Free Lunch (NFL) theorem that there is no single optimization technique capable of solving all optimization problems [1]. This theorem therefore motivates researchers to propose new optimization techniques or improve current algorithms in order to solve a wider range of problems. Some of the most popular and well-known algorithms are particle swarm optimization (PSO) [2], the genetic algorithm (GA) [3], the differential evolution algorithm (DE) [4], simulated annealing (SA) [5], harmony search (HS) [6], ant colony optimization (ACO) [7], the gravitational search algorithm (GSA) [8], the biogeography-based optimization algorithm (BBO) [9–12], the grey wolf optimizer (GWO) [13], and the krill herd (KH) algorithm [14–20].
The literature shows that hybridizing stochastic optimization techniques is one way to design superior algorithms that exploit the advantages of multiple algorithms when solving optimization problems [21–28]. The PSOGSA algorithm is a novel hybrid of PSO and GSA, proposed in 2010 [29]. It has been shown that this algorithm performs well in solving optimization problems. In this algorithm, the search is carried out by agents that mimic the behavior of GSA in the exploration phase and PSO in the exploitation phase. In fact, this algorithm was proposed to alleviate the slow exploitation of the GSA algorithm, which was identified as one of its main drawbacks.
Since the proposal of the GSA algorithm, many studies have aimed at improving its performance. In 2011, Hatamlou et al. [30] hybridized GSA with another heuristic method to improve the solutions obtained by GSA for clustering problems. In 2012, an opposition-based learning GSA [31] and the Immune Gravitation Optimization Algorithm (IGOA) [32] were employed to improve the convergence speed of GSA. The latter study incorporated the characteristics of antibody diversity and vaccination into the GSA algorithm. Similarly to PSOGSA [29], the social and individual thinking of PSO were integrated into GSA for solving a continuous problem, namely parameter identification of a hydraulic turbine governing system [33].
The GSA algorithm was also modified by Rashedi et al. [34] to solve binary problems. Since BGSA utilizes the same concepts as GSA, however, it suffers from the same drawbacks mentioned above. Needless to say, binary versions of the improved GSA algorithms in the literature would also outperform BGSA if they were designed to carry the same concepts from continuous to binary optimization. This motivated the authors to extend the PSOGSA algorithm, one of the best improved GSA variants, to a binary version and investigate its performance, since the original PSOGSA was designed for problems with continuous real search spaces (domains) [29].
In the real world, there are many optimization problems with discrete binary search spaces (for instance, dimensionality reduction or feature selection in different fields). There are different methods in the literature for adapting a continuous algorithm to solve binary problems as well. Some of these methods change the structure of the algorithm, whereas others maintain the algorithm's mechanism in binary search spaces. In the literature, stochastic optimization algorithms such as HS, DE, PSO, and GSA [35] have been adapted to solve binary problems. The binary HS algorithm utilizes a set of harmony search considerations and pitch adjustment rules to perform binary optimization [36]. In addition, binary DE uses a probability estimation operator in order to solve discrete problems [37]. The mechanisms of binary PSO [38] and binary GSA [34] are quite similar, since both use transfer functions to solve binary problems. The difference between BPSO and BGSA and the other above-mentioned binary optimization methods is that these two algorithms retain the same position and velocity updating procedures.
In this paper, a binary version of PSOGSA, called BPSOGSA, is introduced in order to solve binary problems. A transfer function and a new position updating procedure are integrated into BPSOGSA. The paper also proposes adaptive values for BPSOGSA in order to balance exploration and exploitation.
The rest of the paper is organized as follows. Section 2 presents a brief introduction to the hybrid PSOGSA. Section 3 discusses the basic principles of the binary version of PSOGSA. The experimental results are demonstrated in Sect. 4. Finally, Sect. 5 concludes the work and suggests directions for future research.
2 The hybrid PSOGSA
The hybrid PSOGSA algorithm was introduced by Mirjalili and Mohd Hashim [29]. The basic idea of PSOGSA is to combine the social thinking ability (gbest) of PSO with the exploration capability of GSA. The PSOGSA algorithm was mathematically modeled as follows:
Similarly to PSO and GSA, every search agent has a position vector reflecting its current position in the search space as follows:
where N is the number of search agents, d is the dimension of the problem, and \(x_i^d\) is the position of the ith agent in the dth dimension.
So the whole population is represented as a matrix as follows:
There is another matrix for storing the corresponding fitness value for each search agent as follows:
The optimization process begins with filling out the position matrix by random values. During optimization, the gravitational force from agent j on agent i at a specific time t is defined as follows:
where \(M_{aj}\) is the active gravitational mass of agent j, \(M_{pi}\) is the passive gravitational mass of agent i, \(G(t)\) is the gravitational constant at time t, \(\varepsilon\) is a small constant, and \(R_{ij}(t)\) is the Euclidean distance between the two agents i and j.
The gravitational constant (G) and the Euclidean distance between two agents i and j are calculated as follows:
where α is the descending coefficient, \(G_0\) is the initial gravitational constant, iter is the current iteration, and maxiter is the maximum number of iterations.
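Since the force and gravitational-constant equations appear only as rendered images in this version of the text, the following NumPy sketch reconstructs them from the symbol definitions above. The function names and the default value of \(\varepsilon\) are our own assumptions, not the paper's:

```python
import numpy as np

def gravitational_constant(g0, alpha, iteration, max_iter):
    # Eq. (2.3): G(t) decays exponentially from G0 over the run,
    # controlled by the descending coefficient alpha
    return g0 * np.exp(-alpha * iteration / max_iter)

def force_on_i_from_j(g, m_pi, m_aj, x_i, x_j, eps=1e-10):
    # Eq. (2.2): component-wise force exerted by agent j on agent i.
    # R_ij is the Euclidean distance between the two position vectors;
    # the (x_j - x_i) factor points the force toward agent j.
    r_ij = np.linalg.norm(x_j - x_i)
    return g * (m_pi * m_aj) / (r_ij + eps) * (x_j - x_i)
```

Note that GSA divides by the distance \(R_{ij}\) itself rather than its square, which is a deliberate departure from physical gravity in the original GSA paper [8].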
In a problem space of dimension d, the total force acting on agent i is calculated by the following equation:
where \(rand_j\) is a random number generated with uniform distribution in the interval [0, 1].
The law of motion is also utilized in this algorithm: the acceleration of a mass is proportional to the resultant force and inversely proportional to its mass. The accelerations of all agents are therefore calculated as follows:
where d is the dimension of the problem, t is a specific time, and \(M_{ii}\) is the inertial mass of agent i.
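Putting the total-force and acceleration steps together, a self-contained sketch (names and \(\varepsilon\) assumed; a vectorized form would be preferred in practice) might look like:

```python
import numpy as np

def accelerations(positions, masses, g, eps=1e-10):
    """Eq. (2.5): total force on each agent as a randomly weighted sum
    of the forces from all other agents, then Eq. (2.6): divide by the
    agent's inertial mass to obtain its acceleration."""
    n, d = positions.shape
    acc = np.zeros((n, d))
    for i in range(n):
        total = np.zeros(d)
        for j in range(n):
            if j == i:
                continue
            r = np.linalg.norm(positions[j] - positions[i])
            f = g * masses[i] * masses[j] / (r + eps) \
                * (positions[j] - positions[i])
            total += np.random.rand(d) * f  # rand_j in [0, 1]
        acc[i] = total / (masses[i] + eps)
    return acc
```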
During optimization, the best solution obtained so far is saved in the gbest variable, inspired by PSO. The reason for using the best solution is that GSA suffers from slow exploitation and deteriorates in the final iterations [29, 39, 40]. In GSA, the movements of the masses are calculated based on their weights, and the weights are directly derived from the fitness function. Thus, masses with better fitness values are considered heavy objects and consequently move slowly. According to the concepts of evolutionary algorithms, particles should wander through the search space in the initial iterations. Then, after finding a good solution, they should gather around it in order to exploit the best solution. In GSA, the masses become heavier and heavier over the run. In the final iterations, the masses have almost the same weights due to gathering around a promising solution, so they attract each other with approximately the same intensity of gravitational force. Therefore, they are not able to move toward the best solution quickly. This problem can be seen in Fig. 1, which shows a simple one-dimensional problem with fitness function \(Y = X^2\). The problem is even more critical in high-dimensional problems.
In order to see how PSOGSA alleviates this drawback, a conceptual model is illustrated in Fig. 2. This figure shows that the hybrid PSOGSA employs the best solution obtained so far to guide the heavy masses toward the global optimum. This method also speeds up the overall movement of the masses, enhancing the exploitation ability of PSOGSA.
Eq. (2.9) was proposed for combining PSO and GSA as follows:
where \(V_i(t)\) is the velocity of agent i at iteration t, \(c_j'\) is an accelerating factor, rand is a random number generated with uniform distribution between 0 and 1, \(ac_i(t)\) is the acceleration of agent i at iteration t, and gbest is the best solution obtained so far.
In each iteration, the positions of agents are updated as follows:
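The combined velocity and position updates described above can be sketched as follows. The equations themselves are not reproduced in this version of the text, so this NumPy snippet reconstructs them from the symbol definitions (the inertia weight w and the function name are our own assumptions):

```python
import numpy as np

def update_agents(positions, velocities, acc, gbest, w, c1, c2):
    # Eq. (2.9): V(t+1) = w*V(t) + c1'*rand*ac(t) + c2'*rand*(gbest - X(t))
    # Eq. (2.10): X(t+1) = X(t) + V(t+1)
    n, d = positions.shape
    velocities = (w * velocities
                  + c1 * np.random.rand(n, d) * acc
                  + c2 * np.random.rand(n, d) * (gbest - positions))
    return positions + velocities, velocities
```

The gbest term is the PSO contribution (social component); the acceleration term carries the GSA contribution (gravitational interaction among all masses).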
In PSOGSA, all agents are first randomly initialized using a uniform distribution. Each agent is considered a candidate solution. After initialization, the gravitational forces, gravitational constant, and resultant forces among agents are calculated by Eqs. 2.2, 2.3, and 2.5, respectively. After that, the accelerations of the particles are defined by Eq. 2.6. In each iteration, the best solution attained so far is updated. After calculating the accelerations and updating the best solution, the velocities of all agents are calculated by Eq. 2.9. Finally, the positions of the agents are updated by Eq. 2.10. The process of updating velocities and positions stops when an end criterion is met. The steps of the hybrid PSOGSA are represented in Fig. 3.
It has been experimentally shown that PSOGSA solves optimization problems better than both PSO and GSA [29]. However, the original PSOGSA is a continuous algorithm, which cannot solve binary problems directly. In the following section, the binary version of PSOGSA is proposed and investigated.
3 Binary version of PSOGSA (BPSOGSA)
In the original PSOGSA, agents can move continuously around the search space because their position vectors have continuous real domains. In order to make the search agents move in a binary search space, the position updating rule (Eq. 2.10) has to be modified. According to [34, 38], a transfer function is also needed to change the position of an agent with a probability derived from its velocity. Transfer functions map velocity values to probability values for updating the positions. A set of standard transfer functions can be found in [41]. According to Mirjalili and Lewis [41], a transfer function should provide a high probability of changing the position for a large absolute value of the velocity, and a small probability for a small absolute value of the velocity. Moreover, the range of a transfer function should be bounded in the interval [0, 1] and increase with increasing velocity. The function utilized in [34] is presented as Eq. 3.1 and depicted in Fig. 4.
We use this equation to map the velocities of agents in BPSOGSA to probabilities of flipping the elements of their position vectors. After calculating the probabilities, the agents update their positions based on the rules presented in Eq. 3.2.
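Since Eqs. 3.1 and 3.2 are not reproduced in this version of the text, the following sketch assumes the V-shaped transfer function \(|\tanh(v)|\) commonly given for BGSA [34]; the function names are our own:

```python
import numpy as np

def transfer(v):
    # Assumed Eq. (3.1): V-shaped transfer function from BGSA,
    # mapping a velocity to a bit-flip probability in [0, 1)
    return np.abs(np.tanh(v))

def binary_position_update(positions, velocities):
    # Eq. (3.2): flip each bit with probability given by the transfer
    # function of its velocity; otherwise keep the current bit value
    flip = np.random.rand(*positions.shape) < transfer(velocities)
    return np.where(flip, 1 - positions, positions)
```

This satisfies the requirements listed above: large |v| gives a high flip probability, small |v| a low one, and the range is bounded in [0, 1].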
The pseudocode for the general steps of BPSOGSA is presented in Fig. 5.
It should be noted that we utilize adaptive values for \(c_1'\) and \(c_2'\) in this study. According to Mirjalili and Lewis [42], adding gbest to the velocity vector weakens the exploration phase, since it establishes a permanent element of the velocity update. In order to resolve this issue, we utilize adaptive values for \(c_1'\) and \(c_2'\) as follows [42]:
where t indicates the current iteration and T is the maximum number of iterations.
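The adaptive formulas themselves are not reproduced in this version of the text; the cubic schedule below follows the form given in [42] and should be treated as an assumption. It matches the described behavior: \(c_1'\) decays from 2 to 0 while \(c_2'\) grows from 0 to 2 over the run:

```python
def adaptive_coefficients(t, T):
    # Assumed cubic schedule from [42]: early iterations weight the GSA
    # acceleration term (exploration), late iterations weight the
    # gbest term (exploitation)
    c1 = -2.0 * (t ** 3) / (T ** 3) + 2.0
    c2 = 2.0 * (t ** 3) / (T ** 3)
    return c1, c2
```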
The behavior of these two variables is illustrated in Fig. 6. We adaptively decrease \(c_1'\) and increase \(c_2'\) so that the masses tend to accelerate toward the best solution as the algorithm reaches the exploitation phase. Since there is no clear border between the exploration and exploitation phases in evolutionary algorithms, such an adaptive method is one of the best options for allowing an algorithm to gradually transition between these two phases.
Theoretically, the BPSOGSA algorithm has the potential to outperform both BPSO and BGSA since it uses the same concepts as PSOGSA. In the following section, a comparative study is provided in order to evaluate the performance of BPSOGSA in practice. Note that the source code of this algorithm can be found at http://www.alimirjalili.com/Projects.html.
4 Experimental results and discussion
In this section, 22 benchmark functions, divided into three groups (unimodal, multimodal, and composite), are chosen to evaluate the performance of BPSOGSA [43–47]. Tables 1, 2 and 3 present these benchmark functions, the range of the search space, and the optimum values.
A comparative study with BGSA, BPSO, and GA is conducted to verify the performance of BPSOGSA. The dimension of all benchmark functions is set to 5 (m = 5). Fifteen bits are used to represent each variable in binary format (one bit is reserved for the sign). Therefore, the dimension of the test functions and search agents is 75. Note that the initial values of the basic parameters of the algorithms are provided in Table 4.
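The paper does not spell out how a 15-bit string maps back to a real variable value; one plausible sign-magnitude decoding consistent with the description above (one sign bit plus 14 magnitude bits scaled into the variable's range) is sketched below. The function name and scaling are hypothetical:

```python
def decode_variable(bits, upper=100.0):
    # Hypothetical decoding: bit 0 is the sign, the remaining 14 bits
    # encode the magnitude, scaled into [-upper, upper]
    sign = -1.0 if bits[0] == 1 else 1.0
    magnitude = int("".join(str(b) for b in bits[1:]), 2)
    return sign * upper * magnitude / (2 ** 14 - 1)
```

With m = 5 variables at 15 bits each, an agent's binary position vector has 5 × 15 = 75 elements, matching the stated dimension.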
The experimental results are presented in Tables 5, 6 and 7. The results are collected over 10 independent runs. The average (ave) and standard deviation (std) of the best solutions obtained in the final iterations are reported; the best results are indicated in bold type.
4.1 Unimodal benchmark functions
Unimodal test functions are useful for examining the exploitation of algorithms. As per the results in Table 5, the BPSOGSA algorithm shows the best results on three of the unimodal benchmark functions in terms of the mean and standard deviation of the results. According to the statistical results in Fig. 7, BPSOGSA and BPSO each provide the best results on 43 % of the unimodal benchmark functions, followed by GA with 14 %. BGSA did not perform well on this set of functions due to the above-mentioned drawback (slow exploitation). This is evidence that the proposed algorithm has high performance in finding the global solutions of unimodal benchmark functions.
According to the convergence curves of the algorithms on the unimodal test functions in Fig. 8, the BPSOGSA and BPSO algorithms have the fastest convergence rates. These findings suggest that the BPSOGSA algorithm offers the best exploitation ability and convergence rate.
4.2 Multimodal benchmark functions
The results of the multimodal benchmark functions are provided in Table 6 and Figs. 9, 10. Multimodal test functions are helpful for benchmarking local optima avoidance of the algorithms. According to the results of Table 6 and Fig. 9, BPSOGSA shows the best results in four of the multimodal benchmark functions (45 % of the functions). However, the BPSO and GA algorithms outperform BPSOGSA in two (22 %) and three (33 %) multimodal test functions, respectively. Once again, the BGSA algorithm could not provide competitive results.
The convergence curves in Fig. 10 show that BPSOGSA has the fastest convergence behavior on four out of nine functions. These findings indicate that the BPSOGSA algorithm is able to avoid local optima, and that this avoidance has no negative impact on the convergence speed. In addition, the results evidence the high exploration ability of BPSOGSA.
4.3 Composite benchmark functions
Composite functions are the most challenging test functions and are suitable for benchmarking exploration and exploitation combined. The results in Table 7 and Figs. 11, 12 show that BPSOGSA is evidently better than the other algorithms on 84 % of the composite functions. For function \(f_{22}\), BGSA has the best results, followed by BPSOGSA.
The convergence of the algorithms on the composite functions is illustrated in Fig. 12. This figure shows that BPSOGSA has the fastest convergence rate. These findings indicate that BPSOGSA properly and efficiently balances exploration and exploitation, owing to the adaptive values employed for \(c_{1}^{'}\) and \(c_{2}^{'}\).
Figure 13 shows the overall statistical results on all the benchmark functions: the BPSOGSA algorithm achieves the best result on 55 % of them.
To summarize, the results show that BPSOGSA performs better than BGSA, BPSO, and GA. Therefore, it can be said that the binary version of the hybrid PSOGSA also inherits the strengths of both PSO and GSA in solving binary problems. According to this comprehensive comparative study and discussion, we state that this algorithm has merit among binary algorithms.
The high performance of BPSOGSA on the unimodal test functions is due to the fact that the algorithm has higher exploitation than GSA: the social component of PSO allows BPSOGSA to exploit accurately around the best mass obtained so far. The high exploration of the BPSOGSA algorithm originates from an intrinsic characteristic of the GSA algorithm, wherein all search agents influence each other at each iteration (in contrast to the search agents of PSO, which only see pbest and gbest). This high exploration ability helps BPSOGSA to outperform the other algorithms on the multimodal test functions. The significantly better results of BPSOGSA on the composite test functions are due to the adaptive values for \(c_{1}^{'}\) and \(c_{2}^{'}\). These two variables require BPSOGSA to balance exploration and exploitation: exploration is emphasized in the initial iterations, while exploitation is promoted as the iterations increase. This causes local minima avoidance and accelerated convergence toward the global optimum over the course of iterations. Finally, the employed V-shaped transfer function obliges the search agents of BPSOGSA to switch their positions when they are moving into unpromising areas of the search space, making it more likely that they move toward promising areas, in contrast to S-shaped transfer functions.
5 Conclusion
In this paper, a binary version of the hybrid PSOGSA, called BPSOGSA, was proposed by utilizing the same search-behavior concepts as the continuous version. In order to evaluate the performance of BPSOGSA, 22 benchmark functions were employed, and the results were compared with those of BGSA, BPSO, and GA. The results showed that BPSOGSA provides competitive results and has merit among binary heuristic optimization algorithms. According to the findings, the BPSOGSA algorithm successfully inherits the advantages of PSOGSA. On the one hand, BPSOGSA shows superior exploration, since all search agents participate in updating the position of each search agent. On the other hand, the exploitation of BPSOGSA is very accurate due to the integrated social component of PSO, which accelerates convergence. The findings also showed that the adaptive values for \(c_{1}^{'}\) and \(c_{2}^{'}\) balance exploration and exploitation, so BPSOGSA is able to avoid local optima and converge to the promising regions of the search space. The transfer function employed also proved advantageous, since it does not oblige search agents to take values of 0 or 1.
For future studies, it is recommended to apply BPSOGSA to real-world optimization problems in order to further evaluate its efficiency. Investigating the effect of different transfer functions on BPSOGSA would also be interesting.
References
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1:67–82
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol 4, pp 1942–1948
Holland JH (1992) Genetic algorithms. Sci Am 267:66–72
Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11:341–359
Aarts EHL, Laarhoven PJM (1989) Simulated annealing: an introduction. Stat Neerl 43:31–52
Geem ZW, Kim JH (2001) A new heuristic optimization algorithm: harmony search. Simulation 76:60–68
Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1:28–39
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179:2232–2248
Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12:702–713
Mirjalili S, Mirjalili SM, Lewis A (2014) Let a biogeography-based optimizer train your multi-layer perceptron. Inf Sci 269:188–209. doi:10.1016/j.ins.2014.01.038
Saremi S, Mirjalili S, Lewis A (2014) Biogeography-based optimisation with chaos. Neural Comput Appl 1–21. doi:10.1007/s00521-014-1597-x
Saremi S, Mirjalili S (2013) Integrating chaos to biogeography-based optimization algorithm. Int J Comput Commun Eng 2:655–658
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61. doi:10.1016/j.advengsoft.2013.12.007
Gandomi AH, Alavi AH (2012) Krill herd: a new bio-inspired optimization algorithm. Commun Nonlinear Sci Numer Simul 17:4831–4845
Guo L, Wang G-G, Gandomi AH, Alavi AH, Duan H (2014) A new improved krill herd algorithm for global numerical optimization. Neurocomputing 138:392–402
Wang G-G, Gandomi AH, Alavi AH (2013) An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl Math Model 38(9–10):2454–2462
Wang G-G, Gandomi AH, Alavi AH (2014) Stud krill herd algorithm. Neurocomputing 128:363–370
Wang G-G, Guo L, Gandomi AH, Hao G-S, Wang H (2014) Chaotic Krill Herd algorithm. Inf Sci 274:17–34
Wang G, Guo L, Wang H, Duan H, Liu L, Li J (2012) Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput Appl 1–19. doi:10.1007/s00521-012-1304-8
Saremi S, Mirjalili SM, Mirjalili S (2014) Chaotic Krill Herd optimization algorithm. Procedia Technol 12:180–185. doi:10.1016/j.protcy.2013.12.473
Esmin A, Lambert-Torres G, Alvarenga GB (2006) Hybrid evolutionary algorithm based on PSO and GA mutation. In: Sixth international conference on hybrid intelligent systems, pp 57–57
Holden N, Freitas AA (2008) A hybrid PSO/ACO algorithm for discovering classification rules in data mining. J Artif Evol Appl 2008:2
Holden NP, Freitas AA (2007) A hybrid PSO/ACO algorithm for classification. In: GECCO '07 proceedings of the 9th annual conference companion on genetic and evolutionary computation, pp 2745–2750
Lai X, Zhang M (2009) An efficient ensemble of GA and PSO for real function optimization. In: 2nd IEEE international conference on computer science and information technology, pp 651–655
Niu B, Li L (2008) A novel PSO-DE-based hybrid algorithm for global optimization. In: Advanced intelligent computing theories and applications. With aspects of artificial intelligence, pp 156–163
Zhang WJ, Xie XF (2003) DEPSO: hybrid particle swarm with differential evolution operator. In: IEEE international conference on systems, man and cybernetics, vol 4, pp 3816–3821
Wang G-G, Gandomi AH, Alavi AH, Hao G-S (2013) Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Comput Appl 1–12. doi:10.1007/s00521-013-1485-9
Wang G-G, Gandomi AH, Alavi AH (2013) A chaotic particle-swarm krill herd algorithm for global numerical optimization. Kybernetes 42(6):6962–6978
Mirjalili S, Hashim SZM (2010) A new hybrid PSOGSA algorithm for function optimization. In: 2010 international conference on computer and information application (ICCIA), pp 374–377. doi:10.1109/ICCIA.2010.6141614
Hatamlou A, Abdullah S, Othman Z (2011) Gravitational search algorithm with heuristic search for clustering problems. In: 3rd conference on data mining and optimization (DMO), pp 190–193
Shaw B, Mukherjee V, Ghoshal SP (2012) A novel opposition-based gravitational search algorithm for combined economic and emission dispatch problems of power systems. Int J Electr Power Energy Syst 35:21–33
Zhang Y, Wu L, Zhang Y, Wang J (2012) Immune gravitation inspired optimization algorithm. In: Huang D-S, Gan Y, Bevilacqua V, Figueroa J (eds) Advanced intelligent computing, vol 6838. Springer, Berlin, pp 178–185
Li C, Zhou J (2011) Parameters identification of hydraulic turbine governing system using improved gravitational search algorithm. Energy Convers Manag 52:374–381
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2010) BGSA: binary gravitational search algorithm. Nat Comput 9:727–745
Rashedi E, Nezamabadi S, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179:2232–2248
Wang L, Xu Y, Mao Y, Fei M (2010) A discrete harmony search algorithm. Life Syst Model Intell Comput 37–43
Wang L, Fu X, Menhas M, Fei M (2010) A modified binary differential evolution algorithm. Life Syst Model Intell Comput 6329:49–57
Kennedy J, Eberhart RC (1997) A discrete binary version of the particle swarm algorithm. In: IEEE international conference on systems, man, and cybernetics, vol 5, pp 4104–4108
Mirjalili S, Mohd Hashim SZ, Moradian Sardroudi H (2012) Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Appl Math Comput 218(22):11125–11137. doi:10.1016/j.amc.2012.04.069
Mirjalili S (2011) Hybrid particle swarm optimization and gravitational search algorithm for multilayer perceptron learning. Universiti Teknologi Malaysia, Faculty of Computer Science and Information System, Master thesis
Mirjalili S, Lewis A (2013) S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol Comput 9:1–14. doi:10.1016/j.swevo.2012.09.002
Mirjalili S, Lewis A (2014) Adaptive gbest-guided gravitational search algorithm. Neural Comput Appl. doi:10.1007/s00521-014-1640-y
Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3:82–102
Yang XS (2010) Engineering optimization: an introduction with metaheuristic applications. Wiley, London
Molga M, Smutnicki C (2005) Test functions for optimization needs. http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf
Digalakis J, Margaritis K (2001) On benchmarking functions for genetic algorithms. Int J Comput Math 77:481–506
Liang J, Suganthan P, Deb K (2005) Novel composition test functions for numerical global optimization. In: Proceedings of the IEEE swarm intelligence symposium, pp 68–75
Mirjalili, S., Wang, GG. & Coelho, L.S. Binary optimization using hybrid particle swarm optimization and gravitational search algorithm. Neural Comput & Applic 25, 1423–1435 (2014). https://doi.org/10.1007/s00521-014-1629-6