Abstract
Particle Swarm Optimization (PSO) is one of the best-known nature-inspired algorithms in the swarm intelligence family. It has already been used successfully on continuous problems; however, it has not been explored enough in the discrete domain. In this work we introduce new operators dedicated to combinatorial search and implement them in a modified discrete particle swarm optimization, called DPSO-CO, to solve the travelling salesman problem. The experimental results on a set of different instances, and a comparison with other adaptations, show that adopting these new combinations and operators yields a truly competitive and efficient algorithm for operational research.
1 Introduction
Metaheuristic algorithms are a family of nature-inspired methods, generally based on probabilistic search techniques, that find a good solution within a reasonable time. Today, metaheuristics are increasingly used to solve real single- and multi-objective engineering problems; before applying a metaheuristic to a real problem, researchers test its performance on classical problems. Among combinatorial problems, the travelling salesman problem (TSP) is considered the most famous; it belongs to the class of NP-hard problems [1]. Its resolution consists in finding a short tour through a set of cities that the salesman has to visit, finishing his journey at the initial starting point. Furthermore, the TSP has several variations in different areas, which makes its resolution attractive for computer wiring, vehicle routing and scheduling problems. Numerous metaheuristics have been published for the TSP, such as the genetic algorithm [2], harmony search [3], particle swarm optimization [4], ant colony optimization [5] and bee colony optimization [6].
Particle swarm optimization (PSO) [7] is one of the best-known optimization algorithms; it is inspired by the collective intelligence of bird flocks and is classified as a swarm intelligence algorithm. The principle of PSO rests on the collaboration of all particles in the swarm, where each particle seeks a good position and moves with a specific velocity. The algorithm defines a rule that lets each particle follow the best position found in the swarm without abandoning its own search, so that it always keeps looking for something better. Several adaptations of PSO, called discrete particle swarm optimization (DPSO), have been proposed [8,9,10] for different combinatorial optimization problems; one of the best known was introduced by Clerc [11], but experience has shown that these DPSO adaptations do not give results as competitive as other metaheuristics.
This work presents a new adaptation of DPSO, named DPSO-CO, which introduces new combinatorial operators following Clerc's idea while unifying the dimensions of position and velocity, resulting in a high-performance DPSO. The rest of this paper is organized as follows: Sect. 2 defines the travelling salesman problem and Sect. 3 describes the particle swarm optimization algorithm, while Sect. 4 presents the new operators and the new adaptation. Section 5 applies the proposal and studies the experimental results, and Sect. 6 gives the conclusion and perspectives.
2 Travelling Salesman Problem
The travelling salesman problem consists in finding the shortest path for the salesman to take in order to visit all cities, while respecting two rules: first, he must visit each city exactly once; second, he must finish at the starting point. Mathematically, a solution of this problem is a permutation π of n elements, where n is the number of cities and the i-th element π<sub>i</sub> indicates the i-th city to be visited; therefore, the objective function to be minimized is the total length of the closed tour:

\( f(\pi) = \sum\nolimits_{i = 1}^{n - 1} d\left( {\pi_{i} ,\pi_{i + 1} } \right) + d\left( {\pi_{n} ,\pi_{1} } \right) \)  (1)

where \( d(a, b) \) denotes the distance between cities a and b.
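The objective function above can be sketched in a few lines of C++ (the `City` struct and function names here are illustrative, not taken from the paper; Euclidean distances are assumed):

```cpp
#include <cmath>
#include <vector>

// A city as planar coordinates (Euclidean TSP).
struct City { double x, y; };

// Euclidean distance between two cities.
double dist(const City& a, const City& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Length of the closed tour given by the permutation pi over the city list:
// the sum of consecutive distances plus the edge back to the starting city.
double tourLength(const std::vector<City>& cities, const std::vector<int>& pi) {
    double total = 0.0;
    for (std::size_t i = 0; i + 1 < pi.size(); ++i)
        total += dist(cities[pi[i]], cities[pi[i + 1]]);
    return total + dist(cities[pi.back()], cities[pi.front()]);
}
```

For instance, visiting the four corners of a unit square in order gives a tour of length 4.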
3 Particle Swarm Optimization
In 1995, Kennedy and Eberhart introduced particle swarm optimization [12], a search algorithm based on cooperation and information sharing within a defined search space in order to find a good food source, mimicking the social behaviour of bird flocking and fish schooling. In the search space, the algorithm launches a set of individuals, called "particles", as candidate solutions; each particle has a position known to the whole group. At every moment, particles move with a specific velocity towards the best position, without ignoring their own previous positions, which are shared with the group once found.
More concretely, in a D-dimensional space, a population of n particles represents potential solutions; each particle \( i \) has a position \( X_{i}^{t} = \left( {x_{i, k}^{t} } \right)_{k = 1, ..,D} \) (Eq. 2), generated randomly at \( t = 0 \).
Then, at each iteration \( t \), each particle \( i \) uses the best position found by the group, \( gbest^{t - 1} = \left( {gbest_{k}^{t - 1} } \right)_{k = 1, .., D} \), and its own best previous position, \( pbest_{i}^{t - 1} = \left( {pbest_{i, k}^{t - 1} } \right)_{k = 1, ..,D} \), to compute its next velocity \( V_{i}^{t} = \left( {v_{i, k}^{t} } \right)_{k = 1, ..,D} \) according to Eq. 3, which then moves it to a new position \( X_{i}^{t} = \left( {x_{i, k}^{t} } \right)_{{k = 1, ..,{\text{D}}}} \) by Eq. 4:

\( v_{i,k}^{t} =\upomega\,v_{i,k}^{t - 1} + c_{1} r_{1} \left( {pbest_{i,k}^{t - 1} - x_{i,k}^{t - 1} } \right) + c_{2} r_{2} \left( {gbest_{k}^{t - 1} - x_{i,k}^{t - 1} } \right) \)  (3)

\( x_{i,k}^{t} = x_{i,k}^{t - 1} + v_{i,k}^{t} \)  (4)
where \( \upomega \) is the inertia coefficient, a constant in the interval [0, 1], \( c_{1} \) and \( c_{2} \) are the cognitive and social parameters, constants in the interval [0, 2], and \( r_{1} \), \( r_{2} \) are two parameters generated randomly in [0, 1]. This process repeats until the stopping condition is reached; in our case, the objective is to find the best TSP cycle without exceeding a maximum number of iterations. Algorithm 1 summarizes this process with pseudo-code of PSO.
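The update step of Eqs. 3 and 4 for one particle can be sketched as follows (a minimal illustration of the standard continuous PSO update, with illustrative names; one pair of random factors is drawn per dimension):

```cpp
#include <random>
#include <vector>

// One velocity/position update (Eqs. 3 and 4) for a single particle of the
// continuous PSO. omega, c1, c2 are the inertia, cognitive and social
// coefficients; r1, r2 are drawn uniformly from [0, 1] for each dimension.
void updateParticle(std::vector<double>& x, std::vector<double>& v,
                    const std::vector<double>& pbest,
                    const std::vector<double>& gbest,
                    double omega, double c1, double c2, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (std::size_t k = 0; k < x.size(); ++k) {
        double r1 = u(rng), r2 = u(rng);
        v[k] = omega * v[k] + c1 * r1 * (pbest[k] - x[k])
                            + c2 * r2 * (gbest[k] - x[k]);  // Eq. 3
        x[k] += v[k];                                       // Eq. 4
    }
}
```

With the cognitive and social terms switched off (c1 = c2 = 0), the particle simply coasts: the velocity is scaled by the inertia and added to the position.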
4 Proposed Particle Swarm Optimization with Discrete Operators
The proposed discrete particle swarm optimization with combinatorial operators (DPSO-CO) follows the same approach to operators as Clerc [11], but focuses mainly on unifying the dimensions of velocity and position, so that no truncation of the velocity is needed and no data is lost. This section is divided into two sub-sections: the first presents the novel operators, while the second describes the adaptation of DPSO to these operators.
4.1 Novel Discrete Operators
Before presenting the operators, recall that the structure of a position is a permutation π, whereas a velocity has so far been represented as a set of permutations. In the proposed operators, we give the velocity the same data structure as the position: a velocity is a permutation of d elements, where d is the dimension of the search space.
Addition of position and velocity.
The result of adding a velocity to a position is a position: the velocity translates the order of the items of the position, meaning that the i-th item of x becomes the (v<sub>i</sub>)-th item of x′, as summarized in Eq. 6; Eq. 7 and Fig. 1 (example of position x plus velocity v) illustrate this addition.
With this operator, if adding two velocities to the same position gives the same position, then the two velocities are equal (Eq. 8); this redefinition makes the velocity behave more like a translation vector.
Addition between two velocities.
The addition of a velocity v<sub>1</sub> to another velocity v<sub>2</sub> is a new velocity v<sub>3</sub> with d elements, where the i-th item of v<sub>3</sub> equals the value of the (v<sub>1,i</sub>)-th item of v<sub>2</sub>; this amounts to the translation of v<sub>1</sub> by v<sub>2</sub>, so that (x + v<sub>1</sub>) + v<sub>2</sub> = x + (v<sub>1</sub> + v<sub>2</sub>), although v<sub>1</sub> + v<sub>2</sub> does not equal v<sub>2</sub> + v<sub>1</sub> in general. Equation 9 gives an example of this operation.
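This is permutation composition, which a short sketch makes concrete (0-indexed, illustrative name):

```cpp
#include <vector>

// Velocity + velocity: v3[i] = v2[v1[i]], i.e. the composition "apply v1,
// then v2". This makes (x + v1) + v2 == x + (v1 + v2), while v1 + v2 and
// v2 + v1 generally differ.
std::vector<int> addVelVel(const std::vector<int>& v1, const std::vector<int>& v2) {
    std::vector<int> v3(v1.size());
    for (std::size_t i = 0; i < v1.size(); ++i)
        v3[i] = v2[v1[i]];
    return v3;
}
```

For example, with v1 = {1, 0, 2} and v2 = {0, 2, 1}, the two orders of addition give different results, showing the non-commutativity.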
Subtraction operator.
Following the same idea as the addition operator, subtraction is applied between two positions: x<sub>1</sub> minus x<sub>2</sub> is the velocity v in which the i-th item v<sub>i</sub> equals the rank k given by Eq. 11. The result is a velocity v which moves the second position towards the first: if x<sub>1</sub> - x<sub>2</sub> = v, then x<sub>1</sub> = x<sub>2</sub> + v.
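A sketch of this subtraction, consistent with the addition operator above (0-indexed, illustrative names): the velocity it returns records, for each item of x2, the slot of x1 that holds the same city, so that adding it to x2 reproduces x1.

```cpp
#include <vector>

// Position - position: returns the velocity v with x2 + v == x1, where
// addition places item i of x2 at slot v[i]. Hence v[i] is the slot of x1
// that holds the same city as x2[i].
std::vector<int> subPos(const std::vector<int>& x1, const std::vector<int>& x2) {
    std::vector<int> slotOf(x1.size());   // slotOf[city] = index of city in x1
    for (std::size_t k = 0; k < x1.size(); ++k)
        slotOf[x1[k]] = static_cast<int>(k);
    std::vector<int> v(x2.size());
    for (std::size_t i = 0; i < x2.size(); ++i)
        v[i] = slotOf[x2[i]];
    return v;
}
```

Subtracting a position from itself yields the identity velocity, the "zero" of this arithmetic.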
Multiplication operator.
In this operator, the coefficient represents a probability of adjusting the velocity: since the particle's movement can create a small disruptive impact on its position at the next iteration, the multiplication of a coefficient c by a velocity v checks whether a random number between 0 and 1 is less than c; if so, it applies a random swap in v, otherwise it does nothing (Eq. 13).
4.2 Discrete Particle Swarm Optimization
The proposed DPSO-CO method, based on these newly defined operators, randomly generates the particle population and then starts the search loop: at each iteration it determines gbest and generates the next solution for each particle following Eqs. 3 and 4, where the inertia coefficient is set to 1 and c1 and c2 are generated anew at each iteration in [0, 1]. Each new solution receives a small perturbation with 2-opt in order to look for a better neighbour; 2-opt improves a solution by iteratively exchanging two edges, as shown in Fig. 2 (2-opt swap), where (A) represents the initial tour and (B) the perturbed tour. In the TSP case, the exchange of the pairs (a, b) and (c, d) for (a, c) and (b, d) is accepted if the sum of the distances of (a, c) and (b, d) is less than the sum of the distances of (a, b) and (c, d). This process continues until DPSO-CO finds the best known solution for the problem or exceeds the maximum number of iterations.
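One improvement pass of the 2-opt perturbation can be sketched as follows (a generic 2-opt sweep under the acceptance rule above; struct and function names are illustrative, and Euclidean distances are assumed):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct City { double x, y; };

double dist(const City& a, const City& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// One pass of 2-opt over a closed tour: for every pair of edges
// (tour[i], tour[i+1]) and (tour[j], tour[j+1]), if replacing them with
// (tour[i], tour[j]) and (tour[i+1], tour[j+1]) shortens the tour, reverse
// the segment in between. Returns true if any exchange was accepted.
bool twoOptPass(std::vector<int>& tour, const std::vector<City>& c) {
    bool improved = false;
    std::size_t n = tour.size();
    for (std::size_t i = 0; i + 2 < n; ++i) {
        for (std::size_t j = i + 2; j < n; ++j) {
            std::size_t jn = (j + 1) % n;
            if (jn == i) continue;  // the two edges share a city
            double before = dist(c[tour[i]], c[tour[i + 1]])
                          + dist(c[tour[j]], c[tour[jn]]);
            double after  = dist(c[tour[i]], c[tour[j]])
                          + dist(c[tour[i + 1]], c[tour[jn]]);
            if (after + 1e-12 < before) {
                std::reverse(tour.begin() + i + 1, tour.begin() + j + 1);
                improved = true;
            }
        }
    }
    return improved;
}
```

On a unit square, for instance, the self-crossing tour 0, 2, 1, 3 is uncrossed into 0, 1, 2, 3 by a single exchange.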
5 Experimental Results
In order to validate its performance, the proposed adaptation was programmed in C++ and run on a PC with an Intel(R) Core(TM) i5-2500 CPU @ 3.30 GHz and 4 GB of RAM, and was tested on symmetric benchmarks from the TSPLIB library [13].
Table 1 presents the numerical results on twenty symmetric instances, each executed thirty times. The first column gives the name of the instance and the second column, Opt, the best known optimum of the instance, while the third and fourth columns show, respectively, the best and worst solutions obtained by DPSO-CO. The fifth column gives the average length of all solutions, and the next the standard deviation SD of all results. The seventh and eighth columns give, respectively, the percentage relative error of the average, PDav, and of the best solution, PDbest, with respect to the optimal known solution of the second column; this value is calculated following Eq. 14. The ninth column, C1%/Copt, reports C1%, the number of solutions whose relative error is less than 1%, and Copt, the number of solutions equal to the known optimum, that is, the number of runs whose relative error is zero. The last column, Time, shows the average execution time over all runs, in seconds.
The experimental results show that the proposed method reaches the optimum for the majority of instances, especially for medium-sized instances, where the average execution time is also the shortest. To compare its performance with the existing DPSO, we implemented the DPSO presented by Clerc [11]; the results are shown in Table 2 (comparison of the proposed algorithm with the DPSO of Clerc [11]). In this comparison, DPSO reaches the best known solutions for some instances, but in general it does not provide better results than DPSO-CO, which gives solutions in a shorter time and with smaller relative errors. Figure 3 (comparison of average relative error) and Fig. 4 (comparison of average execution time) show the difference in PDav and in execution time between the proposed DPSO and the latest improved hybrid of DPSO with ant colony optimization and 3-opt, called PSO-ACO-3Opt [14]; from these figures, DPSO-CO is more efficient than PSO-ACO-3Opt in terms of both time and quality.
In addition, Table 3 (comparison of the proposed algorithm with DCS [15]) compares the results of DPSO-CO with those of DCS [15], where the Time column gives the average execution time over all runs and the last line gives the average of each column. The results show that their performances are very close; DPSO-CO gives good results, especially in execution time for medium-sized instances, but when the size of the problem increases, DCS takes the advantage.
6 Conclusion
This paper has presented a new adaptation of DPSO, named DPSO-CO, characterized by a novel definition of discrete operators. The proposed DPSO-CO method was applied to different symmetric TSP instances from TSPLIB, and its results were compared with the latest known hybrid PSO and with the recent competitive algorithm DCS. From this study, it can be concluded that the proposed DPSO-CO is effective and outperforms the other methods. This work also opens new horizons for DPSO-CO in solving other combinatorial problems, especially since the proposed discrete operators can be reused in new ways by other metaheuristic algorithms to test and improve search performance.
References
Arora, S.: Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems. J. ACM (JACM) 45, 753–782 (1998)
Grefenstette, J., Gopal, R., Rosmaita, B., Van Gucht, D.: Genetic algorithms for the traveling salesman problem. In: Proceedings of the First International Conference on Genetic Algorithms and their Applications. Lawrence Erlbaum, Hillsdale (1985)
Bouzidi, M., Riffi, M.E.: Adaptation of the harmony search algorithm to solve the travelling salesman problem. J. Theor. Appl. Inf. Technol. 62(1) (2014)
Wang, K.P., Huang, L., Zhou, C.G., Pang, W.: Particle swarm optimization for traveling salesman problem. In: 2003 International Conference on Machine Learning and Cybernetics (2003)
Dorigo, M., Birattari, M.: Ant colony optimization. In: Encyclopedia of Machine Learning. Springer (2010)
Wong, L.P., Low, M.Y.H., Chong, C.S.: A bee colony optimization algorithm for traveling salesman problem. In: Modeling & Simulation, AICMS 2008 (2008)
Kennedy, J.: Particle swarm optimization. In: Encyclopedia of Machine Learning, pp. 760-766. Springer, US (2011)
Chen, A.L., Yang, G.K., Wu, Z.M.: Hybrid discrete particle swarm optimization algorithm for capacitated vehicle routing problem. J. Zhejiang Univ.-Sci. A 7(4), 607–614 (2006)
Kennedy, J., Eberhart, R.C.: A discrete binary version of the particle swarm algorithm. In: Systems, Man, and Cybernetics, 1997. Computational Cybernetics and Simulation (1997)
Pan, Q.K., Tasgetiren, M.F., Liang, Y.C.: A discrete particle swarm optimization algorithm for the no-wait flowshop scheduling problem. Comput. Oper. Res. 35(9), 2807–2839 (2008)
Clerc, M.: Discrete particle swarm optimization, illustrated by the traveling salesman problem. In: New Optimization Techniques in Engineering, pp. 219–239 (2004)
Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science (1995)
Reinelt, G.: TSPLIB—A traveling salesman problem library. ORSA J. Comput. 3(4), 376–384 (1991)
Mahi, M., Baykan, Ö.K., Kodaz, H.: A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem. Appl. Soft Comput. 30, 484–490 (2015)
Ouaarab, A., Ahiod, B., Yang, X.S.: Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput. Appl. 24(7–8), 1659–1669 (2014)
© 2018 Springer International Publishing AG
Bouzidi, M., Riffi, M.E., Serhir, A. (2018). Discrete Particle Swarm Optimization for Travelling Salesman Problems: New Combinatorial Operators. In: Abraham, A., Haqiq, A., Muda, A., Gandhi, N. (eds) Proceedings of the Ninth International Conference on Soft Computing and Pattern Recognition (SoCPaR 2017). SoCPaR 2017. Advances in Intelligent Systems and Computing, vol 737. Springer, Cham. https://doi.org/10.1007/978-3-319-76357-6_14