
1 Introduction

Metaheuristic algorithms are a family of nature-inspired methods, generally based on probabilistic search techniques, that aim to find a good solution within a reasonable time. Today, metaheuristics are increasingly applied to real-world single- and multi-objective engineering problems; before applying a metaheuristic to a real problem, researchers first test its performance on classical problems. Among combinatorial problems, the travelling salesman problem (TSP) is considered the most famous; it belongs to the class of NP-hard problems [1]. Solving it consists in finding a short tour through a set of cities that the salesman must visit, ending at the initial starting point. Furthermore, the TSP has several variants in different areas, such as computer wiring, vehicle routing and scheduling, which makes its resolution all the more attractive. Numerous metaheuristics have been applied to the TSP, such as the genetic algorithm [2], harmony search [3], particle swarm optimization [4], ant colony optimization [5] and bee colony optimization [6].

Particle swarm optimization (PSO) [7] is one of the best-known optimization algorithms; inspired by the collective intelligence of bird flocks, it is classified as a swarm intelligence algorithm. PSO relies on the collaboration of all particles in the swarm: each particle seeks a good position and moves with a specific velocity. The algorithm defines a rule that lets each particle follow the best position found by the swarm without abandoning its own search, so it keeps looking for something better. Several adaptations of PSO, called discrete particle swarm optimization (DPSO), have been proposed [8,9,10] for various combinatorial optimization problems; one of the best-known was introduced by Clerc [11], but experiments showed that this adaptation does not yield results as competitive as those of other metaheuristics.

This work presents a new adaptation of DPSO, named DPSO-CO, which introduces new combinatorial operators following Clerc's idea while unifying the dimensions of position and velocity, yielding a well-performing DPSO. The rest of this paper is organized as follows: Sect. 2 defines the travelling salesman problem and Sect. 3 describes the particle swarm optimization algorithm, while Sect. 4 presents the new operators and the new adaptation. Section 5 reports and discusses the experimental results, and Sect. 6 concludes with perspectives.

2 Travelling Salesman Problem

The travelling salesman problem consists in finding the shortest path a salesman can take to visit all cities while respecting two rules: first, he must visit each city exactly once; second, he must finish at the starting point. Mathematically, a solution of this problem is a permutation π of n elements, where n is the number of cities and the i-th element π<sub>i</sub> indicates the i-th city to be visited; the objective function to be minimized is therefore:

$$ f\left( x \right) = \sum\nolimits_{i = 1}^{n - 1} {distance\left( {\pi_{i} , \pi_{i + 1} } \right)} + distance\left( {\pi_{n} , \pi_{1} } \right) $$
(1)
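For concreteness, Eq. 1 can be sketched in C++ as follows (the function name and the distance-matrix representation are illustrative choices, not from the paper; cities are indexed from 0 here rather than from 1):

```cpp
#include <vector>
#include <cstddef>

// Tour length of Eq. 1: sum of the distances between consecutive
// cities of the permutation pi, plus the closing edge back to pi[0].
double tourLength(const std::vector<int>& pi,
                  const std::vector<std::vector<double>>& dist) {
    double len = 0.0;
    for (std::size_t i = 0; i + 1 < pi.size(); ++i)
        len += dist[pi[i]][pi[i + 1]];
    len += dist[pi.back()][pi.front()];  // return to the starting city
    return len;
}
```

For instance, on a three-city instance with d(0,1) = 1, d(1,2) = 3 and d(0,2) = 2, the tour (0, 1, 2) has length 1 + 3 + 2 = 6.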

3 Particle Swarm Optimization

In 1995, Kennedy and Eberhart introduced particle swarm optimization [12], a search algorithm based on cooperation and information sharing within a defined search space in order to find a good food source, mimicking the social behaviour of bird flocking and fish schooling. In the search space, the algorithm launches a set of individuals, called “particles”, as candidate solutions; each particle has a position known to the whole group. At every moment, particles move with a specific velocity towards the best position without ignoring their previous ones, and share any new best position with the group as soon as it is found.

More concretely, in a D-dimensional space, a population of n particles serves as potential solutions; each particle has a position \( P^{t} \) (Eq. 2), generated randomly at \( t = 0 \).

$$ P^{t} = \left( {X_{i}^{t} } \right)_{i = 1,..,n} = \left( {x_{i1}^{t} ,x_{i2}^{t} , \ldots ,x_{iD}^{t} } \right)_{i = 1, .., n} $$
(2)

At each iteration \( t \), the \( i \)-th particle uses the global best position found by the group, \( gbest^{t - 1} = \left( {gbest_{k}^{t - 1} } \right)_{k = 1, .., D} \), and its own best previous position \( pbest_{i}^{t - 1} = \left( {pbest_{i, k}^{t - 1} } \right)_{k = 1, ..,D} \), to compute its next velocity \( V_{i}^{t} = \left( {v_{i, k}^{t} } \right)_{k = 1, ..,D} \) according to Eq. 3, which then moves it to a new position \( X_{i}^{t} = \left( {x_{i, k}^{t} } \right)_{{k = 1, ..,{\text{D}}}} \) through Eq. 4.

$$ V_{i}^{t} =\upomega \times V_{i}^{t - 1} + c_{1} \times r_{1} \times \left( {pbest_{i}^{t - 1} - X_{i}^{t - 1} } \right) + c_{2} \times r_{2} \times \left( {gbest^{t - 1} - X_{i}^{t - 1} } \right) $$
(3)
$$ X_{i}^{t} = X_{i}^{t - 1} + V_{i}^{t} $$
(4)

where \( \upomega \) is the inertia coefficient, a constant in the interval [0, 1]; c1 and c2 are the cognitive and social parameters, constants in the interval [0, 2]; and \( r_{1} \), \( r_{2} \) are two parameters generated randomly in [0, 1]. This process repeats until the stopping condition is reached; in our case, the objective is to find the best TSP cycle without exceeding a maximum number of iterations. Algorithm 1 summarizes this process with the pseudo-code of PSO.
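The update of Eqs. 3 and 4 for a single particle can be sketched as below (a continuous-space sketch; the function name and signature are ours, not from the paper):

```cpp
#include <vector>
#include <random>
#include <cstddef>

// One step of Eqs. 3 and 4 for one particle in continuous space.
// w, c1, c2 are the inertia, cognitive and social coefficients.
void pso_step(std::vector<double>& x, std::vector<double>& v,
              const std::vector<double>& pbest,
              const std::vector<double>& gbest,
              double w, double c1, double c2, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double r1 = u(rng), r2 = u(rng);
    for (std::size_t k = 0; k < x.size(); ++k) {
        v[k] = w * v[k] + c1 * r1 * (pbest[k] - x[k])
                        + c2 * r2 * (gbest[k] - x[k]);   // Eq. 3
        x[k] += v[k];                                    // Eq. 4
    }
}
```

Section 4 redefines the `+`, `-` and scalar-multiplication operators appearing in these two equations so that the same update scheme applies to permutations.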

Algorithm 1. Pseudo-code of PSO.

4 Proposed Particle Swarm Optimization with Discrete Operators

The proposed discrete particle swarm optimization with combinatorial operators (DPSO-CO) follows the same approach as the operators presented by Clerc [11], but focuses mainly on unifying the dimensions of velocity and position, so that the velocity is never truncated and no data is lost. This section is divided into two sub-sections: the first presents the novel operators, while the second describes the adaptation of DPSO to these operators.

4.1 Novel Discrete Operators

Before presenting the operators, recall that a position is represented as a permutation π, whereas in earlier adaptations the velocity is a set of permutations. In the proposed operators, we give the velocity the same data structure as the position: a velocity is a permutation of d elements, where d is the dimension of the search space.

$$ v = \left( {v_{k} } \right)_{k = 1,.., d} $$
(5)

Addition of a position and a velocity.

The result of the addition of a position and a velocity is a position: the velocity translates the order of the position's items, meaning that the i-th item of x becomes the (v<sub>i</sub>)-th item of x′, as summarized in Eq. 6. Equation 7 and Fig. 1 illustrate this operation.

Fig. 1.
figure 1

Example of position x plus velocity v

$$ x + v = x^{\prime} = \left( {x_{i}^{'} } \right)_{{i = 1,..,{\text{d}}}} \quad {\text{with}} \quad x^{\prime}_{{v_{k} }} = x_{k} $$
(6)
$$ \left. {\begin{array}{*{20}c} {x = \left( {1, 4, 3, 5, 2} \right)} \\ {v = \left( {5, 2, 4, 3, 1} \right)} \\ \end{array} } \right\} x + v = \left( {2, 4, 5, 3, 1} \right) $$
(7)
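A possible implementation of Eq. 6 is sketched below (the function name is ours; permutation values are 1-based as in the paper, stored in 0-based vectors):

```cpp
#include <vector>
#include <cstddef>

// Eq. 6: (x + v) places the k-th item of x at index v[k] of the
// result, so the velocity acts as a translation of the ordering.
std::vector<int> addPosVel(const std::vector<int>& x,
                           const std::vector<int>& v) {
    std::vector<int> xp(x.size());
    for (std::size_t k = 0; k < x.size(); ++k)
        xp[v[k] - 1] = x[k];  // v holds 1-based target indices
    return xp;
}
```

With x = (1, 4, 3, 5, 2) and v = (5, 2, 4, 3, 1), this reproduces the result (2, 4, 5, 3, 1) of Eq. 7.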

With this operator, if adding two velocities to the same position gives the same position, then the two velocities are equal (Eq. 8); this redefinition makes the velocity behave more like a translation vector.

$$ x + v^{\prime} = x + v^{\prime\prime} \Rightarrow v^{\prime} = v^{\prime\prime} $$
(8)

Addition between two velocities.

The addition of a velocity v<sup>1</sup> to another velocity v<sup>2</sup> is a new velocity v<sup>3</sup> with d elements, where the i-th item of v<sup>3</sup> equals the (v<sup>1</sup><sub>i</sub>)-th item of v<sup>2</sup>. This operation amounts to composing the translation v<sup>1</sup> with v<sup>2</sup>, so that (x + v<sup>1</sup>) + v<sup>2</sup> = x + (v<sup>1</sup> + v<sup>2</sup>), although v<sup>1</sup> + v<sup>2</sup> is in general not equal to v<sup>2</sup> + v<sup>1</sup>. Equation 9 defines the operation and Eq. 10 gives an example.

$$ v^{1} + v^{2} = v^{3} = \left( {v_{i}^{3} } \right)_{i = 1,.., d} = \left( {v_{{\left( {v_{i}^{1} } \right)}}^{2} } \right)_{i = 1,.., d} $$
(9)
$$ \left. {\begin{array}{*{20}c} {v^{1} = \left( {2, 5, 1, 4, 3} \right) } \\ {v^{2} = \left( {3, 1, 2, 5, 4} \right) } \\ \end{array} } \right\} v^{3} = v^{1} + v^{2} = \left( {1, 4, 3, 5, 2} \right) $$
(10)
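Equation 9 can be sketched in the same style (function name ours, 1-based values in 0-based vectors):

```cpp
#include <vector>
#include <cstddef>

// Eq. 9: composition of two velocity translations; the i-th item of
// the result is the (v1[i])-th element of v2.
std::vector<int> addVelVel(const std::vector<int>& v1,
                           const std::vector<int>& v2) {
    std::vector<int> v3(v1.size());
    for (std::size_t i = 0; i < v1.size(); ++i)
        v3[i] = v2[v1[i] - 1];  // v1 holds 1-based indices into v2
    return v3;
}
```

With v<sup>1</sup> = (2, 5, 1, 4, 3) and v<sup>2</sup> = (3, 1, 2, 5, 4), this reproduces the result (1, 4, 3, 5, 2) of Eq. 10.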

Subtraction operator.

Following the same idea as the addition operator, subtraction is applied between two positions: x<sup>1</sup> minus x<sup>2</sup> is a velocity v in which the i-th item v<sub>i</sub> equals the rank k satisfying Eq. 11. The result is a velocity that moves the second position towards the first: if x<sup>1</sup> − x<sup>2</sup> = v, then x<sup>1</sup> = x<sup>2</sup> + v.

$$ x^{1} - x^{2} = v = \left( {v_{i} } \right)_{i = 1,..,\;d} = \left( k \right)_{i = 1,..,d} /x_{k}^{1} = x_{i}^{2} $$
(11)
$$ \left. {\begin{array}{*{20}c} {x^{1} = \left( {3,1,4,2,5} \right) } \\ {x^{2} = \left( {1,4,3,5,2} \right) } \\ \end{array} } \right\} v = x^{1} - x^{2} = \left( {2,3,1,5,4} \right) $$
(12)
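Equation 11 can be implemented with an auxiliary rank table (an illustrative sketch, names ours; cities are assumed to be the values 1..n):

```cpp
#include <vector>
#include <cstddef>

// Eq. 11: v = x1 - x2, where v[i] is the 1-based rank k such that
// x1[k] == x2[i]; by construction, x2 + v == x1.
std::vector<int> subPos(const std::vector<int>& x1,
                        const std::vector<int>& x2) {
    std::vector<int> rank(x1.size() + 1);
    for (std::size_t k = 0; k < x1.size(); ++k)
        rank[x1[k]] = static_cast<int>(k) + 1;  // city -> rank in x1
    std::vector<int> v(x2.size());
    for (std::size_t i = 0; i < x2.size(); ++i)
        v[i] = rank[x2[i]];
    return v;
}
```

With x<sup>1</sup> = (3, 1, 4, 2, 5) and x<sup>2</sup> = (1, 4, 3, 5, 2), this reproduces the result (2, 3, 1, 5, 4) of Eq. 12.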

Multiplication operator.

In this operator, the coefficient represents a probability of adjusting the velocity: the particle moves with a velocity that may receive a small disruptive perturbation affecting its position at the next iteration. The multiplication of a coefficient c by a velocity v thus works as follows: if a random number between 0 and 1 is less than c, a random swap is applied to v; otherwise v is left unchanged (Eq. 13).

$$ cv = \left\{ {\begin{array}{*{20}c} {random\;swap\;of\;v} & {rand() < c} \\ v & {otherwise} \\ \end{array} } \right. $$
(13)
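A minimal sketch of Eq. 13, with names of our choosing:

```cpp
#include <vector>
#include <random>
#include <cstddef>
#include <utility>

// Eq. 13: with probability c, apply one random transposition to v;
// otherwise return v unchanged.
std::vector<int> mulCoefVel(double c, std::vector<int> v,
                            std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) < c && v.size() >= 2) {
        std::uniform_int_distribution<std::size_t> idx(0, v.size() - 1);
        std::size_t a = idx(rng), b = idx(rng);
        std::swap(v[a], v[b]);  // random swap of two items of v
    }
    return v;
}
```

In particular, c = 0 always returns v unchanged, while c = 1 always perturbs it (possibly with a trivial swap when the two random indices coincide).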

4.2 Discrete Particle Swarm Optimization

The proposed DPSO-CO method, based on these newly defined operators, randomly generates the particle population and then starts the search loop: at each iteration it determines gbest and generates the next solution of each particle following Eqs. 3 and 4, where the inertia coefficient is set to 1 and c1 and c2 are drawn between 0 and 1 at each iteration. Each new solution then receives a small perturbation with 2-opt in order to look for a better neighbour. 2-opt improves a solution by iteratively exchanging two edges, as shown in Fig. 2, where (A) represents the initial tour and (B) the tour after the move. In the TSP case, the exchange of the pairs (a, b) and (c, d) for (a, c) and (b, d) is accepted only if the sum of the distances of (a, c) and (b, d) is less than the sum of those of (a, b) and (c, d). This process continues until DPSO-CO finds the best known solution of the problem or exceeds the maximum number of iterations.

Fig. 2.
figure 2

2-Opt Swap

5 Experimental Results

In order to validate its performance, the proposed adaptation has been programmed in C++ and run on a PC with an Intel(R) Core(TM) i5-2500 CPU @ 3.30 GHz and 4 GB of RAM; it has been tested on some symmetric benchmarks of the TSPLIB library [13].

Table 1 reports the numerical results on twenty symmetric instances, each executed thirty times. The first column gives the name of the instance; the second column, Opt, shows its best known optimum; the third and fourth columns show, respectively, the best and worst solutions obtained by DPSO-CO. The fifth column gives the average length of all solutions and the sixth the standard deviation SD of all the results. The seventh and eighth columns report, respectively, the percentage relative error of the average, PDav, and of the best solution, PDbest, with respect to the optimal known solution of the second column; this value is calculated following Eq. 14. The ninth column, C1%/Copt, gives C1%, the number of solutions whose relative error is less than 1%, and Copt, the number of solutions equal to the best known optimum, i.e. the number of runs whose relative error is zero. The last column, Time, shows the average execution time over all runs, in seconds.

Table 1. Results of the proposed method DPSO-CO for some symmetric instance of TSPLib.
$$ PD_{solution} = \frac{solution\;length - optimal\;known\;length}{optimal\;known\;length} \times 100{\% } $$
(14)
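Equation 14 is simply a percentage deviation; for clarity (function name ours):

```cpp
// Eq. 14: percentage relative error of a solution with respect to
// the best known optimum of the instance.
double percentDeviation(double solutionLength, double optimalLength) {
    return (solutionLength - optimalLength) / optimalLength * 100.0;
}
```

For example, a solution of length 150 on an instance whose optimum is 100 has a relative error of 50%.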

The experimental results show that the proposed method reaches the optimum for the majority of instances, especially for the medium-sized category, where the average execution time is the shortest. To compare it with the existing DPSO, we implemented the DPSO presented by Clerc [11]; the results are shown in Table 2. Clerc's DPSO obtains the best known solutions for some instances but, in general, does not provide better results than DPSO-CO, which yields solutions in a shorter time and with smaller relative errors. Figures 3 and 4 show the difference in PDav and in execution time between the proposed DPSO and the latest improved hybrid DPSO with ant colony and 3-opt, called PSO-ACO-3Opt [14]; from these figures, DPSO-CO is more efficient than PSO-ACO-3Opt in terms of both time and quality.

Table 2. Comparison of the proposed algorithm with DPSO proposed by Clerc [11].
Fig. 3.
figure 3

Comparison average relative error between proposed method and PSO-ACO-3Opt [14].

Fig. 4.
figure 4

Comparison average execution time between proposed method and PSO-ACO-3Opt [14].

In addition, Table 3 compares the results of DPSO-CO with those of DCS [15], where the column Time represents the average execution time over all runs and the last line gives the average of each column. The results show that their performances are very close: DPSO-CO performs well, especially in execution time on medium-sized instances, but as the problem size increases, DCS takes the advantage.

Table 3. Comparison of the proposed algorithm with DCS [15].

6 Conclusion

This paper has presented DPSO-CO, a new adaptation of DPSO characterized by a novel definition of discrete operators. The proposed method has been applied to different symmetric TSP instances from TSPLib, and the results have been compared with those of the latest known hybrid PSO and of the recent competitive algorithm DCS. From this study, it can be concluded that the proposed DPSO-CO is effective and outperforms the other methods. This work opens new horizons for DPSO-CO in solving other combinatorial problems, especially since the discrete operators remain open to new uses and could in the future be applied to other metaheuristic algorithms in order to test and improve their performance.