1 Introduction

In vehicle routing problems with profits, each customer has its own profit and, due to resource limitation constraints, it is often not possible to serve all customers. In general, the aim of solving this category of problems is to determine which customers must be served and how to design the vehicle routes so that the total collected profit is maximized while adhering to the resource limitations. In the literature, several variants of the vehicle routing problem with profits have appeared, depending on the nature of the resource limitation.

The Profitable Tour Problem (PTP) is one of these variants, where only one vehicle is considered (Feillet et al. 2005). The objective of the PTP is to maximize the difference between the collected profit and the total traveling cost, measured as the total distance traveled or the operational time consumed. The Capacitated PTP is the multi-vehicle extension of the PTP in which a capacity constraint is imposed on each vehicle (Jepsen et al. 2014). In the Prize Collecting Traveling Salesman Problem (PCTSP), a single vehicle is available and the objective is to minimize the total traveling cost while visiting enough customers to collect a predefined profit (Feillet et al. 2005). Alternatively, the Orienteering Problem (OP) consists of designing a single route that maximizes the collected profit while respecting a specified travel time limit (Chao et al. 1996). The well-known Team Orienteering Problem (TOP) (Kim et al. 2013) extends the OP by considering multiple vehicles for the service of the customers; it was initially introduced by Butt and Cavalier (1994) as the Multiple Tour Maximum Collection Problem. The aim of solving TOP is to find a set of vehicle routes that maximizes the total collected profit while respecting the travel time limit of each vehicle.

In this paper, we focus on the Capacitated Team Orienteering Problem (CTOP), a variant of the Team Orienteering Problem (TOP). CTOP was introduced by Archetti et al. (2009) and adds an operational constraint by imposing capacity restrictions on the vehicles. Like TOP, CTOP is a selective routing problem in which some customers cannot be visited because of resource limitations. In this problem, a set of customers with known profits and demands is to be served by a fleet of vehicles. Each customer can be visited at most once, by a single vehicle. Vehicles start at the departure depot and must finish at the arrival depot without exceeding a predefined travel time. Moreover, the total demand of the customers visited on any route must not exceed the vehicle capacity. The problem is to design a set of vehicle routes that maximizes the total collected profit while satisfying all resource limitations.

The Capacitated Team Orienteering Problem is related to many real-life applications. It is mainly used by industries to model and solve product delivery tasks, aiming at improving product distribution to customers. Sometimes it is more convenient to outsource the distribution task to specialized companies. The aim of each transportation company is to identify convenient customers to add to its pre-assigned routes while respecting resource limitations, i.e. the number of vehicles and the capacity and travel time limit of each vehicle. The problem can therefore be formulated as a capacitated team orienteering problem where the visited customers have to be selected and assigned to the available vehicles so that the total collected profit is maximized while all operational restrictions are respected. Despite its practical importance, CTOP has so far received little attention in the literature. Archetti et al. (2009) proposed an exact method based on the branch-and-price algorithm developed by Boussier et al. (2007) for TOP. The column generation algorithm deals with a set covering problem, and a dynamic programming algorithm is used to solve the elementary shortest path problems that model the subproblems. Furthermore, Archetti et al. (2009) proposed two Tabu Search variants (TSF and TSA) and a Variable Neighborhood Search (VNS) algorithm for CTOP. The main idea is to assign all customers to various routes; customers are then iteratively shifted between the routes until the most profitable routes are chosen to build the solution. In addition to the space of feasible solutions, they explore a space of admissible solutions in which the most profitable routes of a solution may violate the length and capacity constraints, and repair and jump mechanisms are used to restore feasibility. Both Tabu Search methods explore a small region of the solution space, unlike the VNS method, where several moves are performed to explore a larger region.

Archetti et al. (2013) proposed an exact method based on a branch-and-price algorithm to solve CTOP. Later, Tarantilis et al. (2013) designed a two-phase heuristic called the Bi-level Filter-and-Fan method (BiF&F). In their algorithm, two heuristics are applied alternately: a Tabu Search procedure and a heuristic decomposition scheme. The search is performed on two levels: the upper level carries out a profit-oriented search, while the lower one aims to minimize the travel time. More recently, Luo et al. (2013) proposed an efficient heuristic called ADaptive Ejection Pool with Toggle-Rule Diversification (ADEPT-RD). At each iteration of their method, one customer is inserted into the solution from a maintained priority queue of unvisited customers. During the search, the priority queue is updated with an adaptive mechanism and reinitialized with different sorting rules to avoid local optima. The method iteratively inserts the customers with the highest valuations from the priority queue into the solution, and a local search technique is then performed to restore the feasibility of the solution. Their approach also integrates a post-optimization heuristic to improve the quality of the best solution obtained.

This paper presents a Variable Space Search (VSS) heuristic that alternates between the routes search space and the giant tour search space, under the framework of a Greedy Randomized Adaptive Search Procedure (GRASP) combined with an Evolutionary Local Search (ELS). The success of this algorithm is due to the alternation between the two search spaces and to the improvements performed on the routes and the giant tours by the local searches applied in both spaces. Our main contributions are summarized as follows.

  • One of the key benefits of our proposed algorithm is that it operates in two search spaces: the giant tour search space and the routes search space, in which feasible CTOP solutions are represented as sets of routes. Several improvements are integrated in both spaces using different types of local searches, and a powerful optimal split procedure is developed to extract the optimal feasible CTOP solution from a giant tour.

  • Our algorithm makes use of an efficient framework combining the Greedy Randomized Adaptive Search Procedure (GRASP) and the Evolutionary Local Search (ELS), synthesizing the positive aspects of both techniques: GRASP explores different regions of the search spaces, while ELS extensively explores one region before moving to the next. An initial solution of good quality is provided to GRASP by an adaptive Iterative Destruction/Construction Heuristic (IDCH).

  • Several adaptive local searches are performed to improve the quality of the solutions as well as the quality of the giant tours. These local searches allow ELS to avoid local optima while exploring a given region. Besides the constructive/destructive heuristic applied to the CTOP solution, a Tabu Search and a Simulated Annealing (SA) procedure are also used to further improve the quality of the solutions. We therefore propose two versions of our VSS algorithm: VSS-Tabu, which uses the Tabu Search as one of its local searches, and VSS-SA, which integrates the Simulated Annealing instead.

  • Extensive experiments were conducted on the CTOP benchmarks proposed by Archetti et al. (2009) and Tarantilis et al. (2013). Both versions of our VSS algorithm outperform the results found in the literature in terms of gap and computational time: VSS-Tabu found 74 new best solutions with a gap of \(-\,0.242\), while VSS-SA reached a gap of \(-\,0.227\) with 77 new best solutions.

The remainder of this paper is organized as follows. In Sect. 2, we provide a formal description of CTOP. Our VSS algorithm is described in Sect. 3, where the initialization algorithm, the alternation between the two search spaces and the remaining components are explained in detail. Computational experiments are reported in Sect. 4, followed by conclusions and directions for further development.

2 Problem definition

Given a set \(V^-= \{1,\dots , n\}\) of n customers and a fleet of m vehicles initially located at the depot 0, CTOP can be modeled on a complete graph \({G=(V, E)}\), where \(V = V^- \cup \{0\}\) is the set of vertices and \(E = \{(i, j) \mid i, j \in V \} \) is the set of edges. A travel time \(C_{ij}\), satisfying the triangle inequality, is associated with each edge \((i,j) \in E\). In addition, a profit \(p_i\) is associated with each customer \(i \in V^-\) and is collected once its demand \(d_i\) is delivered. Furthermore, each customer i requires a service time \(s_i\). Since split deliveries are not allowed, a customer can be visited at most once. Each vehicle has a maximum capacity Q and a maximum travel time \(T_{max}\) for its trip from the starting to the ending depot. Due to these resource limitations, some customers may remain unserved. Solving CTOP therefore consists of selecting the subset of customers to be visited and designing the vehicle routes in order to maximize the total collected profit while satisfying all resource limitation constraints related to the vehicle capacity and the maximum travel time.

A feasible solution S of CTOP is composed of m routes \(r_k, k=1,\dots , m\). We denote \(r_k\) the ordered list of \(n_k\) customers visited by vehicle k. A feasible route \(r_k\) must comply with the following constraints:

  • Each route should start and end at the depot: \(r_k=(0, r_k[1], \dots , r_k[n_k], 0)\)

  • The time needed to visit the customers of each route should not exceed the maximum travel time (\(T\)-constraint for brevity): \(\textsc {T}(r_k) = C_{0\,r_k[1]} + \sum _{i=1}^{n_k-1} (C_{r_k[i]\,r_k[i+1]} + s_{r_k[i]}) + C_{r_k[n_k]\,0} + s_{r_k[n_k]} \le T_{max}\)

  • The total demand of the customers visited in each route must not exceed the vehicle capacity (\(C\)-constraint for brevity): \(\displaystyle \textsc {D}(r_k)=\sum _{i=1}^{n_k} d_{r_k[i]} \le Q\)
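The two feasibility checks above translate directly into code. The following sketch assumes a travel-time matrix C (with the depot as index 0), and service times s and demands d indexed by customer; a route is the ordered list of customers it visits, excluding the depot. All names are illustrative, not taken from the authors' implementation.

```python
def travel_time(route, C, s):
    """T(r_k): depot -> customers -> depot, with service time counted per visited customer."""
    if not route:
        return 0.0
    total = C[0][route[0]]                               # depot to first customer
    for i in range(len(route) - 1):
        total += C[route[i]][route[i + 1]] + s[route[i]]  # travel + service
    total += C[route[-1]][0] + s[route[-1]]               # last customer back to depot
    return total

def is_feasible(route, C, s, d, T_max, Q):
    """A route is feasible iff it satisfies both the T-constraint and the C-constraint."""
    demand = sum(d[c] for c in route)                     # D(r_k)
    return travel_time(route, C, s) <= T_max and demand <= Q
```

For instance, with two customers whose combined demand equals Q and whose tour time equals \(T_{max}\), the route is feasible, while tightening either limit by one unit makes it infeasible.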

3 VSS heuristic for CTOP

In this section, our global VSS algorithm is described. This algorithm ensures the alternation between the two search spaces under the framework of GRASP-ELS, where adaptive local searches are applied in both spaces.

3.1 Global scheme of the GRASP-ELS framework

The efficiency of our algorithm stems from the alternation between two search spaces, as shown in Fig. 1: the first contains CTOP solutions, each encoded as a set of routes that respect all the resource limitations, and the second contains solutions encoded as giant tours, defined as sequences of the n customers. This principle has proven successful on many combinatorial optimization problems (Hertz et al. 2008), and more precisely on vehicle routing problems (Duhamel et al. 2010, 2011; Lacomme et al. 2013). In our GRASP-ELS framework, ELS is applied as a local search on each initial solution given by GRASP, while SA and Tabu Search are applied within ELS to decide which region to explore. We note that during the transition between the two search spaces, no information about the quality of the solution is lost.

Fig. 1
figure 1

VSS general structure

A CTOP solution is converted into a giant tour using the Concat procedure: the depot is first removed from each route, the resulting routes are concatenated in random order, and the unrouted customers are then inserted at random positions between the routes. The reverse operation is performed by a dedicated Split procedure, described in Sect. 3.4.1, which converts a giant tour into an optimal CTOP solution with respect to the customer order in the giant tour. Our VSS algorithm is summarized in Algorithm 1.

figure a

In this algorithm, the outer loop controls the structure of GRASP, where \(n_{init}\) solutions are generated and used as starting points for ELS. These initial CTOP solutions \(S_i\) are obtained using an Adaptive Iterative Destruction/Construction Heuristic (AIDCH), described in Sect. 3.2.1. The middle loop corresponds to the ELS loop, where \(n_{els}\) iterations are performed; the best solution obtained after l iterations is recorded in \(S_l\). In the inner loop, \(n_{child}\) parallel child solutions are generated as copies of \(S_l\). Each child \(S_c\) is converted into a giant tour \(T_c\) using the Concat procedure, evaluated with the Split procedure, and improved using the local search of the giant tour space. The resulting giant tour \(T_c^{\prime }\) is evaluated with the Split procedure to obtain the CTOP solution \(S_c^{\prime }\). To further explore the neighborhood of the new solution, a local search in the routes search space is applied to \(S_c^{\prime }\), either a Simulated Annealing procedure or a Tabu Search, as described in Sect. 3.3. Finally, the resulting solution \(S_c^{\prime \prime }\) updates the best child \(S_{bestChild}\). The incumbent solution is retained throughout the algorithm and returned at the end as the best solution found, \(S^*\).
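The Concat step used above can be sketched as follows. This is a minimal illustration of the behavior described in the text (depot-free routes concatenated in random order, unrouted customers dropped into random gaps); function and parameter names are our own, not the paper's.

```python
import random

def concat(routes, unrouted, rng=random.Random()):
    """Turn a CTOP solution (depot-free routes) plus its unrouted customers into a giant tour."""
    segments = [r[:] for r in routes]          # copy routes, depot already removed
    rng.shuffle(segments)                      # concatenate routes in random order
    for c in unrouted:                         # each unrouted customer lands in a
        segments.insert(rng.randrange(len(segments) + 1), [c])  # random gap
    return [c for seg in segments for c in seg]
```

The resulting giant tour is a permutation of all n customers in which the internal order of each original route is preserved, so no route-quality information is lost in the conversion.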

3.2 Routes search space

The first search space explored by our VSS algorithm is the routes search space, in which only feasible CTOP solutions respecting all the imposed constraints are considered. GRASP starts in this space with an initial solution of good quality generated by AIDCH. After several improvements to this solution in the giant tour search space, a split procedure is applied to the resulting giant tour to extract the optimal CTOP solution, to which further local searches, namely the SA procedure and the Tabu Search, are then applied in the routes search space.

3.2.1 Adaptive iterative destruction/construction heuristic

To generate initial solutions for our GRASP, we propose an Adaptive Iterative Destruction/Construction Heuristic (AIDCH) composed of two main procedures: an adaptive construction algorithm and an adaptive diversification procedure. In our adaptive heuristic, several parameters are changed according to the solution process.

The core component of the adaptive construction procedure is the Best Insertion Algorithm (BIA). It starts from an incomplete solution S and a subset U of unrouted customers, and inserts customers from U into the solution one by one. The process ends when no feasible insertion exists or when no customer remains unserved. At each iteration, a new customer is chosen for insertion if its insertion preserves the capacity and travel time restrictions of all routes and has the lowest insertion cost. To this end, we evaluate at each iteration the insertion of each customer c from U at every position p in S. In order to determine the best customer to insert at the best position, a ratio is calculated for each possible insertion:

$$\begin{aligned} Ratio_{cp} = \frac{\left( \frac{\varDelta L_{cp}}{L_{max}}\right) ^\beta * \left( \frac{d_c}{Q}\right) ^\gamma }{\left( \frac{p_c}{P_{max}}\right) ^\alpha } \end{aligned}$$
(1)

\(\varDelta L_{cp}\) is the increase in travel time resulting from inserting customer c at position p, and \(P_{max}\) denotes the highest profit among all customers. The ratio \(Ratio_{cp}\) combines three components, related respectively to the increase in travel time caused by the insertion, the capacity consumption and the collected profit. The weights \(\beta \), \(\gamma \) and \(\alpha \) control the relative importance of these components, and the three components are normalized so that they are treated on the same scale. The ratio favors insertions that maximize the profit and minimize the resource consumption; therefore, the insertion with the lowest ratio is selected.
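Equation (1) can be transcribed directly. In this sketch, delta_L stands for \(\varDelta L_{cp}\) and L_max for the normalization constant of the travel-time component (the source does not spell out which constant is used; a natural choice is \(T_{max}\)); the remaining arguments follow the notation of the paper.

```python
def insertion_ratio(delta_L, L_max, d_c, Q, p_c, P_max, alpha, beta, gamma):
    """Eq. (1): lower is better -- cheap, light insertions of profitable customers win."""
    cost = (delta_L / L_max) ** beta * (d_c / Q) ** gamma   # normalized resource use
    gain = (p_c / P_max) ** alpha                           # normalized profit
    return cost / gain
```

Since the profit term sits in the denominator, a customer with a higher profit yields a lower ratio for the same insertion cost, and is therefore preferred.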

The adaptive construction mechanism consists of generating the best triplet (\(\alpha \), \(\beta \), \(\gamma \)) at each iteration. To this end, we apply six constructive heuristics \(h_{j}, j \in \{1,\dots ,6\}\), with different combinations of (\(\alpha , \beta , \gamma \)), to six copies of S using BIA. Among the six resulting solutions, the one with the maximum total profit is retained. Starting from a partial or an empty solution S, we initialize \(\alpha \), \(\beta \) and \(\gamma \) to 0.5, and then assign new values in [0, 1] by slightly modifying the values chosen in the previous iteration. In the first five heuristics, \(\alpha \) is set to 1, since the objective of our problem is to maximize the total collected profit and the profit of the chosen customer should carry the greatest weight in the decision.

The new combinations of \(\beta \) and \(\gamma \) are chosen among the four vertices of the square with the previous combination as its center and a side length of 0.1 (Ben-Said et al. 2016), in such a way that all parameters keep their values in [0, 1]. The fifth heuristic chooses the values of \(\beta \) and \(\gamma \) randomly in [0, 1] while keeping \(\alpha \) set to 1. Finally, the last heuristic chooses all three parameters randomly between 0 and 1. Each heuristic returns the best solution obtained and saves its corresponding triplet (\(\alpha \), \(\beta \), \(\gamma \)) for the next iteration. The aim of the adaptive construction mechanism is to perform separate parallel searches in the neighborhood of the current solution, which leads to faster convergence toward solutions of good quality.
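The candidate generation for \((\beta, \gamma)\) can be sketched as below: the four vertices of a square of side 0.1 centred on the previous pair, clamped to [0, 1], plus a uniformly random pair for the fifth heuristic. Names are illustrative, and the clamping rule is our reading of "in a way that all three parameters keep their values in [0, 1]".

```python
import random

def candidate_pairs(beta, gamma, side=0.1, rng=random.Random()):
    """Four square-vertex candidates around (beta, gamma), plus one random restart."""
    half = side / 2
    clamp = lambda x: min(1.0, max(0.0, x))
    vertices = [(clamp(beta + db), clamp(gamma + dg))
                for db in (-half, half) for dg in (-half, half)]
    vertices.append((rng.random(), rng.random()))   # fifth heuristic: random pair
    return vertices
```

Each constructive heuristic evaluates one candidate pair (with \(\alpha = 1\)) and the pair behind the most profitable solution seeds the next iteration's square.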

In order to escape from local optima, a perturbation phase is applied to the solution. At the end of each construction phase, a random number of customers is removed from the resulting solution. Initially, the maximum number of removed customers, \(d_{max}\), is limited to 3; then, at each non-improving iteration, \(d_{max}\) is incremented by one until reaching n/m, the average number of customers per route (Afifi et al. 2016). The aim of the adaptive destruction mechanism is to reach more distant solutions in the search space. Accordingly, whenever the solution improves, we reset \(d_{max}\) to its initial value of 3 in order to thoroughly explore the neighborhood of the new optimum.
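The update rule for \(d_{max}\) reduces to a few lines; the following sketch simply restates the logic above, with the cap n/m taken as integer division.

```python
def update_dmax(d_max, improved, n, m):
    """Adaptive destruction width: reset to 3 on improvement, else grow up to n/m."""
    if improved:
        return 3                      # refocus around the new optimum
    return min(d_max + 1, n // m)     # widen the perturbation, capped at avg route size
```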

AIDCH starts with a set of empty routes and initializes the parameters of the construction and destruction steps. At each iteration, the construction phase is applied until a local optimum S is reached. If S is the best solution found so far, it is retained as the result of AIDCH. This step is immediately followed by the destruction phase, in which a limited random number of customers is removed from the obtained solution. To tighten the routes and reduce the travel time, a 2-opt operator is applied. These steps are repeated until the maximum number of iterations without improvement of the best solution is reached. The stopping condition, set to n, and the maximum number of customers to be removed, set to n/m, are the two predefined parameters of AIDCH (Ben-Said et al. 2016).

3.3 Local search: routes search space

To improve the quality of CTOP solutions in the routes search space, several local searches are applied. We implemented two versions of our VSS algorithm: the first uses Simulated Annealing as one of its local searches, while the second applies a Tabu Search instead.

3.3.1 Simulated annealing

The Simulated Annealing procedure operates in the routes search space. After a solution is extracted from a giant tour, our SA algorithm tries to improve its quality. It explores the neighborhood of a solution by applying a fast version of the AIDCH algorithm, using the adaptive destruction and construction mechanisms. These operators aim to maximize the total collected profit by removing some customers from the solution in order to insert others that increase the profit of the resulting solution. Furthermore, the 2-opt operator is applied to reduce the total travel time by exchanging two arcs belonging to the same route. Each time a neighbor is produced, it is accepted with a certain probability. AIDCH is applied to the child S. If the resulting solution \(S^{\prime }\) improves on S, it replaces the child S, and the best solution \(S_{SA}\) is updated for the next iterations of the local search. Otherwise, SA accepts \(S^{\prime }\) with probability \(e^{-\frac{\varDelta _p}{Temp}}\), where Temp is the current temperature and \(\varDelta _p=P(S)-P(S^{\prime })\) is the difference between the total collected profit of the solution S and that of the modified solution \(S^{\prime }\). For each child S, the temperature Temp is initialized with the initial temperature \(Temp_0\) and decreases geometrically during the exploration of the neighborhood of S: after each iteration, Temp is updated to \(k \cdot Temp\), where k is the cooling speed. In Sect. 4.3 we present the computational techniques used to calculate \(Temp_0\) and to choose the best value of k. The SA algorithm stops when all neighborhoods fail to produce a new solution to replace the child S.
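The acceptance rule and geometric cooling can be sketched in a few lines. This is a generic SA skeleton matching the description above, not the authors' code; profits are maximized, so \(\varDelta_p = P(S) - P(S')\) is positive exactly when the neighbor is worse.

```python
import math
import random

def accept(profit_S, profit_S_prime, temp, rng=random.Random()):
    """Metropolis-style acceptance for a maximization problem."""
    delta_p = profit_S - profit_S_prime        # positive when S' is worse
    if delta_p <= 0:
        return True                            # improving moves always accepted
    return rng.random() < math.exp(-delta_p / temp)

def cool(temp, k):
    """Geometric cooling: Temp <- k * Temp, with 0 < k < 1."""
    return k * temp
```

At high temperatures almost any degradation is accepted, which diversifies the search; as Temp shrinks geometrically, the procedure behaves more and more like a pure descent.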

3.3.2 Tabu search

In the same perspective, a Tabu Search (TS) is embedded in our algorithm in order to avoid being trapped in local optima. Compared to a basic local search, where only moves toward improved solutions are permitted, TS may accept degraded solutions without falling back into recently visited ones. In our implementation, when a customer c visited between customers i and j is removed from the solution, a temporary tabu status is assigned to the insertion of c at position (i, j). TS uses a short-term memory (the tabu list) to keep track of the most recent tabu moves and forbid them for a number of subsequent iterations. The tabu status is maintained, as in Gendreau et al. (1998), for \(\theta \) iterations, where \(\theta \) is drawn from a discrete uniform distribution on [5, 25]. Since the tabu status may forbid interesting moves that could lead to a solution better than the current local optimum, we use an aspiration criterion that accepts solutions containing tabu insertions only if their evaluation is higher than that of the best solution found so far. The overall procedure stops after 10 consecutive iterations without improvement.
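The tabu bookkeeping described above can be sketched as follows. The class and method names are illustrative; the tenure range [5, 25] and the aspiration criterion follow the text.

```python
import random

class TabuList:
    """Forbids re-inserting customer c between i and j for theta iterations."""

    def __init__(self, rng=random.Random()):
        self.expiry = {}          # (c, i, j) -> first iteration the move is free again
        self.rng = rng

    def forbid(self, c, i, j, iteration):
        theta = self.rng.randint(5, 25)          # tenure drawn uniformly from [5, 25]
        self.expiry[(c, i, j)] = iteration + theta

    def allowed(self, c, i, j, iteration, move_profit, best_profit):
        if move_profit > best_profit:            # aspiration criterion overrides tabu
            return True
        return iteration >= self.expiry.get((c, i, j), 0)
```

A move that would beat the best solution found so far is always allowed, so the memory never blocks a new global improvement.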

3.4 Giant tours search space

After GRASP provides a starting solution, the ELS algorithm alternates between the two spaces, the routes search space and the giant tours search space. In the latter, several local searches are applied to the sequence of customers in order to improve the quality of the solution extracted by our optimal split procedure.

3.4.1 Split procedure

Our VSS algorithm performs several local searches in the giant tour search space to improve the quality of the sequence. In order to evaluate the resulting sequence, we developed an optimal split procedure adapted from the split procedure proposed by Dang et al. (2013) for TOP. The objective of the split is to extract from a given sequence, the CTOP optimal solution composed of m routes that respect all the resource limitation constraints.

Let \(T=(T[1],\dots , T[n])\) represent the sequence of n customers. Our optimal split procedure aims to extract from T a set of m subsequences that have no customer in common. The chosen subsequences must respect both the C-constraint and the T-constraint, and must maximize the total profit collected from the visited customers.

Since the order of the customers within the chosen subsequences must be preserved, each subsequence can be identified by its starting customer and its length. We therefore denote by \({\langle i,\ell _i\rangle }_T\) the subsequence composed of \(\ell _i\) consecutive customers starting with customer T[i] in the sequence T. The final solution is then represented by the routes \(({\langle i_1,\ell _{i_1}\rangle }_T,\dots ,{\langle i_k,\ell _{i_k}\rangle }_T)\), where \(k\le m\) and \(i_{p+1} > i_p + \ell _{i_p}\) for all \(p \in \{1,\dots ,k-1\}\).

The first step of our split procedure is to enumerate feasible routes starting from each customer in the sequence T, and then to choose among them the m routes that maximize the total collected profit. As proven by Dang et al. (2013) for TOP, enumerating only saturated routes is sufficient to obtain the optimal solution extractable from a given giant tour. We define a saturated route as the longest feasible route complying with the C-constraint and the T-constraint.

Definition 3.1

Starting with the customer T[i] in the giant tour, the corresponding saturated route \((T[i],\dots , T[i+\ell _i^{max}])\) is obtained by including the customers following T[i] in T for as long as the C-constraint and the T-constraint are respected.

By limiting the extraction to saturated routes, our splitting procedure first extracts the n saturated routes from the giant tour and then selects the m routes that have no customers in common. This selection can be formulated as a knapsack problem with conflicts (KPC) with a maximum volume limited to m (Yamada et al. 2002). The items of the KPC are the n enumerated routes, each with a unitary volume and a weight equal to the profit of the route; the existence of common customers between two routes induces a conflict between the corresponding items. We therefore model the problem with an interval graph in which each saturated route \((T[i],\dots , T[i+\ell _i^{max}])\) corresponds to the interval \([i, i+\ell _i^{max}]\) given by the positions of its customers in the giant tour, and a conflict between two routes is represented by an arc linking them. As shown by Dang et al. (2013), the resulting KPC can be solved by dynamic programming in \(O(m \cdot n)\) time and space.

Figure 2 depicts the mapping of a split problem to a KPC. A giant tour of 8 customers is given in the first graph. In this example, the maximum travel time is set to 70 and the maximum capacity to 50 for each of the two available vehicles. Figure 2b shows the interval model: eight saturated routes are extracted. For example, the first saturated route \(T_1=(1, 2, 3)\) has travel time \(\textsc {T}(T_1)=70\) and total demand \(\textsc {D}(T_1)=50\); customer 4 cannot be included in this route since doing so would violate both the \(T\)-constraint and the \(C\)-constraint.

Fig. 2
figure 2

The extraction of saturated subtours from a giant tour. a A split problem, b the weighted interval model

Finally, solving the KPC determines, among the n extracted feasible routes, the m routes that maximize the total profit of the solution; these routes constitute the result of our split procedure.

3.4.2 Local search giant tour space

During the exploration of the giant tours space, our ELS algorithm applies two local search operators to improve the quality of the sequence: the swap operator and the construction/destruction operator. At each iteration of ELS, the two operators are applied to the giant tour in random order, and the sequence is then evaluated using the optimal split procedure to extract the corresponding solution. If at least one of the operators improves the profit of the extracted solution, the local search reiterates on the new sequence; otherwise, when both operators fail to improve the quality of the sequence, the local search stops. The two operators work as follows:

  • swap operator: evaluates the sequences obtained by changing the positions of customers in the giant tour. To reduce the computational time, we limit the number of moved customers to at most 10, chosen randomly from the sequence. Each selected customer is removed from the sequence and reinserted at every other position, and each resulting sequence is evaluated by extracting its optimal solution. If the resulting solution is improved, the sequence is replaced by the new one and the local search phase reiterates; otherwise a new pair of customers is considered for the swap.

  • construction/destruction operator: first, a CTOP solution is extracted from the giant tour using our optimal split procedure. The operator then removes some customers from the routes and tries to insert unrouted customers into the solution, using the two adaptive destruction and construction mechanisms described in Sect. 3.2.1.

The extracted solution derived from each modified sequence is then improved using the local searches of the routes search space as described in Sect. 3.3.

4 Computational experiments

Our VSS algorithm is coded in C++ and compiled in a Linux environment using GNU GCC. Computational experiments were carried out on an Intel Xeon 2.6 GHz processor. In the following sections, we describe the experiments performed to tune the parameters of our algorithm, examine the effect of each component, and finally give a detailed comparison with the literature on all instances of the CTOP benchmark.

4.1 Benchmark of CTOP

We tested our algorithm on the benchmark of CTOP composed of two sets of instances.

  • The first dataset was proposed by Archetti et al. (2009) and was derived from ten instances of the Capacitated VRP benchmark (Christofides et al. 1979). The number of customers in each instance ranges between 50 and 99. For each customer c, the profit \(p_c\) is set to \((0.5 + h)*d_c\), where \(d_c\) is the customer's demand and h is a random number drawn uniformly from the interval [0, 1]. By considering different combinations of the maximum travel time \(T_{max}\), the vehicle capacity Q and the fleet size m, 13 subsets of 10 CTOP instances each were generated.

  • The second dataset was introduced by Tarantilis et al. (2013). It was derived from ten instances of the Periodic VRP benchmark (Pirkwieser and Raidl 2010) using the same technique as Archetti et al. (2009). All instances in this dataset are large, with the number of customers ranging from 337 to 577. In total, a new large-scale benchmark of 130 instances was proposed for CTOP.

The characteristics of the instances are shown in Table 1, where we present for each subset of instances, the number of available vehicles m, the capacity limit Q and the maximum travel time \(T_{max}\) for each vehicle.

Table 1 Benchmark of CTOP

For convenience, we represent each instance with a label “\(base-n-m-Q-T_{max}\)”, where base is the name of the original instance taken from the VRP benchmark, n the number of customers, m the number of vehicles, Q the capacity of the vehicles and \(T_{max}\) the maximum travel time of each vehicle.

4.2 Protocol and performance metrics

Since VSS is a randomized search method, we applied the same protocol used in Dang et al. (2013) and Zhang et al. (2013), where we solved each instance ten times and we retained the best result obtained among the ten executions, denoted by \(Z_{max}\).

To evaluate all methods performed for CTOP, we denote the best result obtained in the literature by \(Z_{best}\), and we define the relative percentage error (RPE) as follows: \(RPE = 100*\frac{Z_{best}-Z_{max}}{Z_{best}}\).

The running time of the methods is evaluated with the Time To Best (TTB) metric, as in Archetti et al. (2009), Tarantilis et al. (2013) and Luo et al. (2013). TTB is defined as the CPU time required to reach the best solution within one run.
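The protocol and the RPE metric can be sketched as follows; the run profits below are illustrative numbers, not benchmark results:

```python
def rpe(z_best, z_max):
    """Relative percentage error of Z_max against the best known value Z_best.
    Negative values mean the best known result was improved."""
    return 100.0 * (z_best - z_max) / z_best

# Ten independent runs of the randomized method; Z_max keeps the best one.
runs = [980, 1000, 995, 990, 1000, 985, 970, 1000, 999, 992]  # illustrative
z_max = max(runs)
assert rpe(1000, z_max) == 0.0   # the best known result is matched
assert rpe(1000, 1005) < 0       # an improvement yields a negative RPE
```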

4.3 Parameter tuning

Our proposed method has three main parameters that must be carefully chosen to obtain the best performance from our VSS framework, and two further parameters to be set in the SA component. We carried out the parameter-tuning experiments on a subset of 5 instances chosen randomly among the large instances of the CTOP benchmark: “4-481-8-75-100”, “1-337-6-150-200”, “3-433-6-150-200”, “8-433-7-200-400” and “2-385-6-350-713”.

4.3.1 VSS parameters

Starting with our VSS framework, three parameters should be carefully tuned to obtain the best results for our overall algorithm. These parameters are the number of initial solutions \(n_{init}\) given to GRASP, the number of child solutions \(n_{child}\) and the number of iterations \(n_{els}\) of the evolutionary local search. Several tests were launched to determine the values of these parameters offering the best compromise between the quality of the obtained solutions and the computational time. We therefore built a Pareto analysis over different combinations of \(n_{init}\), \(n_{child}\) and \(n_{els}\), ranging from (5, 5, 5) up to (15, 15, 15) with each parameter taking a value in \(\{5, 10, 15\}\), leading to a total of 27 tests. In these tests, the two other parameters \(Temp_0\) and k were set based on the studies of Afifi et al. (2016). The results of the parameter combinations are depicted in Fig. 3, where we plot for each combination the average RPE and the average TTB of the solutions obtained on the 5 tested instances.

Fig. 3
figure 3

Pareto front of results obtained with different parameters combinations

Based on the results in Fig. 3, we chose the configuration that gives the best compromise between solution quality and computational time, represented by the point covering the largest volume in the objective space. The resulting parameters are: \(n_{init} = 5\), \(n_{child} = 10\) and \(n_{els} = 10\). This configuration is used in our VSS framework for all remaining tests.
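The selection step described above can be illustrated by filtering the non-dominated (average RPE, average TTB) pairs, both to be minimized; the values below are illustrative, not the measured ones:

```python
def pareto_front(points):
    """Return the non-dominated (avg_RPE, avg_TTB) pairs, both minimized:
    a point is kept if no other point is at least as good on both objectives."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

# Illustrative (avg RPE %, avg TTB s) for four parameter combinations.
pts = [(0.10, 120.0), (0.05, 300.0), (0.20, 100.0), (0.15, 150.0)]
front = pareto_front(pts)
assert (0.15, 150.0) not in front   # dominated by (0.10, 120.0)
assert len(front) == 3
```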

4.3.2 Simulated annealing parameters

The objective of applying the SA algorithm in our approach is to accept non-improving solutions at the beginning of the solution process and to progressively reject them after several iterations. For this reason, the initial temperature \(Temp_0\) is calculated so that the probability of accepting a non-improving solution at the beginning of each GRASP iteration is sufficiently high; this probability is set to 0.95. During AIDCH, we record the maximum and minimum profit values obtained, denoted by \(score_{max}\) and \(score_{min}\) respectively. \(Temp_0\) is then computed from the following expression: \(e^{-\frac{\varDelta _{p_0}}{Temp_0}}=0.95\) (Afifi et al. 2016), where \(\varDelta _{p_0}=score_{max} - score_{min}\).
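Solving the expression above for the temperature gives \(Temp_0 = -\varDelta_{p_0}/\ln(0.95)\); a minimal sketch, with illustrative score values:

```python
import math

def initial_temperature(score_max, score_min, p0=0.95):
    """Solve exp(-(score_max - score_min)/Temp_0) = p0 for Temp_0,
    so a deterioration of Delta_p0 is initially accepted with probability p0."""
    delta = score_max - score_min
    return -delta / math.log(p0)

temp0 = initial_temperature(1200.0, 1000.0)  # illustrative AIDCH scores
# At Temp_0, a deterioration of Delta_p0 = 200 is accepted with probability 0.95.
assert abs(math.exp(-200.0 / temp0) - 0.95) < 1e-9
```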

The second parameter that should be tuned in the SA algorithm is the cooling speed k. In order to choose the best value of k, we performed several tests with values of k chosen from the set \(\{0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.99, 1\}\). Figure 4 shows the evolution of the average RPE with respect to the value of k.

Fig. 4
figure 4

Effect of the k parameter in the VSS-SA performance

The convex shape of the curve in Fig. 4 shows that choosing either a small or an excessively high value of k degrades the performance of the VSS-SA algorithm. When k is small, our SA algorithm may stop accepting non-improving solutions too early, leading to premature convergence to a local optimum. Conversely, a large value of k keeps the probability of accepting non-improving solutions high throughout the search, which may cause excessive diversification. We therefore chose \(k=0.95\), which gave the best average RPE.
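The role of k can be illustrated with a standard geometric cooling schedule (temperature multiplied by k at each step) together with a Metropolis-style acceptance test; the paper does not spell out its exact update rule, so this sketch rests on that assumption:

```python
import math, random

def accept(delta, temp, rng):
    """Metropolis-style test: always take improvements (delta <= 0),
    take a deterioration delta > 0 with probability exp(-delta/temp)."""
    return delta <= 0 or rng.random() < math.exp(-delta / temp)

def cool(temp0, k, iterations):
    """Geometric cooling: after i iterations the temperature is k**i * temp0."""
    temp = temp0
    for _ in range(iterations):
        temp *= k
    return temp

# With k = 0.95 the temperature (hence the acceptance probability of a fixed
# deterioration) decays smoothly; with k = 0.1 it collapses almost immediately.
t_slow = cool(1000.0, 0.95, 50)
t_fast = cool(1000.0, 0.10, 50)
assert t_fast < t_slow < 1000.0
```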

4.4 Components evaluation

In this section, we evaluate the effectiveness of each component of our algorithm. We first tested our algorithm using only the routes search space, then the basic VSS framework, and finally several alternative versions, each of which adds one component, as follows.

  • routes-SS: our global scheme in which only the CTOP routes search space is explored.

  • VSS: the basic version, in which the VSS scheme is launched while considering both the CTOP routes search space and the giant tour search space.

  • VSS-SA: the combination of VSS with the SA component.

  • VSS-Tabu: the combination of VSS with the tabu component.

We tested these versions with the same parameter configuration. Figure 5 summarizes the performance of each method by comparing their average RPE and TTB.

Fig. 5
figure 5

Performance of the combinations of the components of the VSS. a Analysis of the average RPE, b analysis of the time TTB

Based on the results in Fig. 5, the average RPE obtained with the VSS version is much better than that obtained with routes-SS, which leads us to conclude that exploring the two search spaces improves the performance of our VSS algorithm. Finally, incorporating the SA and tabu components into our VSS algorithm markedly improves the quality of the obtained solutions, although these two alternatives consume slightly more computational time.

4.4.1 Comparison with the literature on the first dataset (Archetti et al. 2009)

The exact method proposed by Archetti et al. (2009) was able to find optimal solutions for some of the small instances of the first dataset. These results were later outperformed by the branch-and-price algorithm proposed by Archetti et al. (2013). For the remaining instances, for which no optimal solution is known yet, we compare our results to those obtained by the best heuristic methods proposed in the literature: the VNS algorithm of Archetti et al. (2009), the BiF&F algorithm of Tarantilis et al. (2013) and the ADEPT-RD algorithm of Luo et al. (2013). Table 2 presents the detailed results obtained by these three methods and our two algorithms. In this table, column \(Z_{best}\) reports the best known results from the literature, where values marked with an asterisk are optimal solutions. For each method, column \(Z_{max}\) gives the profit of the best solution found for each instance, column RPE reports the relative percentage error from the best known solution, and column TTB reports the time to best, i.e., the time (in seconds) at which the best solution is obtained. A dash ‘–’ indicates that the corresponding method did not provide a solution for the instance. The improvements achieved by our algorithms VSS-Tabu and VSS-SA are marked in bold. Finally, the last line of the table presents the average TTB and the average RPE for all methods. To compare the CPU performance of the different machines used to test these methods, we used the Super PI protocol recommended by Luo et al. (2013). Super PI evaluates computational power by running on each machine a single-threaded program that computes a given number of digits of \(\pi \). The VNS algorithm of Archetti et al. was run on a personal computer with an Intel Pentium 4 CPU at 2.80 GHz and 1.048 GB of RAM. BiF&F of Tarantilis et al. was run on a 2.83 GHz Intel Core2 Quad PC using a single thread, while Luo et al. mentioned in their paper that their CPU is roughly four times faster than that of Archetti et al. according to the publicly available Super PI benchmark. Our computational experiments were carried out on an Intel Xeon 2.6 GHz processor. According to the Super PI scores, our machine has almost the same CPU speed as the machines used by Tarantilis et al. (2013) and Luo et al. (2013), and is roughly three times faster than the machine used by Archetti et al. (2009).

Table 2 Comparison between our results and those obtained in the literature on the first dataset (Archetti et al. 2009)

The results in Table 2 clearly show the competitiveness of our methods with respect to the best heuristic methods available in the literature. Our two algorithms VSS-Tabu and VSS-SA reach all the best known results in the literature with the smallest TTB. Furthermore, they improve the solution quality on one of the hardest instances of the benchmark, where they both reach a relative percentage error of \(-\,0.0003\%\) with respect to the best known result. More precisely, the table reveals that our algorithms achieve better results than the VNS algorithm, which registered an average RPE of \(0.08\%\). Even accounting for the difference in CPU speed between our machine and the one used by Archetti et al. (2009), the TTB of our algorithms is substantially lower than that of the VNS algorithm. Moreover, our results exceed those of the BiF&F algorithm, which obtained an average RPE of \(0.01\%\). In terms of computational time, the TTB of VSS-Tabu is slightly higher than that of the BiF&F algorithm, but the TTB of VSS-SA is much lower. Furthermore, our algorithms match the solutions obtained by the ADEPT-RD algorithm, i.e., the best known results of the literature; VSS-Tabu outperforms ADEPT-RD in terms of computational time, while VSS-SA reaches these results with a TTB almost three times smaller than that of ADEPT-RD.

4.4.2 Comparison with the literature on the second dataset (Tarantilis et al. 2013)

Tarantilis et al. (2013) proposed two heuristic methods, a slow and a fast version of the Bi-Level Filter-and-Fan method (BiF&F), and tested them on a new set of large-scale instances. Table 3 summarizes the comparison of our VSS-Tabu and VSS-SA algorithms with the fast and slow versions of BiF&F. In this table, column Instance gives the name of the instance and column \(Z_{best}\) the best known solution in the literature. To compare our results with those of the BiF&F-f and BiF&F-s algorithms, the table reports the maximum score obtained by each method (\(Z_{max}\)) and the CPU time in seconds elapsed to identify the best solution (TTB). We mark in bold all the improvements obtained by our algorithms compared to the two versions of the BiF&F algorithm.

Table 3 Comparison between our results and the literature on the second dataset (Tarantilis et al. 2013)

The results in Table 3 demonstrate the competitiveness of our VSS-Tabu and VSS-SA algorithms in terms of both solution quality and computational time. VSS-Tabu reaches the best known results for 119 instances and improves 74 of the 130 instances of the second dataset. Compared to the fast version of the BiF&F algorithm, VSS-Tabu obtains 91 strict improvements, at the price of a higher TTB than BiF&F-f. Compared to the slow version, our VSS-Tabu algorithm outperforms BiF&F-s in both solution quality and computational time: the average RPE of VSS-Tabu is \(-\,0.242\%\), against \(0.02\%\) for the slow BiF&F algorithm, and VSS-Tabu consumes a lower TTB than BiF&F-s.

Moreover, our VSS-SA algorithm reaches an average RPE of \(-\,0.227\%\) with respect to the best known results in the literature. More precisely, it obtains the best results for 124 instances and improves the results of 77 of them. Beyond solution quality, VSS-SA demonstrates its scalability and speed, since it is twice as fast as BiF&F-s.

Table 4 Methods improvements

Table 4 presents the improvements of each method relative to each of the other methods. It shows that our VSS-SA and VSS-Tabu algorithms improve the results of the slow BiF&F on 79 and 76 instances respectively, and those of the fast BiF&F on 96 and 91 instances, while the BiF&F-s algorithm scores only 6 and 11 improvements with respect to our two algorithms, and BiF&F-f only 3 and 4.

Overall, the obtained results demonstrate the efficiency and speed of our two algorithms compared to all other heuristics proposed for CTOP. The two algorithms are also highly competitive with each other: VSS-Tabu obtains 27 improvements over the results of VSS-SA, against 30 improvements for the latter. On the other hand, VSS-Tabu outperforms VSS-SA in terms of average RPE, since it improves the best known results by \(-\,0.242\%\) while VSS-SA obtains an average RPE of \(-\,0.227\%\).

5 Conclusion

The Capacitated Team Orienteering Problem is a new variant of the Vehicle Routing Problem, in which the total profit collected from the visited customers must be maximized while adhering to the resource limitation constraints related to the vehicle capacity and the predefined travel time limit. In this paper, we propose a Variable Space Search algorithm that combines two search spaces to solve CTOP. Our algorithm alternates between the giant tours and the routes search spaces in order to further improve the quality of the obtained solutions. An Adaptive Iterative Destructive/Constructive Heuristic is developed to start the solution process from high-quality solutions. Several local search techniques and perturbation procedures are developed in each search space in order to improve the quality of the initial solutions. Furthermore, integrating a Tabu Search and a Simulated Annealing procedure into our framework improves its efficiency, since these components help the search escape local optima. Computational results on the CTOP benchmark show the effectiveness and competitiveness of our approaches, as we obtain substantial improvements over the other results found in the literature, within a considerably lower computational time.

In order to further improve the performance of our VSS algorithms, additional local searches could be applied in the routes search space and the giant tours search space. Several improvements could also be applied to our AIDCH in order to provide initial solutions of better quality. Future research can extend our methodology and adapt it to other variants of the Vehicle Routing Problem with Profits, such as the Capacitated Profitable Tour Problem (CPTP), whose objective is to maximize the difference between the total collected profit and the traveled distance while respecting the limited capacity. In addition, we will work on developing exact methods in order to solve CTOP to optimality.