1 Introduction

There are many combinatorial problems that look for the best permutation of a set of elements according to a certain criterion. If the optimal permutation is the one which minimizes the cost of the path induced by the permutation, we have the Linear Ordering Problem (LOP) [17–19]. When the optimal permutation is the closest to a given set of permutations and the distances among permutations are measured with the Kendall Tau distance, the problem becomes the Rank Aggregation Problem (RAP), also known as the Kemeny problem. The RAP is a classic problem in combinatorial optimization: some papers in the literature concentrate on giving good algorithms for solving it [1, 3, 10, 11, 21, 22], some concentrate on giving properties of the solution [4, 9] and others compare different strategies for ranking [2, 12]. The LOP and the RAP are closely related, and it can be shown that the RAP reduces to the LOP under a suitable transformation. In this paper we propose a variation of these problems in which the set of elements must be ranked in a cyclic way.

First of all, we point out three applications of this novel rank aggregation in cyclic sequences. In the same way that scheduling with preferences is an application of the RAP [17], cyclic scheduling with preferences is an application of rank aggregation in cyclic sequences. The cyclic scheduling problem is the scheduling problem that appears when the set of tasks is to be repeated, formally speaking, infinitely many times. A second application is the placement of the slices in a pie chart when there exists a matrix of preferences: even though every permutation represents the same fractions, only some of them respect the aggregation of preferences [8]. The third application is the aggregation of preferences in routing problems. This application is related to the so-called Target Visitation Problem, which consists of finding a tour that maximizes the difference between the sum of the met preferences and the total travel cost [15]. Therefore, all the applications of the Target Visitation Problem, such as environmental assessment, combat search and rescue, disaster relief [14] and the delivery of emergency supplies [7], can also be seen as applications of the Rank Aggregation Problem in cyclic sequences.

In this paper we introduce the Rank Aggregation Problem in Cyclic sequences (RAPC) and a variant of it that we call the Permutated Asymmetric Traveling Salesman Problem (\(\rho \)ATSP). On the one hand, the Rank Aggregation Problem in Cyclic sequences consists in finding the cyclic sequence that is at minimum distance from a given set of cyclic sequences, assuming that distances are measured with the Kendall Tau metric. On the other hand, the Permutated Asymmetric Traveling Salesman Problem consists in finding the cyclic sequence with the smallest expected cost with respect to a pre-specified probability distribution. We also show that an optimal solution of the \(\rho \)ATSP is an optimal solution of the RAPC considered in reverse order.

The main contribution of this paper is to provide a compact formulation for the Rank Aggregation Problem in cyclic sequences and a proof of the equivalence between the Rank Aggregation Problem in cyclic sequences and the Linear Ordering Problem. The paper is organized as follows. In Sects. 2 and 3 we introduce the RAPC and the \(\rho \)ATSP, respectively, and we propose compact formulations for them. Section 4 is devoted to establishing the relationships among the LOP, the RAP, the RAPC and the \(\rho \)ATSP.

2 The rank aggregation problem in cyclic sequences

Table 1 Kendall Tau distance matrix, \(\Delta _{3!\times 3!}\)

Let \(G=(V,A)\) be a complete directed graph with arc weight \(c_{ij}\) for each pair \(i,j\in V\), where \(V=\{ 1,\ldots , n\}\) with \(n\ge 3.\) Let S be the set of all permutations of V and \(\sigma \) a permutation. We can define the distance \(d(\sigma _1, \sigma _2)\) as the number of pairwise disagreements between two permutations \(\sigma _1\) and \(\sigma _2\). This distance is known as the Kendall Tau distance.

Definition 1

The Kendall Tau distance between two permutations \(\sigma _1\) and \(\sigma _2\) is given by:

$$\begin{aligned} d(\sigma _1,\sigma _2)= & {} |\{ (i,j): i<j, ((\sigma _1(i)< \sigma _1(j) \wedge \sigma _2(i)> \sigma _2(j) )\nonumber \\&\qquad \vee ( \sigma _1(i) > \sigma _1(j) \wedge \sigma _2(i) < \sigma _2(j))) \} | \end{aligned}$$

where \(\sigma _1(i)\) and \(\sigma _2(i)\) are the positions of element i in \(\sigma _1\) and \(\sigma _2\), respectively.

The Kendall Tau distance is a metric on permutations that counts the number of pairwise disagreements between two ranking lists. The larger the distance, the more different the permutations are. For instance, with three elements, the distances from the permutation 123 to 132, 231 and 321 are 1, 2 and 3, respectively. The Kendall Tau distances between all the permutations of three elements are shown in Table 1. In the following, we call \(\Delta _{n!\times n!}=(\delta _{rs})\), with \(\delta _{rs}=d(r,s)\) for all \(r,s\in S\), the Kendall Tau distance matrix for the permutations of n elements.
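To make the definition concrete, the following Python sketch (the helper name kendall_tau is ours, not part of the formulation) counts pairwise disagreements directly; it reproduces the distances 1, 2 and 3 above and the matrix of Table 1.

```python
from itertools import combinations, permutations

def kendall_tau(sigma1, sigma2):
    """Number of pairs of elements ranked in opposite order by the two permutations."""
    pos1 = {e: p for p, e in enumerate(sigma1)}   # position of each element in sigma1
    pos2 = {e: p for p, e in enumerate(sigma2)}   # position of each element in sigma2
    return sum(1 for i, j in combinations(sigma1, 2)
               if (pos1[i] - pos1[j]) * (pos2[i] - pos2[j]) < 0)

# Distances from 123 to 132, 231 and 321 (cf. Table 1)
print([kendall_tau((1, 2, 3), s) for s in [(1, 3, 2), (2, 3, 1), (3, 2, 1)]])   # [1, 2, 3]

# Full Kendall Tau distance matrix for the 3! permutations of three elements (Table 1)
S3 = sorted(permutations((1, 2, 3)))
Delta = [[kendall_tau(r, s) for s in S3] for r in S3]
```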

We define a cyclic ordering of a finite set V to be a permutation \(\sigma \in S\) with exactly one orbit. Cyclic orderings split the set of linear orderings S into a set of equivalence classes. In analogy with Definition 1, we define the Kendall Tau distance between two cyclic sequences as the Kendall Tau distance between the representatives of the corresponding equivalence classes. We denote by R the set of equivalence classes induced by S, where S is the set of all permutations of V (any \(\rho \in R\) represents a Hamiltonian tour in V). We denote by C(r) the aggregation weight of the cyclic sequence \(r\in R\) and by \(\rho \in R\) a generic element of R.
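A cyclic sequence can be handled through the member of its equivalence class that starts at node 1. The minimal sketch below (our own helper names, reusing the kendall_tau function from the previous sketch) illustrates this representation.

```python
def cyclic_representative(tour):
    """Rotate a cyclic sequence so that node 1 comes first: the class representative."""
    k = tour.index(1)
    return tour[k:] + tour[:k]

def cyclic_kendall_tau(tour1, tour2):
    """Kendall Tau distance between two cyclic sequences via their representatives."""
    return kendall_tau(cyclic_representative(tour1), cyclic_representative(tour2))

# 2341 and 1234 denote the same cyclic sequence, so their distance is 0
print(cyclic_kendall_tau((2, 3, 4, 1), (1, 2, 3, 4)))   # 0
print(cyclic_kendall_tau((2, 3, 4, 1), (1, 3, 4, 2)))   # 2, the distance between 1234 and 1342
```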

Based on the above elements, we define the Rank Aggregation problem in cyclic sequences as the problem of finding the cyclic sequence closest to all cyclic sequences in the complete directed graph \(G=(V,A)\):

$$\begin{aligned} \text {(RAPC)}\,\, \arg \min _{\rho }&\sum _{r\in R} C(r) d(\rho ,r). \end{aligned}$$
(1)

In order to solve the RAPC exactly, we propose a compact formulation. Let us introduce the following set of binary variables, called the z-family:

$$\begin{aligned} z_{ij}=\left\{ \begin{array}{l@{\quad }l} 1 &{}\quad \text {if }j\text { is visited before }i\\ 0 &{}\quad \text {otherwise}\\ \end{array}\right. \end{aligned}$$

for all \(i,j\in V\) \(i\ne j.\)

If \(V=\{ 1,2,3,4\},\) the cyclic sequences 1234 and 1342 are associated with the solutions

$$\begin{aligned} z_{ij}=\left( \begin{array}{l@{\quad }l@{\quad }l@{\quad }l} - &{} 0 &{} 0 &{} 0\\ 1 &{} - &{} 0 &{} 0\\ 1 &{} 1 &{} - &{} 0\\ 1&{} 1 &{} 1 &{} -\\ \end{array}\right) \quad \text {and} \quad \,\, z_{ij}=\left( \begin{array}{l@{\quad }l@{\quad }l@{\quad }l} - &{} 0 &{} 0 &{} 0\\ 1 &{} - &{} 1 &{} 1\\ 1 &{} 0 &{} - &{} 0\\ 1&{} 0 &{} 1 &{} -\\ \end{array}\right) . \end{aligned}$$

respectively.
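The correspondence between a cyclic sequence and its z-variables can be sketched as follows (z_matrix is a small helper of ours, not part of the formulation); it reproduces the second matrix above.

```python
def z_matrix(tour):
    """z[i][j] = 1 if node j is visited before node i in the cyclic sequence."""
    pos = {node: p for p, node in enumerate(tour)}
    n = len(tour)
    return [[None if i == j else int(pos[j] < pos[i])
             for j in range(1, n + 1)] for i in range(1, n + 1)]

for row in z_matrix((1, 3, 4, 2)):   # the cyclic sequence 1342
    print(row)
# [None, 0, 0, 0]
# [1, None, 1, 1]
# [1, 0, None, 0]
# [1, 0, 1, None]
```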

The first contribution of this paper gives an explicit expression for the objective function of RAPC. Later, we shall substantially simplify it by using properties of the z-variables.

Proposition 1

The objective function of RAPC is

$$\begin{aligned} \sum _{i,j\in V-\{ 1\}:i\ne j}c_{ij} \left( (n-2)! z_{ij}+\sum _{k,t\in V-\{i,j,1\}:k<t} \displaystyle \frac{(n-2)!}{2} (z_{kt}+z_{tk})\right. \nonumber \\ +\left. \sum _{k\in V:k\ne i,j,1}\displaystyle \frac{(n-2)!}{2}(z_{ki}+z_{ik}+z_{jk}+z_{kj})\right) \nonumber \\ +\sum _{j\in V-\{ 1\}}c_{1j}\left( (n-2)!\sum _{k\in V-\{1,j\}} z_{jk}+ \sum _{k,t\in V-\{j,1\}:k<t} \displaystyle \frac{(n-2)!}{2} (z_{kt}+z_{tk})\right) \nonumber \\ +\sum _{j\in V-\{ 1\}}c_{j1}\left( (n-2)!\sum _{k\in V-\{1,j\}} z_{kj}+ \sum _{k,t\in V-\{j,1\}:k<t} \displaystyle \frac{(n-2)!}{2} (z_{kt}+z_{tk})\right) \end{aligned}$$
(2)

The idea of the proof is to compute the number of times that each cost \(c_{ij}\) appears in the objective function. For instance, consider a problem with \(n=5\) in which the optimal cyclic sequence is 12345. Cost \(c_{53}\) appears in the permutations 12453, 12534, 14253, 14532, 15324 and 15342, which have disorders 2, 2, 3, 5, 4 and 5, respectively. Thus, the coefficient of \(c_{53}\) in the objective function is 21 (\(2+2+3+5+4+5\)). Moreover, \(21=6z_{53}+3(z_{32}+z_{42}+z_{52}+z_{43}+z_{54})\): six times we add the disorder of visiting node 5 before node 3, three times the disorder of visiting node 3 before node 2, and so on.
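The counting in this example can be checked by brute force, reusing the kendall_tau helper from the earlier sketch (has_arc is our own auxiliary function): the code enumerates the \((n-2)!\) cyclic sequences starting at 1 that use the arc (5, 3) and adds up their distances to 12345.

```python
from itertools import permutations

def has_arc(tour, i, j):
    """True if the successor of node i in the cyclic sequence is node j."""
    k = tour.index(i)
    return tour[(k + 1) % len(tour)] == j

rho = (1, 2, 3, 4, 5)                                    # the cyclic sequence of the example
tours = [(1,) + p for p in permutations((2, 3, 4, 5))    # tours starting at node 1 ...
         if has_arc((1,) + p, 5, 3)]                     # ... that use the arc (5, 3)
print(len(tours), sum(kendall_tau(rho, t) for t in tours))   # 6 21
```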

Proof of Proposition 1

Since all the cyclic sequences start at node 1, we distinguish between \(c_{ij}\) with \(i,j\in V{\setminus }\{1\},\) \(i\ne j,\) and \(c_{1j}\) or \(c_{j1}.\) We say that an ordered set of two nodes \(\{i,j\}\) incurs in disorder if j is visited before i in the solution. Recall that the number of nodes is greater than 2, i.e., \(n\ge 3\).

We distinguish three cases:

  • \(i,j\in V{\setminus }\{ 1\},\) \(i\ne j.\) \(c_{ij}\) is added to the objective function when the arc (i, j) belongs to the cyclic sequence and either \(\{i,j\}\) itself incurs in disorder or any of the sets \(\{k,i\},\{i,k\},\{k,j\},\{j,k\}\) for \(k\in V-\{ 1,i,j\}\) or \(\{k,t\},\{t,k\}\) for \(k,t\in V-\{ 1,i,j\},\) \(k\ne t,\) incurs in disorder. Therefore, we need to compute: (i) the number of permutations to which the arc (i, j) belongs; (ii) how many permutations among them visit k before and after i and before and after j for all \(k\in V-\{ 1,i,j\}\); (iii) how many permutations among them visit k before t and vice versa with \(k,t\in V-\{ 1,i,j\},\) \(k\ne t.\)

  • The arc (i, j) belongs to \((n-2)!\) permutations starting at 1: there are \(n-2\) positions for i if it must be followed by j. Once i and j are assigned, the other \(n-3\) positions can be sorted anyhow, thus \((n-2)\times (n-3)!=(n-2)!.\) The first addend of the coefficient of \(c_{ij}\) is \((n-2)!z_{ij}.\)

  • By symmetry, in half of the \((n-2)!\) permutations having the arc (i, j), k is visited before t and in the other half t is visited before k, for \(k,t\in V-\{ 1,i,j\},\) \(k\ne t.\) The second addend of the coefficient of \(c_{ij}\) is

    $$\begin{aligned} \sum _{k,t\in V-\{i,j,1\}:k<t} \displaystyle \frac{(n-2)!}{2} (z_{kt}+z_{tk}) \end{aligned}$$
  • By symmetry, in half of the \((n-2)!\) permutations having the arc (i, j), k is visited before i and in the other half k is visited after i, for \(k\in V-\{ 1,i,j\}.\) Analogously for the number of times that k is visited before and after j. The third addend of the coefficient of \(c_{ij}\) is

    $$\begin{aligned} \sum _{k\in V:k\ne i,j,1}\displaystyle \frac{(n-2)!}{2}(z_{ki}+z_{ik}+z_{jk}+z_{kj}). \end{aligned}$$
  • \(j\in V{\setminus }\{ 1\}.\) \(c_{1j}\) is added to the objective function when the arc (1, j) belongs to the cyclic sequence and any of the sets \(\{j,k\}\) for \(k\in V-\{ 1,j\}\) or \(\{k,t\},\{t,k\}\) for \(k,t\in V-\{ 1,j\},\) \(k\ne t,\) incurs in disorder. This case is similar to that of \(c_{ij}\) with \(i,j\ne 1,\) but with two differences: (i) since 1 is always the first node, the set \(\{ 1,j\}\) never incurs itself in disorder; (ii) the number of permutations among those with the arc (1, j) in which some \(k\in V-\{ 1,j\}\) is visited before j is zero. Therefore, we need to compute the number of permutations to which the arc (1, j) belongs and how many among them visit k before t and vice versa with \(k,t\in V-\{ 1,j\},\) \(k\ne t.\) It is easy to observe that the number of permutations with the arc (1, j) is \((n-2)!\) and that half of them visit k before t.

  • \(c_{j1}\) for \(j>1.\) Analogous to the previous case.

\(\square \)

Applying Proposition 1, we can formulate the RAPC as:

$$\begin{aligned} \text {(RAPC)}\,\, \min&(2) \nonumber \\ \text {s.t. }&z_{ij}+z_{ji}=1 \quad \forall i,j\in V: i< j \end{aligned}$$
(3)
$$\begin{aligned}&z_{ji}+z_{kj}+z_{ik}\ge 1 \quad \forall i,j,k\in V:i\ne j, j\ne k, i\ne k \end{aligned}$$
(4)
$$\begin{aligned}&z_{j1}=1 \quad \forall j\in V-\{ 1\} \end{aligned}$$
(5)
$$\begin{aligned}&z_{ij}\in \{ 0,1\} \quad \forall i,j\in V:i\ne j \end{aligned}$$
(6)

Constraint (3) translates the fact that any element i is ranked either before or after j, but both cases cannot occur simultaneously. It implies that both values \(z_{ji}=0\) and \(z_{ij}=1\) represent that element j is ranked before element i. Constraint (4) describes the transitive relationship between the positions of three elements in the permutation: if j goes after k and k goes after i (i.e., \(z_{kj}=z_{ik}=0\)), then j must go after i, i.e., \(z_{ji}=1\). Any solution of the system (3), (4), (6) is a permutation of the elements in V: all the nodes are relatively sorted and transitivity holds. Constraints (5) force node 1 to be the first node of any cyclic sequence. Indeed, the set of constraints (3), (4), (6) is the set of constraints used for the RAP in the literature [1, 9].
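A brute-force check of this characterization (our own sketch, under the formulation above): for \(n=4\), enumerating all 0-1 assignments of the z-variables and keeping those that satisfy (3)-(5) leaves exactly \((n-1)!=6\) solutions, one per cyclic sequence starting at node 1.

```python
from itertools import product

n = 4
V = range(1, n + 1)
pairs = [(i, j) for i in V for j in V if i != j]

def feasible(z):
    antisymmetry = all(z[i, j] + z[j, i] == 1 for i in V for j in V if i < j)      # (3)
    transitivity = all(z[j, i] + z[k, j] + z[i, k] >= 1 for i in V for j in V
                       for k in V if len({i, j, k}) == 3)                          # (4)
    node1_first = all(z[j, 1] == 1 for j in V if j != 1)                           # (5)
    return antisymmetry and transitivity and node1_first

solutions = []
for values in product((0, 1), repeat=len(pairs)):                                  # (6)
    z = dict(zip(pairs, values))
    if feasible(z):
        solutions.append(z)
print(len(solutions))   # 6 = (n - 1)! cyclic sequences starting at node 1
```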

Since any solution of the RAPC satisfies (3), we can simplify the objective function. In particular, it reduces to the following expression:

$$\begin{aligned} (n-2)!\left( \sum _{i,j\in V-\{ 1\}:i\ne j}c_{ij} \left( z_{ij}+\sum _{k,t\in V-\{i,j,1\}:k<t} 0.5 +\sum _{k\in V:k\ne i,j,1} 1\right) \right. \\ +\sum _{j\in V-\{ 1\}}c_{1j}\left( \sum _{k\in V-\{1,j\}} z_{jk}+ \sum _{k,t\in V-\{j,1\}:k<t} 0.5\right) \\ +\left. \sum _{j\in V-\{ 1\}}c_{j1}\left( \sum _{k\in V-\{1,j\}} z_{kj}+ \sum _{k,t\in V-\{j,1\}:k<t} 0.5\right) \right) \end{aligned}$$

Ignoring the factor \((n-2)!\) and the constant, those cyclic sequences which minimize the above expression are the same cyclic sequences which minimize

$$\begin{aligned} (\text {RAPC}^*)\,\, \min \sum _{i,j\in V-\{ 1\} :j\ne i} \,\, (c_{ij}+c_{1i}+c_{j1} )z_{ij}.\\ \text {s.t.} \, (3), (4), (5), (6) \nonumber \end{aligned}$$
(7)

Therefore, it is sufficient to consider the objective function given in (7) to obtain the optimal solution of the RAPC.

In order to recover the optimal value of the RAPC in (1), the constant and the factor must also be taken into account:

$$\begin{aligned}&v^*(\text {RAPC})= \frac{(n-2)!}{2}\left( 2\, v^*(\text {RAPC}^*) \right. \nonumber \\&\quad +\left. \left( \left( {\begin{array}{c}n-3\\ 2\end{array}}\right) +2(n-3)\right) \sum _{i,j\in V-\{ 1\}:i\ne j}c_{ij}+ \left( {\begin{array}{c}n-2\\ 2\end{array}}\right) \sum _{j\in V-\{ 1\}}(c_{j1}+c_{1j})\right) \nonumber \\ \end{aligned}$$
(8)

where \(v^*\)(RAPC\(^*\)) is the optimal value of (7).

Even though the RAPC is closely related to the LOP, one of the contributions of this paper is to prove that the optimal value of the RAPC is exactly given by (8). Moreover, since solving the RAPC implies solving the LOP (see Sect. 4), the RAPC is NP-hard.
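The relation between (1), (7) and (8) can be verified numerically for small n. The sketch below (our own helpers, reusing kendall_tau from the earlier sketch) evaluates both sides on every cyclic sequence of a random instance with \(n=5\).

```python
import math
import random
from itertools import permutations

random.seed(0)
n = 5
V = list(range(1, n + 1))
c = {(i, j): random.randint(1, 9) for i in V for j in V if i != j}
tours = [(1,) + p for p in permutations(V[1:])]          # cyclic sequences starting at 1

def C(t):                                                # aggregation weight: cost of the tour
    return sum(c[t[k], t[(k + 1) % n]] for k in range(n))

def rapc(rho):                                           # objective (1)
    return sum(C(r) * kendall_tau(rho, r) for r in tours)

def rapc_star(rho):                                      # objective (7)
    pos = {e: k for k, e in enumerate(rho)}
    return sum(c[i, j] + c[1, i] + c[j, 1]
               for i in V[1:] for j in V[1:] if i != j and pos[j] < pos[i])

constant = math.factorial(n - 2) // 2 * (
    (math.comb(n - 3, 2) + 2 * (n - 3)) * sum(c[i, j] for i in V[1:] for j in V[1:] if i != j)
    + math.comb(n - 2, 2) * sum(c[j, 1] + c[1, j] for j in V[1:]))

# (1) equals (n-2)! times (7) plus the constant of (8), for every cyclic sequence
assert all(rapc(rho) == math.factorial(n - 2) * rapc_star(rho) + constant for rho in tours)
print(min(tours, key=rapc), min(rapc(rho) for rho in tours))
```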

3 The permutated asymmetric traveling salesman problem

Let us consider the situation in which the optimal tour obtained when solving the Asymmetric Traveling Salesman Problem (ATSP) needs to be modified because of unexpected events. We allow the ATSP to admit a source of uncertainty and we wish to hedge against it by assuming a robust alternative criterion. If our aim is that tours with large disagreements are less likely to occur than tours with small disagreements, we can define the probability that a tour occurs as

$$\begin{aligned} p(\rho ,r)=\frac{\mathcal {B}(n-1)-d(\rho , r)}{\mathcal {A}(n-1)} \quad \forall \rho , r \in R \end{aligned}$$

where \(\mathcal {A}(n)\) is the total number of pairwise disagreements between a fixed permutation and all the permutations of n elements, and \(\mathcal {B}(n)\) is the maximum number of disagreements that may occur between any pair of permutations of n elements. Both values are known to be

$$\begin{aligned} \mathcal {A}(n)=n!n(n-1)/4,\quad \mathcal {B}(n)=n(n-1)/2. \end{aligned}$$

All the probabilities defined above are non-negative because \(\mathcal {B}(n)\) is the maximum number of possible disagreements, and each row of the probability matrix sums to 1:

$$\begin{aligned} \sum _{r\in R}p(\rho ,r)=\sum _{r\in R} \frac{\mathcal {B}(n-1)}{\mathcal {A}(n-1)} -\sum _{r\in R} \frac{d(\rho ,r)}{\mathcal {A}(n-1)} = (n-1)!\frac{\mathcal {B}(n-1)}{\mathcal {A}(n-1)} -1 = 1 \end{aligned}$$

In our example, the probabilities are those in Table 2, which follow from applying the formula \((3-\delta _{rs})/9\) to the disagreements in Table 1.
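The probability matrix of Table 2 can be reproduced as follows (a sketch reusing the kendall_tau helper; for \(n=4\) tours, \(\mathcal {B}(3)=3\) and \(\mathcal {A}(3)=9\)), and every row indeed sums to 1.

```python
import math
from itertools import permutations

n = 4
A = math.factorial(n - 1) * (n - 1) * (n - 2) // 4       # A(n-1) = 9
B = (n - 1) * (n - 2) // 2                               # B(n-1) = 3
tours = [(1,) + p for p in permutations(range(2, n + 1))]

P = [[(B - kendall_tau(rho, r)) / A for r in tours] for rho in tours]
print(all(abs(sum(row) - 1.0) < 1e-12 for row in P))     # True
```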

Table 2 Probability matrix; columns are the potential tours which start at 1 and rows are all the possible permutations

Now, we can define the Permutated Asymmetric Traveling Salesman Problem as the problem of finding the optimal tour \(\rho \) in R for the following aggregation function:

$$\begin{aligned} (\rho \text {ATSP})\,\, \arg \min _{\rho }&\sum _{r\in R} C(r) p(\rho ,r). \end{aligned}$$
(9)

According to the definition of the probabilities, the \(\rho \)ATSP is equivalent to the maximization version of the RAPC, obtained by changing minimization to maximization in (1):

$$\begin{aligned} {(\text {RAPC}_{max})}\,\, \arg \max _{\rho }&\sum _{r\in R} C(r) d(\rho ,r) \end{aligned}$$
(10)

In fact, if we denote by \(v^*(\rho ATSP)\) the optimal objective value of \(\rho \)ATSP, \(v^*(\rho ATSP)=2\sum _{r\in R} C(r) /(n-1)! - v^*(\textit{RAPC}_{max})/\mathcal {A}(n-1)\).

Table 3 Left: asymmetric cost matrix, right: route costs and expected tour costs

Table 3 shows the objective value of ATSP and \(\rho \)ATSP for all the possible tours for the given cost matrix. In our example, \(v^*(\rho ATSP)=6.11\), \(\sum _{r\in R}C(r)=40\) and \(v^*(\text {{ RAPC}}_{max})=65\).

It is easy to see that an optimal tour for the RAPC is an optimal tour for RAPC\(_{max}\) in the reverse direction. Given a tour \(r\in R\) and the tour in the reverse direction, \(\bar{r},\) it always holds that

$$\begin{aligned} \delta _{rs}+\delta _{\bar{r}s}=\mathcal {B}(n-1). \end{aligned}$$
(11)

From Table 1 we can observe that \(\delta _{1234,s}+\delta _{1432,s}=3\) for all \(s\in \{ 1234, 1243, 1324, 1342, 1423, 1432\};\) analogously, \(\delta _{1243,s}+\delta _{1342,s}=3\) and \(\delta _{1324,s}+\delta _{1423,s}=3\) for all \(s\in \{ 1234, 1243, 1324, 1342, 1423, 1432\}.\) Thus, the reverse tour of an optimal tour for RAPC\(_{max}\) is an optimal tour for the RAPC and vice versa. Moreover, \(v^*(\textit{RAPC})=\mathcal {B}(n-1) \sum _{r\in R} C(r) -v^*(\textit{RAPC}_{max})\) and \(v^*(\textit{RAPC})/\mathcal {A}(n-1)=v^*(\rho \textit{ATSP}).\)
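These identities can be verified by brute force on a small random instance; the sketch below (our own helpers, with kendall_tau as before) checks (11) and the relations between \(v^*(\textit{RAPC})\), \(v^*(\textit{RAPC}_{max})\) and \(v^*(\rho \textit{ATSP})\).

```python
import math
import random
from itertools import permutations

random.seed(1)
n = 4
V = range(1, n + 1)
c = {(i, j): random.randint(1, 9) for i in V for j in V if i != j}
A = math.factorial(n - 1) * (n - 1) * (n - 2) // 4
B = (n - 1) * (n - 2) // 2
tours = [(1,) + p for p in permutations(range(2, n + 1))]

def C(t):
    return sum(c[t[k], t[(k + 1) % n]] for k in range(n))

def reverse(t):                                   # the same cyclic sequence, reverse direction
    return (1,) + tuple(reversed(t[1:]))

# Equation (11): the distances from a tour and from its reverse always add up to B(n-1)
assert all(kendall_tau(r, s) + kendall_tau(reverse(r), s) == B for r in tours for s in tours)

rapc     = min(sum(C(r) * kendall_tau(rho, r) for r in tours) for rho in tours)
rapc_max = max(sum(C(r) * kendall_tau(rho, r) for r in tours) for rho in tours)
rho_atsp = min(sum(C(r) * (B - kendall_tau(rho, r)) / A for r in tours) for rho in tours)

assert rapc == B * sum(C(r) for r in tours) - rapc_max   # v*(RAPC) = B(n-1) sum C(r) - v*(RAPC_max)
assert abs(rho_atsp - rapc / A) < 1e-12                  # v*(rhoATSP) = v*(RAPC) / A(n-1)
```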

On the other hand, we would like to emphasize that optimal solutions to the ATSP are not related to optimal solutions to the \(\rho \)ATSP (see Table 3, where \(v^*(\textit{ATSP})=5\) and \(v^*(\rho \textit{ATSP})=6.11\)). If we compute the objective value of the \(\rho \)ATSP for the optimal solution of the ATSP, it may differ considerably from the optimal value of the \(\rho \)ATSP. In order to illustrate this fact, we have solved some of the ATSP instances in TSPLIB [20] and the results are shown in Table 4: \(v(\mathrm{ATSP})\) is the objective value in the \(\rho \)ATSP of the optimal solution of the ATSP and \(\mathrm{\%diff}\) stands for

$$\begin{aligned} \mathrm{\%diff}=100(v(\mathrm{ATSP})-v^*(\rho \mathrm{ATSP}))/v^*(\rho \mathrm{ATSP}). \end{aligned}$$

For the 22 instances presented in the table, the percentage difference ranges from 0 to 9.82 \(\%\) (the 0 \(\%\) corresponds to a special case in which both solutions, the one for the \(\rho \)ATSP and the one for the ATSP, coincide). The average difference for the ftv instances is around 6 \(\%\) and over all the instances presented in the table it is 4.8 \(\%\). These differences indicate that the solutions provided by the ATSP model are not adequate as solutions to the \(\rho \)ATSP.
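Continuing the sketch above, the quantity \(\mathrm{\%diff}\) can be computed by enumeration on a toy instance (for the TSPLIB instances of Table 4 the \(\rho \)ATSP values were, of course, not obtained by enumeration):

```python
def rho_atsp_value(rho):                               # rho-ATSP objective (9) of a given tour
    return sum(C(r) * (B - kendall_tau(rho, r)) / A for r in tours)

atsp_tour = min(tours, key=C)                          # optimal ATSP tour of the toy instance
v_atsp = rho_atsp_value(atsp_tour)                     # its value under the rho-ATSP objective
v_star = min(rho_atsp_value(rho) for rho in tours)     # optimal rho-ATSP value
print(100 * (v_atsp - v_star) / v_star)                # %diff
```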

Table 4 Percentage of deviation for the 22 ATSP instances in the TSPLIB [20]

4 Relationships among the different problems

In this section we prove that we can obtain the optimal solution of the RAPC from the optimal solution of the LOP or from the optimal solution of the \(\rho \)ATSP, and vice versa. Since the LOP and the RAP were also proved to be equivalent in [6], we conclude that the LOP, the RAP, the RAPC and the \(\rho \)ATSP are all equivalent.

Theorem 1

LOP, RAPC and \(\rho \)ATSP are equivalent.

Proof of Theorem 1

Let \(\hat{G}=(\hat{V},\hat{A})\) be a complete directed graph with arc weight \(w_{ij}\) for each pair \(i,j \in \hat{V}.\) Let \(x_{ij}\) be a binary variable which takes value 1 if and only if i goes before j, for all \(i,j\in \hat{V}.\) The IP formulation of the LOP [17] is:

$$\begin{aligned} \text {(LOP)}\,\, \max&\sum _{(i,j)\in \hat{A}} w_{ij}x_{ij} \\ \hbox {s.t. }&x_{ij}+x_{ji}=1 \quad \forall i,j\in \hat{V}: i< j \\&x_{ij}+x_{jk}+x_{ki}\le 2 \quad \forall i,j,k\in \hat{V}:i\ne j, j\ne k, i\ne k \\&x_{ij}\in \{ 0,1\} \quad \forall i,j\in \hat{V}:i\ne j \end{aligned}$$
  • LOP and RAPC are equivalent:

    (i)

      \(\sigma \) is an optimal permutation for the LOP in the complete graph \(\hat{G}\) with nodes \(\hat{V}=\{ i_1,\ldots ,i_n\}\) and weights \(w_{ij}\) iff the tour \(\rho =(1,\sigma )\) is an optimal tour for the RAPC in the complete graph with nodes \(\{ 1\}\cup \hat{V}\) and weights \(c_{ij}=w_{ij}\) for all \(i,j\in \hat{V}\) and \(c_{j1}=c_{1j}=0\) for all \(j\in \hat{V}.\) As proved in Sect. 2, RAPC and RAPC\(^*\) have the same set of optimal solutions and, by definition, \(x_{ij}=1-z_{ij}\) (equivalently, \(x_{ij}=z_{ji}\)). Then, the optimal solutions for the RAPC in the complete graph with nodes \(\{ 1\}\cup \hat{V}\) and weights \(c_{ij}\) are the solutions of the problem

      $$\begin{aligned} \min&\sum _{i,j\in \hat{V} :j\ne i} \,\, (w_{ij}+0+0 )(1-x_{ij}) \\ \hbox {s.t. }&x_{ij}+x_{ji}=1 \quad \forall i,j\in \{ 1\}\cup \hat{V}: i< j \\&x_{ji}+x_{kj}+x_{ik}\le 2 \quad \forall i,j,k\in \{ 1\}\cup \hat{V}:i\ne j, j\ne k, i\ne k \\&x_{1j}=1 \quad \forall j\in \hat{V} \\&x_{ij}\in \{ 0,1\} \quad \forall i,j\in \{ 1\}\cup \hat{V}:i\ne j \end{aligned}$$

      whose optimal routes start at 1 and are followed by optimal permutations of the LOP in \(\hat{G}.\)

    (ii)

      \(\rho =(1,\sigma )\) is an optimal solution for the RAPC in the complete graph \(\hat{G}\) with nodes \(\hat{V}=\{ 1,\ldots ,n\}\) and weights \(c_{ij}\) iff \(\sigma \) is an optimal solution for the LOP in the complete graph with nodes \(\{ 2,\ldots ,n\}\) and weights \(w_{ij}=c_{ij}+c_{1i}+c_{j1}\) for all \(i,j\in \{ 2,\ldots ,n\} .\) Replacing \(x_{ij}\) by \(1-z_{ij}\) and the weights by their values, the LOP is

      $$\begin{aligned} \max&\sum _{i,j\in \{ 2,\ldots ,n\}:i\ne j} (c_{ij}+c_{1i}+c_{j1})(1-z_{ij}) \\ \hbox {s.t. }&z_{ij}+z_{ji}=1 \quad \forall i,j\in \{ 2,\ldots ,n\}: i< j \\&z_{ij}+z_{jk}+z_{ki}\ge 1 \quad \forall i,j,k\in \{ 2,\ldots ,n\}:i\ne j, j\ne k, i\ne k \\&z_{ij}\in \{ 0,1\} \quad \forall i,j\in \{ 2,\ldots ,n\}:i\ne j. \end{aligned}$$

      Since \(z_{j1}\) does not appear in the objective function, we can add \(z_{j1}=1\) to the set of constraints and also extend the remaining constraints to \(\hat{V}\): \(z_{j1}+z_{1j}=1+0=1\) and \(z_{1j}+z_{jk}+z_{k1}=1+z_{jk}\ge 1\) for all \(j,k \in \{ 2,\ldots ,n\},\) \(j\ne k,\) which gives the optimal solutions of the RAPC in \(\hat{G}.\)

  • RAPC and \(\rho \)ATSP are equivalent: From the discussion in Sect. 3, the optimal solution \(\rho ^*\) of the \(\rho \)ATSP is the same cyclic sequence as the optimal solution of the RAPC but in the reverse direction:

    $$\begin{aligned} \rho ^*= & {} \arg \min _{\rho } \sum _{r\in R} C(r) p(\rho ,r)= \arg \min _{\rho } \sum _{r\in R} C(r) \frac{\mathcal {B}(n-1)-d(\rho , r)}{\mathcal {A}(n-1)}\\= & {} \arg \min _{\rho } \sum _{r\in R} C(r)\frac{d(\bar{\rho }, r)}{\mathcal {A}(n-1)}=\arg \min _{\rho } \sum _{r\in R} C(r)d(\bar{\rho }, r)\\= & {} \arg \min _{\bar{\rho }} \sum _{r\in R} C(r)d(\rho , r). \end{aligned}$$

    The third equality follows from Eq. (11).\(\square \)

Theorem 1 implies that the RAPC and the \(\rho \)ATSP inherit all the properties of the LOP. For instance, for symmetric weight matrices all feasible solutions are optimal, among many other properties.
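As a numerical illustration of the transformation in Theorem 1(ii) (with the weights \(w_{ij}=c_{ij}+c_{1i}+c_{j1}\) used above; the helper names and the brute-force enumeration are ours), the optimal LOP permutation on \(\{2,\ldots ,n\}\), preceded by node 1, attains the optimal RAPC value:

```python
import random
from itertools import permutations

random.seed(2)
n = 5
V = range(1, n + 1)
c = {(i, j): random.randint(1, 9) for i in V for j in V if i != j}
w = {(i, j): c[i, j] + c[1, i] + c[j, 1]                  # LOP weights of Theorem 1(ii)
     for i in V for j in V if i != j and 1 not in (i, j)}
tours = [(1,) + p for p in permutations(range(2, n + 1))]

def C(t):
    return sum(c[t[k], t[(k + 1) % n]] for k in range(n))

def lop(sigma):                                           # LOP objective: w_ij for i before j
    pos = {e: k for k, e in enumerate(sigma)}
    return sum(w[i, j] for (i, j) in w if pos[i] < pos[j])

def rapc(rho):                                            # RAPC objective (1), kendall_tau as before
    return sum(C(r) * kendall_tau(rho, r) for r in tours)

sigma_star = max(permutations(range(2, n + 1)), key=lop)  # optimal LOP permutation
assert rapc((1,) + sigma_star) == min(rapc(rho) for rho in tours)
```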

Since RAP is equivalent to LOP (see [6]), the following Corollary holds.

Corollary 1

LOP, RAP, RAPC and \(\rho \)ATSP are equivalent.

From Corollary 1 it follows that we have introduced two new applications of the LOP, devoted to the aggregation of preferences and to the robustness of cyclic sequences.

One consequence of Theorem 1 and Corollary 1 is that the set of constraints (3), (4), (6) is appropriate for formulating the new problems RAPC and \(\rho \)ATSP, because it constitutes a good formulation of the LOP. From the polyhedral analysis of the LOP in the literature it follows that (3) is a minimal equation system [13], that (4) defines facets [13] and that there are other facets [5, 13, 16] that can be considered in a branch-and-cut algorithm.