1 Introduction

Data uncertainty appears in many optimization problems. In recent decades robust (Aissi et al. 2009; Averbakh 2005; Candia-Véjar et al. 2011; Kouvelis and Yu 2013; Chassein and Goerigk 2016; Kasperski and Zielinski 2016), stochastic (Li and Liu 2016), multiparametric (Oberdieck et al. 2016) and fuzzy programming (Carlsson and Fuller 2012) approaches have been developed to deal with such uncertainties. In this paper we use the robust approach.

There are several robust optimization concepts to choose from. Some examples are (Chassein and Goerigk 2016): classic robustness, absolute or relative regret robustness, adjustable robustness, recoverable robustness, light robustness, soft robustness, lexicographic \(\alpha \)-robustness, recovery-to-optimality, and similarity-based robustness. In this paper we consider classic robustness and absolute or relative regret.

Let P be a combinatorial optimization problem in (\(\mathbf {x}\)) with uncertainty in the cost vector, as follows:

$$\begin{aligned} (P(\mathbf {c}))\;\; \underset{\mathbf {x} \in X}{\min }\;\;\mathbf {c}^t \mathbf {x} \end{aligned}$$

where \(\mathbf {c} \in \varOmega \), \(\varOmega \subseteq \mathbb {R}^n\), \(X \subseteq {\lbrace 0,1 \rbrace }^n\), and X is a nonempty set.

A few words about our notation: if R is an optimization problem then v(R) denotes its optimal value, and \([n] = \lbrace 1,\ldots ,n \rbrace \) denotes a set of indices.

A first approach of robust optimization (the classic approach) is to find a solution that is optimal in the worst case, known as a robust solution, by solving the following problem in (\(\mathbf {x}\)):

$$\begin{aligned} \underset{\mathbf {x} \in X}{\text{ min }}\;\; \underset{\mathbf {c} \in \varOmega }{\text{ max }}\;\;\mathbf {c}^t \mathbf {x}. \end{aligned}$$

Recently a novel approach has been presented that considers a set of k solutions instead of a single one (Buchheim and Kurtz 2016, 2017; Chassein et al. 2019). The problem in (H) to be solved is:

$$\begin{aligned} \underset{H \subseteq X}{\min } \left\{ \underset{\mathbf {c} \in \varOmega }{\max }\;\; \underset{\mathbf {h} \in H}{\min }\;\;\mathbf {c}^t \mathbf {h}:\;\;\vert H \vert = k \right\} . \end{aligned}$$

The idea behind this approach is to find a set of solutions and choose the best one each time a new scenario \(\mathbf {c}\) appears.
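For intuition, the objective of this min max min problem can be evaluated by brute force on tiny instances. The following Python sketch (ours, not taken from the cited works) assumes a finite scenario set standing in for \(\varOmega \) and enumerates all candidate sets H; it is only conceptual, since realistic instances require the algorithms developed below.

```python
from itertools import combinations

def min_max_min(X, scenarios, k):
    """Brute-force value of the k-solution problem:
    min over H (|H| = k) of max over c of min over h in H of c^t h."""
    def cost(c, h):
        return sum(ci * hi for ci, hi in zip(c, h))
    return min(
        max(min(cost(c, h) for h in H) for c in scenarios)
        for H in combinations(X, k)
    )

# Example: min_max_min([(1, 0), (0, 1)], [(3, 1), (1, 3)], 2) returns 1,
# while the same data with k = 1 give 3: a set of two solutions strictly
# improves the worst case here.
```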

As an application, imagine an emergency service designed based on the p-Medians problem (Boffey and Karkazis 1984). Each time the current situation changes, a new set of medians could be computed. However, this may be a hard task. Even if the computational effort is not large, an excessive number of solutions may be unacceptable for human users. Instead, in this new approach a set of solutions is computed once, and then the best one can be chosen in real time from a relatively small set of solutions, yielding better performance than the min max approach.

The regret approach, instead of the classical robust approach, has been considered by many authors (Chassein and Goerigk 2016) as a valid option for the \(k=1\) case in order to take into account the deviation from the optimal values.

An approach of robust optimization that takes into account the absolute deviation from the optimal values is to find a solution with minimum regret, known as an absolute robust solution. The regret of \(\mathbf {x}\) is equal to

$$\begin{aligned} \underset{\mathbf {c} \in \varOmega }{\max }\;\; \mathbf {c}^t \mathbf {x} - v(P(\mathbf {c})) \end{aligned}$$

In this case the problem to be solved is the min max regret problem in (\(\mathbf {x}\)):

$$\begin{aligned} \underset{\mathbf {x} \in X }{\min } \;\;\underset{\mathbf {c} \in \varOmega }{\max }\;\; \mathbf {c}^t \mathbf {x} - v(P(\mathbf {c})) \end{aligned}$$

An approach of robust optimization that takes into account the relative deviation from the optimal values is to find a solution with minimum relative regret, known as a relative robust solution. The relative regret of \(\mathbf {x}\) is equal to

$$\begin{aligned} \underset{\mathbf {c} \in \varOmega }{\max } \frac{\mathbf {c}^t \mathbf {x} - v(P(\mathbf {c}))}{v(P(\mathbf {c}))} \end{aligned}$$

In this case the problem to be solved is the min max relative regret problem in (\(\mathbf {x}\)):

$$\begin{aligned} \underset{\mathbf {x} \in X}{\text{ min }}\;\; \underset{\mathbf {c} \in \varOmega }{\max }\;\; \frac{\mathbf {c}^t \mathbf {x} - v(P(\mathbf {c}))}{v(P(\mathbf {c}))} \end{aligned}$$

To the best of our knowledge the regret approach with \(k > 1\) solutions has not been studied so far. Our main contribution is to close this gap. In this paper, we extend the novel approach presented above by considering a set of k solutions and taking into account the absolute and relative deviations from the optimal values.

Let \(H \subseteq X\) with \(\vert H \vert \ge 1\). Let Q(H) be a problem in (\(\mathbf {c}\)) defined as follows:

$$\begin{aligned} (Q(H))\;\;\underset{\mathbf {c} \in \varOmega }{\max }\;\; \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^t \mathbf {h} - v(P(\mathbf {c})) \end{aligned}$$

The regret of H is equal to v(Q(H)).

Let \(Q_r(H)\) be a problem in (\(\mathbf {c}\)) defined as follows:

$$\begin{aligned} (Q_r(H))\;\;\underset{\mathbf {c} \in \varOmega }{\max }\;\; \frac{\underset{\mathbf {h} \in H }{\min }\;\; \mathbf {c}^t \mathbf {h} - v(P(\mathbf {c}))}{v(P(\mathbf {c})) } \end{aligned}$$

The relative regret of H is equal to \(v(Q_r(H))\).
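Analogously, on tiny instances with an explicit finite scenario set, v(Q(H)) and \(v(Q_r(H))\) can be checked by direct enumeration. A minimal sketch (ours, for illustration only); the relative version assumes \(v(P(\mathbf {c})) > 0\), as required below:

```python
def regret(H, scenarios, X):
    """v(Q(H)) over a finite scenario set: worst-case gap between the
    best solution in H and the true optimum v(P(c))."""
    def cost(c, x):
        return sum(ci * xi for ci, xi in zip(c, x))
    return max(min(cost(c, h) for h in H) - min(cost(c, x) for x in X)
               for c in scenarios)

def relative_regret(H, scenarios, X):
    """v(Q_r(H)): the same gap normalized by v(P(c)); assumes v(P(c)) > 0."""
    def cost(c, x):
        return sum(ci * xi for ci, xi in zip(c, x))
    def v_P(c):
        return min(cost(c, x) for x in X)
    return max((min(cost(c, h) for h in H) - v_P(c)) / v_P(c)
               for c in scenarios)
```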

We are looking for the set with cardinality equal to k and with the minimum regret (minimum relative regret) by solving the min max min robust (relative) regret problems, namely: min set-regret(k) problem (min relative set-regret(k) problem) in (H), defined as follows:

$$\begin{aligned} \text{(MSR(k)) }\;\;\underset{H \subseteq X}{\min } \lbrace v(Q(H)):\vert H \vert= & {} k \rbrace \\ \text{(MrelSR(k)) }\;\;\underset{H \subseteq X}{\min } \lbrace v(Q_r(H)):\vert H \vert= & {} k \rbrace \end{aligned}$$

Note that if we use \(1 \le \vert H \vert \le k\) instead of \(\vert H \vert =k\) we have exactly the same problems.

There are several options to model the uncertainty. Some examples are (Chassein and Goerigk 2016): finite uncertainty, interval uncertainty, bounded uncertainty, ellipsoidal uncertainty and polytopal uncertainty.

Let us consider the interval uncertainty case as follows: let \(\varOmega = \lbrace \mathbf {c}:\mathbf {L} \le \mathbf {c} \le \mathbf {U} \rbrace \), \(\mathbf {L} \in \mathbb {R}^n\), \(\mathbf {U} \in \mathbb {R}^n\) with \(\mathbf {0} \le \mathbf {L} \le \mathbf {U}\). Let us suppose that \(\mathbf {c}^t \mathbf {x} > 0\) for all \(\mathbf {c} \in \varOmega \) and for all \(\mathbf {x} \in X\). Note that \(v(P(\mathbf {c})) > 0\) for all \(\mathbf {c} \in \varOmega \).

For the interval uncertainty case the min max approach (for \(k=1\)) and the min max min approach (for \(k > 1\)) reduce easily to solving \(P(\mathbf {U})\), since for any fixed solution the worst-case scenario over the box \([\mathbf {L},\mathbf {U}]\) is \(\mathbf {c} = \mathbf {U}\).
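To spell the reduction out: every \(\mathbf {c}^t \mathbf {h}\) is nondecreasing in \(\mathbf {c}\) when \(\mathbf {h} \ge \mathbf {0}\), so the inner maximum over the box is always attained at \(\mathbf {c} = \mathbf {U}\), and

$$\begin{aligned} \underset{H \subseteq X,\,\vert H \vert = k}{\min }\;\; \underset{\mathbf {L} \le \mathbf {c} \le \mathbf {U}}{\max }\;\; \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^t \mathbf {h} = \underset{H \subseteq X,\,\vert H \vert = k}{\min }\;\; \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {U}^t \mathbf {h} = v(P(\mathbf {U})) \end{aligned}$$

so any set H containing an optimal solution of \(P(\mathbf {U})\) is optimal and enlarging H does not help; this is why the regret versions are the interesting ones under interval uncertainty.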

In Sect. 2, we present an \(\epsilon \)-optimal algorithm (with \(\epsilon \ge 0\) a predefined tolerance) to solve the MSR(k) problem that may be considered a generalization of a classical approach for \(k=1\) (Aissi et al. 2009). Solving the MSR(k) problem can consume a lot of computing time and memory, therefore it may be useful to consider two greedy approaches, which will be presented in Sect. 3.

In Sect. 4, we consider the MrelSR(k) problem and we present an \(\epsilon \)-optimal algorithm to solve it. Our presentation follows the same scheme that we used for the MSR(k) problem and the proofs of some lemmas are analogous. Thus, for the sake of simplicity and in order to save space, we omit some proofs. Again, two greedy approaches will be presented in Sect. 5.

Computational results are presented in Sect. 6 for the Shortest Path problem (Montemanni and Gambardella 2005) and the p-Medians problem. Conclusions and further extensions are presented in Sect. 7.

2 An algorithm to solve the MSR(k) problem

In this section, we present an algorithm to obtain an optimal solution for the \(\text{ MSR }(k)\) problem. If \(H \subseteq X\) with \(\vert H \vert = k\) then v(Q(H)) is an upper bound of \(v(\text{ MSR }(k))\). Let \(Y \subseteq X\). We present later a problem S(Y) in (H) to obtain a lower bound. The algorithm iterates between solving the problems S(Y) (to generate a new H) and Q(H) (to generate a new \(\mathbf {y}\) to be added to Y) until the gap is closed. If \(k=1\) the algorithm becomes a known classical algorithm to obtain an absolute robust solution. Some lemmas and auxiliary problems are necessary.

Let \(\mathbf {x} \in X\). The best scenario for \(\mathbf {x}\) is defined as follows:

$$\begin{aligned} {\mathbf {c}^{+}(\mathbf {x})}_j = \mathbf {L}_j \mathbf {x}_j + \mathbf {U}_j(1- \mathbf {x}_j) \text { for all } j \end{aligned}$$
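In code, the best scenario is a direct transcription of this definition (a small numpy sketch of ours):

```python
import numpy as np

def best_scenario(x, L, U):
    """c^+(x): lower-bound costs on the items chosen by x (x_j = 1),
    upper-bound costs on all the others."""
    x = np.asarray(x, dtype=float)
    return L * x + U * (1.0 - x)
```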

We need some well-known elementary properties, as follows (for the sake of completeness we present a proof):

Proposition 1

  1. (i)

    \(\mathbf {c}^t \mathbf {h} - \mathbf {c}^t \mathbf {x} \le {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {h} - {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {x}\;\;\forall \mathbf {c} \in \varOmega \;\;\forall \mathbf {x},\mathbf {h} \in X\)

  2. (ii)

    If \({\mathbf {c}^+(\mathbf {x})}^t \mathbf {x}> v(P(\mathbf {c}^+(\mathbf {x})))\;\;\text{ then }\;\;\mathbf {c}^t \mathbf {x} > v(P(\mathbf {c}))\;\;\forall \mathbf {x} \in X\;\;\forall \mathbf {c} \in \varOmega \)

  3. (iii)

    \(\mathbf {c}^{+}(\mathbf {x})^t \mathbf {x} = \mathbf {L}^t \mathbf {x} \le \mathbf {c}^t \mathbf {x}\;\;\forall \mathbf {c} \in \varOmega \;\;\forall \mathbf {x} \in X\)

Proof

See “Appendix A”. \(\square \)

As an immediate consequence of Proposition 1(ii) we have that \(\mathbf {x}\) is an optimal solution for \(P(\mathbf {c})\) for some \(\mathbf {c} \in \varOmega \) if and only if \(\mathbf {x}\) is an optimal solution for \(P(\mathbf {c}^+(\mathbf {x}))\). This suggests that, in order to solve Q(H), the search can be restricted to scenarios of the form \(\mathbf {c}^+(\mathbf {x})\) such that \(\mathbf {x}\) is an optimal solution for \(P(\mathbf {c}^+(\mathbf {x}))\). That is proved in Lemma 1(i), (ii). In order to solve Q(H) we present in Lemma 1(iii) a reformulation named Q2(H) obtained by using a level set transformation. The relation between the optimal solutions for both problems is presented in Lemma 2.

Lemma 1

Let \(H \subseteq X\) with \(\vert H \vert \ge 1\), then:

  1. (i)

    \(v(Q(H)) = \underset{\mathbf {x} \in X }{\max }\;\; \underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x}\)

  2. (ii)

    Let \(\mathbf {x}^* \in X\). If \(v(Q(H)) = \underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {x}^*\) then \(\mathbf {x}^*\) is an optimal solution for \(P(\mathbf {c}^+(\mathbf {x}^*))\).

  3. (iii)

    Given the problem Q2(H) in \((\sigma ,\mathbf {x})\) defined as follows:

    $$\begin{aligned} (Q2(H))\;\;\underset{\sigma \in \mathbb {R},\mathbf {x} \in X}{\max } \lbrace \sigma - \mathbf {L}^t \mathbf {x}:\sigma \le {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {h}\;\;\forall \mathbf {h} \in H \rbrace \end{aligned}$$

    then \(v(Q(H)) = v(Q2(H))\).

Proof

$$\begin{aligned}&\mathrm{(i)}:\;\;v(Q(H)) \overset{(1)}{=} \underset{\mathbf {c} \in \varOmega }{\max }\;\; \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^t \mathbf {h} - v(P(\mathbf {c})) \overset{(2)}{=}\\&\quad \underset{\mathbf {c} \in \varOmega ,\;\mathbf {x} \in X}{\max }\;\; \underset{\mathbf {h} \in H}{\min }\;\;\mathbf {c}^t \mathbf {h} - \mathbf {c}^t \mathbf {x} \overset{(3)}{\le }\\&\quad \underset{\mathbf {x} \in X }{\max }\;\; \underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x} \overset{(4)}{\le } v(Q(H)) \end{aligned}$$

where

  1. (1)

    From the definition of Q(H).

  2. (2)

From the definition of \(P(\mathbf {c})\).

  3. (3)

    From Proposition 1(i).

  4. (4)

From the definition of Q(H) and since \(\mathbf {c}^+ (\mathbf {x}) \in \varOmega \;\; \forall \mathbf {x} \in X\).

    1. (ii)

      Let us suppose that \(\hat{\mathbf {x}} \in X\) is an optimal solution for \(P(\mathbf {c}^+(\mathbf {x}^*))\) with \({\mathbf {c}^+(\mathbf {x}^*)}^t \hat{\mathbf {x}} < {\mathbf {c}^+(\mathbf {x}^*)}^t {\mathbf {x}^*}\). In that case we have that:

      \(v(Q(H)) = \underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {x}^* < \underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x}^*)^t \hat{\mathbf {x}} = \underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {h} - v(P(\mathbf {c}^{+}(\mathbf {x}^*)) \le v(Q(H))\) and we have a contradiction.

    2. (iii)

      Since \(\mathbf {c}^+(\mathbf {x})^t \mathbf {x} = \mathbf {L}^t \mathbf {x}\) for all \(\mathbf {x} \in X\) we have that

      $$\begin{aligned} v(Q(H))= & {} \underset{\mathbf {x} \in X }{\max }\;\; \underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x}\\= & {} \underset{\sigma \in \mathbb {R},\mathbf {x} \in X}{\max } \lbrace \sigma - \mathbf {L}^t \mathbf {x}: \sigma \le {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {h}\;\;\forall \mathbf {h} \in H\rbrace = v(Q2(H))\;\;\bullet \end{aligned}$$

\(\square \)

Lemma 2

Let \(H \subseteq X\) with \(\vert H \vert \ge 1\).

  1. (i)

    Let \(\mathbf {c}^*\) be an optimal solution for Q(H), let \(\mathbf {x}^*\) be an optimal solution for \(P(\mathbf {c}^*)\) and let \(\sigma ^* = \underset{\mathbf {h} \in H }{\min } \;\;{\mathbf {c}^+(\mathbf {x^*})}^t \mathbf {h}\) then \((\sigma ^*,\mathbf {x}^*)\) is an optimal solution for Q2(H).

  2. (ii)

    Let \((\sigma ^*,\mathbf {x}^*)\) be an optimal solution for Q2(H) then \(\mathbf {c}^{+} (\mathbf {x}^*)\) is an optimal solution for Q(H).

Proof

$$\begin{aligned}&(i)\;\;v(Q(H)) \overset{(1)}{=} \underset{\mathbf {h} \in H}{\min }\;\;{\mathbf {c}^*}^t \mathbf {h} - v(P(\mathbf {c}^*)) \overset{(2)}{=} \underset{\mathbf {h} \in H}{\min }\;\;{\mathbf {c}^*}^t \mathbf {h} - {\mathbf {c}^*}^t \mathbf {x}^* \overset{(3)}{\le }\\&\quad \underset{\mathbf {h} \in H}{\min }\;\;{{\mathbf {c}^+(\mathbf {x}}^*)}^t \mathbf {h} - {{\mathbf {c}^+(\mathbf {x}}^*)}^t\mathbf {x}^* \overset{(4)}{=} \sigma ^* - \mathbf {L}^t \mathbf {x}^* \overset{(5)}{\le } v(Q2(H)) \overset{(6)}{=} v(Q(H)) \end{aligned}$$

where

  1. (1)

    Since \(\mathbf {c}^*\) is an optimal solution for Q(H).

  2. (2)

    Since \(\mathbf {x}^*\) is an optimal solution for \(P(\mathbf {c}^*)\).

  3. (3)

    Since \(\mathbf {c}^t \mathbf {h} - \mathbf {c}^t \mathbf {x} \le {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {h} - {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {x}\) for all \(\mathbf {h} \in X,\mathbf {x} \in X,\mathbf {c} \in \varOmega \).

  4. (4)

    From the definition of \(\sigma ^*\) and since \(\mathbf {c}^{+}(\mathbf {x})^t \mathbf {x} = \mathbf {L}^t \mathbf {x}\;\;\forall \mathbf {x} \in X\).

  5. (5)

    Since \((\sigma ^*,\mathbf {x}^*)\) is a feasible solution for Q2(H).

  6. (6)

    From Lemma 1.

Therefore \((\sigma ^*,\mathbf {x}^*)\) is an optimal solution for Q2(H).

$$\begin{aligned} \mathrm{(ii)}\;\;v(Q(H)) \overset{(1)}{=} v(Q2(H)) \overset{(2)}{=}\sigma ^* - \mathbf {L}^t \mathbf {x}^* \overset{(3)}{=}\underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {x}^* \end{aligned}$$

where

  1. (1)

    From Lemma 1.

  2. (2)

    Since \((\sigma ^*,\mathbf {x}^*)\) is an optimal solution for Q2(H).

  3. (3)

From the definition of Q2(H) and since \(\mathbf {L}^t \mathbf {x} = {\mathbf {c}^+ (\mathbf {x})}^t \mathbf {x}\;\;\forall \mathbf {x} \in X\).

Therefore from Lemma 1 the scenario \(\mathbf {c}^+ (\mathbf {x}^*)\) is an optimal solution for \(Q(H)\;\;\bullet \).

We have designed Q2(H) to compute upper bounds on v(MSR(k)). If we have another problem to compute lower bounds then we may design an algorithm that iterates between both problems until the gap is closed. To do that we reformulate MSR(k) by using a level set transformation. In the reformulation we have one constraint for each \(\mathbf {x} \in X\). Let \(Y \subseteq X\). If \(\mathbf {x} \notin Y\) we delete the corresponding constraint and obtain a relaxation named S(Y). Problem S(Y) is presented in Lemma 3.

Lemma 3

Let \(Y \subseteq X\) with \(\vert Y \vert \ge 1\). Let S(Y) be a problem in \((\delta ,H)\) defined as follows:

$$\begin{aligned} (S(Y))\;\;\underset{\delta \in \mathbb {R},H \subseteq X}{\min } \lbrace \delta : \delta \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {x}\;\text{ for } \text{ some }\; \mathbf {h} \in H\;\forall \mathbf {x} \in Y, \;\;\vert H \vert =k \rbrace \end{aligned}$$

then \(v(\text{ MSR }(k)) \ge v(S(Y))\)

Proof

$$\begin{aligned}&v(\text{ MSR }(k)) \overset{(1)}{=} \underset{H \subseteq X}{\min } \lbrace v(Q(H)):\vert H \vert =k \rbrace \overset{(2)}{=}\\&\quad \min \lbrace \underset{\mathbf {x} \in X }{\max }\;\; \underset{\mathbf {h} \in H }{\min }\;\; \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {x}: H \subseteq X,\;\; \vert H \vert = k \rbrace \overset{(3)}{=}\\&\quad \underset{\delta \in \mathbb {R},H \subseteq X}{\min } \lbrace \delta : \delta \ge \underset{\mathbf {h} \in H }{\min }\;\; \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {x}\;\; \forall \mathbf {x} \in X,\;\; \vert H \vert = k \rbrace \overset{(4)}{\ge } \\&\quad \underset{\delta \in \mathbb {R},H \subseteq X}{\min } \lbrace \delta : \delta \ge \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {x}\;\; \forall \mathbf {x} \in Y,\;\; \vert H \vert = k \rbrace \overset{(5)}{=} \\&\quad \underset{\delta \in \mathbb {R},H \subseteq X}{\min } \lbrace \delta : \delta \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {x}\;\text{ for } \text{ some }\; \mathbf {h} \in H\;\forall \mathbf {x} \in Y,\;\;\vert H \vert =k \rbrace = v(S(Y)) \end{aligned}$$

where

  1. (1)

    From the definition of \(\text{ MSR }(k)\).

  2. (2)

    From Lemma 1.

  3. (3)

    By level set transformation.

  4. (4)

    Since \(Y \subseteq X\).

  5. (5)

    A trivial equality.

Problem S(Y) has disjunctive constraints, so we need a reformulation to solve it. We use the Big-M reformulation named S2(Y): S(Y) may be rewritten as a problem in \((\delta ,H,\mathbf {s},\mathbf {z})\) as follows:

$$\begin{aligned}&(S2(Y))\;\;\min \delta \;\;s.t.\\&\delta \ge \mathbf {s}_{\mathbf {x}} - \mathbf {L}^t \mathbf {x}\;\forall \mathbf {x} \in Y\\&\mathbf {s}_{\mathbf {x}} \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h}^{(j)} -(1-\mathbf {z}_{(j,\mathbf {x})}) M_\mathbf {x}\;\; \forall \mathbf {x} \in Y\;\;\forall j\in [k]\\&\sum _{j=1}^k \mathbf {z}_{(j,\mathbf {x})} = 1\;\;\forall \mathbf {x} \in Y\\&\mathbf {h}^{(j)} \in X\;\; \forall j \in [k],\;\;\mathbf {z}_{(j,\mathbf {x})} \in \lbrace 0,1 \rbrace \;\;\forall \mathbf {x} \in Y\;\;\forall j \in [k],\;\;\delta \in \mathbb {R},\;\;\mathbf {s}_\mathbf {x} \in \mathbb {R} \;\forall \mathbf {x} \in Y \end{aligned}$$

where \(H=\lbrace \mathbf {h}^{(1)},\ldots ,\mathbf {h}^{(k)} \rbrace \) and we use \(M_{\mathbf {x}} = \underset{\mathbf {w} \in X }{\max }\;\; \mathbf {c}^{+}(\mathbf {x})^t \mathbf {w}\) as Big-M values with optimality guaranteed.

Let \(\mathbf {x} \in Y\). Note that if \(\mathbf {z}_{(j,\mathbf {x})} = 1\) then \(\mathbf {s}_{\mathbf {x}} \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h}^{(j)}\) and we have \(\delta \ge \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mathbf {L}^t \mathbf {x}\) for some \(\mathbf {h} \in H\). If \(\mathbf {z}_{(j,\mathbf {x})} = 0\) then the Big-M value ensures that the constraint is satisfied. The relations between the optimal solutions for S(Y) and S2(Y) are presented in Lemma 4.
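On a tiny explicit X the value v(S(Y)) can be cross-checked against the MILP by plain enumeration; a conceptual sketch (ours), useful only for validating an implementation of S2(Y):

```python
from itertools import combinations

def v_S(Y, X, L, U, k):
    """Brute-force v(S(Y)) = min over H (|H| = k) of
    max over x in Y of [ min over h in H of c^+(x)^t h - L^t x ]."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    def c_plus(x):
        return [Lj * xj + Uj * (1 - xj) for Lj, Uj, xj in zip(L, U, x)]
    return min(
        max(min(dot(c_plus(x), h) for h in H) - dot(L, x) for x in Y)
        for H in combinations(X, k)
    )
```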

Lemma 4

(i) If \((\delta ,H,\mathbf {s},\mathbf {z})\) is a feasible solution for S2(Y) then \((\delta ,H)\) is a feasible solution for S(Y). (ii) If \((\delta ,H)\) is a feasible solution for S(Y) then there exists \((\mathbf {s},\mathbf {z})\) such that \((\delta ,H,\mathbf {s},\mathbf {z})\) is a feasible solution for S2(Y). (iii) \(v(S(Y)) = v(S2(Y))\).

Proof

  1. (i)

Let \((\delta ,H,\mathbf {s},\mathbf {z})\) be a feasible solution for S2(Y). If \(\mathbf {z}_{(j,\mathbf {x})}=1\) we have \(\mathbf {s}_{\mathbf {x}} \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h}^{(j)}\) and then \(\delta \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {x}\;\text{ for } \text{ some }\; \mathbf {h} \in H\;\forall \mathbf {x} \in Y\). Therefore \((\delta ,H)\) is a feasible solution for S(Y).

  2. (ii)

Let \((\delta ,H)\) be a feasible solution for S(Y). Let \(j(\mathbf {x})\) and \(\mathbf {s}_{\mathbf {x}}\) be such that \(\mathbf {s}_\mathbf {x} = \underset{\mathbf {h} \in H}{\min } \;\;{\mathbf {c}^+ (\mathbf {x})}^t \mathbf {h} = {\mathbf {c}^+ (\mathbf {x})}^t \mathbf {h}^{(j(\mathbf {x}))}\;\;\forall \mathbf {x} \in Y\). Let \(\mathbf {z}_{(j(\mathbf {x}),\mathbf {x})} = 1\). We have that \((\delta ,H,\mathbf {s},\mathbf {z})\) is a feasible solution for \(S2(Y)\;\;\bullet \).

  3. (iii)

    Since (i) and (ii) are valid and \(\delta \) is the value of the objective function for both problems we have that \(v(S(Y)) = v(S2(Y))\).

We have derived problems to compute upper and lower bounds on v(MSR(k)). The S-Q algorithm to be presented iterates between S2(Y) to obtain a lower bound and Q2(H) to obtain an upper bound. Each time we solve S2(Y) we obtain a new lower bound and a new H. Each time we solve Q2(H) we update the best upper bound and obtain a new \(\mathbf {y} \in X\) to be added to Y. Since X is a finite set the algorithm is finite. Now we present the S-Q algorithm and Lemma 5 to prove its finiteness.

The S-Q algorithm to solve the MSR(k) problem

Let \(\epsilon \ge 0\). Let \(H \subseteq X\) with \(\vert H \vert = k\). Let \(Y \subseteq X\). Let \(UB = \infty \). Let \(LB = 0\). The output is \(H^*\) with \(v(Q(H^*))- v(MSR(k)) \le \epsilon \).

  1. 1.

    Solve Q2(H).

Let \((\sigma ,\mathbf {y})\) be an optimal solution and let \(UB = \min \lbrace UB,v(Q2(H)) \rbrace \).

  2. 2.

    If \(UB - LB \le \epsilon \) let \(H^* = H\) and stop, otherwise let \(Y = Y \cup \lbrace \mathbf {y} \rbrace \).

  3. 3.

    Solve S2(Y).

    Let \((\delta ,H,\mathbf {s},\mathbf {z})\) be an optimal solution and let \(LB=\delta \).

  4. 4.

    If \(UB - LB \le \epsilon \) let \(H^* = H\) and stop, otherwise return to Step 1.

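In code, the overall loop is short once the two MILP oracles are available. The following Python skeleton (ours) assumes callables solve_Q2 and solve_S2 that wrap a solver such as CPLEX, each returning the optimal value together with the relevant part of an optimal solution:

```python
def s_q_algorithm(solve_Q2, solve_S2, H0, eps=0.0):
    """Skeleton of the S-Q algorithm.

    solve_Q2(H) -> (value, y): v(Q2(H)) and the worst-case x (Step 1).
    solve_S2(Y) -> (value, H): v(S2(Y)) and a new candidate set (Step 3).
    Returns the current H when the gap UB - LB closes, as in the text.
    """
    H, Y = list(H0), []
    UB, LB = float("inf"), 0.0
    while True:
        value, y = solve_Q2(H)        # Step 1: new upper bound
        UB = min(UB, value)
        if UB - LB <= eps:            # Step 2
            return H
        Y.append(y)
        LB, H = solve_S2(Y)           # Step 3: new lower bound and new H
        if UB - LB <= eps:            # Step 4
            return H
```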

Lemma 5

Algorithm S-Q finds an \(\epsilon \)-optimal solution for the MSR(k) problem in a finite number of steps.

Proof

Let \((\delta ,H,\mathbf {s},\mathbf {z})\) be an optimal solution for S2(Y) and let \((\sigma ,\mathbf {y})\) be an optimal solution for Q2(H). If \(\mathbf {y} \in Y\) then:

$$\begin{aligned}&LB \overset{(1)}{=} \delta \overset{(2)}{\ge } \underset{\mathbf {h} \in H }{\min } \;\;\mathbf {c^{+}{(\mathbf {y})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {y} \overset{(3)}{=} v(Q2(H)) \overset{(4)}{\ge } UB \\&\quad \overset{(5)}{\ge } v(MSR(k)) \overset{(6)}{\ge } v(S(Y)) \overset{(7)}{=} v(S2(Y)) \overset{(8)}{=} \delta \overset{(9)}{=} LB \end{aligned}$$

where

  1. (1)

    see Step 3.

  2. (2)

    since \((\delta ,H,\mathbf {s},\mathbf {z})\) is a feasible solution for S2(Y) and \(\mathbf {y} \in Y\) we have that

    $$\begin{aligned} \delta \ge \mathbf {c^{+}{(\mathbf {y})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {y}\;\text{ for } \text{ some }\; \mathbf {h} \in H \end{aligned}$$
  3. (3)

    By Lemma 1 since \((\sigma ,\mathbf {y})\) is an optimal solution for Q2(H).

  4. (4)

    See Step 1.

  5. (5)

    By the definition of \(\text{ MSR }(k)\) since \(UB = v(Q2(H)) = v(Q(H))\) for some H with \(H \subseteq X\) and \(\vert H \vert = k\).

  6. (6)

    From Lemma 3.

  7. (7)

    From Lemma 4.

  8. (8)

    Since \((\delta ,H,\mathbf {s},\mathbf {z})\) is an optimal solution for S2(Y).

  9. (9)

    See Step 3.

Since X is a finite set, after finitely many iterations the solution \(\mathbf {y}\) obtained in Step 1 must already belong to Y, and then \(UB - LB \le \epsilon \) holds, so the algorithm stops in a finite number of steps. \(\square \)

Note that if \(k=1\) then the S-Q algorithm becomes a known classic algorithm to find an absolute robust solution (Aissi et al. 2009). Also, if \(\epsilon =0\) the algorithm finds an optimal solution.

Solving the MSR(k) problem may consume a lot of computing time and memory, so we consider two greedy approaches, presented in the next section.

3 Greedy approaches for the MSR(k) problem

In this section, we present two greedy algorithms for the \(\text{ MSR }(k)\) problem. The first one (the T-greedy algorithm) works as follows: given \(H \subseteq X\), a problem T(H) in (\(\mathbf {x}\)) is solved to minimize the regret of \(H \cup \lbrace \mathbf {x} \rbrace \). If \(\mathbf {x}\) is optimal for T(H) then \(\mathbf {x}\) is added to H. We solve T(H) iteratively until \(\vert H \vert =k\). The second one (the Q-greedy algorithm) is a very simple application of the first k steps of a multiparametric algorithm that solves \(P(\mathbf {c})\) for all \(\mathbf {c} \in \varOmega \). Some lemmas and auxiliary problems are necessary.

3.1 A greedy algorithm for the MSR(k) problem based on conditioned absolute robust solutions

Let \(H \subseteq X\) and let T(H) be a problem in (\(\mathbf {x}\)) defined as follows:

$$\begin{aligned} (T(H))\;\;\underset{\mathbf {x} \in X }{\min }\;\; v(Q(H \cup \lbrace \mathbf {x} \rbrace )) \end{aligned}$$

We say that \(v(Q(H \cup \lbrace \mathbf {x} \rbrace ))\) is the regret of \(\mathbf {x}\) conditioned by H, and if \(\mathbf {x}^*\) is an optimal solution for T(H) then we say that \(\mathbf {x}^*\) is an absolute robust solution conditioned by H.

The T-greedy algorithm for the MSR(k) problem

Let \(H \subseteq X\).

  1. 1.

    Solve T(H) and let \(\mathbf {x}\) be an optimal solution.

  2. 2.

    If \(v(T(H))=0\) Stop.

  3. 3.

    If \(\vert H \vert = k\) then Stop, otherwise let \(H=H \cup \lbrace \mathbf {x} \rbrace \) and return to Step 1.

The T-greedy algorithm is designed in such a manner that the next solution to be included in H is an absolute robust solution conditioned by H. If \(v(T(H)) = 0\) at Step 2 then we do not need another solution even if \(\vert H \vert < k\).
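On tiny instances T(H) can be evaluated exactly by enumeration, using Lemma 1(i) for the inner regret. A conceptual sketch (ours) that mirrors the definition; the W-Q algorithm below is the practical method:

```python
def v_Q(H, X, L, U):
    """v(Q(H)) via Lemma 1(i): only the scenarios c^+(x), x in X, matter."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    def c_plus(x):
        return [Lj * xj + Uj * (1 - xj) for Lj, Uj, xj in zip(L, U, x)]
    return max(min(dot(c_plus(x), h) for h in H) - dot(L, x) for x in X)

def solve_T(H, X, L, U):
    """Brute-force T(H): the x in X minimizing the regret of H plus x."""
    return min(X, key=lambda x: v_Q(list(H) + [x], X, L, U))
```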

Note that the T-greedy algorithm may be seen as the first k steps of an algorithm to find an optimal multiparametric solution for P relative to \(\varOmega \) (Crema 2000).

\(\hat{X}\) is an optimal multiparametric solution for P relative to \(\varOmega \) if \(\hat{X} \subseteq X\) and \(v(Q(\hat{X})) = 0\). In that case \(\underset{\mathbf {x} \in \hat{X} }{\min }\;\; \mathbf {c}^t \mathbf {x} = v(P(\mathbf {c}))\) for all \(\mathbf {c} \in \varOmega \). If we use the T-greedy algorithm with \(k=\infty \) then \(v(T(H)) = 0\) at Step 2 in a finite number of steps because X is a finite set.

In order to solve T(H) we present below an approach that may be seen, again, as a generalization of a standard approach to find an absolute robust solution. For each \(\mathbf {x} \in X\), \(v(Q(H \cup \lbrace \mathbf {x} \rbrace ))\) is an upper bound for v(T(H)). Let \(Y \subseteq X\). We present below a problem named W(H, Y) to obtain a lower bound. To do that we reformulate T(H) by using a level set transformation. In the reformulation we have a constraint for each \(\mathbf {x} \in X\). If \(\mathbf {x} \notin Y\) we delete the corresponding constraint and we have a relaxation. Problem W(H, Y) is presented in Lemma 6.

Lemma 6

Let \(H \subseteq X\) with \(\vert H \vert \ge 1\) and let \(Y \subseteq X\) with \(\vert Y \vert \ge 1\). Let W(HY) be a problem in \((\lambda ,\mathbf {x})\) defined as follows:

$$\begin{aligned} \underset{\lambda \in \mathbb {R},\mathbf {x} \in X}{\min }\lbrace \lambda : \lambda \ge \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h} - \mathbf {L}^t \mathbf {y}\;\;\vee \;\; \lambda \ge \mathbf {c}^{+}(\mathbf {y})^t \mathbf {x} - \mathbf {L}^t \mathbf {y}\;\;\forall \mathbf {y} \in Y \rbrace \end{aligned}$$

then \(v(T(H)) \ge v(W(H,Y))\)

Proof

$$\begin{aligned}&v(T(H)) \overset{(1)}{=} \underset{\mathbf {x} \in X}{\min } \;\; \underset{\mathbf {y} \in X }{\max }\;\;\underset{\mathbf {h} \in H \cup \lbrace \mathbf {x} \rbrace }{\min } \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h} - \mathbf {L}^t \mathbf {y} \overset{(2)}{\ge } \\&\quad \underset{\lambda \in \mathbb {R}, \mathbf {x} \in X}{\min }\;\;\lbrace \lambda : \lambda \ge \underset{\mathbf {h} \in H \cup \lbrace \mathbf {x} \rbrace }{\min } \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h} - \mathbf {L}^t \mathbf {y} \;\;\forall \mathbf {y} \in Y\rbrace \overset{(3)}{=}\\&\quad \underset{\lambda \in \mathbb {R},\mathbf {x} \in X}{\min }\lbrace \lambda : \lambda \ge \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h} - \mathbf {L}^t \mathbf {y}\;\;\vee \;\;\lambda \ge \mathbf {c}^{+}(\mathbf {y})^t \mathbf {x} - \mathbf {L}^t \mathbf {y}\;\;\forall \mathbf {y} \in Y \rbrace =\\&\qquad v(W(H,Y)) \end{aligned}$$

where

  1. (1)

    From Lemma 1 and the definition of T(H).

  2. (2)

    Since \(Y \subseteq X\) and by level set transformation.

  3. (3)

    A trivial equality.

We have disjunctive constraints in W(H, Y). In order to solve it we use the Big-M formulation: W(H, Y) may be rewritten as a problem in \((\lambda ,\mathbf {x},\mathbf {s},\mathbf {z})\) as follows:

$$\begin{aligned}&(W2(H,Y))\;\;\min \lambda \;\;s.t.\\&\lambda \ge \mathbf {s}_{\mathbf {y}} -\mathbf {L}^t \mathbf {y}\;\;\forall \mathbf {y} \in Y,\\&\mathbf {s}_{\mathbf {y}} \ge \left( \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h} \right) \mathbf {z}_{\mathbf {y}}\;\;\forall \mathbf {y} \in Y,\\&\mathbf {s}_{\mathbf {y}} \ge \mathbf {c}^{+}(\mathbf {y})^t \mathbf {x} - M_{\mathbf {y}} \mathbf {z}_{\mathbf {y}}\;\;\forall \mathbf {y} \in Y,\\&\mathbf {x} \in X,\;\;\mathbf {z}_{\mathbf {y}} \in \lbrace 0,1 \rbrace \;\;\forall \mathbf {y} \in Y,\;\;\lambda \in \mathbb {R},\;\;\mathbf {s}_\mathbf {y} \in \mathbb {R}\;\;\forall \mathbf {y} \in Y \end{aligned}$$

and we use \(M_{\mathbf {y}} = \underset{\mathbf {w} \in X}{\max }\;\; \mathbf {c}^{+}(\mathbf {y})^t \mathbf {w}\) as Big-M values with optimality guaranteed.

Note that if \(\mathbf {z}_\mathbf {y} = 1\) then \(\lambda \ge \underset{\mathbf {h} \in H}{\min }\;\; {\mathbf {c}^+(\mathbf {y})}^t \mathbf {h} - \mathbf {L}^t \mathbf {y}\). If \(\mathbf {z}_\mathbf {y} = 0\) then \(\lambda \ge \mathbf {c}^+(\mathbf {y})^t \mathbf {x} - \mathbf {L}^t \mathbf {y}\).

Now we have problems to compute upper and lower bounds on v(T(H)). The W-Q algorithm to be presented iterates between W2(H, Y) to obtain a lower bound and \(Q2(H \cup \lbrace \mathbf {x} \rbrace )\) to obtain an upper bound. Each time we solve W2(H, Y) we obtain a new lower bound and a new \(\mathbf {x}\). Each time we solve \(Q2(H \cup \lbrace \mathbf {x} \rbrace )\) we update the best upper bound and obtain a new \(\mathbf {y} \in X\) to be added to Y. Since X is a finite set the algorithm is finite. Now we present the W-Q algorithm and Lemma 7 to prove its finiteness.

W-Q algorithm to solve the T(H) problem

Let \(\epsilon \ge 0\). Let \(H \subseteq X\) with \(\vert H \vert \ge 1\). Let \(Y \subseteq X\). Let \(\mathbf {x} \in X\). Let \(UB = \infty \). Let \(LB=0\). The output is \(\mathbf {x}^*\) with \(v(Q(H \cup \lbrace \mathbf {x}^* \rbrace ))-v(T(H)) \le \epsilon \).

  1. 1.

Solve \(Q2(H \cup \lbrace \mathbf {x} \rbrace )\). Let (\(\sigma ,\mathbf {y}\)) be an optimal solution and let \(UB = \min \lbrace UB,v(Q2(H \cup \lbrace \mathbf {x} \rbrace )) \rbrace \).

  2. 2.

    If \(UB - LB \le \epsilon \) let \(\mathbf {x}^* = \mathbf {x}\) and Stop, otherwise let \(Y = Y \cup \lbrace \mathbf {y} \rbrace \).

  3. 3.

    Solve W2(HY) and let \((\lambda ,\mathbf {x},\mathbf {s},\mathbf {z})\) be an optimal solution. Let \(LB = \lambda \).

  4. 4.

    If \(UB - LB \le \epsilon \) let \(\mathbf {x}^* = \mathbf {x}\) and Stop, otherwise return to Step 1.


Lemma 7

Let \(H \subseteq X\) with \(\vert H \vert \ge 1\). Algorithm W-Q finds an \(\epsilon \)-optimal solution for T(H) in a finite number of steps.

Proof

Let \((\lambda ,\mathbf {x},\mathbf {s},\mathbf {z})\) be an optimal solution for W2(HY) and let \((\sigma ,\mathbf {y})\) be an optimal solution for \(Q2(H \cup \lbrace \mathbf {x} \rbrace )\). If \(\mathbf {y} \in Y\) then:

$$\begin{aligned}&LB \overset{(1)}{=} \lambda \overset{(2)}{\ge } \min \lbrace \underset{ \mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h},\mathbf {c}^{+}(\mathbf {y})^t \mathbf {x} \rbrace - \mathbf {L}^t \mathbf {y} \overset{(3)}{=} v(Q2(H \cup \lbrace \mathbf {x} \rbrace )) \\&\quad \overset{(4)}{\ge } UB \overset{(5)}{\ge } v(T(H)) \overset{(6)}{\ge } v(W(H,Y)) \overset{(7)}{=} v(W2(H,Y)) \overset{(8)}{=} \lambda \overset{(9)}{=} LB \end{aligned}$$

where

  1. (1)

    See Step 3.

  2. (2)

Since \((\lambda ,\mathbf {x},\mathbf {s},\mathbf {z})\) is an optimal solution for W2(H, Y) and \(\mathbf {y} \in Y\).

  3. (3)

    Since \((\sigma ,\mathbf {y})\) is an optimal solution for \(Q2(H \cup \lbrace \mathbf {x} \rbrace )\)

  4. (4)

    See Step 1.

  5. (5)

From the definition of T(H) and since \(UB = v(Q2(H \cup \lbrace \mathbf {x} \rbrace ))\) for some \(\mathbf {x} \in X\).

  6. (6)

    From Lemma 6.

  7. (7)

Since W2(H, Y) is the reformulation of W(H, Y).

  8. (8)

Since \((\lambda ,\mathbf {x},\mathbf {s},\mathbf {z})\) is an optimal solution for W2(H, Y).

  9. (9)

    See Step 3.

Since X is a finite set, \(UB - LB \le \epsilon \) is reached in a finite number of steps. Also, if \(\epsilon =0\) the algorithm finds an optimal solution. \(\square \)

3.2 A greedy algorithm for the MSR(k) problem based on a simple multiparametric algorithm

The Q-greedy algorithm for the MSR(k) problem

Let \(H \subseteq X\) with \(\vert H \vert =1\).

  1. 1.

    Solve Q2(H). Let \((\sigma ,\mathbf {x})\) be an optimal solution.

  2. 2.

    If \(v(Q2(H)) = 0\) Stop.

  3. 3.

    If \(\vert H \vert = k\) Stop, otherwise let \(H=H \cup \lbrace \mathbf {x} \rbrace \) and return to Step 1.

The Q-greedy algorithm is defined as the first k steps of a simple multiparametric algorithm (Crema 2000). In each step we are looking for the worst scenario for H. If \((\sigma ^*,\mathbf {x}^*)\) is an optimal solution for Q2(H) then \(\mathbf {c}^{+}(\mathbf {x^*})\) is the worst scenario for H and \(\mathbf {x}^*\) is chosen to be included in H.
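A compact sketch of the Q-greedy loop on an explicit X (ours; in practice Q2(H) is solved as a MILP, and the enumeration below stands in for it via Lemma 1):

```python
def q_greedy(X, L, U, k, x0):
    """Q-greedy: repeatedly add the x whose best scenario c^+(x) is
    worst for the current H; stop early if v(Q(H)) = 0."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    def c_plus(x):
        return [Lj * xj + Uj * (1 - xj) for Lj, Uj, xj in zip(L, U, x)]
    def gap(x, H):
        return min(dot(c_plus(x), h) for h in H) - dot(L, x)
    H = [x0]
    while len(H) < k:
        x = max(X, key=lambda x: gap(x, H))  # attains v(Q(H)) by Lemma 1
        if gap(x, H) <= 0:                   # v(Q(H)) = 0: no new solution needed
            break
        H.append(x)
    return H
```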

For the same H, if \(\mathbf {x}^{(1)}\) is the solution generated by the Q-greedy algorithm and \(\mathbf {x}^{(2)}\) is the solution generated by the T-greedy algorithm, then \(v(Q(H \cup \lbrace \mathbf {x}^{(1)} \rbrace )) \ge v(Q(H \cup \lbrace \mathbf {x}^{(2)} \rbrace ))\). Thus, it can be expected, although it is not guaranteed from the theoretical point of view, that the regret achieved for the final H using the T-greedy algorithm will be better than using the Q-greedy algorithm. The computational results presented in Sect. 6 confirm that expectation. The trade-off between the quality of the final H and the computational effort must be considered by the decision maker.

4 An algorithm to solve the MrelSR(k) problem

In this section we present an algorithm to obtain an optimal solution for the \(\text{ MrelSR }(k)\) problem.

Our presentation follows the same scheme that we used for the MSR(k) problem and the motivation and proof of some lemmas and algorithms are analogous. Thus, for the sake of simplicity and in order to save space, we omit some motivation and proofs.

If \(H \subseteq X\) with \(\vert H \vert = k\) then \(v(Q_r(H))\) is an upper bound of \(v(\text{ MrelSR }(k))\). Let \(Y \subseteq X\). We present later a problem \(S_r(Y)\) to obtain a lower bound. The algorithm iterates between solving the problems \(S_r(Y)\) (to generate a new H) and \(Q_r(H)\) (to generate a new \(\mathbf {y}\) to be added to Y) until the gap is closed. Some lemmas and auxiliary problems are necessary.

Lemma 8

Let \(H \subseteq X\) with \(\vert H \vert \ge 1\). Let \(QX_r(H)\) be a problem in \((\mathbf {x})\) defined as follows:

$$\begin{aligned} \underset{\mathbf {x} \in X}{\max }\;\; \frac{ \underset{ \mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x} }{\mathbf {c}^{+}(\mathbf {x})^t \mathbf {x}} \end{aligned}$$

then:

  1. (i)

    \(v(Q_r(H)) = v(QX_r(H))\)

  2. (ii)

    If \(\mathbf {c}^*\) is an optimal solution for \(Q_r(H)\) and \(\mathbf {x}^*\) is an optimal solution for \(P(\mathbf {c}^*)\) then \(\mathbf {x}^*\) is an optimal solution for \(QX_r(H)\).

  3. (iii)

    If \(\mathbf {x}^*\) is an optimal solution for \(QX_r(H)\) then \(\mathbf {c}^* = \mathbf {c}^{+} (\mathbf {x}^*)\) is an optimal solution for \(Q_r(H)\).

Proof

  1. (i)

    Let \(\mathbf {c}^*\) be an optimal solution for \(Q_r(H)\) and let \(\mathbf {x}^*\) be an optimal solution for \(P(\mathbf {c}^*)\), then we have:

    $$\begin{aligned} v(Q_r(H))&\overset{(1)}{=} \frac{\underset{\mathbf {h} \in H}{\min } \;\;{\mathbf {c^*}}^t \mathbf {h} - {\mathbf {c^*}}^t \mathbf {x}^* }{{\mathbf {c^*}}^t \mathbf {x}^*} \overset{(2)}{\le } \underset{\mathbf {c} \in \varOmega ,\;\;\mathbf {x} \in X}{\max } \;\;\frac{\underset{\mathbf {h} \in H}{\min } \;\;\mathbf {c}^t \mathbf {h} - \mathbf {c}^t \mathbf {x}}{\mathbf {c}^t \mathbf {x}} \\&\overset{(3)}{\le }\underset{\mathbf {x} \in X}{\max } \;\;\frac{\underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x}}{\mathbf {c}^{+}(\mathbf {x})^t \mathbf {x}} \overset{(4)}{ =} v(QX_r(H)) \end{aligned}$$

where

  1. (1)

    From the definition of \(Q_r(H)\) and since \(\mathbf {c}^*\) is an optimal solution for \(Q_r(H)\) and \(\mathbf {x}^*\) is an optimal solution for \(P(\mathbf {c}^*)\).

  2. (2)

A trivial inequality.

  3. (3)

    Since \(\mathbf {c}^t \mathbf {h} - \mathbf {c}^t \mathbf {x} \le {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {h} - {\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {x}\) for all \(\mathbf {x} \in X\), for all \(\mathbf {c} \in \varOmega \) and for all \(\mathbf {h} \in X\) and \(\mathbf {c}^t \mathbf {x} \ge \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x}\) for all \(\mathbf {x} \in X\) and for all \(\mathbf {c} \in \varOmega \).

  4. (4)

    From the definition of \(QX_r(H)\).

Let \(\mathbf {x}^*\) be an optimal solution for \(QX_r(H)\) then we have:

$$\begin{aligned} v(QX_r(H))&\overset{(5)}{=} \frac{\underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {h} - \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {x}^*}{\mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {x}^*} \overset{(6)}{\le } \frac{\underset{\mathbf {h} \in H}{\min } \;\; \mathbf {c}^{+}(\mathbf {x}^*)^t \mathbf {h} - v(P(\mathbf {c}^{+}(\mathbf {x}^*)))}{v(P(\mathbf {c}^{+}(\mathbf {x}^*)))} \\&\overset{(7)}{\le }\underset{\mathbf {c} \in \varOmega }{\max }\;\; \frac{\underset{\mathbf {h} \in H }{\min }\;\; \mathbf {c}^t \mathbf {h} - v(P(\mathbf {c}))}{v(P(\mathbf {c})) } \overset{(8)}{=} v(Q_r(H)) \end{aligned}$$

where

  1. (5)

    From the definition of \(QX_r(H)\).

  2. (6)

    Since \(v(P(\mathbf {c}^{+}(\mathbf {x}^*))) \le {\mathbf {c}^{+}(\mathbf {x}^*)}^t \mathbf {x}^*\)

  3. (7)

A trivial inequality.

  4. (8)

    From the definition of \(Q_r(H)\).

  1. (ii)

    Let \(\mathbf {c}^*\) be an optimal solution for \(Q_r(H)\) and let \(\mathbf {x}^*\) be an optimal solution for \(P(\mathbf {c}^*)\), then

    $$\begin{aligned} v(QX_r(H))&\overset{(1)}{=} v(Q_r(H)) \overset{(2)}{=} \frac{\underset{\mathbf {h} \in H }{\min }\;\; {\mathbf {c}^*}^t \mathbf {h} - v(P(\mathbf {c}^*))}{v(P(\mathbf {c}^*)) } \overset{(3)}{=} \frac{\underset{\mathbf {h} \in H }{\min }\;\; {\mathbf {c}^*}^t \mathbf {h} - {\mathbf {c}^*}^t \mathbf {x}^*}{{\mathbf {c}^*}^t \mathbf {x}^* } \\&\overset{(4)}{\le }\frac{\underset{\mathbf {h} \in H }{\min }\;\; {\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {h} - {\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {x}^*}{{\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {x}^* } \overset{(5)}{\le } v(QX_r(H)) \end{aligned}$$

therefore \(\mathbf {x}^*\) is an optimal solution for \(QX_r(H)\),

where

  1. (1)

    From (i).

  2. (2)

    Since \(\mathbf {c}^*\) is an optimal solution for \(Q_r(H)\).

  3. (3)

    Since \(\mathbf {x}^*\) is an optimal solution for \(P(\mathbf {c}^*)\)

  4. (4)

    Since \({\mathbf {c}^*}^t \mathbf {h} - {\mathbf {c}^*}^t \mathbf {x}^* \le {\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {h} - {\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {x}^*\) and \( {\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {x}^* = \mathbf {L}^t \mathbf {x}^* \le {\mathbf {c}^*}^t \mathbf {x}^*\)

  5. (5)

    From the definition of \(QX_r(H)\).

  1. (iii)

    Let \(\mathbf {x}^*\) be an optimal solution for \(QX_r(H)\) and let \(\mathbf {c}^* = \mathbf {c}^+(\mathbf {x}^*)\).

    $$\begin{aligned} v(Q_r(H))&\overset{(1)}{=}&v(QX_r(H)) \overset{(2)}{=} \frac{\underset{\mathbf {h} \in H }{\min }\;\; {\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {h} - {\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {x}^*}{{\mathbf {c}^+(\mathbf {x}^*)}^t \mathbf {x}^* } \overset{(3)}{=} \frac{\underset{\mathbf {h} \in H }{\min }\;\; {\mathbf {c}^*}^t \mathbf {h} - {\mathbf {c}^*}^t \mathbf {x}^*}{{\mathbf {c}^*}^t \mathbf {x}^* } \\&\overset{(4)}{\le }&\frac{\underset{\mathbf {h} \in H }{\min }\;\; {\mathbf {c}^*}^t \mathbf {h} - v(P(\mathbf {c}^*))}{v(P(\mathbf {c}^*))} \overset{(5)}{\le } v(Q_r(H)) \end{aligned}$$

therefore \(\mathbf {c}^*\) is an optimal solution for \(Q_r(H)\),

where

  1. (1)

    From (i).

  2. (2)

    Since \(\mathbf {x}^*\) is an optimal solution for \(QX_r(H)\).

  3. (3)

    Since \(\mathbf {c}^+(\mathbf {x}^*) = \mathbf {c}^*\).

  4. (4)

    Since \(v(P(\mathbf {c}^*)) \le {\mathbf {c}^*}^t \mathbf {x}^*\).

  5. (5)

    From the definition of \(Q_r(H)\;\;\bullet \)

\(QX_r(H)\) is a combinatorial optimization problem with a rational objective function (Megiddo 1979). Hence, we know some basic properties, as follows (for the sake of completeness we present a proof):

Proposition 2

Let \(k \ge 1\), let \(H \subseteq X\) with \(\vert H \vert = k\), let \(\mu \ge 1\) and let \(RX_r(\mu ,H)\) be a problem in \((\mathbf {x})\) defined as:

$$\begin{aligned} (RX_r(\mu ,H))\;\;\underset{\mathbf {x} \in X}{\max }\;\; \underset{\mathbf {h} \in H}{\min } \;\;\mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} - \mu \mathbf {c}^{+}(\mathbf {x})^t \mathbf {x} \end{aligned}$$

then:

  1. (i)

    \(v(RX_r(\mu ,H)) = 0\) if and only if \(\mu = v(QX_r(H)) + 1\).

  2. (ii)

    If \(v(RX_r(\mu ,H)) = 0\) and \(\mathbf {x}^*\) is an optimal solution for \(RX_r(\mu ,H)\) then \(\mathbf {x}^*\) is an optimal solution for \(QX_r(H)\).

  3. (iii)

    \(v(RX_r(\mu ,H))\) is a piecewise linear and decreasing convex function in \(\mu \).

Proof

See “Appendix B”.

Therefore, in order to solve \(QX_r(H)\) we may use a very simple algorithm to find \(\mu \) with \(v(RX_r(\mu ,H)) = 0\).

In order to solve \(RX_r(\mu ,H)\) we may reformulate it as a problem in \((\phi ,\mathbf {x})\) as follows:

$$\begin{aligned} \underset{\phi \in \mathbb {R},\mathbf {x} \in X}{\max } \lbrace \phi - \mu \mathbf {L}^t \mathbf {x}: \phi \le \mathbf {c}^{+}(\mathbf {x})^t \mathbf {h}\;\;\forall \mathbf {h} \in H\rbrace \end{aligned}$$

Since X is a finite set and \(v(RX_r(\mu ,H))\) is a piecewise linear and decreasing convex function in \(\mu \), the next algorithm finds \(\mu ^*\) with \(v(RX_r(\mu ^*,H)) = 0\) in a finite number of steps. For the sake of completeness we present a proof.

Algorithm Find-\(\mu ^*\)

Let \(\mu = 1\). Let \(H \subseteq X\) with \(\vert H \vert = k\). The output is \(\mu ^*\) with \(v(RX_r(\mu ^*,H)) = 0\) and \(\mathbf {x}^*\) optimal for \(QX_r(H)\).

  1. 1.

    Solve \(RX_r(\mu ,H)\). Let \(\mathbf {x}\) be an optimal solution.

  2. 2.

    If \( v(RX_r(\mu ,H)) = 0\) let \(\mu ^* = \mu \), let \(\mathbf {x}^* = \mathbf {x}\) and stop.

  3. 3.

Let \(\mu = \frac{\underset{\mathbf {h} \in H}{\min } \;\;\mathbf {c}^{+}(\mathbf {x})^t \mathbf {h} }{{\mathbf {c}^{+}(\mathbf {x})}^t \mathbf {x}}\) and return to Step 1.
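This is the classical iterative scheme for combinatorial problems with rational objectives (Megiddo 1979). A compact sketch (ours), with \(RX_r(\mu ,H)\) solved by enumeration over an explicit X in place of the MILP reformulation above:

```python
def find_mu_star(H, X, L, U, tol=0.0):
    """Find-mu*: iterate mu until v(RX_r(mu, H)) = 0; then
    mu - 1 = v(QX_r(H)) by Proposition 2(i). tol > 0 guards
    against floating-point noise."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    def c_plus(x):
        return [Lj * xj + Uj * (1 - xj) for Lj, Uj, xj in zip(L, U, x)]
    def rx_obj(mu, x):  # objective of RX_r(mu, H); note c^+(x)^t x = L^t x
        return min(dot(c_plus(x), h) for h in H) - mu * dot(L, x)
    mu = 1.0
    while True:
        x = max(X, key=lambda x: rx_obj(mu, x))   # Step 1: solve RX_r(mu, H)
        if rx_obj(mu, x) <= tol:                  # Step 2: v(RX_r(mu, H)) = 0
            return mu, x
        mu = min(dot(c_plus(x), h) for h in H) / dot(L, x)  # Step 3 update
```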

Proposition 3

Algorithm Find-\(\mu ^*\) finds \(\mu ^*\) with \(v(RX_r(\mu ^*,H)) = 0\) and \(\mathbf {x}^*\) optimal for \(QX_r(H)\) in a finite number of steps.

Proof

See “Appendix C”. \(\square \)

Lemma 9

Let \(Y \subseteq X\) with \(\vert Y \vert \ge 1\). Let \(S_r(Y)\) be a problem in \((\sigma _r,H)\) defined as follows:

$$\begin{aligned} (S_r(Y))\;\;\underset{\sigma _r \in \mathbb {R},H \subseteq X}{\min } \lbrace \sigma _r: (\mathbf {L}^t \mathbf {x})\, \sigma _r \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h} - \mathbf {L}^t \mathbf {x}\;\text{ for } \text{ some }\; \mathbf {h} \in H\;\forall \mathbf {x} \in Y,\;\;\vert H \vert =k \rbrace \end{aligned}$$

then \(v(MrelSR(k)) \ge v(S_r(Y))\)

We omit the proof.

In order to solve \(S_r(Y)\) we use the Big-M formulation. \(S_r(Y)\) may be rewritten as a problem in \((\sigma _r,H,\mathbf {s},\mathbf {z})\), named \(S2_r(Y)\), as follows:

$$\begin{aligned}&(S2_r(Y))\;\;\min \sigma _r\;\;s.t.\\&(\mathbf {L}^t \mathbf {x})\,\sigma _r \ge \mathbf {s}_{\mathbf {x}} - \mathbf {L}^t \mathbf {x}\;\;\forall \mathbf {x} \in Y,\\&\mathbf {s}_{\mathbf {x}} \ge \mathbf {c^{+}{(\mathbf {x})}}^t \mathbf {h}^{(j)} -(1-\mathbf {z}_{(j,\mathbf {x})}) M_\mathbf {x}\;\; \forall \mathbf {x} \in Y\;\;\forall j \in [k],\\&\sum _{j=1}^k \mathbf {z}_{(j,\mathbf {x})} = 1\;\;\forall \mathbf {x} \in Y,\\&\mathbf {h}^{(j)} \in X\;\; \forall j \in [k],\;\;\mathbf {z}_{(j,\mathbf {x})} \in \lbrace 0,1 \rbrace \;\;\forall \mathbf {x} \in Y\;\;\forall j \in [k],\;\;\sigma _r \in \mathbb {R},\;\;\mathbf {s}_\mathbf {x} \in \mathbb {R}\;\;\forall \mathbf {x} \in Y \end{aligned}$$

where \(H=\lbrace \mathbf {h}^{(1)},\ldots ,\mathbf {h}^{(k)} \rbrace \) and we use \(M_{\mathbf {x}} = \underset{\mathbf {w} \in X}{\max }\;\;\mathbf {c}^{+}(\mathbf {x})^t \mathbf {w}\) as Big-M values with optimality guaranteed.

Sr-Qr algorithm to solve the MrelSR(k) problem

Let \(\epsilon \ge 0\). Let \(H \subseteq X\) with \(\vert H \vert = k\). Let \(Y \subseteq X\). Let \(UB = \infty \). Let \(LB =0\). The output is \(H^*\) with \(v(QX_r(H^*)) - v(MrelSR(k)) \le \epsilon \).

  1. 1.

Solve \(QX_r(H)\). Let \(\mathbf {y}\) be an optimal solution and let \(UB = \min \lbrace UB,v(Q_r(H)) \rbrace \).

  2. 2.

    If \(UB - LB \le \epsilon \) let \(H^* = H\) and stop, otherwise let \(Y = Y \cup \lbrace \mathbf {y} \rbrace \).

  3. 3.

Solve \(S2_r(Y)\). Let \((\sigma _r,H,\mathbf {s},\mathbf {z})\) be an optimal solution and let \(LB=\sigma _r\).

  4. 4.

If \(UB - LB \le \epsilon \) let \(H^* = H\) and stop, otherwise return to Step 1.

Lemma 10

Algorithm Sr-Qr finds an \(\epsilon \)-optimal solution for MrelSR(k) in a finite number of steps.

We omit the proof.

5 Greedy approaches for the MrelSR(k) problem

In this section we present two greedy algorithms for the \(\text{ MrelSR }(k)\) problem.

Our presentation follows the same scheme that we used for the greedy approaches to the MSR(k) problem, and the motivation and proofs of some lemmas and algorithms are analogous. Thus, for the sake of simplicity and in order to save space, we omit some motivation and proofs.

The first one (the Tr-greedy algorithm) works as follows: given \(H \subseteq X\), a problem \(T_r(H)\) in (\(\mathbf {x}\)) is solved to minimize the relative regret of \(H \cup \lbrace \mathbf {x} \rbrace \). If \(\mathbf {x}\) is optimal for \(T_r(H)\) then \(\mathbf {x}\) is added to H. We solve \(T_r(H)\) iteratively until \(\vert H \vert =k\). The second one (the \(Q_r\)-greedy algorithm) is analogous to the Q-greedy algorithm. Some lemmas and auxiliary problems are necessary.

5.1 A greedy algorithm for the MrelSR(k) problem based on conditioned relative robust solutions

Let \(H \subseteq X\) and let \(T_r(H)\) be a problem in (\(\mathbf {x}\)) defined as follows:

$$\begin{aligned} (T_r(H))\;\; \underset{\mathbf {x} \in X}{\min }\;\;v(Q_r(H \cup \lbrace \mathbf {x} \rbrace )) \end{aligned}$$

We say that \(v(Q_r(H \cup \lbrace \mathbf {x} \rbrace ))\) is the relative regret of \(\mathbf {x}\) conditioned by H, and if \(\mathbf {x}^*\) is an optimal solution for \(T_r(H)\) then we say that \(\mathbf {x}^*\) is a relative robust solution conditioned by H.

The Tr-greedy algorithm for the MrelSR(k) problem

Let \(H \subseteq X\) with \(\vert H \vert = 1\).

  1. 1.

    Solve \(T_r(H)\) and let \(\mathbf {x}\) be an optimal solution.

  2. 2.

    If \(v(T_r(H))=0\) Stop.

  3. 3.

    If \(\vert H \vert = k\) then Stop, otherwise Let \(H=H \cup \lbrace \mathbf {x} \rbrace \) and return to Step 1.

The Tr-greedy algorithm is designed in such a manner that the next solution to be included in H is a relative robust solution conditioned by H. If \(v(T_r(H)) = 0\) at Step 2 then we do not need another solution even if \(\vert H \vert < k\).

In order to solve \(T_r(H)\) we present below an approach that is analogous to the approach presented to solve T(H).

Lemma 11

Let \(Y \subseteq X\) with \(\vert Y \vert \ge 1\). Let \(W_r(H,Y)\) be a problem in \((\sigma _W,\mathbf {x})\) defined as follows:

$$\begin{aligned}&\underset{\sigma _W \in \mathbb {R},\mathbf {x} \in X}{\min }\lbrace \sigma _W: (\mathbf {L}^t \mathbf {y})\, \sigma _W \ge \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h} - \mathbf {L}^t \mathbf {y}\;\;\vee \\&\quad (\mathbf {L}^t \mathbf {y})\, \sigma _W \ge \mathbf {c}^{+}(\mathbf {y})^t \mathbf {x} - \mathbf {L}^t \mathbf {y}\;\;\forall \mathbf {y} \in Y\rbrace \end{aligned}$$

then \(v(T_r(H)) \ge v(W_r(H,Y))\)

We omit the proof.

In order to solve \(W_r(H,Y)\) we use the Big-M formulation. \(W_r(H,Y)\) may be rewritten as a problem in \((\sigma _W,\mathbf {x},\mathbf {s},\mathbf {z})\), named \(W2_r(H,Y)\), as follows:

$$\begin{aligned}&(W2_r(H,Y))\;\;\min \;\;\sigma _W\;\;s.t.\\&(\mathbf {L}^t \mathbf {y})\,\sigma _W \ge \mathbf {s}_{\mathbf {y}} -\mathbf {L}^t \mathbf {y}\;\;\forall \mathbf {y} \in Y,\\&\mathbf {s}_{\mathbf {y}} \ge \left( \underset{\mathbf {h} \in H}{\min }\;\; \mathbf {c}^{+}(\mathbf {y})^t \mathbf {h} \right) \mathbf {z}_{\mathbf {y}}\;\;\forall \mathbf {y} \in Y,\\&\mathbf {s}_{\mathbf {y}} \ge \mathbf {c}^{+}(\mathbf {y})^t \mathbf {x} - M_{\mathbf {y}} \mathbf {z}_{\mathbf {y}}\;\;\forall \mathbf {y} \in Y,\\&\mathbf {x} \in X,\;\;\mathbf {z}_{\mathbf {y}} \in \lbrace 0,1 \rbrace \;\;\forall \mathbf {y} \in Y,\;\;\sigma _W \in \mathbb {R},\;\;\mathbf {s}_\mathbf {y} \in \mathbb {R}\;\;\forall \mathbf {y} \in Y \end{aligned}$$

and we use \(M_{\mathbf {y}} = \underset{\mathbf {x} \in X}{\max } \;\;\mathbf {c}^{+}(\mathbf {y})^t \mathbf {x} \) as Big-M values with optimality guaranteed.

Wr-Qr algorithm to solve \(T_r(H)\)

Let \(\epsilon \ge 0\). Let \(H \subseteq X\). Let \(Y \subseteq X\). Let \(\mathbf {x} \in X\). Let \(UB = \infty \). Let \(LB= 0\). The output is \(\mathbf {x}^*\) with \(v(Q_r(H \cup \lbrace \mathbf {x}^* \rbrace ))-v(T_r(H)) \le \epsilon \).

  1. 1.

Solve \(QX_r(H \cup \lbrace \mathbf {x} \rbrace )\). Let \(\mathbf {y}\) be an optimal solution and let \(UB = \min \lbrace UB,v(Q_r(H \cup \lbrace \mathbf {x} \rbrace )) \rbrace \).

  2. 2.

    If \(UB - LB \le \epsilon \) let \(\mathbf {x}^* = \mathbf {x}\) and stop, otherwise let \(Y = Y \cup \lbrace \mathbf {y} \rbrace \).

  3. 3.

Solve \(W2_r(H,Y)\) and let \((\sigma _W,\mathbf {x},\mathbf {s},\mathbf {z})\) be an optimal solution. Let \(LB = \sigma _W\).

  4. 4.

    If \(UB - LB \le \epsilon \) let \(\mathbf {x}^* = \mathbf {x}\) and stop, otherwise return to Step 1.

Lemma 12

Algorithm Wr-Qr finds an \(\epsilon \)-optimal solution for \(T_r(H)\) in a finite number of steps.

We omit the proof.

5.2 A greedy algorithm for the MrelSR(k) problem based on a simple multiparametric algorithm

The Qr-greedy algorithm

Let \(H \subseteq X\) with \(\vert H \vert =1\).

  1. 1.

    Solve \(QX_r(H)\). Let \(\mathbf {x}\) be an optimal solution.

  2. 2.

    If \(v(QX_r(H)) = 0\) Stop.

  3. 3.

    If \(\vert H \vert = k\) Stop, otherwise let \(H=H \cup \lbrace \mathbf {x} \rbrace \) and return to Step 1.

For the same H, if \(\mathbf {x}^{(1)}\) is the solution generated by the Qr-greedy algorithm and \(\mathbf {x}^{(2)}\) is the solution generated by the Tr-greedy algorithm, then \(v(Q_r(H \cup \lbrace \mathbf {x}^{(1)} \rbrace )) \ge v(Q_r(H \cup \lbrace \mathbf {x}^{(2)} \rbrace ))\). Thus, it can be expected, although it is not guaranteed from the theoretical point of view, that the relative regret achieved with the final H using the Tr-greedy algorithm will be better than using the Qr-greedy algorithm. The trade-off between the quality of the final H and the computational effort must be considered by the decision maker.

6 Computational results

6.1 Computer environment and the notation

Our algorithms have been run on a personal computer as follows:

Intel(R) Core(TM) i7-3630QM CPU, HP Envy dv6, with 12.00 GB RAM, 2.40 GHz and the Windows 8.1 operating system. All the instances have been processed through the commercial ILOG CPLEX 12.4 (http://ibm-ilog-cplex-optimization-studio-acade.software.informer.com/12.4/) from a MATLAB code, using the branch and cut algorithm with all the parameters of CPLEX at their default values, with the exception of the tolerance to declare an integer value, which was changed to \(10^{-12}\) for the SP problem with Euclidean topology to avoid numerical difficulties that appeared when using the default value. We use \(\epsilon = 0\).

We use the \(*\) algorithm, with \(* \in \lbrace \text{ Q-greedy,T-greedy,Qr-greedy,Tr-greedy }\rbrace \), beginning with:

\(H = \lbrace h^{(1)} \rbrace \), where \(h^{(1)}\) is either an absolute robust solution or a relative robust solution according to the case, and \(Y = \emptyset \).

If \(k \in \lbrace 2,3 \rbrace \) we use the S-Q and Sr-Qr algorithms beginning with H, where H is the output of the T-greedy (Tr-greedy) algorithm used for the same k, and \(Y = \emptyset \).

In order to solve T(H) (\(T_r(H)\)) by using the W-Q (Wr-Qr) algorithm the initial \(\mathbf {x}\) is an optimal solution for Q2(H) (\(QX_r(H)\)).

The information presented is not the same for all tables, but in order to save space and simplify the exposition we list here all the values reported (\(* \in \lbrace \text{ Q-greedy,T-greedy,Qr-greedy,Tr-greedy,S-Q,Sr-Qr } \rbrace \)):

  • k: the number of solutions to define H,

  • t*: the largest time in seconds to generate H where \(\vert H \vert =k\) with the * algorithm. The time to obtain the first H is included,

  • avet*: the average time for algorithm *,

  • it*: the largest number of iterations (the iterations to obtain the first H are included) with the * algorithm,

  • red*k: the reduction percentage for the regret values if we use the set H (with k solutions) generated by the algorithm * instead of the absolute (relative) robust solution (the values presented in the tables are the average for the set considered),

  • difk and difrk (with \(k \in \lbrace 2,3 \rbrace \)): the differences of the average values of the reduction percentage for the regret values between the optimal solutions and the solutions generated with the T-greedy algorithms, as follows: dif2 = redS-Q2 - redT2, dif3 = redS-Q3 - redT3, dif2r = redSr-Qr2 - redTr2 and dif3r = redSr-Qr3 - redTr3,

  • f: the number of times that the S-Q (Sr-Qr) algorithm failed to obtain an optimal solution within a time limit of 3600 s (without including the time to obtain \(h^{(1)}\) and the initial H according to the case).

Some remarks are necessary: if \(k \in \lbrace 2,3 \rbrace \) we use a time limit equal to 3600 s for the S-Q and Sr-Qr algorithms (without including the time to obtain \(h^{(1)}\) and the initial H according to the case), which implies that the overall time may be greater than 3600 s. There is no time limit for the heuristic algorithms for \(2 \le k \le 6\). If the S-Q (Sr-Qr) algorithm failed to obtain an optimal solution we use the best lower bound generated to compute the differences between the optimal solutions and the solutions generated by the greedy algorithms. If a problem was not solved, its execution time and iterations are not included in the averages and worst cases.

6.2 Data generation

At the present time we have computational results for the Shortest Path (SP) and the p-Medians (p-M) problems.

6.2.1 Data for the shortest path problem

Consider a directed graph \(G=(V,E)\). Let \(\hat{v} \in V\) and \(\hat{w} \in V\). We are looking for the path from \(\hat{v}\) to \(\hat{w}\) with the minimum cost.

For each \(v \in V\) let \(E^+(v) = \lbrace (v,w): (v,w) \in E \rbrace \) and let \(E^-(v) = \lbrace (w,v):(w,v) \in E \rbrace \).

Let \(\mathbf {x} \in {\lbrace 0,1 \rbrace }^{|E|}\) with \(x_e = 1\) if the edge e is selected.

Let \(X \subseteq {\lbrace 0,1 \rbrace }^{|E|}\) be defined as follows:

$$\begin{aligned}&X = \lbrace \mathbf {x}:\\&\sum _{e \in E^+(\hat{v})} x_e = 1,\\&\sum _{e \in E^-(v)} x_e - \sum _{e \in E^+(v)} x_e =0\;\;\forall v \in V\;\; \text{ with }\;\; v \notin \lbrace \hat{v},\hat{w} \rbrace ,\\&\sum _{e \in E^-(\hat{w})} x_e = 1,\\&\mathbf {x} \in {\lbrace 0,1 \rbrace }^{|E|} \rbrace \end{aligned}$$

Let \(\mathbf {L}\) and \(\mathbf {U}\) be known vectors in \(\mathbb {R}^{|E|}\) with \(\mathbf {0} \le \mathbf {L} \le \mathbf {U}\). Let \(\mathbf {c} \in \mathbb {R}^{|E|}\) with: \(\mathbf {L} \le \mathbf {c} \le \mathbf {U}\). The Shortest Path (SP) problem with cost \(\mathbf {c}\) is a problem in \(\mathbf {x}\) defined as follows:

$$\begin{aligned} (P(\mathbf {c}))\;\;\underset{\mathbf {x} \in X}{\min }\;\; \mathbf {c}^t \mathbf {x} \end{aligned}$$

Since X may be written as a 0-1-Integer Programming model, \(P(\mathbf {c})\), Q(H), S(Y), W(H,Y), \(Q_r(H)\), \(Q_r(\mu ,H)\), \(S_r(Y)\), and \(W_r(H,Y)\) may be solved by using CPLEX.
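As an illustration of the nominal problem \(P(\mathbf {c})\) over X, the following sketch uses PuLP with its default CBC solver as a stand-in for the CPLEX setup of the paper (function and variable names are ours):

```python
# Sketch of the nominal shortest-path IP P(c) over X.
import pulp

def solve_sp(V, E, c, v_hat, w_hat):
    """V: node list, E: arc list of (tail, head), c: dict arc -> cost."""
    prob = pulp.LpProblem("P_of_c", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", E, cat="Binary")
    prob += pulp.lpSum(c[e] * x[e] for e in E)
    out_ = lambda v: [e for e in E if e[0] == v]   # E^+(v)
    in_ = lambda v: [e for e in E if e[1] == v]    # E^-(v)
    prob += pulp.lpSum(x[e] for e in out_(v_hat)) == 1   # leave v_hat once
    prob += pulp.lpSum(x[e] for e in in_(w_hat)) == 1    # enter w_hat once
    for v in V:
        if v not in (v_hat, w_hat):
            # flow conservation at intermediate nodes
            prob += (pulp.lpSum(x[e] for e in in_(v))
                     - pulp.lpSum(x[e] for e in out_(v)) == 0)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective), [e for e in E if x[e].value() > 0.5]

# Tiny usage example on a 4-node graph:
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (1, 3), (2, 3)]
c = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 5.0, (2, 3): 1.0}
print(solve_sp(V, E, c, 0, 3))  # expected: (3.0, [(0, 2), (2, 3)])
```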

We define G with two different topologies, a mesh topology and a Euclidean topology, as follows:

Mesh topology

We use \(G=(V,E)\) with a simple mesh topology: there is a single node at level 0, named (0, 1). Next we have m levels with r nodes at each level. Node (0, 1) is connected with all the nodes of level 1: from (1, 1) to (1, r). Every node of level i is connected with all the nodes of level \(i+1\) (with \(i \in \lbrace 1,\ldots , m-1 \rbrace \)): that is, node (i, k) (with \(k \in \lbrace 1, \ldots ,r \rbrace \)) is connected with nodes \((i+1,1),\ldots , (i+1,r)\). Finally, the nodes of level m are connected with the single node at level \(m+1\), named \((m+1,1)\).

We have \(|V| = 2 + rm\) and \(|E| =2r + r^2(m-1)\). We use \(\hat{v}=(0,1)\) and \(\hat{w} = (m+1,1)\). Every path from \(\hat{v}\) to \(\hat{w}\) has \(m+1\) edges.

Let \(\gamma > 0\). The data were generated as follows: for each \(e \in E\) we generate \(\mathbf {L}_e\) from U(0, 1000) and set \(\mathbf {U}_e = (1+r_e \gamma ) \mathbf {L}_e\), with \(r_e\) taken from U(0, 1).
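A minimal sketch of this instance generator follows (the function name and data layout are ours, not the authors' code):

```python
# Sketch of the mesh-instance generator: nodes (level, index), arcs between
# consecutive levels, and cost intervals [L_e, U_e] with U_e = (1+r_e*gamma)*L_e.
import random

def mesh_instance(r, m, gamma, seed=0):
    rng = random.Random(seed)
    V = [(0, 1)] + [(i, k) for i in range(1, m + 1) for k in range(1, r + 1)]
    V.append((m + 1, 1))
    E = [((0, 1), (1, k)) for k in range(1, r + 1)]
    E += [((i, k), (i + 1, j))
          for i in range(1, m)
          for k in range(1, r + 1) for j in range(1, r + 1)]
    E += [((m, k), (m + 1, 1)) for k in range(1, r + 1)]
    L = {e: rng.uniform(0, 1000) for e in E}
    U = {e: (1 + rng.random() * gamma) * L[e] for e in E}
    assert len(E) == 2 * r + r * r * (m - 1)  # matches |E| in the text
    return V, E, L, U
```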

Euclidean topology

The graphs are generated as in Hanasusanto et al. (2015):

In each problem instance, the nodes correspond to |V| points whose coordinates are chosen uniformly at random in the square \([0,10] \times [0,10]\). We choose the pair of nodes with the largest Euclidean distance as \(\hat{v}\) and \(\hat{w}\) (the start and terminal nodes). Let \(0< p <1\). To generate E, we begin with a fully connected graph and remove \(100 \times p\)% of the arcs in order of decreasing Euclidean distance, that is, starting with the longest arcs. If the resulting graph is not connected we discard it. The data were generated as follows: for each \(e \in E\) the travel time between the pair of adjacent nodes varies between 100% and 150% of the Euclidean distance between them.
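A sketch of this generator is given below. We read the 100%–150% travel-time statement as the cost interval \([\mathbf {L}_e, \mathbf {U}_e] = [d_e, 1.5\,d_e]\), where \(d_e\) is the Euclidean distance; that reading, and all names, are ours:

```python
# Sketch of the Euclidean-instance generator described in the text.
import itertools
import math
import random

def euclidean_instance(n_nodes, p, seed=0):
    rng = random.Random(seed)
    pts = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_nodes)]
    dist = lambda i, j: math.dist(pts[i], pts[j])
    # fully connected directed graph: all ordered pairs
    E = [(i, j) for i, j in itertools.permutations(range(n_nodes), 2)]
    # remove the 100*p % longest arcs (keep the shortest (1-p) fraction)
    E.sort(key=lambda e: dist(*e))
    E = E[: int(round(len(E) * (1 - p)))]
    # start/terminal nodes: the pair at the largest Euclidean distance
    v_hat, w_hat = max(itertools.combinations(range(n_nodes), 2),
                       key=lambda e: dist(*e))
    L = {e: dist(*e) for e in E}
    U = {e: 1.5 * dist(*e) for e in E}
    return pts, E, L, U, v_hat, w_hat  # connectivity check omitted here
```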

6.2.2 Data for the p-Medians problems

Let \(J=\lbrace 1,\ldots ,s \rbrace \) be a set of demand locations. Each demand location is a candidate to be a service location (a median). Let \(\mathbf {LL},\mathbf {UU}\) be known matrices such that \(\mathbf {0} \le \mathbf {LL} \le \mathbf {UU}\). If demand location j is assigned to median i, the cost belongs to \([\mathbf {LL}_{ij},\mathbf {UU}_{ij}] \;\;(i,j \in J)\). Let p be the number of medians to be selected.

Let \(\mathbf {z}_i \in \lbrace 0,1 \rbrace \;\;(i \in J)\) with \(\mathbf {z}_i = 1\) if and only if the demand location i is selected to be a median.

Let \(\mathbf {w}_{ij} \in \lbrace 0,1 \rbrace \;\;(i,j \in J)\) with \(\mathbf {w}_{ij} = 1\) if and only if the demand location j is assigned to a median located at i.

Let \(\mathbf {q}_{ij}\) be such that \(\mathbf {LL}_{ij} \le \mathbf {q}_{ij} \le \mathbf {UU}_{ij}\) for all \(i,j \in J\). The p-M problem with cost \(\mathbf {q}\) is a problem in \((\mathbf {w},\mathbf {z})\) defined as follows:

$$\begin{aligned}&\min \sum _{i\in J} \sum _{j \in J} \mathbf {q}_{ij} \mathbf {w}_{ij}\;\;\text{ s.t. }\\&\mathbf {w}_{ij} \le \mathbf {z}_i\;\;\forall i,j \in J\\&\sum _{i \in J} \mathbf {z}_i = p\\&\sum _{i \in J} \mathbf {w}_{ij} = 1\;\;\forall j \in J\\&\mathbf {z}_i \in \lbrace 0,1 \rbrace ,\;\;\mathbf {w}_{ij} \in \lbrace 0,1 \rbrace \;\;\forall i,j \in J \end{aligned}$$

Let \(n=s+s^2\). With a simple change of variables the p-M problem may be rewritten as \(\underset{\mathbf {x} \in X}{\min }\;\;\mathbf {c}^t \mathbf {x}\), where \(X \subseteq {\lbrace 0,1 \rbrace }^n\) and \(\mathbf {L} \le \mathbf {c} \le \mathbf {U}\) for appropriate vectors \(\mathbf {L},\mathbf {U} \in \mathbb {R}^n\).
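One such change of variables (our illustration; the text does not spell it out) stacks \(\mathbf {z}\) and \(\mathbf {w}\) into a single vector:

$$\begin{aligned} \mathbf {x} = (\mathbf {z}_1,\ldots ,\mathbf {z}_s,\mathbf {w}_{11},\ldots ,\mathbf {w}_{ss}) \in {\lbrace 0,1 \rbrace }^{s+s^2},\quad \mathbf {c} = (0,\ldots ,0,\mathbf {q}_{11},\ldots ,\mathbf {q}_{ss}) \end{aligned}$$

so that \(\mathbf {c}^t \mathbf {x} = \sum _{i \in J}\sum _{j \in J} \mathbf {q}_{ij} \mathbf {w}_{ij}\), with \(\mathbf {L}\) and \(\mathbf {U}\) built analogously from \(\mathbf {0}\), \(\mathbf {LL}\) and \(\mathbf {UU}\), and X carrying the constraints of the model above.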

Since X may be written as a 0-1-Integer Programming model, \(P(\mathbf {c})\), Q(H), S(Y), W(H,Y), \(Q_r(H)\), \(Q_r(\mu ,H)\), \(S_r(Y)\), and \(W_r(H,Y)\) may be solved by using CPLEX.

The data were generated at random as follows: let \((\mathbf {x1}_j,\mathbf {x2}_j)\) be location j, taken from \(U((0,100) \times (0,100))\), and let \(\mathbf {D}_j\) be the demand of location j, taken from U(0, 100). Let \(\mathbf {d}_{ij}\) be the distance from location i to location j, computed as \(\mathbf {d}_{ij} = \vert \mathbf {x1}_i - \mathbf {x1}_j \vert + \vert \mathbf {x2}_i - \mathbf {x2}_j \vert \). Let \(\gamma > 0\). Let \(\mathbf {LL}_{ij} = \mathbf {d}_{ij} \mathbf {D}_j\) and let \(\mathbf {UU}_{ij} = (1+r_{ij}\gamma ) \mathbf {LL}_{ij}\), where \(r_{ij}\) is taken from U(0, 1).
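A minimal sketch of this generator (names are ours):

```python
# Sketch of the p-Medians data generator: rectilinear distances weighted by
# demands give LL, and UU = (1 + r_ij * gamma) * LL.
import random

def p_median_data(s, gamma, seed=0):
    rng = random.Random(seed)
    x1 = [rng.uniform(0, 100) for _ in range(s)]
    x2 = [rng.uniform(0, 100) for _ in range(s)]
    D = [rng.uniform(0, 100) for _ in range(s)]
    LL = [[0.0] * s for _ in range(s)]
    UU = [[0.0] * s for _ in range(s)]
    for i in range(s):
        for j in range(s):
            d_ij = abs(x1[i] - x1[j]) + abs(x2[i] - x2[j])  # rectilinear distance
            LL[i][j] = d_ij * D[j]
            UU[i][j] = (1 + rng.random() * gamma) * LL[i][j]
    return LL, UU
```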

6.3 Performance of the algorithms

6.3.1 Absolute case (MSR(k))

Tables

Results associated with the use of the S-Q and T-greedy algorithms for p-M and SP problems (with mesh and Euclidean topologies) with \(k \in \lbrace 2,3 \rbrace \) may be seen in Table 1 (120 problems were considered). For each parameter combination 10 problems were solved.

Results associated with the use of the Q-greedy and T-greedy algorithms with \( k \in \lbrace 2, \ldots ,6 \rbrace \) may be seen in Table 2 (170 problems were considered). It shows the times for \(k=6\) and the percentage reductions of the regret values for the two algorithms.

Complementary details may be seen in Tables 4, 5, 6, 7, 8 and 9 (see “Appendix D”; 290 problems were considered). They show times, iterations and failures of the optimal algorithm within the time limit considered, as well as the percentage reductions of the regret values and the differences between the optimal and greedy algorithms.

Remarks

We present in Table 1 the average times for the S-Q algorithm (avetS-Q) and the differences between the average percentage reductions of the regret values of the optimal solutions and of the solutions generated with the T-greedy algorithm (difk with \(k \in \lbrace 2,3 \rbrace \)).

The S-Q algorithm failed 3 times to obtain an optimal solution for p-M problems with \((m,p,\gamma ,k) = (60,5,0.15,3)\), and for those cases the execution times are not included in the averages. None of the problems with \((m,p,\gamma ,k) = (60,5,0.35,3)\) were solved. When the S-Q algorithm fails we use the best lower bound generated to compute the differences between the optimal solutions and the solutions generated by the T-greedy algorithm.

From Table 1 it is evident that the computational effort increases as the dimensions and the size of the uncertainty set increase.

See the columns with \(\gamma =0.15\) and \(\gamma =0.35\), with the rest of the parameters fixed, for SP problems with mesh topology and for p-M problems: the time increases in general, and sometimes considerably (for example, for the SP problems with \((r,m,\gamma ,k)= (10,50,0.15,3)\) and \((r,m,\gamma ,k)= (10,50,0.35,3)\) the average times are 53.30 and 256.62 s).

With \((\gamma ,k)\) fixed, moving from one line to another with larger dimensions shows that the time increases in general (for example, for the p-M problems with \((m,p,\gamma ,k) = (60,2,0.15,2)\) and \((m,p,\gamma ,k) = (60,5,0.15,2)\) the average times are 118.45 and 563.49 s). Also, with \((p,k)\) fixed for the SP problems with Euclidean topology, see the times as \(\vert V \vert \) increases (for example, if \((p,k)=(0.70,3)\) the average times are 9.03, 18.59, 72.90 and 698.33 s for \(\vert V \vert = 15,20,25,30\)).

For the problems included in Tables 1, 4, 5, 6, 7, 8 and 9, the T-greedy algorithm found near-optimal solutions. For \(k=2\), the worst average difference is 1.52% for the SP problems with the mesh topology. For \(k=3\), the worst average difference is 6% for the p-M problems.

For the problems included in Tables 1, 4, 5, 6, 7, 8 and 9, the S-Q algorithm was used to solve 290 problems and found an optimal solution 287 times for \(k=2\) and 241 times for \(k=3\) within a time limit of 3600 s. The worst case was the set with \((m,p,\gamma ,k) = (60,5,0.35,3)\) (p-M problems), in which none of the ten problems was solved.

It is evident (see Table 2) that, as expected, the reduction percentages and the computational effort of the heuristic algorithms are in general larger when using the T-greedy algorithm.

The use of the greedy approaches is justified because near-optimal solutions can be found with a tolerable computational effort. The percentage reductions obtained are in general large, with a few exceptions.

Table 1 (i) SP problems with mesh topology and parameters \((r,m,\gamma )\), (ii) p-M problems with parameters \((m,p,\gamma )\) and (iii) SP problems with Euclidean topology and parameters \((\vert V \vert ,p)\), where \(\gamma \in \lbrace 0.15,0.35 \rbrace \), \(p \in \lbrace 0.35,0.70 \rbrace \). T-greedy and S-Q algorithms for the MSR(k) problem where \(k \in \lbrace 2,3 \rbrace \). 10 problems in each set
Table 2 (i) SP problems with mesh topology and parameters \((r,m,\gamma )\), (ii) p-M problems with parameters \((m,p,\gamma )\) and (iii) SP problems with Euclidean topology and parameters \((\vert V \vert ,p)\). Q-greedy and T-greedy algorithms for the MSR(k) problem where \(k \in \lbrace 2,\ldots , 6 \rbrace \). Reduction percentage for the regret values and times (for \(k=6\)). 10 problems in each set
Table 3 (i) SP problems with mesh topology and parameters \((r,m,\gamma )\), (ii) p-M problems with parameters \((m,p,\gamma )\) and (iii) SP problems with Euclidean topology and parameters \((\vert V \vert ,p)\). Qr-greedy and Tr-greedy algorithms for the MrelSR(k) problem where \(k \in \lbrace 2,\ldots , 6 \rbrace \). Reduction percentage for the regret values and times (for \(k=6\)). 10 problems in each set

6.3.2 Relative case (MrelSR(k))

Tables

Results associated with the use of the Qr-greedy and Tr-greedy algorithms with \( k \in \lbrace 2, \ldots ,6 \rbrace \) may be seen in Table 3 (150 problems). It shows the times and the percentage reductions of the regret values for the two algorithms.

Complementary details may be seen in Tables 10, 11, 12, 13, 14 and 15 (see “Appendix D”). Results associated with the use of the Sr-Qr, Qr-greedy and Tr-greedy algorithms for p-M and SP problems (with mesh and Euclidean topologies) with \(k \in \lbrace 2,3 \rbrace \) are presented. The tables show times, iterations and failures of the optimal algorithm within the time limit considered for 180 problems, as well as the percentage reductions of the regret values and the differences between the optimal and greedy algorithms.

Remarks

The percentage reductions obtained are in general large, with a few exceptions (see Table 3).

It is evident that, as expected, the reduction percentages and the computational effort of the heuristic algorithms are in general larger when using the Tr-greedy algorithm (see Table 3).

For the problems included in Tables 10, 11, 12, 13, 14 and 15, the Qr-greedy and Tr-greedy algorithms found near-optimal solutions with a tolerable computational effort. For \(k=2\), the worst average difference is 1.52% for the p-M problems. For \(k=3\), the worst average difference is 5.15% for the SP problems with a mesh topology.

For the problems included in Tables 10, 11, 12, 13, 14 and 15, the Sr-Qr algorithm was used to solve 180 problems and found an optimal solution 179 times for \(k=2\) and 160 times for \(k=3\) within a time limit of 3600 s. In this case, the worst scenario was the p-M problems with \((m,p,\gamma ,k) = (80,5,0.15,3)\), where the algorithm failed to obtain an optimal solution for 5 problems.

Again, as expected, the computational effort increases in general as the dimensions increase. Also, the use of the greedy approaches is justified because near-optimal solutions can be found with a tolerable computational effort.

7 Conclusions and further extensions

7.1 Conclusions

We studied the min max min robust (relative) regret problem. We introduced the regret and the relative regret of a set with k elements. Algorithms to find a set with minimum (relative) regret were developed, as well as greedy approaches to save computational effort.

The S-Q and Sr-Qr algorithms, defined to solve the MSR(k) and the MrelSR(k) problems respectively, may be seen as generalizations of the classical algorithms used to obtain an absolute and a relative robust solution.

Our greedy approach includes two algorithms: the first based on conditioned absolute (relative) robust solutions (T-greedy, Tr-greedy) and the second based on a simple multiparametric algorithm (Q-greedy, Qr-greedy).

We presented computational experience for the p-M and SP problems. Based on our computational experiments, for small problems we can expect that:

  1. if \(k = 2\) the S-Q and Sr-Qr algorithms may be used with tolerable execution times, and if \(k=3\) the same remark is often valid,

  2. if \(k \le 3\) the Q-greedy, T-greedy, Qr-greedy and Tr-greedy algorithms obtain optimal or near-optimal solutions, and for \(k \le 6\) they obtain a good set of solutions with a tolerable computational effort,

  3. for \(k \le 6\) the Q-greedy and Qr-greedy algorithms may be used to obtain a good set of solutions quickly.

In order to use the algorithms for medium to large problems it will be necessary to improve their performance (two possible improvements are mentioned among the extensions presented in this section).

Finally, the tradeoff between the computational effort and the quality of the solutions generated by our algorithms must be considered by the decision maker.

7.2 Extensions

  1. In robust programming a significant effort is directed towards the design of special-purpose algorithms in order to find robust solutions to problems with particular structures. It is reasonable then to think that a next step should be the design of specialized algorithms to solve the MSR(k) and MrelSR(k) problems.

  2. The paper may be rewritten without any problem if \(P(\mathbf {c})\) is a 0-1 MILP problem with the uncertainty relative to the cost of the 0-1 variables, as follows:

    $$\begin{aligned} (P(\mathbf {c}))\;\;\min \lbrace \mathbf {c}^t \mathbf {x} + \mathbf {d}^t \mathbf {y}:(\mathbf {x},\mathbf {y}) \in X, \mathbf {x} \in {\lbrace 0,1 \rbrace }^n, \mathbf {y} \in \mathbb {R}^m, \mathbf {y} \ge 0 \rbrace \end{aligned}$$

    where \(\mathbf {c} \in [\mathbf {L},\mathbf {U}]\), \(\mathbf {d} \in \mathbb {R}^m\) and \(X \subseteq {\lbrace 0,1 \rbrace }^n \times \mathbb {R}^m\).

  3. The algorithms may be improved with a better choice of the Big-M values. A dynamic choice may be designed by using standard procedures (Crema 2014).

  4. The algorithms could be improved by adding several constraints to break the symmetries (note that the output of the algorithms is a set, and hence it has several equivalent representations in our integer programming models).

  5. We point out that our approach may be used in practice if the cost vectors become known over time. Considering again the example in the introduction, this may be the case if the travel times and the demands of an emergency system are monitored in real time. In that scenario, we can choose a new solution whenever the system changes enough. Multiple changes in a short time, even with a small set of solutions, may be a problem for human users; this should be considered in further work.

  6. The approach used may be generalized easily (at least from the theoretical point of view) in order to consider other uncertainty models (finite uncertainty and bounded uncertainty).

  7. The T-greedy (Tr-greedy) algorithm may be improved as follows: if \(k=2\), beginning with \(h^{(1)} \in X\), use the T-greedy (Tr-greedy) algorithm to find \(H = \lbrace h^{(1)},h^{(2)} \rbrace \). Next, beginning with \(h^{(2)}\), use the T-greedy (Tr-greedy) algorithm to find \(H= \lbrace h^{(2)},h^{(3)} \rbrace \), and so on until convergence. If \(k=3\), similar approaches may be defined. A sketch of this alternating scheme is given below.
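The following sketch illustrates the alternating improvement for \(k=2\). Here `t_greedy_step` is a hypothetical black box standing in for one run of the T-greedy (Tr-greedy) algorithm from a fixed starting solution; it is our naming, not the paper's:

```python
# Sketch of the alternating improvement for k=2. Given a fixed solution h,
# `t_greedy_step(h)` is assumed to return the second solution chosen by the
# T-greedy algorithm together with the regret of the resulting pair.
def alternating_t_greedy(h1, t_greedy_step, max_rounds=100):
    h_fixed = h1
    best_regret = float("inf")
    best_pair = None
    for _ in range(max_rounds):
        h_new, regret = t_greedy_step(h_fixed)
        if regret >= best_regret:   # no further improvement: converged
            break
        best_regret, best_pair = regret, (h_fixed, h_new)
        h_fixed = h_new             # restart the greedy step from the new solution
    return best_pair, best_regret
```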