1 Introduction

In multiobjective problems, nondominated points are of interest: a nondominated point is better than any other feasible point in at least one objective. Practical multiobjective integer programs typically have many nondominated points, and finding all of them is computationally hard. Furthermore, the decision maker (DM) is typically not interested in most of these points. In such situations, concentrating on the preferred regions is important to reduce the computational burden.

In the literature, there are different approaches for finding the most preferred point of a DM. One approach is to estimate the DM’s underlying preference function, which is unknown to both the DM and the analyst. However, estimating a function that sufficiently represents the DM’s preferences is problematic. Interactive approaches, on the other hand, aim to converge to the most preferred point by progressively obtaining preference information from the DM. There are many interactive approaches in the literature. Zionts (1981) developed an interactive algorithm to choose among discrete alternatives for a DM whose preferences are consistent with a linear preference function. Korhonen et al. (1984) developed a version that assumes a quasiconcave preference function, and Köksalan and Sagala (1995) generalized it to arbitrary monotone preference functions. Lokman et al. (2014) also considered a quasiconcave preference function, but they addressed integer programs rather than choosing from an available set of alternatives.

In this study, we develop an interactive algorithm for biobjective integer programs where each objective is of minimization type. The algorithm finds the most preferred point of a DM who is assumed to have an underlying quasiconvex preference function to be minimized. Note that this case is equivalent to maximizing a quasiconcave preference function (which is more common in the literature) where all objectives are of maximization type. Tezcaner and Köksalan (2011) developed an algorithm, BestSol, for biobjective integer programs to find the most preferred point of a DM whose preferences are consistent with a linear preference function. In this paper, we consider the more general case where the DM’s preferences are consistent with a general quasiconvex function. These functions represent a wide spectrum of preference structures, including linear preference functions.

We demonstrate our algorithm on the generalized biobjective traveling salesperson problem (BOTSP). In this problem there are typically multiple efficient tours. Furthermore, in the generalized BOTSP there may be multiple efficient edges between each node pair. The existence of these efficient edges leads to a separate combinatorial optimization problem: the biobjective shortest path problem (BOSPP). A subset of these efficient edges appears in the efficient tours.

We organize the paper as follows. We give some definitions in Sect. 2. We provide the details of our interactive algorithm in Sect. 3 and discuss it on the generalized BOTSP in Sect. 4. We demonstrate the algorithm on an example and present computational results on randomly generated problems in Sect. 5. We make concluding remarks in Sect. 6.

2 Definitions

Before we present the problem in detail, we give some relevant definitions. These definitions are adapted from Tezcaner and Köksalan (2011) and they will be used throughout the paper, unless otherwise stated.

Let \({{\mathbf {x}}}=\left( x_1, x_2 ,\ldots ,x_n\right) \) denote an integer-valued decision variable vector, \(X\subset {\mathbb {Z}}^{n}\) the feasible set, point \({{\mathbf {z}}}({{\mathbf {x}}})=(z_1({{\mathbf {x}}}),z_2({{\mathbf {x}}}),\ldots , z_p({{\mathbf {x}}}))\) the objective function vector corresponding to the decision vector \({{\mathbf {x}}}\), and \(Z=\{{{\mathbf {z}}}({{\mathbf {x}}}):{{\mathbf {x}}}\in X\}\subset {\mathbb {R}}^{p}\) the image of the feasible set in objective space. Note that we refer to any vector in set X as a solution and its image in the objective space Z as a point. We will refer to the most preferred (efficient) solution and the most preferred (nondominated) point using the same distinction throughout the paper. We assume, without loss of generality, that all objectives are to be minimized.

Definition 2.1

A point \({{\mathbf {z}}}({{\mathbf {x}}})\in Z\) is said to be nondominated iff there does not exist \({{\mathbf {z}}}({{\mathbf {x}}}')\in Z\) such that \(z_{j}({{\mathbf {x}}}')\le z_{j}({{\mathbf {x}}})\) for \(j=1,\ldots ,p\) and \(z_{j}({{\mathbf {x}}}')< z_{j}({{\mathbf {x}}})\) for at least one j. If such a \({{\mathbf {z}}}({{\mathbf {x}}}')\) exists, \({{\mathbf {z}}}({{\mathbf {x}}})\) is said to be dominated. The set of all nondominated points, \(Z_{ND} \subset Z\), constitutes the nondominated set.
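For a finite set of points, Definition 2.1 can be checked directly. The following minimal Python sketch (illustrative only; all function and variable names are our own) tests pairwise dominance and filters a list of points down to its nondominated subset, with all objectives minimized:

```python
def dominates(z1, z2):
    """True if z1 dominates z2 (all objectives minimized):
    z1 is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(z1, z2)) and
            any(a < b for a, b in zip(z1, z2)))

def nondominated(points):
    """Filter a finite set of points down to its nondominated subset."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the points (1, 5), (2, 3), (4, 4), (3, 2), (5, 1), only (4, 4) is dominated (by (2, 3)).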

Definition 2.2

If \({{\mathbf {z}}}({{\mathbf {x}}})\in Z\) is nondominated, then \({{\mathbf {x}}}\) is said to be efficient, and if \({{\mathbf {z}}}({{\mathbf {x}}})\) is dominated, \({{\mathbf {x}}}\) is said to be inefficient. The set of all efficient solutions constitutes the efficient set.

Definition 2.3

A nondominated point \({{\mathbf {z}}}({{\mathbf {x}}})\in Z_{ND} \) is a supported nondominated point iff there exists a positive linear combination of objectives that is minimized by \({{\mathbf {x}}}\). Otherwise, \({{\mathbf {z}}}({{\mathbf {x}}})\) is an unsupported nondominated point.

Definition 2.4

An extreme nondominated point is a supported nondominated point that has the minimum possible value in at least one of the objectives.
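For a finite point set, the two extreme nondominated points can be obtained by minimizing two weighted sums whose weights are perturbed by a small positive constant \(\rho \) (the same device used later in Sect. 3.1). A hedged sketch, with names of our own choosing:

```python
def extreme_points(points, rho=1e-4):
    """Left and right extreme nondominated points of a finite set:
    minimize (1-rho)*z1 + rho*z2 and rho*z1 + (1-rho)*z2, respectively.
    The small rho breaks ties so that dominated points are not returned."""
    y_le = min(points, key=lambda p: (1 - rho) * p[0] + rho * p[1])
    y_re = min(points, key=lambda p: rho * p[0] + (1 - rho) * p[1])
    return y_le, y_re
```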

From this point on, we will use the notation \({{\mathbf {y}}}^{{{\mathbf {i}}}}=\left( y_{1}^{i}, y_{2}^{i}, \ldots , y_{p}^{i}\right) \) to denote vector \({{\mathbf {z}}}({{\mathbf {x}}}_{{{\mathbf {i}}}})=( z_{1}({{\mathbf {x}}}_{{\mathbf {i}}}),z_{2}({{\mathbf {x}}}_{{\mathbf {i}}}),\ldots ,z_{p}({{\mathbf {x}}}_{{\mathbf {i}}}))\) whenever we need to simplify the notation. Although more than one \({{\mathbf {x}}}\) vector may correspond to the same \({{\mathbf {y}}}\) vector, we are only interested in one of them since we operate in the objective space. Therefore, we assume that \({{\mathbf {y}}}^{{{\mathbf {i}}}}\ne {{\mathbf {y}}}^{{{\mathbf {j}}}}\) when \(i\ne j\) for the sake of further simplifying the notation.

Definition 2.5

Let \(T=\{t|{{\mathbf {y}}}^{{{\mathbf {t}}}}\in Z_{ND}\}\) and \({{\mathbf {y}}}^{{{\mathbf {i}}}}\in Z_{ND}\) be a supported nondominated point. Point \({{\mathbf {y}}}^{{{\mathbf {j}}}}\in Z, j\ne i\), is adjacent to \({{\mathbf {y}}}^{{{\mathbf {i}}}}\) iff there does not exist \(\mu _{t}, \lambda \in {\mathbb {R}}\) satisfying \(\sum \nolimits _{t\in T,t\ne j} \mu _t=1, 0\le \mu _{t}\le 1, 0<\lambda \le 1\), and \(\sum \nolimits _{t\in T,t\ne j} \mu _{t} {{\mathbf {y}}}^{{{\mathbf {t}}}}\le \lambda {{\mathbf {y}}}^{{{\mathbf {j}}}}+(1-\lambda ){{\mathbf {y}}}^{{{\mathbf {i}}}}\).

The above definition implies that adjacent nondominated points are supported nondominated points of the feasible solution space considered.

In a bicriteria problem, there are at most two nondominated points adjacent to a given nondominated point (Ramesh et al. 1990).

Definition 2.6

Let \(\left\{ {{\mathbf {y}}}^{{{\mathbf {i}}}}\in {\mathbb {R}}^{p}, i=1,\ldots , m\right\} \) be a set of points. \(f:{\mathbb {R}}^{p}\rightarrow {\mathbb {R}}\) is a quasiconvex function if and only if \(f\left( \sum \nolimits _{i=1}^{m}\mu _{i}{{\mathbf {y}}}^{{{\mathbf {i}}}}\right) \le \max \nolimits _{i}\left\{ f({{\mathbf {y}}}^{{{\mathbf {i}}}})\right\} \) for \(\mu _{i}\in {\mathbb {R}},i=1,\ldots , m\) such that \(\sum \nolimits _{i=1}^{m}\mu _{i}=1, \mu _{i}\ge 0, i=1,\ldots ,m\).
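Definition 2.6 can be probed numerically: draw random convex combinations of the given points and verify that f at the combination never exceeds the maximum of f over the points. The sketch below is an empirical check, not a proof; the max-of-coordinates function passes, while the min-of-coordinates function, which is quasiconcave rather than quasiconvex, fails:

```python
import random

def quasiconvexity_check(f, points, trials=1000, seed=0):
    """Empirical test of Definition 2.6: for random convex combinations of
    the given points, f never exceeds the maximum of f over the points."""
    rng = random.Random(seed)
    fmax = max(f(p) for p in points)
    dim = len(points[0])
    for _ in range(trials):
        w = [rng.random() for _ in points]
        s = sum(w)
        mu = [wi / s for wi in w]                      # convex weights
        y = tuple(sum(mu[i] * p[j] for i, p in enumerate(points))
                  for j in range(dim))
        if f(y) > fmax + 1e-9:
            return False                               # counterexample found
    return True
```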

3 An interactive algorithm

As mentioned above, there are many nondominated points in multiobjective problems in practice and it is neither practical nor necessary to generate all of them. In our interactive approach, we aim to generate a small subset of the nondominated set while converging the most preferred point. We select and provide the DM a pair of nondominated points and ask for the preferred one. This procedure is repeated and the preferences of the DM lead our search to the most preferred point of the DM.

We assume that the DM’s preferences are consistent with a quasiconvex preference function to be minimized. In the literature, quasiconcave preference functions (with objectives to be maximized) are widely used. When all objectives are of minimization type, the same theory applies directly. We use Lemma 1 and Theorem 1 [adapted from Lemma 2 and Theorem 1 of Korhonen et al. (1984), respectively] to reduce the objective space by eliminating the inferior regions implied by the expressed preferences of the DM.

Lemma 1

Consider a quasiconvex function \(f:{\mathbb {R}}^{p}\rightarrow {\mathbb {R}}\), and points \({\mathbf {y}}^{{\mathbf {i}}}\in {\mathbb {R}}^{p}, \quad i=1,2,\ldots ,m.\) Let \(f\left( {\mathbf {y}}^{{\mathbf {k}}} \right) >f\left( {\mathbf {y}}^{{\mathbf {i}}} \right) ,i\ne k\) and \(Y=\left\{ {\mathbf {y}}|{\mathbf {y}}={\mathbf {y}}^{{\mathbf {k}}}+\sum \limits _{i=1;i\ne k}^{m}\mu _{i}({\mathbf {y}}^{{\mathbf {k}}}-{\mathbf {y}}^{{\mathbf {i}}}),\mu _{i}\ge 0\right\} \). If \({\mathbf {y}}\in Y, {\mathbf {y}}\ne {\mathbf {y}}^{{\mathbf {k}}}\) then \(f({\mathbf {y}})\ge f({\mathbf {y}}^{{\mathbf {k}}})\).

Proof

The proof directly follows from the proof of Lemma 2 in Korhonen et al. (1984). \(\square \)

Theorem 1

Let \(f :{\mathbb {R}}^{p}\rightarrow {\mathbb {R}},\) be a nondecreasing quasiconvex function, \({\mathbf {y}}^{{\mathbf {i}}}\in {\mathbb {R}}^{p},i=1,2,\ldots ,m\), and \({\mathbf {y}}\in {\mathbb {R}}^{p}\) be any point, and \(f\left( {\mathbf {y}}^{{\mathbf {k}}} \right) >f\left( {\mathbf {y}}^{{\mathbf {i}}} \right) , i\ne k.\) If there is a feasible solution to

$$\begin{aligned} {\mathbf {y}}^{{\mathbf {k}}}+\sum \limits _{i=1;i\ne k}^{m}\mu _{i}({\mathbf {y}}^{{\mathbf {k}}}-{\mathbf {y}}^{{\mathbf {i}}})\le {\mathbf {y}}, \mu _{i}\ge 0,\quad i=1,\ldots ,m, \end{aligned}$$

then \(f({\mathbf {y}})\ge f\left( {\mathbf {y}}^{{\mathbf {k}}} \right) \).

Proof

The proof directly follows from the proof of Theorem 1 in Korhonen et al. (1984). \(\square \)

Remark

The set Y in Lemma 1 defines a cone. We will refer to the cone and the region dominated by the cone as the cone-inferior region. If a DM’s preferences are consistent with f, then by Theorem 1 no point \({\mathbf {y}}\) in the cone-inferior region can be preferred to \({\mathbf {y}}^{{\mathbf {k}}}\).
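In the biobjective case with a single less-preferred point, membership in the cone-inferior region reduces to asking whether some \(\mu \ge 0\) satisfies two coordinate-wise inequalities \(y_{j}\ge y_{j}^{k}+\mu (y_{j}^{k}-y_{j}^{i})\), i.e. to intersecting two intervals for \(\mu \). A sketch under that two-dimensional, single-generator assumption (function names are our own):

```python
def in_cone_inferior(y, yk, yi, tol=1e-9):
    """2-D check: does some mu >= 0 satisfy yk + mu*(yk - yi) <= y
    componentwise?  If so, Theorem 1 gives f(y) >= f(yk)."""
    lo, hi = 0.0, float("inf")          # feasible interval for mu
    for j in range(2):
        d = yk[j] - yi[j]               # cone direction component
        r = y[j] - yk[j]
        if abs(d) <= tol:
            if r < -tol:                # constraint unsatisfiable
                return False
        elif d > 0:
            hi = min(hi, r / d)         # mu <= r/d
        else:
            lo = max(lo, r / d)         # mu >= r/d
    return lo <= hi + tol
```

With \({\mathbf {y}}^{{\mathbf {i}}}=(1,5)\) preferred to \({\mathbf {y}}^{{\mathbf {k}}}=(2,3)\), the point (3, 1) lies on the cone and (3, 4) is dominated by \({\mathbf {y}}^{{\mathbf {k}}}\), so both are cone-inferior, while (1, 1) is not.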

We present the following lemma to find the regions in the objective space that do not contain any nondominated points using the information that two nondominated points are adjacent.

Lemma 2

Let nondominated point \({\mathbf {y}}^{{\mathbf {k}}}\) be adjacent to a nondominated point \({\mathbf {y}}^{{\mathbf {r}}}, r\ne k.\) Then, there does not exist any \({\mathbf {y}}^{{\mathbf {i}}}, i\ne k,r\), that dominates some affine combination of \({\mathbf {y}}^{{\mathbf {k}}}\) and \({\mathbf {y}}^{{\mathbf {r}}}\).

Proof

Recall \(T=\left\{ t|{\mathbf {y}}^{{\mathbf {t}}}\in Z_{ND} \right\} .\) For \({\mathbf {y}}^{{\mathbf {r}}}\) to be adjacent to \({\mathbf {y}}^{{\mathbf {k}}}\), the following inequality does not hold for any \(\mu _{t},\lambda \in {\mathbb {R}}\) satisfying \(\sum \nolimits _{t\in T,t\ne r} \mu _{t}=1, 0\le \mu _{t}\le 1\) and \(0<\lambda \le 1\).

$$\begin{aligned} \sum \limits _{t\in T,t\ne r} {\mu _{t}{\mathbf {y}}^{{\mathbf {t}}}} \le \lambda {\mathbf {y}}^{{\mathbf {r}}}+(1-\lambda ){\mathbf {y}}^{{\mathbf {k}}} \end{aligned}$$
(1)

Or equivalently,

$$\begin{aligned} \sum \limits _{t\in T,t\ne r} {\mu _{t}{\mathbf {y}}^{{\mathbf {t}}}} -{\mathbf {y}}^{{\mathbf {k}}}\le \lambda \left( {\mathbf {y}}^{{\mathbf {r}}}-{\mathbf {y}}^{{\mathbf {k}}} \right) \end{aligned}$$
(2)

Similarly, for \({\mathbf {y}}^{{\mathbf {k}}}\) to be adjacent to \({\mathbf {y}}^{{\mathbf {r}}},\) the following inequality does not hold for any \(\mu _{t},\lambda \in {\mathbb {R}}\) satisfying \(\sum \nolimits _{t\in T,t\ne k} \mu _{t}=1, 0\le \mu _{t}\le 1\) and \(0<\lambda \le 1\).

$$\begin{aligned} \sum \limits _{t\in T,t\ne k} {\mu _{t}{\mathbf {y}}^{{\mathbf {t}}}} -{\mathbf {y}}^{{\mathbf {r}}}\le \lambda \left( {\mathbf {y}}^{{\mathbf {k}}}-{\mathbf {y}}^{{\mathbf {r}}} \right) \end{aligned}$$
(3)

Suppose there exists a nondominated point \({\mathbf {y}}^{{\mathbf {i}}}\) dominating some affine combination of \({\mathbf {y}}^{{\mathbf {k}}}\) and \({\mathbf {y}}^{{\mathbf {r}}}\).

$$\begin{aligned} {\mathbf {y}}^{{\mathbf {i}}}\le {\uplambda }_{r}{\mathbf {y}}^{{\mathbf {r}}}+{\uplambda } _{k}{\mathbf {y}}^{{\mathbf {k}}}\quad \hbox { for some } {\uplambda }_{k},{\uplambda }_{r}\in {\mathbb {R}} \hbox { and } {\uplambda }_{k}+{\uplambda }_{r}=1. \end{aligned}$$
(4)

This inequality is equivalent to the following two inequalities.

$$\begin{aligned}&{\mathbf {y}}^{{\mathbf {i}}}-{\mathbf {y}}^{{\mathbf {k}}}\le {\uplambda }_{r}\left( {\mathbf {y}}^{{\mathbf {r}}}-{\mathbf {y}}^{{\mathbf {k}}} \right) \quad \hbox { for some } {\uplambda }_{r}\in {\mathbb {R}}. \end{aligned}$$
(5)
$$\begin{aligned}&{\mathbf {y}}^{{\mathbf {i}}}-{\mathbf {y}}^{{\mathbf {r}}}\le {\uplambda }_{k}\left( {\mathbf {y}}^{{\mathbf {k}}}-{\mathbf {y}}^{{\mathbf {r}}} \right) \quad \hbox { for some } {\uplambda }_{k}\in {\mathbb {R}}. \end{aligned}$$
(6)

Setting \(\mu _{k}=1-\mu _{i}\) for \(\mu _{i}>0\) and \(\mu _{t}=0\) for \(t\ne k,i\) in (2), we obtain the following inequality.

$$\begin{aligned} {\mathbf {y}}^{{\mathbf {i}}}-{\mathbf {y}}^{{\mathbf {k}}}\le \frac{\lambda }{\mu _{i}}\left( {\mathbf {y}}^{{\mathbf {r}}}-{\mathbf {y}}^{{\mathbf {k}}} \right) \quad \hbox { for } 0<\lambda , \mu _{i}\le 1. \end{aligned}$$
(7)

Similarly, setting \(\mu _{r}=1-\mu _{i}\) for \(\mu _{i}>0\) and \(\mu _{t}=0\) for \(t\ne r,i\) in (3), we obtain the following inequality.

$$\begin{aligned} {\mathbf {y}}^{{\mathbf {i}}}-{\mathbf {y}}^{{\mathbf {r}}}\le \frac{\lambda }{\mu _{i}}\left( {\mathbf {y}}^{{\mathbf {k}}}-{\mathbf {y}}^{{\mathbf {r}}} \right) \quad \hbox { for } 0<\lambda , \mu _{i}\le 1. \end{aligned}$$
(8)

In (7) and (8), \(\frac{\lambda }{\mu _{i}}\in {\mathbb {R}}_{+}\) for \(0<\lambda , \mu _{i}\le 1\). Either \({\uplambda }_{k}\) or \({\uplambda }_{r}\) must be positive to satisfy \({\uplambda }_{k}+{\uplambda }_{r}=1\) in (4). If \({\uplambda }_{r}>0\), inequality (5) becomes equivalent to (7) with \(\frac{\lambda }{\mu _{i}}={\uplambda }_{r}\), which contradicts \({\mathbf {y}}^{{\mathbf {r}}}\) being adjacent to \({\mathbf {y}}^{{\mathbf {k}}}\). If \({\uplambda }_{k}>0\), inequality (6) becomes equivalent to (8) with \(\frac{\lambda }{\mu _{i}}={\uplambda }_{k}\), which contradicts \({\mathbf {y}}^{{\mathbf {k}}}\) being adjacent to \({\mathbf {y}}^{{\mathbf {r}}}\). \(\square \)

Lemma 3 provides the theory for a nondominated point that is preferred to both of its adjacent points: it identifies the surrounding region that cannot contain the most preferred point and hence can be eliminated.

Lemma 3

Let \(f:{\mathbb {R}}^{2}\rightarrow {\mathbb {R}}\), be a quasiconvex function (to be minimized) that represents the preferences of the DM, \({\mathbf {y}}^{{\mathbf {i}}}=\left( y_{1}^{i},y_{2}^{i} \right) ,\) and \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {c}}}\) be the left and right adjacent points of the supported nondominated point \({\mathbf {y}}^{{\mathbf {b}}}.\) Let \(f\left( {\mathbf {y}}^{{\mathbf {b}}} \right) <f({\mathbf {y}}^{{\mathbf {a}}})\) and \(f\left( {\mathbf {y}}^{{\mathbf {b}}} \right) <f\left( {\mathbf {y}}^{{\mathbf {c}}} \right) \) and \(f\left( {\mathbf {y}}^{*} \right) =\min _{{{\mathbf {y}}}\in Z_{ND}}\{f({\mathbf {y}})\}\). Then

$$\begin{aligned} {\mathbf {y}}^{*}\in Y^{*}= & {} \left\{ {\mathbf {y}}\vert {{\mathbf {y}}=\mu _{a}{\mathbf {y}}^{{\mathbf {a}}}+\mu _{b}{\mathbf {y}}^{{\mathbf {b}}}+\mu _{ab}\eta \left( {\mathbf {y}}^{{\mathbf {a}}},{\mathbf {y}}^{{\mathbf {b}}} \right) , \mu _{a}+\mu _{b}+\mu _{ab}=1, \mu _{a},\mu _{b},\mu _{ab}\ge 0}\right\} \\&\cup \left\{ {\mathbf {y}}\vert {{\mathbf {y}}=\mu _{c}{\mathbf {y}}^{{\mathbf {c}}}+\mu _{b}{\mathbf {y}}^{{\mathbf {b}}}+\mu _{cb}\eta \left( {\mathbf {y}}^{{\mathbf {c}}},{\mathbf {y}}^{{\mathbf {b}}} \right) , \mu _{c}+\mu _{b}+\mu _{cb}=1, \mu _{c},\mu _{b},\mu _{cb}\ge 0}\right\} \\&\hbox { where } \eta \left( {\mathbf {y}}^{{\mathbf {i}}},{\mathbf {y}}^{{\mathbf {j}}} \right) =\left( \max \left( y_{1}^{i},y_{1}^{j} \right) ,\max \left( y_{2}^{i},y_{2}^{j} \right) \right) . \end{aligned}$$

Proof

Since \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {c}}}\) are the points adjacent to \({\mathbf {y}}^{{\mathbf {b}}}\), there does not exist any point dominating any affine combination of the adjacent point pairs, by Lemma 2. This eliminates region \(Y_{1}=\left\{ {\mathbf {y}}|{\mathbf {y}}<{\uplambda }_{a}{\mathbf {y}}^{{\mathbf {a}}}+{\uplambda }_{b}{\mathbf {y}}^{{\mathbf {b}}}\,\hbox { for all }\,{\uplambda }_{a},{\uplambda }_{b}\in {\mathbb {R}}\,\hbox { such that }\, {\uplambda }_{a}+{\uplambda }_{b}=1\right\} \cup \left\{ {\mathbf {y}}|{\mathbf {y}}<{\uplambda }_{c}{\mathbf {y}}^{{\mathbf {c}}}+{\uplambda }_{b}{\mathbf {y}}^{{\mathbf {b}}}\,\hbox { for all }\, {\uplambda }_{c},{\uplambda }_{b}\in {\mathbb {R}}\, \hbox { such that }\,{\uplambda }_{c}+{\uplambda }_{b}=1 \right\} .\) We eliminate the cone-inferior regions \(Y_{2}=\left\{ {\mathbf {y}}\vert {\mathbf {y}}\ge {\mathbf {y}}^{{\mathbf {a}}}+\mu _{b}\left( {\mathbf {y}}^{{\mathbf {a}}}-{\mathbf {y}}^{{\mathbf {b}}} \right) ,\mu _{b}\ge 0\right\} \) and \(Y_{3}=\left\{ {\mathbf {y}}\vert {\mathbf {y}}\ge {\mathbf {y}}^{{\mathbf {c}}}+\mu _{b}\left( {\mathbf {y}}^{{\mathbf {c}}}-{\mathbf {y}}^{{\mathbf {b}}} \right) ,\mu _{b}\ge 0\right\} \) since \({\mathbf {y}}^{*}\notin Y_{2}\cup Y_{3}\) due to Lemma 1. 
For \({\mathbf {y}}^{{\mathbf {d}}}\in Y_{4}=\{{\mathbf {y}}|y_{j}\ge y_{j}^{i}, j=1,2\), and \(y_{j}>y_{j}^{i}\) for at least one j for \(i=a,b,c\}, \quad f\left( {\mathbf {y}}^{{\mathbf {d}}} \right) \ge f\left( {\mathbf {y}}^{{\mathbf {a}}} \right) \) or \(f\left( {\mathbf {y}}^{{\mathbf {d}}} \right) \ge f\left( {\mathbf {y}}^{{\mathbf {b}}} \right) \) or \(f\left( {\mathbf {y}}^{{\mathbf {d}}} \right) \ge f\left( {\mathbf {y}}^{{\mathbf {c}}} \right) .\) Therefore, \({\mathbf {y}}^{*}\in Y^{*}=\left\{ {\mathbf {y}}\vert {\mathbf {y}}=\mu _{a}{\mathbf {y}}^{{\mathbf {a}}} +\mu _{b}{\mathbf {y}}^{{\mathbf {b}}}+\mu _{ab}\eta \left( {\mathbf {y}}^{{\mathbf {a}}},{\mathbf {y}}^{{\mathbf {b}}} \right) ,\mu _{a}+\mu _{b}+\mu _{ab}=1, \mu _{a},\mu _{b},\mu _{ab}\ge 0\right\} \cup \left\{ {\mathbf {y}}\vert {\mathbf {y}} =\mu _{c}{\mathbf {y}}^{{\mathbf {c}}} +\mu _{b}{\mathbf {y}}^{{\mathbf {b}}}+\mu _{cb}\eta \left( {\mathbf {y}}^{{\mathbf {c}}},{\mathbf {y}}^{{\mathbf {b}}} \right) , \mu _{c}+\mu _{b}+\mu _{cb}=1,\mu _{c},\mu _{b},\mu _{cb}\ge 0\right\} \). \(\square \)

We demonstrate Lemma 3 in Fig. 1. Here, we have three nondominated points \({\mathbf {y}}^{{\mathbf {a}}}, {\mathbf {y}}^{{\mathbf {b}}}\) and \({\mathbf {y}}^{{\mathbf {c}}}\). We eliminate the regions in which the most preferred point of the DM cannot lie. Given that point \({\mathbf {y}}^{{\mathbf {b}}}\) is preferred to its two adjacent points, \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {c}}}\), there cannot be any points in Region 1 (which is the union of the regions dominating the affine combinations of \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {b}}}\), and of \({\mathbf {y}}^{{\mathbf {b}}}\) and \({\mathbf {y}}^{{\mathbf {c}}})\) due to Lemma 2. The most preferred point of the DM cannot lie in the cone-inferior regions 2 and 3 due to Theorem 1. Region 4 is composed of points dominated by \({\mathbf {y}}^{{\mathbf {a}}},{\mathbf {y}}^{{\mathbf {b}}}\) or \({\mathbf {y}}^{{\mathbf {c}}}\). Therefore, the only regions in which the most preferred point of the DM can lie are the triangular regions between the current most preferred point, \({\mathbf {y}}^{{\mathbf {b}}}\), and its two adjacent points, \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {c}}}\).

Fig. 1
figure 1

Admissible regions for the most preferred point
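The two admissible triangles of Lemma 3 can be made concrete for the biobjective case: \(\eta \) is the componentwise maximum of two points, and membership in each triangle is a standard barycentric-coordinate test. A sketch, with our own function names and two objectives assumed:

```python
def eta(yi, yj):
    """Componentwise maximum of two points (the eta of Lemma 3)."""
    return (max(yi[0], yj[0]), max(yi[1], yj[1]))

def in_triangle(y, a, b, c, tol=1e-9):
    """Barycentric test: is y in the triangle with vertices a, b, c?"""
    d = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    if abs(d) <= tol:
        return False                     # degenerate triangle
    m1 = ((y[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (y[1] - a[1])) / d
    m2 = ((b[0] - a[0]) * (y[1] - a[1]) - (y[0] - a[0]) * (b[1] - a[1])) / d
    return m1 >= -tol and m2 >= -tol and m1 + m2 <= 1 + tol

def in_admissible_region(y, ya, yb, yc):
    """Lemma 3: y* must lie in the triangle spanned by (ya, yb, eta(ya, yb))
    or the one spanned by (yc, yb, eta(yc, yb))."""
    return (in_triangle(y, ya, yb, eta(ya, yb)) or
            in_triangle(y, yc, yb, eta(yc, yb)))
```

For instance, with \({\mathbf {y}}^{{\mathbf {a}}}=(1,6), {\mathbf {y}}^{{\mathbf {b}}}=(3,3), {\mathbf {y}}^{{\mathbf {c}}}=(6,1)\), the points (2, 5) and (5, 2) lie in the left and right triangles, respectively, while (5, 5), which is dominated by \({\mathbf {y}}^{{\mathbf {b}}}\), lies in neither.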

We do not impose any additional constraints on the objective space before finding the supported nondominated point that is preferred to all its adjacent points. We refer to the objective space that has no additional constraints as the original objective space.

Three additional constraints, an upper bound on the first objective \((z_{1}({\mathbf {x}})\le {UB}^{1})\), an upper bound on the second objective \((z_{2}({\mathbf {x}})\le {UB}^{2})\), and a lower bound on the weighted combination of these objectives \(\left( wz_{1}({\mathbf {x}})+\left( 1-w \right) z_{2}({\mathbf {x}})\ge LB \right) \), define a region (triangle) in which the most preferred point of the DM may lie. We refer to the region defined by these triangles as the reduced objective space. The nondominated points in the reduced objective space are, by definition, unsupported nondominated points. In Fig. 1, we have two regions: the left triangle and the right triangle. The bounds of the left and right triangles on the two objectives are \(z_{1}({\mathbf {x}})\le y_{1}^{b}\) and \(z_{2}({\mathbf {x}})\le y_{2}^{a}-\delta ,\) and \(z_{1}({\mathbf {x}})\le y_{1}^{c}-\delta \) and \(z_{2}({\mathbf {x}})\le y_{2}^{b},\) respectively, where \(\delta \) is a sufficiently small positive constant. By subtracting \(\delta \) from \(y_{2}^{a}\) and \(y_{1}^{c},\) we exclude the inferior points \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {c}}}\) from the triangles. The lower bound on the weighted combination of the two objectives is redundant since, given the upper bounds on the objectives, no solutions violate it. Therefore, we will not use the lower bound on the weighted combination of the two objectives.
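As a small illustration of this construction, the upper bounds defining the two triangles around an incumbent \({\mathbf {y}}^{{\mathbf {b}}}\) with adjacent points \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {c}}}\) can be written down directly (a sketch; the dictionary keys and the default \(\delta \) are our own choices):

```python
def triangle_bounds(ya, yb, yc, delta=1e-4):
    """Upper bounds defining the left and right triangles of the reduced
    objective space around the incumbent yb (the Fig. 1 construction).
    delta excludes the already-inspected points ya and yc."""
    left = {"UB1": yb[0], "UB2": ya[1] - delta}
    right = {"UB1": yc[0] - delta, "UB2": yb[1]}
    return left, right
```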

We next summarize the steps of the interactive algorithm that finds the most preferred point of a DM whose preferences are consistent with a quasiconvex preference function for a biobjective integer program. In this algorithm, we generalize the concepts used in the approach of Tezcaner and Köksalan (2011) to the quasiconvex case. We use the ideas in Lemmas 1, 2, and 3 and Theorem 1 above to narrow down the objective space.

3.1 The steps of the interactive algorithm (IA)

Let \(w_{LE}=1-\rho \), and \(w_{RE}=\rho \). Throughout the algorithm, we use \(\rho \) as a sufficiently small positive constant to prevent obtaining dominated points (see Steuer 1986, pp. 422–430 for further discussion on how to choose \(\rho \)).

Step 1. Find the two extreme nondominated points of the problem by minimizing \(U({\mathbf {x}})=wz_{1}({\mathbf {x}})+(1-w)z_{2}({\mathbf {x}})\) for \(w=w_{LE}\) and \(w=w_{RE}\), and denote them by \({\mathbf {y}}^{{\mathbf {LE}}}\) and \({\mathbf {y}}^{{\mathbf {RE}}},\) respectively. If \({\mathbf {y}}^{{\mathbf {LE}}}={\mathbf {y}}^{{\mathbf {RE}}}\), the problem has only one nondominated point. Let \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {LE}}}={\mathbf {y}}^{{\mathbf {RE}}}\) and go to Step 10. Otherwise, go to Step 2.

Step 2. Let \(w^{\prime }=\frac{y_{2}^{LE}-y_{2}^{RE}}{\left( y_{2}^{LE}-y_{2}^{RE} \right) +\left( y_{1}^{RE}-y_{1}^{LE} \right) }.\)

Find the solution that minimizes the function \(U({\mathbf {x}})=wz_{1}({\mathbf {x}})+(1-w)z_{2}({\mathbf {x}})\) formed with weight \(w=w^{\prime }\). Let the corresponding point be \({\mathbf {y}}^{{\prime }}\). If \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {LE}}}\) or \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {RE}}}\), set \({\mathbf {y}}^{{\mathbf {L}}}={\mathbf {y}}^{{\mathbf {LE}}}\) and \({\mathbf {y}}^{{\mathbf {R}}}={\mathbf {y}}^{{\mathbf {RE}}}\), and go to Step 3. Otherwise, go to Step 4.

Step 3. Ask the DM to compare \({\mathbf {y}}^{{\mathbf {L}}}\) with \({\mathbf {y}}^{{\mathbf {R}}}\).

  • If \({\mathbf {y}}^{{\mathbf {L}}}\) is preferred to \({\mathbf {y}}^{{\mathbf {R}}}\), set the two bounds of the right triangle as \(z_{2}({\mathbf {x}})\le y_{2}^{L}, z_{1}({\mathbf {x}})\le y_{1}^{R}-\delta .\) Set \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {L}}}.\) Go to Step 8.

  • If \({\mathbf {y}}^{{\mathbf {R}}}\) is preferred to \({\mathbf {y}}^{{\mathbf {L}}}\), set the two bounds of the left triangle as \(z_{2}({\mathbf {x}})\le y_{2}^{L}-\delta \), \(z_{1}({\mathbf {x}})\le y_{1}^{R}.\) Set \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {R}}}\). Go to Step 6.

Step 4. Find the left adjacent point, \({\mathbf {y}}^{{\mathbf {L}}}\), of \({\mathbf {y}}^{{\prime }}\). Ask the DM to compare \({\mathbf {y}}^{{\prime }}\) and \({\mathbf {y}}^{{\mathbf {L}}}\).

  • If \({\mathbf {y}}^{{\prime }}\) is preferred to \({\mathbf {y}}^{{\mathbf {L}}}\), set the two bounds for the left triangle as \(z_{2}({\mathbf {x}})\le y_{2}^{L}-\delta , z_{1}({\mathbf {x}})\le y_{1}^{{\prime }}.\) If no bounds for the right triangle have been defined yet, go to Step 5; otherwise, go to Step 6.

  • If \({\mathbf {y}}^{{\mathbf {L}}}\) is preferred to \({\mathbf {y}}^{{\prime }},\) set the two bounds for the right triangle as \(z_{2}({\mathbf {x}})\le y_{2}^{L}, z_{1}({\mathbf {x}})\le y_{1}^{{\prime }}-\delta .\) Set \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {L}}}\). If \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {LE}}}\), go to Step 8 and if \({\mathbf {y}}^{{\prime }}\ne {\mathbf {y}}^{{\mathbf {LE}}}\), go to Step 4.

Step 5. Find the right adjacent point, \({\mathbf {y}}^{{\mathbf {R}}},\) of \({\mathbf {y}}^{{\prime }}.\) Ask the DM to compare \({\mathbf {y}}^{{\prime }}\) with \({\mathbf {y}}^{{\mathbf {R}}}\).

  • If \({\mathbf {y}}^{{\prime }}\) is preferred to \({\mathbf {y}}^{{\mathbf {R}}}\), set the two bounds for the right triangle as \(z_{1}({\mathbf {x}})\le y_{1}^{R}-\delta \), \(z_{2}({\mathbf {x}})\le y_{2}^{{\prime }}.\) Go to Step 6.

  • If \({\mathbf {y}}^{{\mathbf {R}}}\) is preferred to \({\mathbf {y}}^{{\prime }},\) set the two bounds for the left triangle as \(z_{2}({\mathbf {x}})\le y_{2}^{{\prime }}-\delta , z_{1}({\mathbf {x}})\le y_{1}^{R}.\) Set \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {R}}}\). If \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {RE}}}\), go to Step 6 and if \({\mathbf {y}}^{{\prime }}\ne {\mathbf {y}}^{{\mathbf {RE}}}\), go to Step 5.

Step 6. Search for the left adjacent point of \({\mathbf {y}}^{{\prime }}\) in the left triangle.

  • If there is no new nondominated point:

    • If the bounds for the right triangle have been defined, go to Step 8.

    • If no bounds for the right triangle have been defined yet, go to Step 10.

  • If a new nondominated point is found, denote it by \({\mathbf {y}}^{{\mathbf {L}}}\) and go to Step 7.

Step 7. Ask the DM to compare \({\mathbf {y}}^{{\prime }}\) with \({\mathbf {y}}^{{\mathbf {L}}}\).

  • If \({\mathbf {y}}^{{\prime }}\) is preferred to \({\mathbf {y}}^{{\mathbf {L}}},\) set \(z_{2}({\mathbf {x}})\le y_{2}^{L}-\delta \) for the left triangle and go to Step 6.

  • If \({\mathbf {y}}^{{\mathbf {L}}}\) is preferred to \({\mathbf {y}}^{{\prime }}\), set \(z_{1}({\mathbf {x}})\le y_{1}^{L}\) for the left triangle, and \(z_{2}({\mathbf {x}})\le y_{2}^{L}\) and \(z_{1}({\mathbf {x}})\le y_{1}^{{\prime }}-\delta \) for the right triangle. Set \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {L}}}\) and go to Step 6.

Step 8. Search for the right adjacent point of \({\mathbf {y}}^{{\prime }}\) in the right triangle. If there is no new nondominated point, go to Step 10. If a new nondominated point is found, denote it by \({\mathbf {y}}^{{\mathbf {R}}}\) and go to Step 9.

Step 9. Ask the DM to compare \({\mathbf {y}}^{{\prime }}\) with \({\mathbf {y}}^{{\mathbf {R}}}\).

  • If \({\mathbf {y}}^{{\prime }}\) is preferred to \({\mathbf {y}}^{{\mathbf {R}}},\) set \(z_{1}({\mathbf {x}})\le y_{1}^{R}-\delta \) for the right triangle and go to Step 8.

  • If \({\mathbf {y}}^{{\mathbf {R}}}\) is preferred to \({\mathbf {y}}^{{\prime }},\) set \(z_{2}({\mathbf {x}})\le y_{2}^{{\prime }}-\delta \) and \(z_{1}({\mathbf {x}})\le y_{1}^{R}\) for the left triangle, \(z_{2}({\mathbf {x}})\le y_{2}^{R}\) for the right triangle. Set \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {R}}}\) and go to Step 6.

Step 10. The most preferred point \({\mathbf {y}}^{*}\) is \({\mathbf {y}}^{{\prime }}.\)
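The flow of Steps 1–10 can be mimicked on a finite nondominated set standing in for the integer program. The sketch below is a heavy simplification: with the points sorted by the first objective, list neighbours play the role of adjacent points, and we assume the DM’s comparisons of neighbouring points always indicate the direction of improvement along the frontier. The `prefer` callback models the DM’s responses; all names are our own:

```python
def interactive_search(points, prefer):
    """Simplified stand-in for the interactive algorithm over a finite
    nondominated set.  prefer(p, q) returns True if the DM prefers p to q.
    We move the incumbent toward the preferred neighbour until it is
    preferred to both of its neighbours, then stop (cf. Step 10)."""
    pts = sorted(points)                 # sort by the first objective
    i = 0                                # start at the left extreme point
    while True:
        if i + 1 < len(pts) and prefer(pts[i + 1], pts[i]):
            i += 1                       # right neighbour preferred: move right
        elif i > 0 and prefer(pts[i - 1], pts[i]):
            i -= 1                       # left neighbour preferred: move left
        else:
            return pts[i]                # preferred to both neighbours
```

For a DM whose responses are consistent with the quasiconvex function \(f(y)=\max (0.6y_{1},0.4y_{2})\), the search over the frontier (1, 9), (2, 6), (3, 4), (5, 3), (8, 2), (12, 1) stops at (3, 4), the minimizer of f.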

To demonstrate, consider nondominated points \({\mathbf {y}}^{{\mathbf {a}}},{\mathbf {y}}^{{\mathbf {b}}},{\mathbf {y}}^{{\mathbf {c}}},{\mathbf {y}}^{{\mathbf {d}}}\) and \({\mathbf {y}}^{{\mathbf {e}}}\) in Fig. 2. The most preferred point of the DM that minimizes the underlying preference function shown in Fig. 2a is \({\mathbf {y}}^{{\mathbf {c}}}.\) We find the two extreme nondominated points, \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {e}}}\) (step 1), and we find \({\mathbf {y}}^{{\mathbf {c}}}\) minimizing the function passing through \({\mathbf {y}}^{{\mathbf {a}}}\) and \({\mathbf {y}}^{{\mathbf {e}}}\) (step 2). Since \({\mathbf {y}}^{{\mathbf {c}}}\) is preferred to both its adjacent points \({\mathbf {y}}^{{\mathbf {b}}}\) and \({\mathbf {y}}^{{\mathbf {e}}},\) we reduce the objective space to the two triangles shown in Fig. 2b (steps 4 and 5). We discover that there are no new points in the left triangle (step 6). We next find \({\mathbf {y}}^{{\mathbf {d}}}\) in the right triangle (step 8). The DM prefers \({\mathbf {y}}^{{\mathbf {c}}}\) to \({\mathbf {y}}^{{\mathbf {d}}}\) and we further reduce the objective space accordingly as shown in Fig. 2c (step 9). Since there are no new points in either of the two triangles, we conclude that \({\mathbf {y}}^{{\mathbf {c}}}\) is the most preferred point (step 10).

Note that there might be several efficient solutions that correspond to the same most preferred point obtained. Our approach terminates with one of those most preferred solutions.

3.2 Finding adjacent points

During IA, we find adjacent points of \({\mathbf {y}}^{{\prime }}\) both in the original (in steps 4 and 5) and the reduced objective spaces (in steps 6 and 8). We modify the method presented in Tezcaner and Köksalan (2011) to find the adjacent points. Our modification is in the initialization step to avoid searching for previously-found nondominated points.

Fig. 2
figure 2

Demonstration of the interactive algorithm. a Nondominated set. b Reducing the objective space around \(y^{c}\). c Search within the reduced objective space

We first explain the steps to find the left adjacent point of \({\mathbf {y}}^{{\prime }}\) in the original objective space. Let S be a set that stores all nondominated points found in IA so far.

Step A.0. Let \(\mathrm {A}=\left\{ {\mathbf {y}}^{{\mathbf {i}}}|{\mathbf {y}}^{{\mathbf {i}}}\in S, y_{1}^{i}<y_{1}^{{\prime }} \right\} .\) If \(A=\emptyset \), find the left extreme nondominated point, \({\mathbf {y}}^{{\mathbf {LE}}}\) minimizing \(U({\mathbf {x}})=(1-\rho )z_{1}({\mathbf {x}})+\rho z_{2}({\mathbf {x}})\) where \(\rho \) is as before. Set \({\mathbf {y}}^{{\mathbf {L}}}={\mathbf {y}}^{{\mathbf {LE}}}\) and \(w^{\prime }=\frac{y_{2}^{L}-y_{2}^{{\prime }}}{\left( y_{2}^{L}-y_{2}^{{\prime }} \right) +\left( y_{1}^{{\prime }}-y_{1}^{L} \right) }.\) If \(A\ne \emptyset \), set \(w^{\prime }=\min _{{{\mathbf {y}}}^{{{\mathbf {i}}}}\in A}\left\{ \frac{y_{2}^{i}-y_{2}^{{\prime }}}{\left( y_{2}^{i}-y_{2}^{{\prime }} \right) +\left( y_{1}^{{\prime }}-y_{1}^{i} \right) }\right\} \) and \({\mathbf {y}}^{{\mathbf {L}}}=\hbox {argmin}_{{{\mathbf {y}}}^{{{\mathbf {i}}}}\in A}\left\{ \frac{y_{2}^{i} -y_{2}^{{\prime }}}{\left( y_{2}^{i}-y_{2}^{{\prime }} \right) +\left( y_{1}^{{\prime }}-y_{1}^{i} \right) }\right\} .\)

Step A.1. Find the solution that minimizes \(U({\mathbf {x}})=w^{\prime }z_{1}({\mathbf {x}})+(1-w^{\prime })z_{2}({\mathbf {x}})\) and denote the corresponding objective function vector as \({\mathbf {y}}^{{\mathbf {min}}}.\) If \({\mathbf {y}}^{{\mathbf {min}}}={\mathbf {y}}^{{\mathbf {L}}}\) or \({\mathbf {y}}^{{\prime }},\) go to Step A.2. Otherwise, let \({\mathbf {y}}^{{\mathbf {L}}}={\mathbf {y}}^{{\mathbf {min}}}, w^{\prime }=\frac{y_{2}^{L}-y_{2}^{{\prime }}}{\left( y_{2}^{L}-y_{2}^{{\prime }} \right) +\left( y_{1}^{{\prime }}-y_{1}^{L} \right) }\) and go to Step A.1.

Step A.2. The left adjacent point of \({\mathbf {y}}^{{\prime }}\) is \({\mathbf {y}}^{{\mathbf {L}}}\).

In Step A.0, we set the current left adjacent point of \({\mathbf {y}}^{{\prime }}\) to the left extreme nondominated point if there are no other nondominated points in the region searched. In case there are other points in the searched region, we examine the slopes of the lines joining each such point to \({\mathbf {y}}^{{\prime }}\) and select the point that corresponds to the line that has the least-negative slope. We set \(w^{\prime }\) to the absolute value of the slope of the line joining \({\mathbf {y}}^{{\prime }}\) with its current left adjacent point. In Step A.1, we keep updating the current left adjacent point of \({\mathbf {y}}^{{\prime }}\), each time minimizing \(U({\mathbf {x}})\) formed with a different \(w^{\prime }\). We terminate the algorithm when no new point exists.

A similar algorithm is used to find the right adjacent point of \({\mathbf {y}}^{{\prime }}\) in the original objective space. The main difference is that the right adjacent point of \({\mathbf {y}}^{{\prime }}\) is the nondominated point that is associated with the line having the most-negative slope, considering all lines joining candidate points to \({\mathbf {y}}^{{\prime }}\).
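To make the slope bookkeeping in Step A.0 concrete, the sketch below (our own illustration, not the authors' implementation; the function names are hypothetical) initializes the left-adjacent search from the set S of points found so far:

```python
# Illustrative sketch of Step A.0 for the left-adjacent search.
# Points are (z1, z2) tuples; S holds all nondominated points found so far.

def left_weight(y_i, y_p):
    """Weight w' derived from the line joining y_i (left of and above y_p)
    to y_p; w' = |slope| / (|slope| + 1), so a smaller w' means a flatter
    (less negative) slope."""
    num = y_i[1] - y_p[1]                      # y2^i - y2'
    return num / (num + (y_p[0] - y_i[0]))     # denominator adds y1' - y1^i

def init_left_adjacent(S, y_p, y_LE):
    """Return the starting candidate y^L and weight w' for Step A.1.
    y_LE is the left extreme nondominated point."""
    A = [y for y in S if y[0] < y_p[0]]        # points strictly left of y'
    if not A:
        return y_LE, left_weight(y_LE, y_p)    # fall back to left extreme
    yL = min(A, key=lambda y: left_weight(y, y_p))  # least-negative slope
    return yL, left_weight(yL, y_p)
```

With the example data of Sect. 5.1, initializing around \(y' =\) (32.140, 5.305) with S containing only (29.312, 12.764) yields that point as the current left adjacent point.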

To find the left adjacent point of \({\mathbf {y}}^{{\prime }}\) inside the left triangle defined by bounds \({UB}^{1}\) and \({UB}^{2}\), we also follow steps A.0 to A.2. In this case \(\mathrm {A}=\left\{ {\mathbf {y}}^{{\mathbf {i}}}|{\mathbf {y}}^{{\mathbf {i}}}\in S, y_{1}^{i}<{UB}^{1},y_{2}^{i}\le {UB}^{2} \right\} \) and we use the additional constraints \(z_{k}({\mathbf {x}})\le {UB}^{k}, k=1,2\) each time we search for a solution minimizing \(U({\mathbf {x}})\).

Finding the right adjacent point of \({\mathbf {y}}^{{\prime }}\) inside the right triangle is similar.

3.3 Reducing the objective space further

During IA, we search for both supported and unsupported nondominated points. Supported nondominated points can be found by optimizing a linear combination of the two objectives; the problem thus reduces to its single-objective version, making it possible to use problem-specific algorithms where available. Searching for unsupported nondominated points, on the other hand, may be computationally hard because of the constraints \(y_{k}^{i}\le {UB}^{k}, k=1,2,\) that need to be added to the original problem. To reduce the computational effort of finding unsupported nondominated points, Tuyttens et al. (2000) develop further constraints that shrink the search region. In this section, we present Tuyttens et al.'s approach and improve it to eliminate a larger area of the search region.

Tuyttens et al. study the so-called two-phase algorithm on the biobjective assignment problem. In this algorithm, the nondominated frontier is generated in two phases. In the first phase, the supported nondominated points are generated (see Fig. 3a). This phase is computationally relatively easy because each supported nondominated point can be obtained by minimizing a suitable linear combination of the two objectives. In the second phase, unsupported nondominated points are found by searching the nondominated region between each pair of adjacent supported nondominated points (see Fig. 3b).

Fig. 3
figure 3

The two phase algorithm. a Phase one. b Phase two

Tuyttens et al. propose an additional constraint that can be used to narrow down the search space in the second phase. This constraint reduces the dominated part of the triangles between supported nondominated points as illustrated in Fig. 4. Suppose two new unsupported nondominated points, \(\varvec{y}^\mathbf{3 }\) and \(\varvec{y}^\mathbf{4 },\) are found during the search between the two supported nondominated points, \(\varvec{y}^\mathbf{1 }\) and \(\varvec{y}^\mathbf{2 }.\) From the definition of dominance, we know that the shaded region is dominated by the unsupported nondominated points. Tuyttens et al. construct a line joining the two supported nondominated points, \(\varvec{y}^\mathbf{1 }\) and \(\varvec{y}^\mathbf{2 }\), and move it in the direction of dominated points. They search for the first point where the line has no intersection with the nondominated region inside the triangle. The idea is to cover as big a portion of the shaded region as possible using a linear constraint. In the example in Fig. 4, there are three possibilities for this line to pass through. These are the points that have the worst objective values of the two neighboring nondominated points. Przybylski et al. (2008) refer to these points as local nadir points. We demonstrate the local nadir points, \({\varvec{\eta }}^\mathbf{1 },{\varvec{\eta }}^\mathbf{2 }\) and \({\varvec{\eta }}^\mathbf{3 },\) with values \(\left( \eta _{1}^{1},\eta _{2}^{1} \right) =\left( y_{1}^{3},y_{2}^{1} \right) ,\left( \eta _{1}^{2},\eta _{2}^{2} \right) =\left( y_{1}^{4},y_{2}^{3} \right) ,\) and \(\left( \eta _{1}^{3},\eta _{2}^{3} \right) =\left( y_{1}^{2},y_{2}^{4} \right) ,\) and the lines that pass through them in Fig. 4b.

Among these three possibilities, only the line through \({\varvec{\eta }}^\mathbf{2 }\) does not have an intersection with the nondominated regions. Therefore, the value of this line is set as an upper bound \(\left( wz_{1}(\varvec{x})+\left( 1-w \right) z_{2}(\varvec{x})\le wy_{1}^{4}+\left( 1-w \right) y_{2}^{3}={UB}^{c} \right) \) as shown in Fig. 4c, where \(w=\frac{y_{2}^{1}-y_{2}^{2}}{\left( y_{2}^{1}-y_{2}^{2} \right) +\left( y_{1}^{2}-y_{1}^{1} \right) }\). Przybylski et al. (2008) modify this bound exploiting the fact that all objective coefficients are integer valued.
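The local nadir points and the resulting bound \({UB}^{c}\) are straightforward to compute once the points inside the triangle are known. The following is a minimal sketch (our own, with hypothetical function names), assuming the points are given as \((z_1, z_2)\) pairs and that the inner points are nondominated together with the two supported points:

```python
# Illustrative sketch: local nadir points between consecutive nondominated
# points (sorted by the first objective) and the Tuyttens-style bound UB^c
# through the nadir with the worst weighted value.

def local_nadirs(points):
    """points: nondominated (z1, z2) pairs; returns the local nadir of each
    consecutive pair after sorting by increasing z1."""
    pts = sorted(points)
    return [(pts[i + 1][0], pts[i][1]) for i in range(len(pts) - 1)]

def tuyttens_bound(y1, y2, inner):
    """y1, y2: supported nondominated points with y1 left of y2; inner:
    unsupported nondominated points found between them. Returns the weight
    w of the line through y1 and y2 and the bound UB^c."""
    w = (y1[1] - y2[1]) / ((y1[1] - y2[1]) + (y2[0] - y1[0]))
    nadirs = local_nadirs([y1, y2] + inner)
    ub = max(w * n1 + (1 - w) * n2 for n1, n2 in nadirs)  # worst nadir
    return w, ub
```

The maximum over the local nadirs is what makes the cut valid: any yet-unknown nondominated point lies inside one of the sub-triangles below a local nadir, where the weighted value cannot exceed that of the nadir itself.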

Although constraint \(wz_{1}({\mathbf {x}})+\left( 1-w \right) z_{2}({\mathbf {x}})\le {UB}^{c}\) reduces a part of the dominated set, a dominated region still remains unaccounted for. We improve this bound in order to further reduce the unaccounted dominated region with linear constraints. We next develop an algorithm to determine a number of upper bounds, which we refer to as weighted upper bounds, in the triangles. Throughout the procedure below, we search within the triangle defined by the two nondominated points \(\varvec{y}^\mathbf{1 }, \varvec{y}^\mathbf{2 },\) and their nadir point, where we assume \(y_{1}^{1}<y_{1}^{2}\) and \(y_{2}^{1}>y_{2}^{2}\), without loss of generality. Let w be as above.

3.3.1 W-UB Procedure

Step U.0. Find local nadir points of all neighboring nondominated points within the triangle and place them in set L. Denote the nadir point l as \({\varvec{\eta }}^{{\mathbf {l}}}\).

Step U.1. Find the local nadir point that gives the largest combined objective value, \({\varvec{\eta }}^{*}={\mathop {\hbox {argmax }}\limits _{{\varvec{\eta }}^{{{\mathbf {l}}}}\in L}}{\left\{ w\eta _{1}^{l}+(1-w)\eta _{2}^{l}\right\} }\). Set \({\varvec{\eta }}^{\mathbf{0}}={\varvec{\eta }}^{*}\).

Step U.2. Find the left adjacent nadir point, \({\varvec{\eta }}^{\varvec{L}{} \mathbf 0 },\) of \({\varvec{\eta }}^{\mathbf{0}}\), treating the problem as a maximization problem. If there are no left adjacent nadir points in the search space, set \({\varvec{\eta }}^{\mathbf{0}}={\varvec{\eta }}^{*}\) and go to Step U.3. Otherwise, define a new weighted upper bound, \(w^{{\prime }}z_{1}({\mathbf {x}})+\left( 1-w^{{\prime }} \right) z_{2}({\mathbf {x}})\le UB\left( w^{{\prime }} \right) =w^{\prime }\eta _{1}^{0}+\left( 1-w^{\prime } \right) \eta _{2}^{0}\) where \(w^{{\prime }}=\frac{\eta _{2}^{L0}-\eta _{2}^{0}}{\left( \eta _{2}^{L0}-\eta _{2}^{0} \right) +\left( \eta _{1}^{0}-\eta _{1}^{L0} \right) }\). If \(\eta _{2}^{L0}=y_{2}^{1},\) set \({\varvec{\eta }}^{\mathbf{0}}={\varvec{\eta }}^{*}\) and go to Step U.3. If \(\eta _{2}^{L0}\ne y_{2}^{1}\), set \({\varvec{\eta }}^{\mathbf{0}}={\varvec{\eta }}^{\varvec{L}{} \mathbf{0}}\) and go to Step U.2.

Step U.3. Find the right adjacent nadir point, \({\varvec{\eta }}^{\varvec{R0}},\) of \({\varvec{\eta }}^{\mathbf{0}}\), treating the problem as a maximization problem. If there are no right adjacent nadir points in the search space, terminate the algorithm. Otherwise, define a new weighted upper bound \(w^{\prime }z_{1}({\mathbf {x}})+\left( 1-w^{\prime } \right) z_{2}({\mathbf {x}})\le UB\left( w^{{\prime }} \right) =w^{\prime }\eta _{1}^{0}+\left( 1-w^{\prime } \right) \eta _{2}^{0}\) where \(w^{{\prime }}=\frac{\eta _{2}^{0}-\eta _{2}^{R0}}{\left( \eta _{2}^{0}-\eta _{2}^{R0} \right) +\left( \eta _{1}^{R0}-\eta _{1}^{0} \right) }\). If \(\eta _{1}^{R0}=y_{1}^{2},\) terminate the algorithm. If \(\eta _{1}^{R0}\ne y_{1}^{2},\) set \({\varvec{\eta }}^{\mathbf{0}}={\varvec{\eta }}^{\varvec{R}{} \mathbf{0}}\) and go to Step U.3.

Fig. 4
figure 4

The additional constraint in phase two. a Dominated regions. b Local nadir points. c Additional constraint

We may visualize the set of local nadir points, L, as the nondominated points of a problem where more is better (maximization) in each criterion. With this analogy, steps U.2 and U.3 of the W-UB procedure find the left and right adjacent points of a given nondominated (local nadir) point, where all points are readily available. To find the left adjacent point of \({\varvec{\eta }}^{\mathbf{0}},\) we check the lines that pass through \({\varvec{\eta }}^{\mathbf{0}}\) and each other point whose first objective value is smaller than that of \({\varvec{\eta }}^{\mathbf{0}}.\) The (local nadir) point corresponding to the line with the steepest (most negative) slope is the left adjacent point. The right adjacent point of \({\varvec{\eta }}^{\mathbf{0}}\) is found in a similar manner on the right side of \({\varvec{\eta }}^{\mathbf{0}},\) using the line that has the flattest (least negative) slope among the candidate lines. Specifically, we find the left and right adjacent points of \({\varvec{\eta }}^{\mathbf{0}}\) as follows:

$$\begin{aligned} {\varvec{\eta }}^{{{\mathbf {L}}}\mathbf{0}}= & {} \mathop {\hbox {argmax}_{{\varvec{\eta }}^{{{\mathbf {l}}}}\in Q}}\left\{ \frac{\eta _{2}^{l}-\eta _{2}^{0}}{\left( \eta _{2}^{l}-\eta _{2}^{0} \right) +\left( \eta _{1}^{0}-\eta _{1}^{l} \right) } \right\} \quad \hbox { where } \mathrm {Q}=\left\{ {\varvec{\eta }}^{{\mathbf {l}}}|{\varvec{\eta }}^{{\mathbf {l}}}\in L,\eta _{1}^{l}<\eta _{1}^{0} \right\} .\\ {\varvec{\eta }}^{{{\mathbf {R}}}\mathbf{0}}= & {} \mathop {\hbox {argmin}_{{\varvec{\eta }}^{{{\mathbf {l}}}}\in Q}}\left\{ \frac{\eta _{2}^{0}-\eta _{2}^{l}}{\left( \eta _{2}^{0}-\eta _{2}^{l} \right) +\left( \eta _{1}^{l}-\eta _{1}^{0} \right) } \right\} \quad \hbox { where } \mathrm {Q=}\left\{ {\varvec{\eta }}^{{\mathbf {l}}}|{\varvec{\eta }}^{{\mathbf {l}}}\in L,\eta _{1}^{l}>\eta _{1}^{0} \right\} . \end{aligned}$$

These weighted upper bounds impose cuts in the feasible space eliminating a larger portion of the dominated space. Using this procedure, we replace the bound given in the example of Fig. 4c with the two bounds shown in Fig. 5. The procedure to generate such bounds may be used to reduce the solution space in any biobjective integer program to improve computational efficiency.
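Because each weight \(w^{\prime }\) in steps U.2 and U.3 is derived from the slope between a pair of adjacent nadir points, each resulting cut passes through both points of the pair. A compact way to sketch the procedure's output is therefore one cut per consecutive nadir pair. The code below is our own simplification under the assumption that the nadir points already form the maximization frontier, which holds when they are computed from a sequence of nondominated (minimization) points:

```python
# Illustrative sketch of the W-UB output: treat the local nadir points as a
# 'more is better' frontier; each cut is the line through a consecutive pair.

def weighted_upper_bounds(nadirs):
    """nadirs: local nadir points (eta1, eta2), assumed to lie on the
    maximization frontier (increasing eta1, decreasing eta2).
    Returns a list of (w', UB(w')) cuts, one per adjacent pair."""
    pts = sorted(nadirs)
    cuts = []
    for (a1, a2), (b1, b2) in zip(pts, pts[1:]):
        w = (a2 - b2) / ((a2 - b2) + (b1 - a1))  # slope-derived weight
        cuts.append((w, w * b1 + (1 - w) * b2))  # line through both points
    return cuts
```

Each cut \(w^{\prime }z_{1}+\left( 1-w^{\prime } \right) z_{2}\le UB(w^{\prime })\) is tight at its two defining nadir points, so together the cuts trace the piecewise-linear upper envelope of the nadir frontier.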

Fig. 5
figure 5

Additional constraints in phase two

4 Generalized biobjective traveling salesperson problem

We developed IA to find the most preferred point of a DM for any biobjective integer program. In this section, we demonstrate it on the generalized BOTSP.

Most of the literature on the multiobjective traveling salesperson problem (MOTSP) assumes that each node pair is connected with a single edge that is naturally efficient. This is not a realistic assumption, since, under multiple objectives, there would typically be a number of efficient edges between each node pair, each better than any other efficient edge in at least one objective. We refer to the case where each node pair is connected by a number of efficient edges as the generalized MOTSP. The MOTSP with a single edge between nodes is a special case of the generalized MOTSP. More information on the generalized MOTSP can be found in Tezcaner and Köksalan (2011).

MOTSP with a single edge between nodes is NP-hard (Ehrgott 2005, p. 279). Mostly, heuristics have been developed for the solution of this problem. Lust and Teghem (2010) classify these solution approaches and develop a new method, two-phase Pareto local search, for MOTSP. Paquete and Stützle (2003) develop a two-phase local search method and Jaszkiewicz and Zielniewicz (2009) consider a Pareto memetic algorithm using path relinking and Pareto local search. Jozefowiez et al. (2008), Karademir (2008) and Bérubé et al. (2009) consider a special traveling salesperson problem (TSP), TSP with profits. Jozefowiez et al. approximate the nondominated frontier by an evolutionary algorithm (EA), Karademir proposes a genetic algorithm and Bérubé et al. generate the nondominated frontier using the \(\upvarepsilon \)-constraint method (see Chankong and Haimes 1983). Hansen (2000) uses a scalarizing function to solve MOTSP. Special cases of MOTSP are studied in Özpeynirci and Köksalan (2009, 2010). To the best of our knowledge, Tezcaner and Köksalan's (2011) study is the only one that defines and addresses the generalized MOTSP. It develops a general interactive algorithm to find the most preferred point for a biobjective integer program and implements it for the generalized BOTSP.

Fig. 6
figure 6

Generalized BOTSP. a Efficient edges. b Efficient tours

We demonstrate a special generalized BOTSP example in Fig. 6. Here we consider the three circular areas as regions to be avoided (one of the objectives), with the central parts more critical to avoid than the parts toward the circumference. The other objective is to minimize the total distance traveled. There are 5 nodes to visit in a terrain divided into grids (equal-length intervals on both the x and y axes). The vehicle is assumed to move horizontally, vertically, or diagonally through a grid. There are three efficient edges between nodes 1 and 5 as shown in Fig. 6a. These efficient edges arise from the conflict between the two objectives: the efficient edge that totally avoids the threat region has the largest distance, whereas the edge with the shortest distance exposes the vehicle to the greatest threat. Figure 6b shows two efficient tours that pass through all nodes. The dashed lines correspond to the first tour (1-5-4-3-2-1 or equivalently 1-2-3-4-5-1 since the graph is assumed to be undirected) and the light solid lines correspond to the second tour (1-4-3-5-2-1 or 1-2-5-3-4-1). The dark solid lines are common to both tours.

In a realistic setting, an edge between a pair of nodes could correspond to a path that passes through a set of subnodes. For example, subnodes would correspond to the corner points of grids (grid points) in Fig. 6 and to the intersection points of highways in a road network. Therefore, an edge corresponds to a path between a pair of nodes passing through a subset of subnodes. Then, finding the efficient edges between a node pair requires solving a BOSPP between that node pair. Although the BOSPP is NP-complete (Ehrgott 2005, p. 222), for practical size problems the number of efficient edges is typically small (see Müller-Hannemann and Weihe 2006) and finding them is manageable in general. Once the efficient edges are formed, they can be used as input and the generalized BOTSP can be formulated as below.

Let \(G=(N,E)\) be an undirected graph with node set \(N=\left\{ 1,2,\ldots \right\} , e_{ijr}\) be efficient edge r connecting node pair \((i,j), R_{ij}\) be the index set of efficient edges between node pair (ij), and \(E=\left\{ e_{121},e_{122},\ldots ,e_{\left| N\right| ,\left| N \right| -1,\left| R_{\left| N \right| ,\left| N \right| -1} \right| }\right\} \) be the set of all efficient edges. Let \(x_{ijr}\) be the binary decision variable that indicates whether efficient edge \(r\in {R}_{ij}\) is used in the tour or not, \(c_{ijr}^{k}\) denote the value of objective function k of efficient edge \(r\in {R}_{ij},\) and \(P=\left\{ (i,j)|i\in N,j\in N,i\ne j\right\} \) be the set of all node pairs. The problem can be formulated as:

$$\begin{aligned}&\hbox {(G-BOTSP)}\nonumber \\&\quad \mathrm {Min }\,z_{1}({\mathbf {x}})=\sum \limits _{(i,j)\in P} \sum \limits _{r\in {{R}}_{ij}} {c_{ijr}^{1}x_{ijr}} \end{aligned}$$
(9)
$$\begin{aligned}&\mathrm {Min }\,z_{2}({\mathbf {x}})=\sum \limits _{(i,j)\in P} \sum \limits _{r\in {{R}}_{ij}}{c_{ijr}^{2}x_{ijr}} \end{aligned}$$
(10)
$$\begin{aligned}&\hbox {Subject to:}\nonumber \\&\quad \sum \limits _{j\in N} {\sum \limits _{r\in {{R}}_{ij}} x_{ijr} =1}\quad i\in N \end{aligned}$$
(11)
$$\begin{aligned}&\sum \limits _{i\in N} {\sum \limits _{r\in {{R}}_{ij}} x_{ijr} =1} \quad j\in N \end{aligned}$$
(12)
$$\begin{aligned}&u_{i}-u_{j}+\sum \limits _{r\in {{R}}_{ij}} {(|N|-1)x_{ijr}}\nonumber \\&\quad +\sum \limits _{r\in {{R}}_{ji}} {(|N|-3)x_{jir}} \le \left| N \right| -2 \quad i,j\in N\backslash \left\{ 1 \right\} , i\ne j \end{aligned}$$
(13)
$$\begin{aligned}&1\le u_{i}\le \left| N \right| -1 \quad i\in N\backslash \left\{ 1\right\} \end{aligned}$$
(14)
$$\begin{aligned}&x_{ijr}\in \{0, 1\} \quad \left( i,j \right) \in P, r\in {{R}}_{ij} \end{aligned}$$
(15)

The two objectives, \(z_{1}({\mathbf {x}})\) and \(z_{2}({\mathbf {x}})\), are any two sum-type objectives; they are minimized in (9) and (10), respectively. We assume that \(c_{ijr}^{k}\ge 0\) for all \(\left( i,j \right) \in P,k=1,2,\) and \(r\in {{R}}_{ij}\). Constraints (11) and (12) ensure departure from and arrival at each node, respectively. There are different possible formulations for subtour elimination constraints. Laporte (1992) reviews algorithms developed for the TSP, covering several subtour elimination constraints including the formulation introduced by Miller et al. (1960), which was later strengthened. Constraints (13) and (14) are the strengthened version of Miller et al.'s formulation. This formulation requires fewer additional constraints than other subtour elimination formulations; however, it requires including the \(|N|-1\) positive decision variables \(u_{i}\) in the formulation.
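To see why constraints (13) and (14) admit every genuine tour, one can check that setting \(u_{i}\) to the visit position of node i satisfies them. The following check is our own illustration (not part of the paper's formulation), treating the used edges as directed and collapsing the edge index r, which does not affect the constraint values:

```python
# Illustrative check: a genuine tour starting at node 1, with u_i set to the
# visit position of node i, satisfies the strengthened MTZ constraints
# (13)-(14) of (G-BOTSP).

def mtz_satisfied(tour):
    """tour: node sequence starting at node 1, e.g. [1, 3, 2, 4]."""
    n = len(tour)                                         # |N|
    u = {node: pos for pos, node in enumerate(tour) if node != 1}
    used = {(tour[k], tour[(k + 1) % n]) for k in range(n)}  # directed edges
    for i in u:                                           # i, j in N \ {1}
        for j in u:
            if i == j:
                continue
            lhs = (u[i] - u[j]
                   + (n - 1) * ((i, j) in used)            # (|N|-1) x_ij term
                   + (n - 3) * ((j, i) in used))           # (|N|-3) x_ji term
            if lhs > n - 2:                                # constraint (13)
                return False
    return all(1 <= u[i] <= n - 1 for i in u)              # constraint (14)
```

For instance, when edge (i, j) is on the tour, the constraint reduces to \(u_{j}\ge u_{i}+1\), which visit positions satisfy by construction, while a subtour not containing node 1 would force a cyclic chain of such inequalities with no feasible \(u_{i}\).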

In order to solve (G-BOTSP), we first need to find all efficient edges between all node pairs. An algorithm developed for solving multiobjective shortest path problems can be used for this purpose. Alternatively, we may employ the \(\upvarepsilon \)-constraint method (see Chankong and Haimes 1983), in which one objective is optimized while the other is constrained to satisfy an aspiration level, \(\upvarepsilon \). By changing \(\upvarepsilon \) systematically, all efficient edges can be found.
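The \(\upvarepsilon \)-constraint loop can be sketched as follows. For illustration only, we enumerate a candidate set of paths rather than solving a constrained shortest path problem, and we tighten \(\upvarepsilon \) by a small step after each solve; an exact implementation would replace the enumeration with a solver call and could exploit integrality of the objective values when choosing the step:

```python
# Illustrative epsilon-constraint loop on an enumerated candidate set.
# candidates: (z1, z2) objective values of paths between one node pair.

def efficient_by_epsilon_constraint(candidates, step=1e-6):
    """Minimize z1 subject to z2 <= eps, tightening eps after each solve.
    Lexicographic tie-breaking on (z1, z2) keeps only efficient points."""
    efficient = []
    eps = max(z2 for _, z2 in candidates)   # start effectively unconstrained
    while True:
        feasible = [c for c in candidates if c[1] <= eps]
        if not feasible:
            break
        best = min(feasible, key=lambda c: (c[0], c[1]))  # lexicographic min
        efficient.append(best)
        eps = best[1] - step                # force strict improvement in z2
    return efficient
```

Note that the lexicographic tie-breaking matters: a plain minimization of \(z_1\) could return a weakly dominated point when several paths share the same \(z_1\) value.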

During IA, we find nondominated points both in the original and reduced objective spaces. We next discuss these two cases.

4.1 Finding nondominated points in the original objective space

In Steps 1 through 5 of IA, we find tours for the combined objective \(U({\mathbf {x}})=wz_{1}({\mathbf {x}})+(1-w)z_{2}({\mathbf {x}})\) without imposing additional constraints in the objective space. The only difference of this formulation from the single-objective TSP is the existence of multiple efficient edges between node pairs. Note that, for each node pair \((i,j)\), only an efficient edge that minimizes the weighted edge cost can appear in a tour minimizing \(U({\mathbf {x}}).\) Recall that \(c_{ijr}^{k}\) denotes the \(k^{th}\) objective value of efficient edge \(r\in {{R}}_{ij},k=1,2.\) Using the weights of \(U({\mathbf {x}}),\) we create a dummy edge for each node pair \(\left( i,j \right) \) with the following edge cost:

$$\begin{aligned} c_{ij}(w)=\min \nolimits _{r\in R_{ij}}\left\{ wc_{ijr}^{1}+\left( 1-w \right) c_{ijr}^{2} \right\} \qquad \quad \left( i,j \right) \in P \end{aligned}$$

We then solve the TSP using the following objective function, where \(h_{ij}\) is a binary variable to indicate whether or not the edge connecting node pair (ij) is used.

$$\begin{aligned} \mathrm {Min }\,z({\mathbf {h}})=\sum \limits _{(i,j)\in P} {c_{ij}(w)h_{ij}} \end{aligned}$$

We solve the resulting single-objective TSP using the Concorde software (see http://www.math.uwaterloo.ca/tsp/concorde.html) inputting the dummy edge cost for each node pair (ij).
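Computing the dummy edge costs \(c_{ij}(w)\) is a single pass over the efficient edges of each node pair. A minimal sketch (our own, with a hypothetical data layout; the resulting single-objective costs are what would be passed to a TSP solver such as Concorde):

```python
# Illustrative sketch of the dummy-edge reduction: for a given weight w,
# each node pair keeps a single cost, the best weighted combination over
# its efficient edges.

def dummy_costs(w, edges):
    """edges: dict mapping node pair (i, j) to a list of (c1, c2)
    efficient-edge cost pairs. Returns c_ij(w) for every node pair."""
    return {pair: min(w * c1 + (1 - w) * c2 for c1, c2 in costs)
            for pair, costs in edges.items()}
```

Because every tour uses exactly one edge per selected node pair, minimizing the resulting single-objective TSP over the dummy costs is equivalent to minimizing \(U({\mathbf {x}})\) over the original multi-edge formulation.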

4.2 Finding nondominated points in the reduced objective space

In Steps 6–9 of IA, we have the following two constraints in addition to constraints (11)–(15).

$$\begin{aligned}&\sum \limits _{(i,j)\in P} \sum \limits _{r\in {{R}}_{ij}} {c_{ijr}^{1}x_{ijr}} \le {\textit{UB}}^{1} \end{aligned}$$
(16)
$$\begin{aligned}&\sum \limits _{(i,j)\in P} \sum \limits _{r\in {{R}}_{ij}} {c_{ijr}^{2}x_{ijr}} \le {\textit{UB}}^{2} \end{aligned}$$
(17)

As mentioned before, the nondominated points in the reduced objective space are unsupported nondominated points. As we find new unsupported points, we can introduce cuts of the form below using W-UB Procedure to further reduce the objective space:

$$\begin{aligned} w^{{\prime }}z_{1}({\mathbf {x}})+\left( 1-w^{{\prime }} \right) z_{2}({\mathbf {x}})\le \textit{UB}\left( w^{{\prime }} \right) \end{aligned}$$
(18)

The introduction of (16)–(18) destroys the input structure required by Concorde, making it inapplicable. In the absence of Concorde, we may use any integer programming solver to find unsupported nondominated points minimizing \(U({\mathbf {x}})=w^{\prime }z_{1}({\mathbf {x}})+(1-w^{\prime })z_{2}({\mathbf {x}})\) subject to constraints (11)–(18). In our application, we use the CPLEX solver.

5 Demonstration and computational results

In this section, we first provide an example problem to demonstrate our algorithm. The problem is a special case of the generalized BOTSP, the unmanned air vehicle (UAV) route planning problem. We then present computational results for randomly generated instances of the generalized BOTSP.

5.1 An example

In the UAV route planning problem, we seek the route the vehicle should follow through a terrain to visit all predetermined targets. This problem is a special case of the generalized BOTSP. We consider a number of nodes (targets) to visit, where each node pair is connected by a number of efficient edges. We assume there are two minimization-type objectives: total distance traveled and total radar detection threat.

The approaches that have been developed for the route planning problem of UAVs are mostly heuristic in nature (see, for example, Zheng et al. 2003; Foo et al. 2009; Wu et al. 2009; Waldock and Corne 2012). Many of them combine the objectives linearly into a single objective and search for an "optimal" solution rather than addressing the tradeoffs between multiple efficient solutions. To the best of our knowledge, Tezcaner and Köksalan's (2011) approach is the only implementation of an interactive approach to biobjective UAV route planning.

We demonstrate our algorithm using the UAV route planning problem introduced by Tezcaner and Köksalan (2011), which they solved assuming an underlying linear preference function. In this problem, there are 5 nodes and 3 radars. Here we assume that the DM has an underlying Tchebycheff preference function \(f\left( {\mathbf {y}} \right) =\mathrm {max }[0.33\left( y_{1}-y_{1}^{**} \right) ,0.67\left( y_{2}-y_{2}^{**} \right) ]\) to be minimized, where \({\mathbf {y}}^{**}=(y_{1}^{**},y_{2}^{**})\) is a utopia point having values that are less than or equal to the best possible values in each objective. We set \({\mathbf {y}}^{**}=\) (0,0). We pretend that we do not know f and use it only to simulate the preferences of the DM in responding to pairwise comparisons. For demonstration purposes, we provide the nondominated set of the problem in the objective space in Fig. 7. There are three efficient tours and a set of nondominated points for each tour. The efficient solutions of different tours are shown with different shapes in Fig. 7. The objective function values of each nondominated point are given in Table 1. Evaluating f at each nondominated point, we see that the most preferred point of the DM is \({\mathbf {y}}^{*}=\) (29.312, 12.764). In our algorithm, we pretend that we do not know the nondominated set, f, or the most preferred point.
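As an illustration of how f simulates the DM, the sketch below evaluates it on the nondominated points that appear in this section (a subset of Table 1) with \({\mathbf {y}}^{**}=(0,0)\); the minimizer among them is indeed (29.312, 12.764):

```python
# Simulated DM: Tchebycheff preference function with weights 0.33 / 0.67
# and utopia point (0, 0), evaluated on points reported in this section.

def tcheb(y, lam=0.33, utopia=(0.0, 0.0)):
    """f(y) = max[lam (y1 - y1**), (1 - lam) (y2 - y2**)], to be minimized."""
    return max(lam * (y[0] - utopia[0]), (1 - lam) * (y[1] - utopia[1]))

points = [(29.312, 12.764), (39.312, 4.744), (32.140, 5.305),
          (31.898, 8.344), (31.554, 8.491), (30.726, 10.029)]
best = min(points, key=tcheb)   # the simulated DM's most preferred point
```

A pairwise comparison is simulated the same way: the DM prefers whichever of the two points has the smaller f value.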

Before running the algorithm, we search for all efficient edges between all node pairs using the \(\upvarepsilon \)-constraint method and find 44 of them.

Fig. 7
figure 7

Nondominated set of the UAV route planning problem

Table 1 Nondominated points

In the first step of IA, we find the two extreme nondominated points, \({\mathbf {y}}^{{\mathbf {LE}}}=\,\)(29.312, 12.764) and \({\mathbf {y}}^{{\mathbf {RE}}}=\,\)(39.312, 4.744). We then go to Step 2 and find the nondominated point, \({\mathbf {y}}^{\prime }=\,\)(32.140, 5.305), corresponding to the solution minimizing \(U({\mathbf {x}})\) for \(w^{\prime }=\,\)0.445. In Step 4, we find the left adjacent point of \({\mathbf {y}}^{\prime }\), \({\mathbf {y}}^{{\mathbf {L}}}=\,\)(29.312, 12.764), and ask the DM to compare \({\mathbf {y}}^{\prime }\) with \({\mathbf {y}}^{{\mathbf {L}}}.\) The preference function f implies that the DM prefers \({\mathbf {y}}^{{\mathbf {L}}}.\) Thus, we set the two bounds for the right triangle as \(z_{2}({\mathbf {x}})\le \,\)12.764 and \(z_{1}({\mathbf {x}})\le \,\)32.140\(-\,\delta \). The current most preferred point is set to \({\mathbf {y}}^{{\prime }}=\,\)(29.312, 12.764) and, since \({\mathbf {y}}^{{\prime }}={\mathbf {y}}^{{\mathbf {LE}}}\), we go to Step 8. We then search for the right adjacent point of \({\mathbf {y}}^{\prime }=\,\)(29.312, 12.764) within the triangle and find the three unsupported nondominated points (31.898, 8.344), (31.554, 8.491), and (30.726, 10.029), successively.

In this small example, we end up finding 9 of the 13 nondominated points and ask the DM to compare 4 nondominated point pairs in order to find the most preferred point. We next give some results for larger instances.

Table 2 Results for 5, 8, and 15-node problems

5.2 Computational results

To further demonstrate its performance, we test IA on several randomly generated instances. We consider 5, 8, and 15-node generalized BOTSPs, in which we generate 20 efficient edges between each node pair. For each problem type, we create 10 different instances randomly generating the objective values of the edges from a uniform distribution with range [0, 100].

We solve all the problems assuming that the DM has an underlying Tchebycheff preference function \(f\left( {\mathbf {y}} \right) =\mathrm {max }[\lambda \left( y_{1}-y_{1}^{**}\right) , \left( 1-\lambda \right) \left( y_{2}-y_{2}^{**} \right) ]\) to be minimized, as above. We solve the instances for three different weight values, \(\lambda =0.2\), 0.5, and 0.8. We implement the algorithm in \(\hbox {C}{++}\) and run the tests on a PC with an Intel Core i7-3770 CPU and 8 GB RAM.

In order to report the size of the full nondominated set, we solve the whole problem using the \(\upvarepsilon \)-constraint method. We report these sizes in Table 2 to provide a sense of the magnitude of the problem. We also present the averages and the ranges of the number of nondominated points found, as well as the number of comparisons made by our algorithm, in Table 2. There is a large deviation in the total number of nondominated points across the 10 instances of each problem size, as seen from the ranges in Table 2. This in turn leads to large variations in the number of nondominated points found and the number of comparisons made. The algorithm finds only a small percentage (2–15%) of the nondominated points in all problems and eliminates roughly a third of the generated nondominated points directly, without asking the DM.

Although there are variations in solution times, all problems are solved quickly. For the 5, 8, and 15-node problems, the average solution times are 17.429, 31.546, and 95.997 s, respectively. As expected, execution times grow with problem size.

6 Conclusions

In this study, we develop an interactive algorithm for bicriteria integer programs that finds the most preferred point of a DM whose preferences are consistent with a quasiconvex function. The algorithm gradually reduces the search region around the most preferred point of the DM considering his/her preferences. We ask the DM to compare adjacent points, which may be supported or unsupported depending on the search region. To reduce the computational effort in finding unsupported nondominated points, we improve the bound developed by Tuyttens et al. (2000) and eliminate more of the search region.

We apply the interactive algorithm to the generalized BOTSP that has multiple efficient edges between nodes and test the algorithm on randomly generated instances with 5, 8, and 15 nodes. The results show fast convergence to the most preferred point of the DM avoiding unnecessary preference comparisons.

During the interactive algorithm, the majority of the computational effort is devoted to finding unsupported nondominated points in reduced objective spaces. In its current version, the interactive algorithm can be applied to any biobjective integer program. When it is applied to specific problem types, problem-specific approaches can be used to improve the computational efficiency of this search. Alternatively, for larger instances that are computationally prohibitive even in the single-objective case, heuristics can be developed for finding unsupported nondominated points. Another possibility for handling the computational complexity is to develop an approximation algorithm with a reasonable worst-case performance bound. For example, a larger-than-warranted portion of the objective space can be eliminated each time a point is preferred to its adjacent points, allowing for a small deviation from the most preferred point. Development of such heuristics is a promising area for future research.

Currently, our algorithm is applicable to any biobjective integer program. Generalizing it to mixed integer programs is another avenue for future research. One main difficulty would be the existence of continuous nondominated frontiers. In this case, the selection of the point pairs to present to the DM should be reconsidered. Additionally, the space reduction following the DM's responses might need to be modified considering the shape of the frontier. The stopping condition should also be readdressed because of the continuous frontier.