Abstract
This paper generalizes inverse optimization to multi-objective linear programming, where we seek the least problem modification that makes a given feasible solution a weak efficient solution. This is a natural extension of inverse optimization for single-objective linear programming, with regular “optimality” replaced by “Pareto optimality”. This extension, however, leads to a non-convex optimization problem. We prove some special characteristics of the problem, allowing us to solve the non-convex problem by solving a series of convex problems.
1 Introduction
Multi-objective linear programming (MOLP) deals with multi-objective optimization problems where all the objectives and constraints are linear. These problems arise in many fields, including engineering, finance, and medicine [3, 12, 21]. In multi-objective optimization, the optimality concept used in single-objective optimization is replaced with Pareto optimality: a feasible solution is Pareto optimal if it is impossible to improve some objective functions without compromising others.
Inverse optimization (IO), initiated by Burton and Toint [6] in 1992, is a relatively new field of optimization dealing with problems in which some of the parameters are not precisely known and only estimates of them are available. Instead, some information about the problem, such as particular solutions or objective function values, is given from experience or from experiments, and the aim is to determine the values of the parameters from this information. In the past years, several types of IO have been studied [7, 13, 16, 23].
This paper deals with an important type of IO in which we look for the least problem modification that makes a given feasible solution of the problem an optimal solution. In other words, if \(\hat{x}\) is a feasible solution of the problem, how can we modify the problem, for example by changing the cost function, as little as possible according to some metric so that \(\hat{x}\) becomes an optimal solution? This problem has been well studied for different types of single-objective optimization problems, including linear programming [1, 2, 22, 23], combinatorial optimization [13], conic programming [14], integer programming [19], and countably infinite linear programming [11]. Roland et al. [18] studied inverse optimization for a special class of multi-objective combinatorial optimization problems where the least modification of the criteria matrix is sought to turn a set of feasible points into a set of Pareto optimal points. They demonstrated that inverse multi-objective combinatorial optimization under some norms can be solved by algorithms based on mixed integer programming. This paper deals with inverse multi-objective linear programming (IMOLP), where the least criteria matrix modification is sought to turn a given feasible solution into a Pareto optimal solution.
For the IO problems studied in this paper, and in all the aforementioned papers, a feasible point \(\hat{x}\) could in principle be an interior point of the feasible region, but this would turn the IO problem into a trivial one: the cost function can be modified to zero so that all feasible solutions, including \(\hat{x}\), become weakly efficient. There is a new type of IO, inspired by noisy data appearing in some applications, recently studied by Chan et al. [9], where \(\hat{x}\) could be an interior point of the feasible region; in this setting, both the cost vector and \(\hat{x}\) are modified to achieve optimality. Chan et al. [7, 8] have also studied a very special multi-objective version of their IO, inspired by an application in radiotherapy cancer treatment, where the criteria matrix remains unchanged and objective weights are sought to turn a given point \(\hat{x}\), which could be an interior or even infeasible point, into a near-optimal solution. This paper deals with the natural extension of the classical IO problem, introduced earlier, to multi-objective programs, where the least perturbation of the criteria matrix is sought to turn a feasible solution into a Pareto optimal solution.
The paper is organized as follows. Section 2 includes some preliminaries. Section 3 reviews inverse linear programming (ILP), and Sect. 4 generalizes ILP to inverse multi-objective linear programming (IMOLP) and discusses the non-convexity of the problem. Afterwards, some special characteristics of IMOLP are proved, providing the necessary tools to develop an efficient convex-optimization-based algorithm. A simple numerical example and a geometrical interpretation are also provided in Sect. 4. Finally, Sect. 5 concludes the paper.
2 Preliminaries
In this paper, row and column vectors are distinguished by context. Let \({\mathbb {R}}^{n}\) and \({\mathbb {R}}^{m\times n}\) denote the set of all real n-vectors and \(m\times n\) matrices respectively, and let \(a_{i}^{}\) denote the i-th row of the matrix A. For \(x\in {\mathbb {R}}^{n}\), \({||x ||}_{p}^{}\) denotes the \(\ell _{p}\)-norm of the vector x. Consider the following two problems:
\(\text {LP}(c):\;min \left\{ cx\;|\;x\in S \right\} ,\qquad \text {MOLP}(C):\;min \left\{ Cx\;|\;x\in S \right\} ,\)
where \(A\in {\mathbb {R}}^{m\times n}\), \(C\in {\mathbb {R}}^{k\times n}\), \(c\in {\mathbb {R}}^{n}\), \(b\in {\mathbb {R}}^{m}\), and \(S=\left\{ x\in {\mathbb {R}}^{n}\;|\;Ax\ge b \right\} \) is the feasible region. A feasible point \(\hat{x}\in S\) is called a weak efficient (weak Pareto optimal) solution of MOLP(C) if there exists no \(x\in S\) such that \(Cx<C\hat{x}\). All orderings in this paper are component-wise. The set of all weak efficient points of MOLP(C) is denoted by \(S_{we}(C)\), and the set of all optimal solutions of LP(c) by \(S_{o}(c)\).
For a set \(\mathbf {y}=\left\{ y_{1}^{},\dots ,y_{l}^{}\right\} \subseteq {\mathbb {R}}^{n}\), \(cone\left( \mathbf {y}\right) \) and \(conv\left( \mathbf {y}\right) \) denote the conic and convex hull of \(\mathbf {y}\) defined as:
\(cone\left( \mathbf {y}\right) =\left\{ \sum \nolimits _{i=1}^{l}\beta _{i}^{}y_{i}^{}\;|\;\beta _{i}^{}\ge 0\right\} ,\qquad conv\left( \mathbf {y}\right) =\left\{ \sum \nolimits _{i=1}^{l}\lambda _{i}^{}y_{i}^{}\;|\;\lambda _{i}^{}\ge 0,\;\sum \nolimits _{i=1}^{l}\lambda _{i}^{}=1\right\} .\)
For a feasible point \(\hat{x}\in S\), let \(I(\hat{x})=\left\{ i\;|\;a_{i}^{}\hat{x}=b_{i}^{}\right\} \) be the set of all active constraint indices at \(\hat{x}\). The conic hull of the set \(\left\{ a_{i}^{}\;|\;i\in I(\hat{x})\right\} \) is:
\(\hat{K}=cone\left( \left\{ a_{i}^{}\;|\;i\in I(\hat{x})\right\} \right) =\left\{ x\in {\mathbb {R}}^{n}\;|\;x=\sum \nolimits _{i\in I(\hat{x})}\beta _{i}^{}a_{i}^{},\;\beta _{i}^{}\ge 0\right\} .\)
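The active index set \(I(\hat{x})\) is straightforward to compute numerically. A minimal sketch (the constraint rows mirror \(a_1, a_2\) of Example 1 below, while the third row and the right-hand side b are our own illustrative choices, picked so that the first two constraints are tight at \(\hat{x}\)):

```python
import numpy as np

def active_set(A, b, x_hat, tol=1e-9):
    """Indices i with a_i x_hat = b_i, i.e. the set I(x_hat)."""
    residual = A @ x_hat - b          # nonnegative for feasible x_hat
    return np.where(np.abs(residual) <= tol)[0]

# Hypothetical feasible region S = {x | Ax >= b}
A = np.array([[-2.0, -1.0],
              [-3.0, -4.0],
              [ 1.0,  0.0]])
b = np.array([-23.0, -52.0, 0.0])
x_hat = np.array([8.0, 7.0])

I = active_set(A, b, x_hat)
print(I)   # the first two constraints are tight at x_hat
```

The rows `A[I]` then generate the cone \(\hat{K}\) used throughout the paper.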
As we will see in the next sections, \(\hat{K}\) plays an important role in the optimality of \(\hat{x}\) for LP(c) and MOLP(C).
Definition 1
The distance between two non-empty sets A, \(B\subseteq {\mathbb {R}}^{n}\) is defined as:
\(d(A,B)=inf \left\{ ||x-y ||_{p}^{}\;|\;x\in A,\;y\in B\right\} .\)
It is well-known that for a compact set A and a closed set B, we have [5]:
- (i) \(d(A,B)=min\left\{ ||x-y ||_{p}^{}\;|\;x\in A, y\in B\right\} \).
- (ii) \(A\cap B=\emptyset \) if and only if \(d(A,B)>0\).
3 Inverse single-objective linear programming
This section reviews an inverse single-objective linear programming (ILP) problem under the \(\ell _p\)-norm with cost vector modification. This problem was initially studied by Zhang and Liu [22, 23] under the \(\ell _1\)- and \(\ell _{\infty }\)-norms, and later extended to weighted norms by Ahuja and Orlin [1].
An ILP problem under the \(\ell _p\)-norm for LP(c) looks for the minimal modification of the cost vector that makes a given feasible point \(\hat{x}\) an optimal solution. This can be formulated as follows:
\(\text {ILP}(c,\hat{x}):\;min \left\{ ||c-\hat{c} ||_{p}^{}\;|\;\hat{x}\in S_{o}(\hat{c})\right\} .\)
Using the Karush-Kuhn-Tucker (KKT) optimality conditions of LP [4], ILP(\(c,\hat{x}\)) can be re-written as:
\(min \left\{ ||c-\hat{c} ||_{p}^{}\;|\;yA=\hat{c},\;y_{i}^{}(a_{i}^{}\hat{x}-b_{i}^{})=0\;\forall i,\;y\ge 0\right\} . \quad (1)\)
Problem (1) is a convex non-linear optimization problem which can be transformed into an LP for \(p=1\) and \(p=\infty \) [1, 22, 23]. The problem is always feasible and attains an optimal solution, and \((y^{*},c)\) is an optimal solution if and only if \(\hat{x}\in S_{o}(c)\).
Yet another equivalent formulation of ILP(\(c,\hat{x}\)) can be obtained, based on the conic hull concept introduced in the last section, by employing the following lemma.
Lemma 1
[4, 17] Let \(\hat{x}\in S\) be a feasible point of LP(\(\hat{c}\)). Then, \(\hat{x}\in S_{o}(\hat{c})\) if and only if \(\hat{c}\in \hat{K}\).
Using Lemma 1, we can replace the constraint \(\hat{x}\in S_{o}(\hat{c})\) in ILP(\(c,\hat{x}\)) with \(\hat{c}\in \hat{K}\), which results in the following equivalent problem:
\(min \left\{ ||c-\hat{c} ||_{p}^{}\;|\;\hat{c}\in \hat{K}\right\} . \quad (2)\)
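Under the \(\ell_2\)-norm, this conic formulation is simply the projection of c onto the polyhedral cone \(\hat{K}\), which can be computed with non-negative least squares. A minimal sketch (the data are our own illustrative choice, not from the paper):

```python
import numpy as np
from scipy.optimize import nnls

def ilp_l2(c, A_active):
    """Solve ILP(c, x_hat) for p = 2 via Lemma 1: min ||c - c_hat||_2
    s.t. c_hat in K_hat. Writing c_hat = A_active^T beta with beta >= 0
    turns the projection onto the cone into non-negative least squares."""
    beta, rho = nnls(A_active.T, c)   # rho = ||A_active^T beta - c||_2
    return A_active.T @ beta, rho

# Illustrative data: active rows a_1, a_2 and a cost vector c
A_active = np.array([[-2.0, -1.0],
                     [-3.0, -4.0]])
c = np.array([1.0, 1.0])

c_hat, rho = ilp_l2(c, A_active)
print(c_hat, rho)   # here the projection is the origin, so rho = ||c||_2
```

The optimal objective value `rho` is zero exactly when c already lies in \(\hat{K}\), i.e. when \(\hat{x}\) is already optimal for LP(c).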
4 Inverse multi-objective linear programming
This section provides a natural extension of inverse single-objective linear programming provided in the previous section for multi-objective programs. In the following subsections, we first introduce the inverse MOLP (IMOLP) and then prove two special characteristics of IMOLPs. Afterwards, we introduce the new algorithm followed by a numerical example and a geometric interpretation.
4.1 Problem formulation and characteristics
Given that optimality for LP is replaced by Pareto optimality in MOLP and the cost vector is replaced by the criteria matrix, it is natural to define inverse MOLP as seeking the minimal modification of the criteria matrix that makes a given feasible solution \(\hat{x}\) a weak efficient solution. This leads to the following problem:
\(\text {IMOLP}(C,\hat{x}):\;min \left\{ ||C-\hat{C} ||\;|\;\hat{x}\in S_{we}(\hat{C})\right\} ,\)
where \(S_{we}(\hat{C})\) is the set of all weak efficient points of MOLP(\(\hat{C}\)), and \(\Vert .\Vert \) denotes a matrix norm on \({\mathbb {R}}^{k\times n}\) gauging the modification between the two matrices C and \(\hat{C}\). Note that IMOLP(\(C,\hat{x}\)) reduces to ILP(\(c,\hat{x}\)) if there is only one objective function.
We now take advantage of the following weighted-sum theorem to establish a connection between ILPs and IMOLPs.
Theorem 1
[10, 15, 20] \(\hat{x}\in S\) is a weak efficient solution of MOLP(\(\hat{C}\)) if and only if there exists a vector \(w\in W=\{w\in {\mathbb {R}}^{k}\;|\; \sum \nolimits _{i=1}^{k}w_{i}^{}=1,\;w_{i}^{}\ge 0\}\) such that \(\hat{x}\) is an optimal solution of the weighted-sum LP \(min \{w\hat{C}x\;|\;x\in S\}\).
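Theorem 1 gives a practical certificate of weak efficiency: fix a weight vector \(w\in W\) and check whether \(\hat{x}\) attains the optimum of the weighted-sum LP. A small sketch with scipy (the MOLP instance is our own toy example, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

# Toy MOLP: maximize x1 and x2 (i.e. minimize -x1 and -x2)
# over S = {x | Ax >= b} = {x >= 0 : x1 + x2 <= 4}
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.0, -4.0])
C = np.array([[-1.0,  0.0],
              [ 0.0, -1.0]])
x_hat = np.array([4.0, 0.0])

w = np.array([1.0, 0.0])   # candidate weight vector in the simplex W
# min (wC)x over S; Ax >= b is passed as -Ax <= -b
res = linprog(w @ C, A_ub=-A, b_ub=-b, bounds=(None, None))

# x_hat attains the weighted-sum optimum, so it is weakly efficient
print(res.fun, (w @ C) @ x_hat)
```

Both printed values are \(-4\): since \(\hat{x}\) matches the optimal value of the weighted-sum LP for this w, Theorem 1 certifies \(\hat{x}\in S_{we}(C)\).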
Let conv(C) denote the convex hull of the rows of the matrix C. The following lemma is immediately obtained from Lemma 1 and Theorem 1.
Lemma 2
The following statements are equivalent.
- (i) \(\hat{x}\in S_{we}(\hat{C})\).
- (ii) There exists \(w\in W\) such that \(w\hat{C}\in \hat{K}\).
- (iii) \(d(conv(\hat{C}),\hat{K})=0\).
By the second part of Lemma 2, IMOLP(\(C,\hat{x}\)) is equivalent to:
\(min \left\{ ||C-\hat{C} ||\;|\;w\hat{C}\in \hat{K},\;w\in W\right\} . \quad (3)\)
Problem (3) provides an interpretation for IMOLPs. If \(\hat{x}\) is not a weak efficient solution, then IMOLP looks for the least modification of the criteria matrix C such that a convex combination of its rows belongs to \(\hat{K}\), the convex cone generated by the active constraints at \(\hat{x}\). We study Problem (3) under the matrix norm \(||C-\hat{C}||=\sum \nolimits _{i=1}^{k} ||c_{i}^{} -\hat{c}_{i}^{}{||}_{p}^{}\) for \(1\le p\le \infty \), although one may study other norms (e.g., Frobenius, maximum absolute row sum, ...).
Now, by simply injecting the definition of the conic hull \(\hat{K}\) into (3), we obtain:
\(min \Big \{ \sum \nolimits _{i=1}^{k}||c_{i}^{}-\hat{c}_{i}^{} ||_{p}^{}\;\Big |\;\sum \nolimits _{i=1}^{k}w_{i}^{}\hat{c}_{i}^{}=\sum \nolimits _{r\in I(\hat{x})}\beta _{r}^{}a_{r}^{},\;\sum \nolimits _{i=1}^{k}w_{i}^{}=1,\;w\ge 0,\;\beta \ge 0\Big \} . \quad (4)\)
In analogy to ILP(\(c,\hat{x}\)) (1), IMOLP(\(C,\hat{x}\)) (4) is always feasible with an optimal solution. For instance, an obvious feasible point is \((\hat{C}=0, w_{i}^{}=\frac{1}{k},\; \text {for}\; i=1,\dots , k, \beta =0)\). Furthermore, by employing Lemma 2, the objective value of IMOLP(\(C,\hat{x}\)) (4) is zero if and only if \(\hat{x}\in S_{we}(C)\). However, as opposed to ILP(\(c,\hat{x}\)) (1), IMOLP(\(C,\hat{x}\)) (4) is not convex due to the presence of the variable multiplication \(w_{i}^{}\hat{c}_{i}^{}\).
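The non-convexity is easy to exhibit numerically: the bilinear coupling constraint of (4) can hold at two points yet fail at their midpoint. A small check with a single active row \(a_1\) (our own toy data):

```python
import numpy as np

a1 = np.array([-2.0, -1.0])   # single active row, so K_hat = cone({a1})

def satisfies(C_hat, w, beta, tol=1e-9):
    """Check the coupling constraint of (4): sum_i w_i c_hat_i = beta * a1."""
    return np.linalg.norm(w @ C_hat - beta * a1) <= tol

# Two points satisfying the constraint ...
C1, w1 = np.array([a1, [5.0, 5.0]]), np.array([1.0, 0.0])
C2, w2 = np.array([[5.0, 5.0], a1]), np.array([0.0, 1.0])
print(satisfies(C1, w1, 1.0), satisfies(C2, w2, 1.0))   # True True

# ... whose midpoint violates it: w @ C_hat = (1.5, 2) is not a
# non-negative multiple of a1, for this or any other beta >= 0
Cm, wm = (C1 + C2) / 2, (w1 + w2) / 2
print(satisfies(Cm, wm, 1.0))   # False
```

Since a convex combination of feasible points leaves the feasible set, the set is non-convex, which is exactly the difficulty Theorems 2 and 3 circumvent.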
The following two theorems show that even though IMOLP(\(C,\hat{x}\)) (4) is a non-convex optimization problem, it can be solved via a series of convex optimization problems. The first part of Theorem 2 reveals that there always exists an optimal solution in which only one objective function has been modified, and the second part provides a lower bound on the optimal objective value of IMOLP(\(C,\hat{x}\)) (4) that can later be employed in the algorithm as a termination criterion.
Theorem 2
For IMOLP(\(C,\hat{x}\)) (4) if \( d(conv(C),\hat{K})>0 \), then
- (i) IMOLP(\(C,\hat{x}\)) (4) has an optimal solution \(\hat{C}^{*}\) for which, for some index \(j\), \(\hat{c}^{*}_{i}=c_{i}^{}\) for all \(i\in \{1,\ldots ,k\}\) with \(i\ne j\).
- (ii) \(d(conv(C),\hat{K})\) provides a lower bound for the optimal value of IMOLP(\(C,\hat{x}\)) (4), i.e., \(\rho ^{*}\ge d(conv(C),\hat{K})\).
Proof
Let \({\xi }^{'}=(\hat{C}^{'}, w^{'},\beta ^{'})\) be an optimal solution of IMOLP(\(C,\hat{x}\)) (4) with optimal objective value \(\rho ^{'}\). We prove part (i) by constructing a new optimal solution \({\xi }^{*}=(\hat{C}^{*}, w^{'},\beta ^{'})\) for which \(\hat{C}^{*}\) differs from C in only one row, and we prove part (ii) by showing \(\rho ^{'}\ge d(conv(C),\hat{K})\).
(i)\(\Longrightarrow \) Since \({\xi }^{'}\) is a feasible solution of IMOLP(\(C,\hat{x}\)) (4), we have:
\(\sum \nolimits _{i=1}^{k}w^{'}_{i}\hat{c}^{'}_{i}=\sum \nolimits _{r\in I(\hat{x})}\beta ^{'}_{r}a_{r}^{},\quad \sum \nolimits _{i=1}^{k}w^{'}_{i}=1,\quad w^{'}\ge 0,\quad \beta ^{'}\ge 0. \quad (5)\)
Now we define \(\hat{C}^{*}\) as follows:
\(\hat{c}^{*}_{s}=c_{s}^{}+\frac{1}{w^{'}_{s}}\sum \nolimits _{i=1}^{k}w^{'}_{i}v_{i}^{},\qquad \hat{c}^{*}_{i}=c_{i}^{}\;\text {for}\;i\ne s, \quad (6)\)
where \(v_{i}^{}=\hat{c}^{'}_{i}-c_{i}^{}\) and \(w^{'}_{s}=max \left\{ w^{'}_{i}\;|\;i=1,\dots ,k\right\} >0\). Now we have:
So, \({\xi }^{*}\) is a feasible solution of IMOLP(\(C,\hat{x}\)) (4). It remains to show that \({\xi }^{*}\) is at least as good as \({\xi }^{'}\) in terms of the objective function:
\(\sum \nolimits _{i=1}^{k}||\hat{c}^{*}_{i}-c_{i}^{} ||_{p}^{}=\Big \Vert \frac{1}{w^{'}_{s}}\sum \nolimits _{i=1}^{k}w^{'}_{i}v_{i}^{}\Big \Vert _{p}\le \sum \nolimits _{i=1}^{k}\frac{w^{'}_{i}}{w^{'}_{s}}\,||v_{i}^{} ||_{p}^{}\le \sum \nolimits _{i=1}^{k}||v_{i}^{} ||_{p}^{}=\rho ^{'}.\)
(ii)\( \Longrightarrow \) Let us define h and \(\hat{k}\) as follows:
\(h=\sum \nolimits _{i=1}^{k}w^{'}_{i}c_{i}^{},\qquad \hat{k}=\sum \nolimits _{r\in I(\hat{x})}\beta ^{'}_{r}a_{r}^{}.\)
Given the definition of \(\hat{C}^{*}\) in (6), we have
So, \(h\in conv\left( C\right) \). On the other hand, since \({\xi }^{*}\) is a feasible solution of IMOLP(\(C,\hat{x}\)) (4), \( \sum \nolimits _{i=1}^{k}w^{'}_{i} \hat{c}^{*}_{i}=\sum \nolimits _{r\in I(\hat{x})}\beta ^{'}_{r} a_{r}^{} \) and so:
\(d(conv(C),\hat{K})\le ||h-\hat{k} ||_{p}^{}=\Big \Vert \sum \nolimits _{i=1}^{k}w^{'}_{i}c_{i}^{}-\sum \nolimits _{i=1}^{k}w^{'}_{i}\hat{c}^{*}_{i}\Big \Vert _{p}=\Big \Vert \sum \nolimits _{i=1}^{k}w^{'}_{i}v_{i}^{}\Big \Vert _{p}\le \sum \nolimits _{i=1}^{k}||v_{i}^{} ||_{p}^{}=\rho ^{'},\)
which completes the proof. \(\square \)
If we assume that only the j-th objective function in IMOLP(\(C,\hat{x}\)) (4) is modified and the rest stay the same, then we reach the following optimization problem:
\(\text {P}_{j}:\;min \Big \{ ||c_{j}^{}-\hat{c}_{j}^{} ||_{p}^{}\;\Big |\;w_{j}^{}\hat{c}_{j}^{}+\sum \nolimits _{i=1,i\ne j}^{k}w_{i}^{}c_{i}^{}=\sum \nolimits _{r\in I(\hat{x})}\beta _{r}^{}a_{r}^{},\;\sum \nolimits _{i=1}^{k}w_{i}^{}=1,\;w\ge 0,\;\beta \ge 0\Big \} .\)
According to the first part of Theorem 2, we just need to solve \(\text {P}_{j}\) for all j; the problem with the smallest optimal objective value then provides the optimal solution of IMOLP(\(C,\hat{x}\)). One issue remains: \(\text {P}_{j}\) is still a non-convex problem due to the presence of \(w_{j}^{}\hat{c}_{j}\), although \(\text {P}_{j}\) is already much easier than Problem (4), which has \(w_{i}^{}\hat{c}_{i}^{}\) for all \(i\in \{1,\dots ,k\}\). The following theorem reveals that \(\text {P}_{j}\) can be transformed into an equivalent convex optimization problem. Note that we may assume \(w_{j}^{}>0\) in \(\text {P}_{j}\); otherwise the optimal objective value of \(\text {P}_{j}\) is zero, meaning \(\hat{x}\) is already a weak efficient solution.
Theorem 3
If \(d(conv(C),\hat{K})>0\), then \(\text {P}_{j}\) is equivalent to the following convex optimization problem, \(\text {Q}_{j}\):
\(\text {Q}_{j}:\;min \Big \{ ||c_{j}^{}-\hat{c}_{j}^{} ||_{p}^{}\;\Big |\;\hat{c}_{j}^{}+\sum \nolimits _{i=1,i\ne j}^{k}\lambda _{i}^{}c_{i}^{}=\sum \nolimits _{r\in I(\hat{x})}\alpha _{r}^{}a_{r}^{},\;\lambda \ge 0,\;\alpha \ge 0\Big \} .\)
Proof
We prove this by showing that there is a one-to-one correspondence between the points in the feasible regions of the problems \(\text {P}_{j}\) and \(\text {Q}_{j}\). Let \({\xi }_{j}^{}=(\hat{c}_{j}^{}, w,\beta )\) be a feasible point of \(\text {P}_{j}\). By defining \({\lambda }_{i}^{}=\frac{{w}_{i}^{}}{{w}_{j}^{}}\) (\(i=1,\dots ,k,\; i\ne j\)) and \({\alpha }_{r}^{}=\frac{{\beta }_{r}^{}}{{w}_{j}^{}}\) (\(r\in I(\hat{x})\)), \({\zeta }_{j}^{}=(\hat{c}_{j}^{}, \lambda ,\alpha )\) is a feasible point of \(\text {Q}_{j}\) with the same objective value.
Now let \({\zeta }_{j}^{}=(\hat{c}_{j}^{}, \lambda ,\alpha )\) be a feasible point of \(\text {Q}_{j}\). By defining \({w}_{j}^{}=\frac{1}{{{\tau }}}\), \({w}_{i}^{}=\frac{{\lambda }_{i}^{}}{{{\tau }}}\) (\(i=1,\dots ,k, i\ne j\)), and \({\beta }_{r}^{}=\frac{{\alpha }_{r}^{}}{{{\tau }}}\) (\(r\in I(\hat{x})\)) where \({{\tau }}=1+\sum \nolimits _{\begin{array}{c} i=1\\ i\ne j \end{array} }^{k}{\lambda }_{i}^{}\), we obtain \({\xi }_{j}^{}=(\hat{c}_{j}^{}, w,\beta )\) as a feasible point of \(\text {P}_{j}\) with the same objective function. \(\square \)
Now we provide a geometric interpretation of \(\text {Q}_{j}\). Let \(K_{j}^{}=cone\left( \left\{ a_{r}^{}\;|\;r\in I(\hat{x})\right\} \cup \left\{ -c_{i}^{}\;|\;i=1,\dots ,k,\;i\ne j\right\} \right) \). Then, \(\text {Q}_{j}\) can be equivalently re-written as
\(min \left\{ ||c_{j}^{}-\hat{c}_{j}^{} ||_{p}^{}\;|\;\hat{c}_{j}^{}\in K_{j}^{}\right\} .\)
So, the optimal solution of \(\text {Q}_{j}\) is the \(\ell _p\)-projection of \(c_{j}^{}\) onto \(K_{j}^{}\). This will be demonstrated in the numerical example in the next section.
4.2 Algorithm and a numerical example
The algorithm consists of two phases. The first phase calculates \(d(conv(C),\hat{K})\), which determines whether or not \(\hat{x}\) is already a weak efficient solution. It can also serve as a termination criterion by employing the second part of Theorem 2. However, Phase I can be skipped if one already knows that \(\hat{x}\) is not weak efficient and is looking for the exact optimal solution.
Phase I: Solve the following convex optimization problem
\(min \Big \{ \Big \Vert \sum \nolimits _{i=1}^{k}w_{i}^{}c_{i}^{}-\sum \nolimits _{r\in I(\hat{x})}\beta _{r}^{}a_{r}^{}\Big \Vert _{p}\;\Big |\;\sum \nolimits _{i=1}^{k}w_{i}^{}=1,\;w\ge 0,\;\beta \ge 0\Big \} . \quad (7)\)
Let \(d^{*}:=d(conv(C),\hat{K})\) be the optimal objective value of Problem (7). If \(d^{*}=0\), then \(\hat{x}\in S_{we}(C)\); go to Step 4. Otherwise, set \(j=1\) and go to Step 1.
Phase II:
Step 1: Solve Problem (\(\text {Q}_{j}\)). If \(|\rho _{j}^{*}-d^{*}|\le \epsilon \), then let \(j^{*}=j\) and go to Step 3, otherwise go to Step 2.
Step 2: If \(j=k\), let \(j^{*}=argmin_{j=1,\ldots ,k}\;\rho _{j}^{*}\), and go to Step 3, otherwise let \(j=j+1\) and go to Step 1.
Step 3: Let \({\zeta }_{j^{*}}=(\hat{c}_{j^{*}}^{*}, \lambda ^{*},\alpha ^{*})\) be an optimal solution of \(\text {Q}_{j^{*}}\). Then, \({\xi }^{*}=(\hat{C}^{*}, w^{*},\beta ^{*})\) is an optimal solution of IMOLP(\(C,\hat{x}\)) (4), where \(\hat{C}^{*}\) and C agree in all rows except the \(j^{*}\)-th one, and \(w^{*}\) and \(\beta ^{*}\) are defined as:
\(w^{*}_{j^{*}}=\frac{1}{\tau },\quad w^{*}_{i}=\frac{\lambda ^{*}_{i}}{\tau }\;(i\ne j^{*}),\quad \beta ^{*}_{r}=\frac{\alpha ^{*}_{r}}{\tau }\;(r\in I(\hat{x})),\quad \text {where}\;\tau =1+\sum \nolimits _{i\ne j^{*}}\lambda ^{*}_{i}.\)
Step 4: End.
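For \(p = 2\), Phase II reduces to k non-negative least-squares problems, since each \(\text{Q}_{j}\) is the \(\ell_2\)-projection of \(c_j\) onto the polyhedral cone \(K_j\). A sketch of Phase II under these assumptions (Phase I is omitted, and the data below are our own illustrative instance, not Example 1):

```python
import numpy as np
from scipy.optimize import nnls

def phase2_l2(C, A_active):
    """Phase II under the l2-norm. By Theorem 2(i) it suffices to modify
    one row: for each j, Q_j projects c_j onto K_j = cone({a_r}, {-c_i :
    i != j}) via non-negative least squares; keep the cheapest j."""
    k = C.shape[0]
    best = None
    for j in range(k):
        # columns of M generate the cone K_j
        M = np.vstack([A_active, -np.delete(C, j, axis=0)]).T
        coef, rho_j = nnls(M, C[j])       # min ||M coef - c_j||, coef >= 0
        if best is None or rho_j < best[1]:
            best = (j, rho_j, M @ coef)
    j_star, rho, c_hat_j = best
    C_new = C.copy()
    C_new[j_star] = c_hat_j               # modify only row j*
    return j_star, rho, C_new

# Illustrative instance: active rows a_1, a_2 and two objectives
A_active = np.array([[-2.0, -1.0],
                     [-3.0, -4.0]])
C = np.array([[-1.0, -3.0],
              [ 2.0,  1.0]])

j_star, rho, C_new = phase2_l2(C, A_active)
print(j_star, rho)      # modifying the first objective is cheapest
print(C_new[j_star])    # the projected row, here 0.6 * a_2
```

In this instance modifying the first objective is cheapest: its row is replaced by \(0.6\,a_2 = (-1.8,-2.4)\), the \(\ell_2\)-projection of \(c_1\) onto \(K_1\), at cost \(\rho^{*}=1\).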
Example 1
Consider the MOLP problem \(min\left\{ Cx\;|\;Ax\ge b \right\} \) and its IMOLP(\(C,\hat{x}\)) counterpart with \(\hat{x}=(8,7)\) and the following data:
We have \(I(\hat{x})=\{1,2\}\) and \(\hat{K}=\left\{ x\in {\mathbb {R}}^{2}\;|\;x=\beta _{1}^{}a_{1}^{}+\beta _{2}^{}a_{2}^{}, \beta _{1}^{},\beta _{2}^{}\ge 0\right\} \), where \(a_{1}^{}=(-2,-1)\) and \(a_{2}^{}=(-3,-4)\).
It is obvious from Fig. 1a that the distance between \(\hat{K}\)—the hatched area—and conv(C)—the triangle area—is positive and so \(\hat{x}\) is not weak efficient. The distance is depicted as \(d^{*}=||h-\hat{k}||_{p}^{}\).
IMOLP(\(C,\hat{x}\)) is:
We apply the proposed algorithm for \(p=2\). We assume that we are looking for the exact solution with the threshold \(\epsilon =0\).
Phase I: Solving the convex problem (7) yields \(d^{*}=||h-\hat{k}||_{2}^{}=\frac{6}{\sqrt{73}},\) where \(h=\left( -\frac{18}{73},\frac{48}{73}\right) \) and \(\hat{k}=\left( 0,0\right) \) (see Fig. 1a). Since \(d^{*}>0\), let \(j=1\) and go to Step 1.
Phase II:
Step 1: We solve \(\text {Q}_{1}^{}\) as follows:
Figure 1b illustrates the geometric interpretation of \(\text {Q}_{1}^{}\). This problem finds the minimum distance between \(c_{1}^{}\) and \(K_{1}^{}=cone\left( \left\{ a_{1}^{},a_{2}^{},-c_{2}^{},-c_{3}^{}\right\} \right) =cone\left( \left\{ a_{1}^{},-c_{2}^{}\right\} \right) \). The optimal objective value is \(\rho ^{*}_{1}=\frac{3}{\sqrt{5}}\) and since \(\rho ^{*}_{1}- d^{*}>0\), we solve \(\text {Q}_{2}^{}\):
Figure 1c corresponds to \(\text {Q}_{2}^{}\), and the optimal solution here is the minimum distance between \(c_{2}^{}\) and \(K_{2}^{}=cone\left( \left\{ a_{1}^{},a_{2}^{},-c_{1}^{},-c_{3}^{}\right\} \right) =cone\left( \left\{ a_{1}^{},-c_{1}^{}\right\} \right) \). The optimal objective value is \(\rho ^{*}_{2}=\frac{4}{\sqrt{5}}>d^{*}\), so we need to solve \(\text {Q}_{3}^{}\).
Figure 1d corresponds to \(\text {Q}_{3}^{}\), and the optimal objective value is \(\rho ^{*}_{3}=\frac{4}{\sqrt{17}}\). Since \(\text {Q}_{3}^{}\) has the smallest objective value among all the problems, \(j^{*}=3\). The optimal solution of \(\text {Q}_{3}^{}\) is \({\zeta }^{*}_{3}=\left( \hat{c}^{*}_{3},\lambda ^{*}_{1},\lambda ^{*}_{2},\alpha ^{*}_{1},\alpha ^{*}_{2}\right) =\left( ({\frac{38}{17}}, {\frac{19}{34}}),\frac{19}{51},0,0,0\right) \). Therefore, the optimal solution of IMOLP(\(C,\hat{x}\)) is \({\xi }^{*}=\left( \hat{c}^{*}_{1},\hat{c}^{*}_{2},\hat{c}^{*}_{3}, w^{*}_{1},w^{*}_{2},w^{*}_{3},\beta ^{*}_{1},\beta ^{*}_{2}\right) =(c_{1}^{},c_{2}^{},\hat{c}_{3}^{*},\frac{19}{70},0,\frac{51}{70},0,0)\).
Figure 1e illustrates the new criteria matrix \(\hat{C}^{*}\), obtained by moving \(c_{3}^{}\) to \(\hat{c}_{3}^{*}\). It is seen that \(conv(\hat{C}^{*})\) intersects \(\hat{K}\), which according to Lemma 2 means that \(\hat{x}\) is weakly efficient for the new criteria matrix. \(\square \)
Before we close this section, we would like to show, through a simple example, that the results presented in this paper do not necessarily hold if there are constraints on how much the criteria matrix can be changed. In the above example, suppose \(c_{3}^{}\) cannot be modified by more than 0.95 in the \(\ell _{2}\)-norm (i.e., \(||c_{3}^{} -\hat{c}_{3}^{}{||}_{2}^{}\le 0.95\)). This is less than what \(c_{3}^{}\) needs to move (\(\frac{4}{\sqrt{17}}\approx 0.97\)) in order for \(conv(\hat{C})\) to intersect \(\hat{K}\). We now show that moving \(c_{1}^{}\) and \(c_{3}^{}\) simultaneously leads to less modification than moving \(c_{1}^{}\) or \(c_{2}^{}\) individually. If \(c_{1}^{}\) moves to \(\hat{c}_{1}^{}=\left( -\frac{2610}{449}, -\frac{1827}{898}\right) \) and \(c_{3}^{}\) moves to \(\hat{c}_{3}^{}=\left( \frac{1010}{449}, \frac{707}{898}\right) \), then, as can be seen in Fig. 1f, \(conv(\hat{C})\) intersects \(\hat{K}\), meaning \(\hat{x}\) is a weakly efficient point. Moving \(c_{1}^{}\) and \(c_{3}^{}\) at the same time thus results in \(||c_{1}^{} -\hat{c}_{1}^{}{||}_{2}^{}+||c_{3}^{} -\hat{c}_{3}^{}{||}_{2}^{}\approx 1.32\) of modification in the criteria matrix, which is less than the modification required by moving only \(c_{1}^{}\) (\(\rho ^{*}_{1}=\frac{3}{\sqrt{5}}\approx 1.34\)) or only \(c_{2}^{}\) (\(\rho ^{*}_{2}=\frac{4}{\sqrt{5}}\approx 1.78\)). Since moving \(c_{3}^{}\) alone is not an option due to the imposed restriction on the criteria matrix, modifying a single objective function no longer yields the least required modification of the criteria matrix to turn \(\hat{x}\) into a weakly efficient point.
5 Conclusion
We generalized existing inverse linear programming to inverse multi-objective linear programming. The generalized version specializes to the existing one when there is only one objective function. We discussed the non-convexity challenge of the resulting problem and explained how it can be circumvented by exploiting some very special characteristics of the problem, in particular the fact that there always exists an optimal solution in which only one objective function is modified. The results only hold if there are no constraints on how much the criteria matrix can be modified.
References
Ahuja, R.K., Orlin, J.B.: Inverse optimization. Oper. Res. 49(5), 771–783 (2001). https://doi.org/10.1287/opre.49.5.771.10607
Akbari, Z., Peyghami, M.R.: An interior-point algorithm for solving inverse linear optimization problem. Optimization 61(4), 373–386 (2012). https://doi.org/10.1080/02331934.2011.637111
Annetts, J.E., Audsley, E.: Multiple objective linear programming for environmental farm planning. J. Oper. Res. Soc. 53(9), 933–943 (2002). https://doi.org/10.1057/palgrave.jors.2601404
Bazaraa, M.S., Jarvis, J.J., Sherali, H.D.: Linear Programming and Network Flows. Wiley, London (2011)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Burton, D., Toint, P.L.: On an instance of the inverse shortest paths problem. Math. Program. 53(1), 45–61 (1992). https://doi.org/10.1007/BF01585693
Chan, T.C.Y., Craig, T., Lee, T., Sharpe, M.B.: Generalized inverse multiobjective optimization with application to cancer therapy. Oper. Res. 62(3), 680–695 (2014). https://doi.org/10.1287/opre.2014.1267
Chan, T.C.Y., Lee, T.: Trade-off preservation in inverse multi-objective convex optimization. Eur. J. Oper. Res. 270(1), 25–39 (2018). https://doi.org/10.1016/j.ejor.2018.02.045
Chan, T.C.Y., Lee, T., Terekhov, D.: Inverse optimization: closed-form solutions, geometry, and goodness of fit. Manag. Sci. 1–21 (2018). https://doi.org/10.1287/mnsc.2017.2992
Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2006)
Ghate, A.: Inverse optimization in countably infinite linear programs. Oper. Res. Lett. 43(3), 231–235 (2015). https://doi.org/10.1016/j.orl.2015.02.004
Hamacher, H.W., Küfer, K.H.: Inverse radiation therapy planning: a multiple objective optimization approach. Discrete Appl. Math. 118(1), 145–161 (2002). https://doi.org/10.1016/S0166-218X(01)00261-X
Heuberger, C.: Inverse combinatorial optimization: a survey on problems, methods, and results. J. Comb. Optim. 8(3), 329–361 (2004). https://doi.org/10.1023/B:JOCO.0000038914.26975.9b
Iyengar, G., Kang, W.: Inverse conic programming with applications. Oper. Res. Lett. 33(3), 319–330 (2005). https://doi.org/10.1016/j.orl.2004.04.007
Luc, D.T.: Multiobjective Linear Programming. Springer, Berlin (2016)
Mostafaee, A., Hladík, M., Černỳ, M.: Inverse linear programming with interval coefficients. J. Comput. Appl. Math. 292, 591–608 (2016). https://doi.org/10.1016/j.cam.2015.07.034
Murty, K.G.: Linear Programming. Wiley, London (1983)
Roland, J., Smet, Y.D., Figueira, J.R.: Inverse multi-objective combinatorial optimization. Discrete Appl. Math. 161(16), 2764–2771 (2013). https://doi.org/10.1016/j.dam.2013.04.024
Schaefer, A.J.: Inverse integer programming. Optim. Lett. 3(4), 483–489 (2009). https://doi.org/10.1007/s11590-009-0131-z
Steuer, R.E.: Multiple Criteria Optimization: Theory, Computation, and Applications. Wiley, London (1986)
Steuer, R.E., Na, P.: Multiple criteria decision making combined with finance: a categorized bibliographic study. Eur. J. Oper. Res. 150(3), 496–515 (2003). https://doi.org/10.1016/S0377-2217(02)00774-9
Zhang, J., Liu, Z.: Calculating some inverse linear programming problems. J. Comput. Appl. Math. 72(2), 261–273 (1996). https://doi.org/10.1016/0377-0427(95)00277-4
Zhang, J., Liu, Z.: A further study on inverse linear programming problems. J. Comput. Appl. Math. 106(2), 345–359 (1999). https://doi.org/10.1016/S0377-0427(99)00080-1
Acknowledgements
This work was partially supported by the MSK Cancer Center Support Grant/Core Grant (P30 CA008748).
Naghavi, M., Foroughi, A.A. & Zarepisheh, M. Inverse optimization for multi-objective linear programming. Optim Lett 13, 281–294 (2019). https://doi.org/10.1007/s11590-019-01394-0