1 Introduction

Multi-objective linear programming (MOLP) deals with multi-objective optimization problems in which all objectives and constraints are linear. Such problems arise in many fields, including engineering, finance, and medicine [3, 12, 21]. In multi-objective optimization, the optimality concept of single-objective optimization is replaced with Pareto optimality: a feasible solution is Pareto optimal if no objective function can be improved without compromising another.

Inverse optimization (IO), first studied by Burton and Toint [6] in 1992, is a relatively new field of optimization dealing with problems in which some of the parameters are not precisely known and only estimates of them are available. Instead, some information about the problem, such as certain solutions or objective function values, is known from experience or from experiments. The aim is then to determine the values of the parameters from this information. Over the past years, several types of IO have been studied [7, 13, 16, 23].

This paper deals with an important type of IO in which we look for the least modification of the problem that makes a given feasible solution optimal. In other words, if \(\hat{x}\) is a feasible solution of the problem, how can we modify the problem, for example by changing the cost function, as little as possible according to some metric so that \(\hat{x}\) becomes an optimal solution? This problem has been well studied for different types of single-objective optimization problems, including linear programming [1, 2, 22, 23], combinatorial optimization [13], conic programming [14], integer programming [19], and countably infinite linear programming [11]. Roland et al. [18] studied inverse optimization for a special class of multi-objective combinatorial optimization problems where the least modification of the criteria matrix is sought to turn a set of feasible points into a set of Pareto optimal points. They demonstrated that inverse multi-objective combinatorial optimization under some norms can be solved by algorithms based on mixed integer programming. This paper deals with inverse multi-objective linear programming (IMOLP), where the least criteria matrix modification is sought to turn a given feasible solution into a Pareto optimal solution.

For the IO problems studied in this paper, and in all the aforementioned papers, a feasible point \(\hat{x}\) could in principle be an interior point of the feasible region; this, however, would make the IO problem trivial, since the cost function could be modified to zero so that all feasible solutions, including \(\hat{x}\), become weakly efficient. There is a new type of IO, inspired by noisy data appearing in some applications, that has been recently studied by Chan et al. [9], where \(\hat{x}\) may be an interior point of the feasible region. For this type of IO, both the cost vector and \(\hat{x}\) are modified to achieve optimality. Chan et al. [7, 8] have also studied a very special multi-objective version of their IO, inspired by an application in radiotherapy cancer treatment, where the criteria matrix remains unchanged and objective weights are sought that turn a given point \(\hat{x}\), which could be an interior or even infeasible point, into a near-optimal solution. This paper deals with the natural extension of the classical IO problem, introduced earlier, to multi-objective programs, where the least perturbation of the criteria matrix is sought to turn a feasible solution into a Pareto optimal solution.

The paper is organized as follows. Section 2 includes some preliminaries. Section 3 reviews inverse linear programming (ILP), and Sect. 4 generalizes ILP to inverse multi-objective linear programming (IMOLP) and discusses the non-convexity of the problem. Afterwards, some special characteristics of IMOLP are proved, providing the necessary tools to develop an efficient convex-optimization-based algorithm. A simple numerical example and a geometrical interpretation are also provided in Sect. 4. Finally, Sect. 5 concludes the paper.

2 Preliminaries

In this paper, row and column vectors are distinguished from the context. Let \({\mathbb {R}}^{n}\) and \({\mathbb {R}}^{m\times n}\) denote the sets of all real n-vectors and \(m\times n\) matrices respectively, and let \(a_{i}^{}\) denote the i-th row of the matrix A. For \(x\in {\mathbb {R}}^{n}\), \({||x ||}_{p}^{}\) denotes the \(\ell _{p}\)-norm of the vector x. Consider the following two problems:

$$\begin{aligned} \text {LP}(c):\;min\left\{ cx\;|\;x\in S\right\} ,\qquad \text {MOLP}(C):\;min\left\{ Cx\;|\;x\in S\right\} , \end{aligned}$$

where \(A\in {\mathbb {R}}^{m\times n}\), \(C\in {\mathbb {R}}^{k\times n}\), \(c\in {\mathbb {R}}^{n}\), \(b\in {\mathbb {R}}^{m}\), and \(S=\left\{ x\in {\mathbb {R}}^{n}\;|\;Ax\ge b \right\} \) is the feasible region. A feasible point \(\hat{x}\in S\) is called weakly efficient, or weakly Pareto optimal, for MOLP(C) if there exists no \(x\in S\) such that \(Cx<C\hat{x}\). All orderings in this paper are component-wise. The set of all weakly efficient points of MOLP(C) is denoted by \(S_{we}(C)\). The set of all optimal solutions of LP(c) is denoted by \(S_{o}(c)\).
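The definition of weak efficiency can be checked directly against any finite set of candidate points. The following sketch (our illustration, not part of the original formulation; Python with NumPy assumed) tests whether some candidate strictly dominates \(\hat{x}\):

```python
import numpy as np

def strictly_dominates(C, x, x_hat):
    """True if Cx < C x_hat component-wise, i.e. x strictly improves
    every (minimized) objective over x_hat."""
    C = np.asarray(C, float)
    return bool(np.all(C @ np.asarray(x, float) < C @ np.asarray(x_hat, float)))

def weakly_efficient_among(C, candidates, x_hat):
    """x_hat is weakly efficient within a finite candidate set iff no
    candidate strictly dominates it (a necessary condition on S itself)."""
    return not any(strictly_dominates(C, x, x_hat) for x in candidates)
```

Over the full feasible region S, weak efficiency is instead certified via the weighted-sum characterization of Theorem 1 below.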

For a set \(\mathbf {y}=\left\{ y_{1}^{},\dots ,y_{l}^{}\right\} \subseteq {\mathbb {R}}^{n}\), \(cone\left( \mathbf {y}\right) \) and \(conv\left( \mathbf {y}\right) \) denote the conic and convex hull of \(\mathbf {y}\) defined as:

$$\begin{aligned} \begin{aligned}&cone\left( \mathbf {y}\right) =\left\{ x\in {\mathbb {R}}^{n}\;|\;x=\sum \limits _{i=1}^{l}{\beta }_{i}^{}y_{i}^{}, \beta _{i}^{}\ge 0 \right\} ,\\&conv\left( \mathbf {y}\right) =\left\{ x\in {\mathbb {R}}^{n}\;|\;x=\sum \limits _{i=1}^{l}{\beta }_{i}^{}y_{i}^{}, \sum \limits _{i=1}^{l}{\beta }_{i}^{}=1, {\beta }_{i}^{}\ge 0\right\} . \end{aligned} \end{aligned}$$
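Membership in either hull is a linear feasibility problem in the coefficients \(\beta _{i}\). As a hedged sketch (assuming SciPy's `linprog` is available), one can test membership as follows:

```python
import numpy as np
from scipy.optimize import linprog

def in_conv(points, x):
    """Is x a convex combination of the rows of `points`?  Solve the
    feasibility LP: find beta >= 0 with Y^T beta = x and sum(beta) = 1."""
    Y, x = np.asarray(points, float), np.asarray(x, float)
    l = Y.shape[0]
    A_eq = np.vstack([Y.T, np.ones(l)])
    b_eq = np.append(x, 1.0)
    res = linprog(np.zeros(l), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * l)
    return res.status == 0          # status 0 = optimal, i.e. feasible

def in_cone(points, x):
    """Is x a non-negative combination of the rows of `points`?"""
    Y, x = np.asarray(points, float), np.asarray(x, float)
    res = linprog(np.zeros(Y.shape[0]), A_eq=Y.T, b_eq=x,
                  bounds=[(0, None)] * Y.shape[0])
    return res.status == 0
```

A zero objective is used because only feasibility matters here.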

For a feasible point \(\hat{x}\in S\), let \(I(\hat{x})=\left\{ i\;|\;a_{i}^{}\hat{x}=b_{i}^{}\right\} \) be the set of all active constraints indices at \(\hat{x}\). The conic hull of the set \(\left\{ a_{i}^{}\;|\;i\in I(\hat{x})\right\} \) is:

$$\begin{aligned} \hat{K}=cone\left( \left\{ a_{i}\;|\;i\in I(\hat{x})\right\} \right) =\left\{ x\in {\mathbb {R}}^{n}\;|\; x=\sum \limits _{i\in I(\hat{x})}{\beta }_{i}^{}a_{i}^{},\;{\beta }_{i}^{}\ge 0\right\} . \end{aligned}$$

As we will see in the following sections, \(\hat{K}\) plays an important role in the optimality of \(\hat{x}\) for LP(c) and MOLP(C).
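Computing \(I(\hat{x})\), and hence the generators of \(\hat{K}\), is a direct comparison of \(A\hat{x}\) with b. A small sketch using the data of Example 1 in Sect. 4.2 (Python/NumPy assumed; the tolerance is our choice):

```python
import numpy as np

def active_set(A, b, x_hat, tol=1e-9):
    """Indices i with a_i x_hat = b_i, i.e. the active constraints at x_hat."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x_hat = np.asarray(x_hat, float)
    return [i for i in range(A.shape[0]) if abs(A[i] @ x_hat - b[i]) <= tol]

# Data of Example 1 (Sect. 4.2): x_hat = (8, 7) activates the first two rows,
# so K_hat is generated by a_1 = (-2, -1) and a_2 = (-3, -4).
A = np.array([[-2, -1], [-3, -4], [-1, 0], [0, -1], [1, 0], [0, 1]], float)
b = np.array([-23, -52, -10, -10, 0, 0], float)
I_hat = active_set(A, b, [8, 7])
```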

Definition 1

The distance between two non-empty sets A, \(B\subseteq {\mathbb {R}}^{n}\) is defined as:

$$\begin{aligned} d(A,B)=inf\left\{ ||x-y ||_{p}^{}\;|\;x\in A, y\in B\right\} . \end{aligned}$$

It is well-known that for a compact set A and a closed set B, we have [5]:

  1. (i) \(d(A,B)=min\left\{ ||x-y ||_{p}^{}\;|\;x\in A, y\in B\right\} \).

  2. (ii) \(A\cap B=\emptyset \) if and only if \(d(A,B)>0\).

3 Inverse single-objective linear programming

This section reviews an inverse single-objective linear programming (ILP) problem under the \(\ell _p\)-norm with cost vector modification. This problem was initially studied by Zhang and Liu [22, 23] under the \(\ell _1\)- and \(\ell _{\infty }\)-norms, and later extended to weighted norms by Ahuja and Orlin [1].

An ILP problem under the \(\ell _p\)-norm for LP(c) looks for the minimal modification of the cost vector that makes a given feasible point \(\hat{x}\) an optimal solution. This can be formulated as follows:

$$\begin{aligned} \text {ILP}(c,\hat{x}):\quad \begin{aligned} min\;&||c-\hat{c}{||}_{p}^{}\\ s.t.\;\;&\hat{x}\in S_{o}(\hat{c}),\\&\hat{c}\in {\mathbb {R}}^{n}. \end{aligned} \end{aligned}$$

Using the Karush-Kuhn-Tucker (KKT) optimality conditions of LP [4], ILP(\(c,\hat{x}\)) can be re-written as:

$$\begin{aligned} \begin{aligned} min\;&||c-\hat{c}{||}_{p}^{}\\ s.t.\;\;&y(A\hat{x}-b)=0,\quad (\text {complementary slackness}),\\&yA=\hat{c},\quad (\text {dual feasibility} ), \\&y\ge 0,\quad (\text {dual feasibility}),\\&\hat{c}\in {\mathbb {R}}^{n}. \end{aligned} \end{aligned}$$
(1)

Problem (1) is a convex non-linear optimization problem which can be transformed into an LP for \(p=1\) and \(p=\infty \) [1, 22, 23]. The problem is always feasible and attains an optimal solution, and \((y^{*},c)\) is an optimal solution if and only if \(\hat{x}\in S_{o}(c)\).

Yet another equivalent formulation of ILP(\(c,\hat{x}\)) can be obtained, based on the conic hull concept introduced in the last section, by employing the following lemma.

Lemma 1

[4, 17] Let \(\hat{x}\in S\) be a feasible point of LP(\(\hat{c}\)). Then, \(\hat{x}\in S_{o}(\hat{c})\) if and only if \(\hat{c}\in \hat{K}\).

Using Lemma 1, we can replace the constraint \(\hat{x}\in S_{o}(\hat{c})\) in ILP(\(c,\hat{x}\)) with \(\hat{c}\in \hat{K}\), which results in the following equivalent problem:

$$\begin{aligned} \begin{aligned}&min\;||c-\hat{c}{||}_{p}^{}\\&s.t.\;\;\; \hat{c}\in \hat{K},\\&\qquad \,\hat{c}\in {\mathbb {R}}^{n}. \end{aligned} \end{aligned}$$
(2)
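For \(p=2\), problem (2) is the Euclidean projection of c onto the polyhedral cone \(\hat{K}\), which is a non-negative least-squares problem. A sketch (SciPy's `nnls` assumed; the toy instance at the end is ours):

```python
import numpy as np
from scipy.optimize import nnls

def ilp_l2(c, active_rows):
    """Solve problem (2) for p = 2: project c onto the cone generated by
    the active constraint rows.  Returns (c_hat, ||c - c_hat||_2)."""
    G = np.asarray(active_rows, float).T      # columns are the generators a_i
    beta, resid = nnls(G, np.asarray(c, float))
    return G @ beta, resid

# Toy instance (ours): one active row a_1 = (1, 0); projecting c = (1, 1)
# onto cone{a_1} gives c_hat = (1, 0) at distance 1.
c_hat, rho = ilp_l2([1.0, 1.0], [[1.0, 0.0]])
```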

4 Inverse multi-objective linear programming

This section provides a natural extension of the inverse single-objective linear programming of the previous section to multi-objective programs. In the following subsections, we first introduce the inverse MOLP (IMOLP) and then prove two special characteristics of IMOLPs. Afterwards, we present the new algorithm, followed by a numerical example and a geometric interpretation.

4.1 Problem formulation and characteristics

Given that optimality for LP is replaced by Pareto optimality in MOLP and the cost vector is replaced by the criteria matrix, it makes sense to define the inverse MOLP as the minimal modification of the criteria matrix that makes a given feasible solution \(\hat{x}\) a weakly efficient solution. This leads to the following problem:

$$\begin{aligned} \text {IMOLP}(C,\hat{x}):\quad \begin{aligned} min\;&\Vert C-\hat{C}\Vert \\ s.t.\;\;&\hat{x}\in S_{we}(\hat{C}),\\&\hat{C}\in {\mathbb {R}}^{k\times n}, \end{aligned} \end{aligned}$$

where \(S_{we}(\hat{C})\) is the set of all weakly efficient points of MOLP(\(\hat{C}\)), and \(\Vert .\Vert \) denotes a matrix norm on \({\mathbb {R}}^{k\times n}\) measuring the modification between the two matrices C and \(\hat{C}\). It is worth mentioning that IMOLP(\(C,\hat{x}\)) reduces to ILP(\(c,\hat{x}\)) if there is only one objective function.

We now take advantage of the following weighted-sum theorem to establish a connection between ILPs and IMOLPs.

Theorem 1

[10, 15, 20] \(\hat{x}\in S\) is a weak efficient solution of MOLP(\(\hat{C}\)) if and only if there exists a vector \(w\in W=\{w\in {\mathbb {R}}^{k}\;|\; \sum \nolimits _{i=1}^{k}w_{i}^{}=1,\;w_{i}^{}\ge 0\}\) such that \(\hat{x}\) is an optimal solution of the weighted-sum LP \(min \{w\hat{C}x\;|\;x\in S\}\).
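Theorem 1 yields a computational certificate of weak efficiency: fix a weight vector \(w\in W\), solve the weighted-sum LP, and compare its optimal value with the value at \(\hat{x}\). A hedged sketch (SciPy's `linprog` assumed; the tiny instance in the accompanying check is ours):

```python
import numpy as np
from scipy.optimize import linprog

def certifies_weak_efficiency(C, A, b, x_hat, w, tol=1e-7):
    """True if x_hat solves min { w C x | A x >= b } for the given weight
    vector w, which by Theorem 1 certifies weak efficiency of x_hat."""
    C, A, b = np.asarray(C, float), np.asarray(A, float), np.asarray(b, float)
    cost = np.asarray(w, float) @ C
    # linprog expects A_ub x <= b_ub, so flip the signs of A x >= b.
    res = linprog(cost, A_ub=-A, b_ub=-b,
                  bounds=[(None, None)] * A.shape[1])
    return res.status == 0 and cost @ np.asarray(x_hat, float) <= res.fun + tol
```

Note this checks a single w; by the theorem, \(\hat{x}\) is weakly efficient if and only if some \(w\in W\) passes this check.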

Let conv(C) denote the convex hull of the rows of the matrix C. The following lemma is immediately obtained from Lemma 1 and Theorem 1.

Lemma 2

The following statements are equivalent.

  1. (i) \(\hat{x}\in S_{we}(\hat{C})\).

  2. (ii) There exists \(w\in W\) such that \(w\hat{C}\in \hat{K}\).

  3. (iii) \(d(conv(\hat{C}),\hat{K})=0\).

By the second part of Lemma 2, IMOLP(\(C,\hat{x}\)) is equivalent to:

$$\begin{aligned} \begin{aligned}&min\;\Vert C-\hat{C}\Vert \\&s.t.\;\;\; w\hat{C}\in \hat{K},\\&\;\;\;\;\;\;\;\;w\in W,\\&\;\;\;\;\;\;\;\;\hat{C}\in {\mathbb {R}}^{k\times n}. \end{aligned} \end{aligned}$$
(3)

Problem (3) provides an interpretation for IMOLPs. If \(\hat{x}\) is not a weakly efficient solution, then IMOLP looks for the least modification of the criteria matrix C such that a convex combination of its rows belongs to \(\hat{K}\), the convex cone generated by the active constraints at \(\hat{x}\). We study Problem (3) under the matrix norm \(||C-\hat{C}||=\sum \nolimits _{i=1}^{k} ||c_{i}^{} -\hat{c}_{i}^{}{||}_{p}^{}\), \(1\le p\le \infty \), although one may study other norms (e.g., Frobenius, maximum absolute row sum, ...).

Now, by substituting the definition of the conic hull \(\hat{K}\) into (3), we obtain:

$$\begin{aligned} \begin{aligned}&\min \;\rho =\sum \limits _{i=1}^{k} ||c_{i}^{} -\hat{c}_{i}^{}{||}_{p}^{} \\&s.t.\;\;\sum _{i=1}^{k}w_{i}^{} \hat{c}_{i}^{}-\sum _{r\in I(\hat{x})}\beta _{r}^{} a_{r}^{}=0, \\&\;\;\;\;\;\;\;\;\sum _{i=1}^{k}w_{i}^{}=1,\\&\;\;\;\;\;\;\;w_{i}^{}\ge 0,\;\;\;\;\;\;\; i=1,\ldots ,k,\\&\;\;\;\;\;\;\;\beta _{r}^{}\ge 0,\;\;\;\;\;\;\; r\in I(\hat{x}),\\&\;\;\;\;\;\;\;\hat{c}_{i}^{}\in {\mathbb {R}}^{n},\;\;\;\;\; i=1,\ldots ,k. \end{aligned} \end{aligned}$$
(4)

In analogy to ILP(\(c,\hat{x}\)) (1), IMOLP(\(C,\hat{x}\)) (4) is always feasible and attains an optimal solution. For instance, an obvious feasible point is \((\hat{C}=0,\; w_{i}^{}=\frac{1}{k}\;\text {for}\; i=1,\dots , k,\; \beta =0)\). Furthermore, by Lemma 2, the optimal objective value of IMOLP(\(C,\hat{x}\)) (4) is zero if and only if \(\hat{x}\in S_{we}(C)\). However, as opposed to ILP(\(c,\hat{x}\)) (1), IMOLP(\(C,\hat{x}\)) (4) is not convex due to the products of variables \(w_{i}^{}\hat{c}_{i}^{}\).

The following two theorems show that even though IMOLP(\(C,\hat{x}\)) (4) is a non-convex optimization problem, it can be solved through a series of convex optimization problems. The first part of Theorem 2 reveals that there always exists an optimal solution in which only one objective function has been modified, and the second part provides a lower bound on the optimal objective value of IMOLP(\(C,\hat{x}\)) (4) that can later be employed in the algorithm as a termination criterion.

Theorem 2

For IMOLP(\(C,\hat{x}\)) (4), if \( d(conv(C),\hat{K})>0 \), then

  1. (i) IMOLP(\(C,\hat{x}\)) (4) has an optimal solution \(\hat{C}^{*}\) for which \( \hat{c}^{*}_{i}=c_{i}^{}\) for all \(i\ne j\), for some \(j\in \{1,\ldots ,k\}\).

  2. (ii) \(d(conv(C),\hat{K})\) provides a lower bound on the optimal value of IMOLP(\(C,\hat{x}\)) (4), i.e., \(\rho ^{*}\ge d(conv(C),\hat{K})\).

Proof

Let \({\xi }^{'}=(\hat{C}^{'}, w^{'},\beta ^{'})\) be an optimal solution of IMOLP(\(C,\hat{x}\)) (4) with optimal objective value \(\rho ^{'}\). We prove part (i) by constructing a new optimal solution \({\xi }^{*}=(\hat{C}^{*}, w^{'},\beta ^{'})\) for which \(\hat{C}^{*}\) differs from C in only one row, and we prove part (ii) by showing \(\rho ^{'}\ge d(conv(C),\hat{K})\).

(i)\(\Longrightarrow \) Since \({\xi }^{'}\) is a feasible solution of IMOLP(\(C,\hat{x}\)) (4), we have:

$$\begin{aligned} \begin{aligned}&\sum _{i=1}^{k}w^{'}_{i} \hat{c}^{'}_{i}-\sum _{r\in I(\hat{x})}\beta ^{'}_{r} a_{r}^{}=0. \end{aligned} \end{aligned}$$
(5)

Now we define \(\hat{C}^{*}\) as follows:

$$\begin{aligned} \hat{c}^{*}_{i}= {\left\{ \begin{array}{ll} c_{i}^{}&{}\;\;\;i\ne s\\ c_{s}^{}+\sum \limits _{i=1}^{k}\frac{w^{'}_{i}}{w^{'}_{s}} v_{i}^{}&{}\;\;\;i=s \end{array}\right. } \end{aligned}$$
(6)

where \(v_{i}^{}=\hat{c}^{'}_{i}-c_{i}^{}\) and \(w^{'}_{s}=max \left\{ w^{'}_{i}\;|\;i=1,\dots ,k\right\} >0\). Now we have:

$$\begin{aligned} \begin{aligned} \sum \limits _{i=1}^{k}w^{'}_{i} \hat{c}^{*}_{i}&=w^{'}_{s} \hat{c}^{*}_{s}+\sum \limits _{\begin{array}{c} i=1\\ i\ne s \end{array} }^{k}w^{'}_{i} \hat{c}^{*}_{i}\\&=w^{'}_{s}\left( c_{s}^{}+\sum \limits _{i=1}^ {k}\frac{w^{'}_{i}}{w^{'}_{s}}\left( \hat{c}^{'}_{i}- c_{i}^{}\right) \right) +\sum \limits _{\begin{array}{c} i=1\\ i\ne s \end{array} }^{k}w^{'}_{i} c_{i}^{}\\&=w^{'}_{s}c_{s}^{}+\sum \limits _{i=1}^{k}w^{'}_{i}\left( \hat{c}^{'}_{i}-c_{i}^{}\right) +\sum \limits _{\begin{array}{c} i=1\\ i\ne s \end{array} }^{k}w^{'}_{i} c_{i}^{}\\&=\sum _{i=1}^{k}w^{'}_{i} \hat{c}^{'}_{i}=\sum _{r\in I(\hat{x})}\beta ^{'}_{r} a_{r}^{}.\;\;\;\;\;\;\;\left( \text {according to (5)}\right) \end{aligned} \end{aligned}$$

So, \({\xi }^{*}\) is a feasible solution of IMOLP(\(C,\hat{x}\)) (4). Now we just need to show that \({\xi }^{*}\) is at least as good as \({\xi }^{'}\) in terms of the objective function:

$$\begin{aligned} ||C-\hat{C}^{*}||= & {} \sum \limits _{i=1}^{k} ||c_{i}^{}-\hat{c}^{*}_{i}{||}_{p}^{}\\= & {} \left\| \sum \limits _{i=1}^{k} \frac{w^{'}_{i}}{w^{'}_{s}} v_{i} \right\| _{p}^{}\;\;\;\;\left( \text {according to (6)}\right) \\\le & {} \sum \limits _{i=1}^{k}\left\| \frac{w^{'}_{i}}{w^{'}_{s}} v_{i}^{} \right\| _{p}^{}\;\;\;\;\left( \text {according to triangle inequality in norms}\right) \\= & {} \sum \limits _{i=1}^{k}\left( \frac{w^{'}_{i}}{w^{'}_{s}}\right) ||v_{i}^{} {||}_{p}^{}\\\le & {} \sum \limits _{i=1}^{k}||v_{i}^{} {||}_{p}^{}=||C-\hat{C}^{'}||.\;\;\;\;\;\;(\text {since }w_{s}^{'}\ge w_{i}^{'}) \end{aligned}$$

(ii)\( \Longrightarrow \) Let us define h and \(\hat{k}\) as follows:

$$\begin{aligned} \begin{aligned}&h=\sum \limits _{i=1}^{k}w^{'}_{i} \hat{c}^{*}_{i}-w^{'}_{s}(\hat{c}^{*}_{s}-c_{s}^{}),\\&\hat{k}=\sum \limits _{r\in I(\hat{x})}\beta ^{'}_{r} a_{r}^{}. \end{aligned} \end{aligned}$$

Given the definition of \(\hat{C}^{*}\) in (6), we have

$$\begin{aligned} \begin{aligned} h=&\sum \limits _{i=1}^{k}w^{'}_{i} \hat{c}^{*}_{i}-w^{'}_{s}(\hat{c}^{*}_{s}-c_{s}^{})=\sum \limits _{\begin{array}{c} i=1\\ i\ne s \end{array} }^{k}w^{'}_{i} \hat{c}^{*}_{i}+\left( w^{'}_{s} \hat{c}^{*}_{s}-w^{'}_{s} \hat{c}^{*}_{s}\right) +w^{'}_{s}c_{s}^{}\\ =&\sum \limits _{\begin{array}{c} i=1\\ i\ne s \end{array}}^{k}w^{'}_{i}c_{i}^{}+w^{'}_{s}c_{s}^{} =\sum \limits _{i=1}^{k}w^{'}_{i}c_{i}^{} \end{aligned} \end{aligned}$$

So, \(h\in conv\left( C\right) \). On the other hand, since \({\xi }^{*}\) is a feasible solution of IMOLP(\(C,\hat{x}\)) (4), \( \sum \nolimits _{i=1}^{k}w^{'}_{i} \hat{c}^{*}_{i}=\sum \nolimits _{r\in I(\hat{x})}\beta ^{'}_{r} a_{r}^{} \) and so:

$$\begin{aligned} \begin{aligned} d(conv(C),\hat{K})&\le ||h-\hat{k}||_{p}^{}\;\;\;\;\;\;\;\text {{(according to Definition~1 in Sect.~2)}}\\&=w_{s}^{'}||c_{s}^{}-{\hat{c}}_{s}^{*}||_{p}^{}\\&\le ||c_{s}^{}-{\hat{c}}_{s}^{*}||_{p}^{}=||C-\hat{C}^{*}||\;\;\;\;\;\;\;(\text {since }w_{s}^{'}\le 1) \end{aligned} \end{aligned}$$

which completes the proof. \(\square \)

If we assume that only the j-th objective function in IMOLP(\(C,\hat{x}\)) (4) is modified and the rest remain unchanged, then we reach the following optimization problem:

$$\begin{aligned} \text {P}_{j}:\quad \begin{aligned} min\;&\rho _{j}^{}=||c_{j}^{} -\hat{c}_{j}^{}{||}_{p}^{}\\ s.t.\;\;&w_{j}^{}\hat{c}_{j}^{}+\sum \limits _{\begin{array}{c} i=1\\ i\ne j \end{array}}^{k}w_{i}^{}c_{i}^{}-\sum \limits _{r\in I(\hat{x})}\beta _{r}^{}a_{r}^{}=0,\\&\sum \limits _{i=1}^{k}w_{i}^{}=1,\\&w_{i}^{}\ge 0,\quad i=1,\ldots ,k,\\&\beta _{r}^{}\ge 0,\quad r\in I(\hat{x}),\\&\hat{c}_{j}^{}\in {\mathbb {R}}^{n}. \end{aligned} \end{aligned}$$

According to the first part of Theorem 2, we just need to solve \(\text {P}_{j}\) for all j; the problem with the smallest optimal objective value then provides the optimal solution of IMOLP(\(C,\hat{x}\)). One issue remains: \(\text {P}_{j}\) is still a non-convex problem due to the presence of \(w_{j}^{}\hat{c}_{j}\), although it is already much simpler than Problem (4), which contains \(w_{i}^{}\hat{c}_{i}^{}\) for all \(i\in \{1,\dots ,k\}\). The following theorem reveals that \(\text {P}_{j}\) can be transformed into an equivalent convex optimization problem. It is worth mentioning that \(w_{j}^{}>0\) in \(\text {P}_{j}\); otherwise the optimal objective value of \(\text {P}_{j}\) is zero, meaning \(\hat{x}\) is already a weakly efficient solution.

Theorem 3

If \(d(conv(C),\hat{K})>0\), then \(\text {P}_{j}\) is equivalent to the following convex optimization problem, \(\text {Q}_{j}\):

$$\begin{aligned} \text {Q}_{j}:\quad \begin{aligned} min\;&\rho _{j}^{}=||c_{j}^{} -\hat{c}_{j}^{}{||}_{p}^{}\\ s.t.\;\;&\hat{c}_{j}^{}+\sum \limits _{\begin{array}{c} i=1\\ i\ne j \end{array}}^{k}\lambda _{i}^{}c_{i}^{}-\sum \limits _{r\in I(\hat{x})}\alpha _{r}^{}a_{r}^{}=0,\\&\lambda _{i}^{}\ge 0,\quad i=1,\ldots ,k,\;i\ne j,\\&\alpha _{r}^{}\ge 0,\quad r\in I(\hat{x}),\\&\hat{c}_{j}^{}\in {\mathbb {R}}^{n}. \end{aligned} \end{aligned}$$

Proof

We prove the theorem by showing that there is a one-to-one correspondence between the feasible regions of problems \(\text {P}_{j}\) and \(\text {Q}_{j}\). Let \({\xi }_{j}^{}=(\hat{c}_{j}^{}, w,\beta )\) be a feasible point of \(\text {P}_{j}\). By defining \({\lambda }_{i}^{}=\frac{{w}_{i}^{}}{{w}_{j}^{}}\) (\(i=1,\dots ,k, i\ne j\)), and \({\alpha }_{r}^{}=\frac{{\beta }_{r}^{}}{{w}_{j}^{}}\) (\(r\in I(\hat{x})\)), \({\zeta }_{j}^{}=(\hat{c}_{j}^{}, \lambda ,\alpha )\) is a feasible point of \(\text {Q}_{j}\) with the same objective value.

Now let \({\zeta }_{j}^{}=(\hat{c}_{j}^{}, \lambda ,\alpha )\) be a feasible point of \(\text {Q}_{j}\). By defining \({w}_{j}^{}=\frac{1}{{{\tau }}}\), \({w}_{i}^{}=\frac{{\lambda }_{i}^{}}{{{\tau }}}\) (\(i=1,\dots ,k, i\ne j\)), and \({\beta }_{r}^{}=\frac{{\alpha }_{r}^{}}{{{\tau }}}\) (\(r\in I(\hat{x})\)) where \({{\tau }}=1+\sum \nolimits _{\begin{array}{c} i=1\\ i\ne j \end{array} }^{k}{\lambda }_{i}^{}\), we obtain \({\xi }_{j}^{}=(\hat{c}_{j}^{}, w,\beta )\) as a feasible point of \(\text {P}_{j}\) with the same objective function. \(\square \)

Now we provide a geometric interpretation of \(\text {Q}_{j}\). Let \(K_{j}^{}\) be the conic hull of \(\{a_{r}^{}\;|\;r\in I(\hat{x})\}\cup \{-c_{i}^{}\;|\;i\ne j\}\). Then, \(\text {Q}_{j}\) can be equivalently re-written as

$$\begin{aligned} \begin{aligned} d(c_{j}^{},K_{j}^{})=&min\;||c_{j}^{}-\hat{c}_{j}^{}{||}_{p}^{}\\&s.t.\;\;\; \hat{c}_{j}^{}\in K_{j}^{},\\&\;\;\;\hat{c}_{j}^{}\in {\mathbb {R}}^{n}. \end{aligned} \end{aligned}$$

So, the optimal solution of \(\text {Q}_{j}\) is the \(\ell _p\)-projection of \(c_{j}^{}\) onto \(K_{j}^{}\). This will be demonstrated in the numerical example in the next section.
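For \(p=2\), this projection form makes \(\text {Q}_{j}\) a non-negative least-squares problem over the generators of \(K_{j}\). A sketch (SciPy's `nnls` assumed; the toy data in the accompanying check are ours):

```python
import numpy as np
from scipy.optimize import nnls

def solve_Qj(C, active_rows, j):
    """Solve Q_j for p = 2: project c_j onto
    K_j = cone({a_r : r active} U {-c_i : i != j}).
    Returns (c_hat_j, rho_j) with rho_j = ||c_j - c_hat_j||_2."""
    C = np.asarray(C, float)
    gens = [np.asarray(a, float) for a in active_rows]
    gens += [-C[i] for i in range(C.shape[0]) if i != j]
    G = np.column_stack(gens)          # columns generate K_j
    coef, rho = nnls(G, C[j])
    return G @ coef, rho
```

Because NNLS computes the exact Euclidean projection onto the cone, no iterative tolerance tuning is needed here.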

4.2 Algorithm and a numerical example

The algorithm consists of two phases. The first phase calculates \(d(conv(C),\hat{K})\), which determines whether or not \(\hat{x}\) is already a weakly efficient solution. It can also serve as a termination criterion via the second part of Theorem 2. However, Phase I can be eliminated if one already knows that \(\hat{x}\) is not weakly efficient and is looking for the exact optimal solution.

Phase I: Solve the following convex optimization problem

$$\begin{aligned} d(conv(C),\hat{K})=min\left\{ ||x-y ||_{p}^{}\;|\;x\in conv\left( C\right) , y\in \hat{K}\right\} . \end{aligned}$$
(7)

Let \(d^{*}:=d(conv(C),\hat{K})\) be the optimal objective value of Problem (7). If \(d^{*}=0\), then \(\hat{x}\in S_{we}(C)\); go to Step 4. Otherwise, let \(j=1\) and go to Step 1.
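For \(p=2\), Problem (7) can be solved as a small convex program over the simplex. A sketch (SciPy's SLSQP solver assumed; minimizing the squared distance keeps the objective smooth). Applied to the data of Example 1 below, it reproduces \(d^{*}=6/\sqrt{73}\):

```python
import numpy as np
from scipy.optimize import minimize

def dist_conv_cone(C, active_rows):
    """Phase I for p = 2: d(conv(C), K_hat) from problem (7).  Minimize
    ||C^T w - G^T beta||_2^2 over w in the simplex and beta >= 0."""
    C, G = np.asarray(C, float), np.asarray(active_rows, float)
    k, m = C.shape[0], G.shape[0]

    def sq_dist(z):
        return np.sum((C.T @ z[:k] - G.T @ z[k:]) ** 2)

    res = minimize(sq_dist,
                   np.concatenate([np.full(k, 1.0 / k), np.zeros(m)]),
                   method="SLSQP", bounds=[(0, None)] * (k + m),
                   constraints=[{"type": "eq",
                                 "fun": lambda z: z[:k].sum() - 1.0}])
    return float(np.sqrt(max(res.fun, 0.0)))

# Example 1 data: d* should equal 6/sqrt(73), approximately 0.702.
d_star = dist_conv_cone([[-6, -1.5], [-3, 0.5], [2, 1.5]],
                        [[-2, -1], [-3, -4]])
```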

Phase II:

Step 1: Solve Problem (\(\text {Q}_{j}\)). If \(|\rho _{j}^{*}-d^{*}|\le \epsilon \), then let \(j^{*}=j\) and go to Step 3, otherwise go to Step 2.

Step 2: If \(j=k\), let \(j^{*}=argmin_{j=1,\ldots ,k}\;\rho _{j}^{*}\), and go to Step 3, otherwise let \(j=j+1\) and go to Step 1.

Step 3: Let \({\zeta }_{j^{*}}=(\hat{c}_{j^{*}}^{*}, \lambda ^{*},\alpha ^{*})\) be an optimal solution of \(\text {Q}_{j^{*}}\). Then, \({\xi }^{*}=(\hat{C}^{*}, w^{*},\beta ^{*})\) is an optimal solution of IMOLP(\(C,\hat{x}\)) (4), where \(w^{*}\) and \(\beta ^{*}\) are defined as follows, and \(\hat{C}^{*}\) and C agree in all rows except the \(j^{*}\)-th one.

$$\begin{aligned} \begin{aligned}&w^{*}_{i}=\frac{\lambda ^{*}_{i}}{1+\sum \nolimits _{\begin{array}{c} i=1\\ i\ne j^{*} \end{array} }^{k}\lambda ^{*}_{i}},\quad i=1,\dots ,k, i\ne j^{*},\\&w_{j^{*}}^{*}=\frac{1}{1+\sum \nolimits _{\begin{array}{c} i=1\\ i\ne j^{*} \end{array} }^{k}\lambda ^{*}_{i}},\\&\beta ^{*}_{r}=\frac{{\alpha }^{*}_{r}}{1+\sum \nolimits _{\begin{array}{c} i=1\\ i\ne j^{*} \end{array} }^{k}\lambda ^{*}_{i}},\quad r\in I(\hat{x}). \end{aligned} \end{aligned}$$

Step 4: End.
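Putting Phases I and II together for \(p=2\), the whole procedure can be sketched as follows (our implementation sketch, assuming SciPy: `nnls` solves each \(\text {Q}_{j}\) and SLSQP solves (7)). On Example 1 below it recovers \(j^{*}=3\) (index 2 in the code), \(\rho ^{*}=4/\sqrt{17}\), and \(\hat{c}_{3}^{*}=(\frac{38}{17},\frac{19}{34})\):

```python
import numpy as np
from scipy.optimize import minimize, nnls

def imolp(C, A, b, x_hat, eps=1e-6, tol=1e-9):
    """Two-phase algorithm for IMOLP(C, x_hat) under the l2-norm.
    Returns (j_star, new_row_j, rho), or (None, None, 0.0) when x_hat
    is already weakly efficient (d* = 0)."""
    C, A, b = np.asarray(C, float), np.asarray(A, float), np.asarray(b, float)
    x_hat = np.asarray(x_hat, float)
    act = A[np.abs(A @ x_hat - b) <= tol]        # generators of K_hat
    k, m = C.shape[0], act.shape[0]

    # Phase I: d(conv(C), K_hat) via problem (7), squared for smoothness.
    def sq_dist(z):
        return np.sum((C.T @ z[:k] - act.T @ z[k:]) ** 2)
    res = minimize(sq_dist, np.concatenate([np.full(k, 1 / k), np.zeros(m)]),
                   method="SLSQP", bounds=[(0, None)] * (k + m),
                   constraints=[{"type": "eq",
                                 "fun": lambda z: z[:k].sum() - 1}])
    d_star = np.sqrt(max(res.fun, 0.0))
    if d_star <= eps:
        return None, None, 0.0                   # Step 4: already weakly efficient

    # Phase II, Steps 1-2: solve each Q_j by projecting c_j onto K_j (NNLS).
    best = None
    for j in range(k):
        G = np.column_stack(list(act) + [-C[i] for i in range(k) if i != j])
        coef, rho = nnls(G, C[j])
        if best is None or rho < best[2]:
            best = (j, G @ coef, rho)
        if abs(rho - d_star) <= eps:             # lower bound attained: stop early
            break
    return best

# Example 1: j* = 3 (index 2), rho* = 4/sqrt(17), c_3* = (38/17, 19/34).
j_star, c3_star, rho_star = imolp(
    [[-6, -1.5], [-3, 0.5], [2, 1.5]],
    [[-2, -1], [-3, -4], [-1, 0], [0, -1], [1, 0], [0, 1]],
    [-23, -52, -10, -10, 0, 0], [8, 7])
```

Step 3's recovery of \(w^{*}\) and \(\beta ^{*}\) from \(\lambda ^{*}\) and \(\alpha ^{*}\) is omitted here for brevity.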

Example 1

Consider the MOLP problem \(min\left\{ Cx\;|\;Ax\ge b \right\} \) and its IMOLP(\(C,\hat{x}\)) counterpart with \(\hat{x}=(8,7)\) and the following data:

$$\begin{aligned} C={\small \begin{pmatrix} -6 &{} -1.5\\ -3 &{} 0.5\\ 2 &{} 1.5 \end{pmatrix}},\quad A={\small \begin{pmatrix} -2 &{} -1\\ -3 &{} -4\\ -1 &{} 0\\ 0 &{} -1\\ 1 &{} 0\\ 0 &{} 1 \end{pmatrix}},\quad \text {and}\quad b={\small \begin{pmatrix} -23\\ -52\\ -10\\ -10\\ 0\\ 0 \end{pmatrix}}. \end{aligned}$$

We have \(I(\hat{x})=\{1,2\}\) and \(\hat{K}=\left\{ x\in {\mathbb {R}}^{2}\;|\;x=\beta _{1}^{}a_{1}^{}+\beta _{2}^{}a_{2}^{}, \beta _{1}^{},\beta _{2}^{}\ge 0\right\} \), where \(a_{1}^{}=(-2,-1)\) and \(a_{2}^{}=(-3,-4)\).

It is obvious from Fig. 1a that the distance between \(\hat{K}\) (the hatched area) and conv(C) (the triangle) is positive, and so \(\hat{x}\) is not weakly efficient. The distance is depicted as \(d^{*}=||h-\hat{k}||_{p}^{}\).

IMOLP(\(C,\hat{x}\)) is:

$$\begin{aligned} \begin{aligned}&\min \;\rho =||c_{1}^{} -\hat{c}_{1}^{}{||}_{p}^{}+||c_{2}^{} -\hat{c}_{2}^{}{||}_{p}^{}+||c_{3}^{} -\hat{c}_{3}^{}{||}_{p}^{} \\&s.t.\;\;w_{1}^{} \hat{c}_{1}^{}+w_{2}^{} \hat{c}_{2}^{}+w_{3}^{} \hat{c}_{3}^{}-\beta _{1}^{} a_{1}^{}-\beta _{2}^{} a_{2}^{}=0,\\&\;\;\;\;\;\;\;\;w_{1}^{}+w_{2}^{}+w_{3}^{}=1,\\&\;\;\;\;\;\;\;\;w_{1}^{}, w_{2}^{}, w_{3}^{}, \beta _{1}^{}, \beta _{2}^{} \ge 0,\\&\;\;\;\;\;\;\;\;\hat{c}_{1}^{},\hat{c}_{2}^{}, \hat{c}_{3}^{}\in {\mathbb {R}}^{2}. \end{aligned} \end{aligned}$$

We apply the proposed algorithm for \(p=2\). We assume that we are looking for the exact solution with the threshold \(\epsilon =0\).

Phase I: Solving the convex problem (7) yields \(d^{*}=||h-\hat{k}||_{2}^{}=\frac{6}{\sqrt{73}},\) where \(h=\left( -\frac{18}{73},\frac{48}{73}\right) \) and \(\hat{k}=\left( 0,0\right) \) (see Fig. 1a). Since \(d^{*}>0\), let \(j=1\) and go to Step 1.

Fig. 1

a–d Different steps of the algorithm for Example 1. e The final result of the algorithm. f Changing multiple objective functions could result in less modification when there are restrictions on the criteria matrix modifications

Phase II:

Step 1: We solve \(\text {Q}_{1}^{}\) as follows:

$$\begin{aligned} \begin{aligned}&\min \;\rho _{1}^{}=||c_{1}^{} -\hat{c}_{1}^{}{||}_{2}^{} \\&s.t.\;\;\hat{c}_{1}^{}+\lambda _{2}^{} c_{2}^{}+\lambda _{3}^{} c_{3}^{}-\alpha _{1}^{} a_{1}^{}-\alpha _{2}^{} a_{2}^{}=0,\\&\;\;\;\;\;\;\;\lambda _{2}^{}, \lambda _{3}^{}, \alpha _{1}^{}, \alpha _{2}^{} \ge 0,\\&\;\;\;\;\;\;\;\hat{c}_{1}^{}\in {\mathbb {R}}^{2}. \end{aligned} \end{aligned}$$

Figure 1b illustrates the geometric interpretation of \(\text {Q}_{1}^{}\). This problem finds the minimum distance between \(c_{1}^{}\) and \(K_{1}^{}=cone\left( \left\{ a_{1}^{},a_{2}^{},-c_{2}^{},-c_{3}^{}\right\} \right) =cone\left( \left\{ a_{1}^{},-c_{2}^{}\right\} \right) \). The optimal objective value is \(\rho ^{*}_{1}=\frac{3}{\sqrt{5}}\) and since \(\rho ^{*}_{1}- d^{*}>0\), we solve \(\text {Q}_{2}^{}\):

$$\begin{aligned} \begin{aligned}&\min \;\rho _{2}^{}=||c_{2}^{} -\hat{c}_{2}^{}{||}_{2}^{} \\&s.t.\;\;\hat{c}_{2}^{}+\lambda _{1}^{} c_{1}^{}+\lambda _{3}^{} c_{3}^{}-\alpha _{1}^{} a_{1}^{}-\alpha _{2}^{} a_{2}^{}=0,\\&\;\;\;\;\;\;\;\lambda _{1}^{}, \lambda _{3}^{}, \alpha _{1}^{}, \alpha _{2}^{} \ge 0,\\&\;\;\;\;\;\;\;\hat{c}_{2}^{}\in {\mathbb {R}}^{2}. \end{aligned} \end{aligned}$$

Figure 1c corresponds to \(\text {Q}_{2}^{}\), and the optimal value here is the minimum distance between \(c_{2}^{}\) and \(K_{2}^{}=cone\left( \left\{ a_{1}^{},a_{2}^{},-c_{1}^{},-c_{3}^{}\right\} \right) =cone\left( \left\{ a_{1}^{},-c_{1}^{}\right\} \right) \). The optimal objective value is \(\rho ^{*}_{2}=\frac{4}{\sqrt{5}}>d^{*}\), so we need to solve \(\text {Q}_{3}^{}\):

$$\begin{aligned} \begin{aligned}&\min \;\rho _{3}^{}=||c_{3}^{} -\hat{c}_{3}^{}{||}_{2}^{} \\&s.t.\;\;\hat{c}_{3}^{}+\lambda _{1}^{} c_{1}^{}+\lambda _{2}^{} c_{2}^{}-\alpha _{1}^{} a_{1}^{}-\alpha _{2}^{} a_{2}^{}=0,\\&\;\;\;\;\;\;\;\lambda _{1}^{}, \lambda _{2}^{}, \alpha _{1}^{}, \alpha _{2}^{} \ge 0,\\&\;\;\;\;\;\;\;\hat{c}_{3}^{}\in {\mathbb {R}}^{2}. \end{aligned} \end{aligned}$$

Figure 1d corresponds to \(\text {Q}_{3}^{}\), and the optimal objective value is \(\rho ^{*}_{3}=\frac{4}{\sqrt{17}}\). Since \(\text {Q}_{3}^{}\) has the smallest objective value among all the problems, \(j^{*}=3\). The optimal solution of \(\text {Q}_{3}^{}\) is \({\zeta }^{*}_{3}=\left( \hat{c}^{*}_{3},\lambda ^{*}_{1},\lambda ^{*}_{2},\alpha ^{*}_{1},\alpha ^{*}_{2}\right) =\left( ({\frac{38}{17}}, {\frac{19}{34}}),\frac{19}{51},0,0,0\right) \). Therefore, the optimal solution of IMOLP(\(C,\hat{x}\)) is \({\xi }^{*}=\left( \hat{c}^{*}_{1},\hat{c}^{*}_{2},\hat{c}^{*}_{3}, w^{*}_{1},w^{*}_{2},w^{*}_{3},\beta ^{*}_{1},\beta ^{*}_{2}\right) =(c_{1}^{},c_{2}^{},\hat{c}_{3}^{*},\frac{19}{70},0,\frac{51}{70},0,0)\).

Figure 1e illustrates the new criteria matrix \(\hat{C}^{*}\), obtained by moving \(c_{3}^{}\) to \(\hat{c}_{3}^{*}\). It is seen that \(conv(\hat{C}^{*})\) intersects \(\hat{K}\), which according to Lemma 2 means that \(\hat{x}\) is weakly efficient for the new criteria matrix. \(\square \)

Before we close this section, we would like to show, through a simple example, that the results presented in this paper do not necessarily hold if there are constraints on how much the criteria matrix can be changed. In the above example, let us assume that \(c_{3}^{}\) cannot be modified by more than 0.95 in the \(\ell _{2}\)-norm (i.e., \(||c_{3}^{} -\hat{c}_{3}^{}{||}_{2}^{}\le 0.95\)). This is less than what \(c_{3}^{}\) needs to move (\(\frac{4}{\sqrt{17}}\approx 0.97\)) in order for \(conv(\hat{C})\) to intersect \(\hat{K}\). We now show that moving \(c_{1}^{}\) and \(c_{3}^{}\) simultaneously leads to less modification than moving \(c_{1}^{}\) or \(c_{2}^{}\) individually. If \(c_{1}^{}\) moves to \(\hat{c}_{1}^{}=\left( -\frac{2610}{449}, -\frac{1827}{898}\right) \) and \(c_{3}^{}\) moves to \(\hat{c}_{3}^{}=\left( \frac{1010}{449}, \frac{707}{898}\right) \), then, as can be seen in Fig. 1f, \(conv(\hat{C})\) intersects \(\hat{K}\), meaning \(\hat{x}\) is a weakly efficient point. Moving \(c_{1}^{}\) and \(c_{3}^{}\) at the same time thus results in a modification of \(||c_{1}^{} -\hat{c}_{1}^{}{||}_{2}^{}+||c_{3}^{} -\hat{c}_{3}^{}{||}_{2}^{}\approx 1.32\) in the criteria matrix, which is less than the modification required by moving only \(c_{1}^{}\) (\(\rho ^{*}_{1}=\frac{3}{\sqrt{5}}\approx 1.34\)) or only \(c_{2}^{}\) (\(\rho ^{*}_{2}=\frac{4}{\sqrt{5}}\approx 1.78\)). Since moving \(c_{3}^{}\) alone is not an option due to the imposed restriction, modifying a single objective function no longer yields the least required modification of the criteria matrix to turn \(\hat{x}\) into a weakly efficient point.
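The arithmetic quoted above is easy to verify numerically (a quick check of the stated values, using NumPy):

```python
import numpy as np

# Values quoted in the text for the restricted-modification example.
c1, c3 = np.array([-6.0, -1.5]), np.array([2.0, 1.5])
c1_new = np.array([-2610 / 449, -1827 / 898])
c3_new = np.array([1010 / 449, 707 / 898])

move1 = np.linalg.norm(c1 - c1_new)    # modification of c_1, = 12/sqrt(449)
move3 = np.linalg.norm(c3 - c3_new)    # modification of c_3, = 16/sqrt(449), within 0.95
total = move1 + move3                  # = 28/sqrt(449), approximately 1.32
```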

5 Conclusion

We generalized existing inverse linear programming to inverse multi-objective linear programming. The generalized version specializes to the existing one when there is only one objective function. We discussed the non-convexity challenge of the resulting problem and explained how it can be circumvented by exploiting some special characteristics of the problem, in particular the fact that there is always an optimal solution in which only one objective function is modified. The results only hold if there are no constraints on how much the criteria matrix can be modified.