1 Introduction

Most real-world optimization problems are inherently characterized by multiple and conflicting objective functions. In this context, multiobjective linear programming (MOLP) plays an important role in formulating many real-world problems. Due to its extensive usage in many fields of science, MOLP has been an important topic of research since the 1960s (Ehrgott et al. 2007). From the numerous relevant publications on MOLP we mention just one book (Ehrgott 2005), where most of the theoretical issues concerning MOLP are treated comprehensively. In conventional MOLP problems, the coefficients are assumed to be deterministic. However, there are many situations where the coefficients are not exactly known. Interval programming is one of the approaches for tackling uncertainty in MOLP problems. Since interval programming does not require the specifications or assumptions needed in other methods, such as fuzzy or stochastic programming (Oliveira and Antunes 2007), it has attracted many researchers’ attention (Hladík 2013, 2014; Ishibuchi and Tanaka 1990). The interval programming approach models uncertain coefficients by closed intervals. Indeed, determining closed intervals for uncertain parameters is not a difficult task for the decision maker. A survey on interval linear programming is given by Hladík (2012).

Bitran (1980) discussed MOLP problems with interval objective coefficients and introduced two kinds of efficient solutions: possibly and necessarily efficient solutions. Inuiguchi and Kume (1991) considered optimistic and pessimistic attitudes of the decision maker to find a compromise solution via the goal programming approach. In this context, they formulated and solved four kinds of goal programming problems with interval coefficients in which the target values were also assumed to be closed intervals. Urli and Nadeau (1992) used an interactive method to solve MOLP problems with interval coefficients. Oliveira and Antunes (2007) provided an overview of MOLP problems with interval coefficients, illustrated by some numerical examples. They also proposed an interactive method (Oliveira and Antunes 2009). Wu (2009) proposed some solution concepts for multiobjective programming problems with interval objective coefficients. In fact, these solution concepts follow from ordering relations between closed intervals and the efficiency concept in conventional multiobjective programming. Under these settings, Wu derived the Karush–Kuhn–Tucker optimality condition.

Necessarily efficient solutions are the most important solutions of an MOLP problem with interval objective function coefficients, since they are efficient for all values within the interval data. Bitran (1980) proposed a test for recognizing such solutions. Inuiguchi and Sakawa (1996) discussed some basic properties and theoretical foundations of necessarily efficient solutions. Hladík (2010) stated some test problems for distinguishing such solutions. A sufficient condition for checking necessarily efficient solutions was proposed in Hladík (2008). In spite of the importance of such solutions, Hladík (2012) proved that checking necessary efficiency is a co-NP-hard problem. This means that it is a computationally difficult problem, and we can hardly hope for a simple method. Here we should point out a relation to robust optimization (Ben-Tal et al. 2009), since necessarily efficient solutions can be viewed as robust solutions.

The concept of maximum regret has long been proposed as a criterion for making decisions under uncertainty (Mausser and Laguna 1998). Inuiguchi and Sakawa (1995) applied the concept of maximum regret to introduce the minimax regret approach to a linear programming problem with a single interval objective function. The concept of maximum regret was recently used by Rivaz and Yaghoobi (2013) to deal with MOLP problems with interval objective coefficients. Other researchers also used this approach to solve real-world problems. For example, Dong et al. (2011) incorporated interval linear programming and the minimax regret approach to support power management systems planning. Also, Loulou and Kanudia (1999) proposed a minimax regret strategy for greenhouse gas abatement in Canada.

The aim of this paper is to propose a new model for solving multiobjective linear programming problems with interval objective function coefficients. For simplicity, these problems are called interval MOLP problems. The new model is based on the maximum regret criterion. In fact, the new model is constructed so that it yields a necessarily efficient solution whenever the set of necessarily efficient solutions is nonempty. Moreover, the proposed model has other nice properties. For instance, when the set of necessarily efficient solutions is empty, the new model yields at least a possibly weak efficient solution. Further, an algorithm is suggested for solving the new model.

The rest of the paper is organized as follows. In Sect. 2, an interval MOLP problem and some preliminaries are discussed. Section 3 investigates the new model and its properties. An algorithm is presented in Sect. 4 to obtain an optimal solution of the proposed model. Moreover, a numerical example for testing the validity of the proposed algorithm is given. Section 5 discusses a special case of an interval MOLP problem. Finally, Sect. 6 is devoted to conclusions.

2 Preliminaries

An MOLP problem can be formulated as follows:

$$\begin{aligned} \max \ Cx= & {} (c_{1}x, \ldots ,c_{p}x)^t, \nonumber \\ s.t. \ \ \ \ \ x\in & {} X=\{x\in \mathbb {R}^{n}|Ax\le b,x\ge 0 \}, \end{aligned}$$
(1)

where \({c}_{i}x=\sum _{j=1}^{n}{c_{ij}x_{j}}\) is a linear real-valued objective function for \(i=1, \ldots ,p\). Thus, C is a \(p\times n\) matrix with each row of the form \({c}_{i}=(c_{i1}, \ldots ,c_{in})\) for \(i=1, \ldots ,p\). A is an \(m\times n\) matrix, \(b\in \mathbb {R}^{m}\) is the right hand side vector, and \(x\in \mathbb {R}^{n}\) is the vector of variables. The superscript t over a vector or matrix denotes the transpose.

Consider two vectors \(\mathcal {A}=(a_{1}, \ldots ,a_{p})^t\) and \(\mathcal {B}=(b_{1}, \ldots ,b_{p})^t\) in \(\mathbb {R}^{p}\). Then:

  • \(\mathcal {A}\succeq \mathcal {B}\) if \(a_{i}\ge b_{i}\) for    \(i=1, \ldots ,p\) and there is at least one \(1\le q \le p\) with \(a_{q}>b_{q}\).

  • \(\mathcal {A}\geqq \mathcal {B}\) if \(a_{i}\ge b_{i}\) for \(i=1, \ldots ,p\).

  • \(\mathcal {A}\succ \mathcal {B}\) if \(a_{i}>b_{i}\) for \(i=1, \ldots ,p\).
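These componentwise orderings translate directly into code; the following helpers (function names ours, a sketch only) mirror the three relations above:

```python
import numpy as np

def dominates(a, b):
    """A ≽ B: a_i >= b_i for all i, with a_q > b_q for at least one q."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

def weakly_geq(a, b):
    """A ≧ B: a_i >= b_i for all i."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b))

def strictly_dominates(a, b):
    """A ≻ B: a_i > b_i for all i."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a > b))
```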

Definition 1

(Ehrgott 2005) For Problem (1), a solution \(x^0\in X\) is:

  • efficient if there is no \(x\in X\) such that \(Cx\succeq C{x^0}\).

  • weak efficient if there is no \(x\in X\) such that \(Cx\succ C{x^0}\).

  • strict efficient if there is no \(x\in X\) such that \(x\ne {x}^0\) and \(Cx\geqq Cx^0\).

  • ideal (complete optimal) if \({c}_{i}x^0\ge {c}_{i}x\) for \(i=1, \ldots ,p\) and for all \(x\in X\).
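Definition 1 can be illustrated on a finite sample of feasible points (the feasible set X in the paper is polyhedral; the finite set below is only for illustration, and the helper names are ours):

```python
import numpy as np

def is_efficient(x0, candidates, C):
    """Efficient per Definition 1: no candidate y with Cy >= Cx0 componentwise and Cy != Cx0."""
    z0 = C @ x0
    for y in candidates:
        z = C @ y
        if np.all(z >= z0) and np.any(z > z0):
            return False
    return True

def is_weak_efficient(x0, candidates, C):
    """Weak efficient: no candidate y with Cy > Cx0 in every component."""
    z0 = C @ x0
    return not any(np.all(C @ y > z0) for y in candidates)

C = np.array([[1.0, 0.0], [0.0, 1.0]])
pts = [np.array(p) for p in [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (0.3, 0.3)]]
```

Among the sampled points, (0.3, 0.3) is dominated by (0.5, 0.5) and hence not efficient, while the other three are efficient within the sample.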

We consider an interval MOLP problem as follows:

$$\begin{aligned} \max \ Z(x)= & {} Cx, \nonumber \\ s.t.\ \ \ \ x \in X= & {} \{x|Ax\le b,x\ge 0 \}, \end{aligned}$$
(2)

where \({C}\in {\varPsi }\) and \({\varPsi }\) is a set of \(p\times n\) matrices, with each row of the form \(\mathbf c _{i}\), whose generic elements are \(c_{ij}\in [c_{ij} ^{l},c_{ij} ^{u}]\) for \(i=1, \ldots ,p\), \(j=1, \ldots ,n\). In fact, by using interval arithmetic (Moore et al. 2009), \(Z(x)=(\mathbf c _{1}x, \ldots ,\mathbf c _{p}x)^{t}\) where \(\mathbf c _{i}x=\sum _{j=1}^{n}[c_{ij}^{l},c_{ij}^{u}]x_{j}\) for \(i=1, \ldots ,p\).

Problem (2) reduces to a traditional MOLP problem if C is a fixed \(p\times n\) matrix. In MOLP, efficient and weak efficient solutions are the most desirable ones (Ehrgott 2005). Indeed, an efficient solution is an element of the feasible region at which no objective function can be improved without sacrificing another. Likewise, a weak efficient solution is one at which all the objective functions cannot be improved simultaneously. With regard to these concepts, some kinds of solutions are defined for Problem (2) (Bitran 1980; Hladík 2010; Oliveira and Antunes 2007; Rivaz and Yaghoobi 2013).

Definition 2

(Bitran 1980; Rivaz and Yaghoobi 2013) For Problem (2), a solution \(x^{0}\in X\) is:

  • necessarily efficient if it is efficient for any \(C\in {\varPsi }\).

  • possibly efficient if it is efficient for at least one \(C\in {\varPsi }\).

  • possibly weak efficient if it is weak efficient for at least one \(C\in {\varPsi }\).

The sets of all necessarily efficient, possibly efficient, and possibly weak efficient solutions of Problem (2) are denoted by \(N_E\), \(P_E\), and \(P_{WE}\), respectively. In what follows, the set \({\varLambda }_{i}\), \(i=1, \ldots ,p\), is defined as:

$$\begin{aligned} {\varLambda }_{i}=\left\{ \mathbf c _{i}=(c_{i1}, \ldots ,c_{in})|\ c_{ij}=c_{ij}^l \ \text{ or }\ c_{ij}=c_{ij}^u,\quad j=1, \ldots ,n\right\} . \end{aligned}$$
(3)

It is clear that the number of elements of \({\varLambda }_i\), \(|{\varLambda }_i|=q_i\), is at most \(2^n\) (\(q_i \le 2^n\)).
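For small n, the set \({\varLambda }_i\) of (3) can be enumerated explicitly; when an interval is degenerate (\(c_{ij}^l=c_{ij}^u\)), duplicate rows collapse, so \(q_i\) can be strictly less than \(2^n\). A sketch (function name ours):

```python
from itertools import product

def Lambda_i(c_l, c_u):
    """All vertex realizations of the interval row [c_l, c_u], as in (3)."""
    # each coordinate independently takes its lower or upper bound
    return {tuple(v) for v in product(*zip(c_l, c_u))}

# nondegenerate intervals: q_i = 2^n
full = Lambda_i([0.0, 1.0], [2.0, 3.0])
# one degenerate interval: duplicates collapse and q_i < 2^n
collapsed = Lambda_i([0.0, 2.0], [1.0, 2.0])
```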

Theorem 1

(Rivaz and Yaghoobi 2013) Consider the following MOLP problem:

$$\begin{aligned} \max _{x\in X}\left( \mathbf c _{1}^{1}x,\mathbf c _{1}^{2}x, \ldots ,\mathbf c _{1}^{q_{1}}x,\mathbf c _{2}^{1}x,\mathbf c _{2}^{2}x, \ldots , \mathbf c _{2}^{q_{2}}x, \ldots ,\mathbf c _{p}^{1}x,\mathbf c _{p}^{2}x, \ldots ,\mathbf c _{p}^{q_{p}}x\right) , \end{aligned}$$
(4)

where \(\mathbf{c }_{i}^{k}\), \(k=1, \ldots ,q_{i}\), are all elements of \({\varLambda }_{i}\), \(i=1, \ldots ,p\). A solution is possibly weak efficient to Problem (2) if and only if it is weak efficient to Problem (4).

Theorem 2

(Rivaz and Yaghoobi 2013) A solution is possibly efficient to Problem (2) if it is efficient to Problem (4).

3 Main results

The minimax regret criterion tries to avoid regrets that may result from choosing a non-optimal solution and is a conservative criterion. It is one of the more credible criteria for selecting decisions under uncertainty. A treatment of linear programming problems with an interval objective function using the minimax regret criterion was first proposed by Inuiguchi and Sakawa (1995). Rivaz and Yaghoobi (2013) also applied the minimax regret criterion to solve interval MOLP problems and investigated some properties of their method. Actually, they generalized the idea of Inuiguchi and Sakawa (1995) to deal with interval MOLPs.

In an interval MOLP problem, a suitable solution should be selected among the elements of \(N_E\), \(P_E\), or at least \(P_{WE}\). Since these sets may contain an infinite number of elements, a convenient selection method is necessary.

We suggest a minimax weighted regret criterion, in which all feasible solutions, objective functions, and uncertain coefficients are considered, as a useful approach for dealing with interval MOLPs. To clarify the motivation for the minimax weighted regret approach, consider a fixed \(\overline{C}\in {\varPsi }\) and given feasible solutions \(x^0,y^0\in X\) of Problem (2). Suppose that \(\overline{C}y^0\succeq \overline{C}x^0\), which means that \(y^0\) is better than \(x^0\) with respect to \(\overline{C}\). Define \(\max _{1\le i\le p} w_i(\overline{c}_iy^0-\overline{c}_ix^0)\) as the weighted regret of \(x^0\) related to \(\overline{C}\) and \(y^0\), where \(w_i\) is the preferential weight associated with the ith objective function. Obviously, the weighted regret of \(x^0\) changes as \(\overline{C}\) varies in \({\varPsi }\) and \(y^0\) varies in X. Consequently, a solution is a good candidate for Problem (2) if it has minimum weighted regret over all feasible solutions and all matrices in \({\varPsi }\). We refer to this as the minimax weighted regret criterion for obtaining a reasonable solution of an interval MOLP problem.
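As a minimal sketch of this quantity (function name ours), the weighted regret of \(x^0\) relative to \(\overline{C}\) and \(y^0\) is:

```python
import numpy as np

def weighted_regret(x0, y0, C, w):
    """max_i w_i * (c_i y0 - c_i x0): the weighted regret of x0 relative to C and y0."""
    return float(np.max(np.asarray(w) * (C @ y0 - C @ x0)))

C_bar = np.array([[1.0, 2.0], [3.0, 1.0]])
r = weighted_regret(np.array([0.0, 0.0]), np.array([1.0, 1.0]), C_bar, [1.0, 1.0])
```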

In what follows, we explicitly state the suggested method based on the minimax weighted regret criterion. To derive the new model, consider a fixed objective function coefficient matrix \(C=(\mathbf c _{1}, \ldots ,\mathbf c _{p})^t\in {\varPsi }\) and a given feasible solution \(x\in X\) of Problem (2). Similar to what has been done in Rivaz and Yaghoobi (2013), the weighted regret corresponding to C and x can be stated as:

$$\begin{aligned} r(x,C)=\max \{w_{i}\mathbf c _{i}(y-x)|\ y\in X,\quad i=1, \ldots ,p\}, \end{aligned}$$
(5)

where \(w=(w_{1}, \ldots ,w_{p})\) with \(w_{i}> 0\), \(i=1, \ldots ,p\), is the vector of weights according to p objective functions.

Recall that checking \(x\in N_E\) for a given feasible solution \(x\in X\) is a co-NP-hard problem (Hladík 2012). Thus, deciding whether \(N_E\not =\emptyset \), or even computing a necessarily efficient solution, is no less difficult. On the other hand, necessarily efficient solutions are of interest since they remain efficient for any admissible perturbation of the coefficients in the objective functions. In order to combine the positive properties of both the maximum regret and necessary efficiency approaches, we introduce a modified weighted regret function

$$\begin{aligned} r^{\prime }(x,C)=\max \{w_{i}\mathbf c _{i}(y-x)|\ y\in X,C(y-x)\geqq 0,\quad i=1, \ldots ,p\}. \end{aligned}$$
(6)

For a given \(x\in X\), since C can vary in \({\varPsi }\), the maximum value of the modified weighted regret (6) is

$$\begin{aligned} R^{\prime }(x)=\max \{w_{i}\mathbf c _{i}(y-x)|\ y\in X,C\in {\varPsi },C(y-x)\geqq 0,\quad i=1, \ldots ,p\}. \end{aligned}$$
(7)

According to the minimax weighted regret criterion, a solution with the minimum \(R^{\prime }\) is the desirable one. Therefore, solving the following model is suggested to obtain a convenient solution of the interval MOLP Problem (2).

$$\begin{aligned}&V=\min _{x\in X}R^{\prime }(x)=\min _{x\in X}\max \{w_{i}\mathbf c _{i}(y-x)| y\in X, C\in {\varPsi }, C(y-x)\geqq 0,\nonumber \\&\quad i=1, \ldots ,p \}. \end{aligned}$$
(8)

In the sequel, some new results related to Problems (2) and (8) are presented.

Theorem 3

The set of necessarily efficient solutions of Problem (2) is nonempty \((N_{E}\ne \emptyset )\) if and only if the optimal value of Problem (8) is zero.

Proof

Let \(V^*=w_{k}\mathbf c _{k}^{*}(y^*-x^*)\) be the optimal value of Problem (8). Firstly, suppose that \(N_{E}\ne \emptyset \). Thus, there exists \(\hat{x}\in N_E\), which means that \(\forall \ \bar{C}\in {\varPsi }\), \(\not \exists \ \bar{x}\in X\ : \ \bar{C}\bar{x}\succeq \bar{C}\hat{x} \). On the contrary, suppose that \(V^*\ne 0\). Since \(y=x\) is always feasible in (8) with zero objective value, \(V^*\ge 0\); hence \(V^*=w_{k}\mathbf c _{k}^{*}(y^*-x^*)>0\). Since \(x^*\) is an optimal solution of Problem (8), it can be concluded that \(\max \{w_{i}\mathbf c _{i}(y-\hat{x})| y\in X, C\in {\varPsi }, C(y-\hat{x})\geqq 0,i=1, \ldots ,p \}\ge w_{k}\mathbf c _{k}^{*}(y^*-x^*)>0\). Hence, there exist \(\hat{y}\in X\) and \(\hat{C}\in {\varPsi }\) with \(\hat{C}(\hat{y}-\hat{x})\geqq 0\) such that \(\max _{1\le i\le p}w_{i}\hat{\mathbf{c }}_{i}(\hat{y}-\hat{x})>0\). Therefore, \(\hat{C}\hat{y}\succeq \hat{C}\hat{x}\), which means \(\hat{x}\notin N_E\), a contradiction.

Conversely, suppose that \(V^*=w_{k}\mathbf c _{k}^{*}(y^*-x^*)=0\) and, on the contrary, that \(N_E=\emptyset \). Thus, \(x^*\notin N_E\) and there exist \(\hat{C}\) and \(\hat{x}\) such that \(\hat{C}\hat{x}\succeq \hat{C}x^*\). This implies that \(\max \{w_{i}\mathbf c _{i}(y-x^*)| y\in X, C\in {\varPsi }, C(y-x^*)\geqq 0,i=1, \ldots ,p \}>0\), which is a contradiction. \(\square \)

Corollary 1

The set of necessarily efficient solutions is empty \((N_{E}=\emptyset )\) if and only if the optimal value of Problem (8) is positive.

The following example illustrates that a direct analogy of Theorem 3 does not hold for the classical maximum regret approach.

Example 1

Consider the interval MOLP problem

$$\begin{aligned}&\max \ z_1(x) =x_1+[0,1]x_2, \\&\max \ z_2(x) =-x_1+[0,1]x_2, \\&s.t.\quad x\in X, \end{aligned}$$

where

$$\begin{aligned} X=\{(x_1,x_2)^t\mid x_1\ge 0,\ x_2\ge 0,\ x_1\le 1,\ x_2\le 1\}. \end{aligned}$$

Herein, \(x^0=(1,1)^t\) is a necessarily efficient solution, since the weighted sum scalarization \(2z_1(x)+z_2(x)\) yields \(x^0\) as an optimal solution for each realization of the interval coefficients or, from another perspective, since the optimal value of (8) is 0. On the other hand, if we remove the constraint \(C(y-x)\geqq 0\) from the formulation of (8), then the optimal value will be positive. Even for the realization \(z_1(x)=x_1+x_2\) and \(z_2(x)=-x_1+x_2\), the maximum regret is at least 0.5 for each feasible solution.
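The claims of this example can be verified numerically by discretizing X to a grid and, in view of Theorem 4 below, restricting C to the vertex matrices of \({\varPsi }\). This brute-force check (not the algorithm of Sect. 4) is a sketch under those assumptions:

```python
import numpy as np
from itertools import product

grid = [np.array([i / 10, j / 10]) for i in range(11) for j in range(11)]  # X discretized
C_l = np.array([[1.0, 0.0], [-1.0, 0.0]])   # lower-bound coefficients
C_u = np.array([[1.0, 1.0], [-1.0, 1.0]])   # upper-bound coefficients
Cs = [C_l + np.array(m, float).reshape(2, 2) * (C_u - C_l)
      for m in product([0, 1], repeat=4)]   # vertex matrices of Psi

def modified_regret(x):
    """R'(x) of (7): regret only over (y, C) with C(y - x) >= 0 componentwise."""
    best = 0.0
    for C in Cs:
        for y in grid:
            d = C @ (y - x)
            if np.all(d >= -1e-12):
                best = max(best, float(np.max(d)))
    return best

def classical_regret(x, C):
    """Maximum regret without the constraint C(y - x) >= 0, for a fixed realization C."""
    return max(float(np.max(C @ (y - x))) for y in grid)
```

With unit weights, the modified regret of \(x^0=(1,1)^t\) is 0, while the classical regret for the realization \(z_1=x_1+x_2\), \(z_2=-x_1+x_2\) is at least 0.5 at every grid point.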

Theorem 4

Problem (8) is equivalent to the following problem:

$$\begin{aligned} V=\min _{x\in X}\max \{w_{i}\mathbf c _{i}(y-x)| y\in X, C\in {\varLambda }, C(y-x)\geqq 0,\quad i=1, \ldots ,p \}, \end{aligned}$$
(9)

where

$$\begin{aligned} {\varLambda }=\{C=(\mathbf c _{1}, \ldots ,\mathbf c _{p})^t|\mathbf c _{i}\in {\varLambda }_{i},\quad i=1, \ldots ,p\}. \end{aligned}$$
(10)

Proof

It is sufficient to show that \(v_{1}=v_{2}\), where

$$\begin{aligned} v_{1}= & {} \max \{w_{i}\mathbf c _{i}(\hat{y}-\hat{x})| C\in {\varPsi }, C(\hat{y}-\hat{x})\geqq 0,\quad i=1, \ldots ,p \},\\ v_{2}= & {} \max \{w_{i}\mathbf c _{i}(\hat{y}-\hat{x})| C\in {\varLambda }, C(\hat{y}-\hat{x})\geqq 0,\quad i=1, \ldots ,p \}, \end{aligned}$$

when \(\hat{x},\hat{y}\in X\) are given. It is clear that \(v_{2}\le v_{1}\). On the other hand, suppose that \(v_{1}=w_{k}{\hat{\mathbf{c }}_{k}}(\hat{y}-\hat{x})\) where \(\hat{C}=(\hat{\mathbf{c }}_{1}, \ldots ,\hat{\mathbf{c }}_{p})^t\in {\varPsi }\) with \(\hat{C}(\hat{y}-\hat{x})\geqq 0\). Since \(c_{ij}^{l}\le \hat{c}_{ij}\le c_{ij}^{u}\) for \(j=1, \ldots ,n\), \(i=1, \ldots ,p\), we have:

$$\begin{aligned} {\left\{ \begin{array}{ll} w_{i}\hat{c}_{ij}(\hat{y}_{j}-\hat{x}_{j})\le w_{i}c_{ij}^u(\hat{y}_{j}-\hat{x}_{j}) &{} \text {if } \hat{y}_{j}-\hat{x}_{j}\ge 0, \\ w_{i}\hat{c}_{ij}(\hat{y}_{j}-\hat{x}_{j})\le w_{i}c_{ij}^l(\hat{y}_{j}-\hat{x}_{j}) &{} \text {if } \hat{y}_{j}-\hat{x}_{j}< 0,\quad i=1, \ldots ,p. \end{array}\right. } \end{aligned}$$

For \(i=1, \ldots ,p\), \(j=1, \ldots ,n\), define:

$$\begin{aligned} c_{ij}^{\prime }= {\left\{ \begin{array}{ll} c_{ij}^{u} &{}\quad \text {if } \hat{y}_{j}-\hat{x}_{j}\ge 0, \\ c_{ij}^{l} &{}\quad \text {if } \hat{y}_{j}-\hat{x}_{j}< 0. \end{array}\right. } \end{aligned}$$

It is obvious that \(C^{\prime }=(\mathbf c ^{\prime }_{1}=(c_{11}^{\prime }, \ldots ,c_{1n}^{\prime }), \ldots ,\mathbf c ^{\prime }_{p}=(c_{p1}^{\prime }, \ldots ,c_{pn}^{\prime }))^t\in {\varLambda }\). Moreover, \(C^{\prime }(\hat{y}-\hat{x})\geqq 0\) and \(v_{1}=w_{k}\hat{\mathbf{c }}_{k}(\hat{y}-\hat{x})\le w_{k}\mathbf{c }_{k}^{\prime }(\hat{y}-\hat{x})\le v_{2}\). Hence, \(v_{1}=v_2\) and the proof is complete. \(\square \)
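The selection rule for \(C^{\prime }\) used in the proof can be stated compactly; the helper below (name ours) picks the upper bound where \(\hat y_j\ge \hat x_j\) and the lower bound otherwise, and a small enumeration confirms that it attains the vertex maximum of \(\mathbf c (\hat y-\hat x)\):

```python
import numpy as np
from itertools import product

def vertex_maximizer(C_l, C_u, x, y):
    """C' from the proof of Theorem 4: upper bound where y_j >= x_j, lower bound otherwise."""
    return np.where((y - x) >= 0, C_u, C_l)

# check on one interval row that C' attains the maximum of c(y - x) over all vertices
c_l, c_u = np.array([0.0, -1.0, 2.0]), np.array([1.0, 1.0, 3.0])
x, y = np.array([0.5, 0.5, 0.5]), np.array([1.0, 0.0, 0.5])
c_prime = vertex_maximizer(c_l, c_u, x, y)
best = max(np.dot(v, y - x) for v in product(*zip(c_l, c_u)))
```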

Theorem 5

If \(x^*\) is an optimal solution of Problem (8) with \(V^*=w_{k}\mathbf c _{k}^{*}(y^*-x^*)>0\), then \(x^*\in P_{WE}\).

Proof

On the contrary, suppose that \(x^*\notin P_{WE}\). Thus, by Theorem 1, \(x^*\) is not a weak efficient solution of Problem (4), i.e.

$$\begin{aligned} \exists \ \bar{x}\in X\ :\ \mathbf c _{i}^{k}\bar{x}>\mathbf c _{i}^{k}x^*,\ \forall \ k=1, \ldots ,q_{i},\quad i=1, \ldots ,p. \end{aligned}$$

It can be concluded that for an arbitrary \({C}=(\mathbf{c }_{1}, \ldots ,\mathbf{c }_{p})^t\in {\varLambda }\) we have:

$$\begin{aligned} \mathbf{c }_{i}(y-x^{*})>\mathbf{c }_{i}(y-\bar{x}),\quad \forall \ y\in X,\quad i=1, \ldots ,p. \end{aligned}$$

Therefore,

$$\begin{aligned} \max \{w_{i}\mathbf{c }_{i}(y-x^{*})|\ i=1, \ldots ,p\}>\max \{w_{i}\mathbf{c }_{i}(y-\bar{x})|\ i=1, \ldots ,p\}\ \ \forall \ y\in X.\nonumber \\ \end{aligned}$$
(11)

Now, suppose that:

$$\begin{aligned}&\max \{w_{i}\mathbf c _{i}(y-x^{*})|\ y\in X, C\in {\varLambda }, C(y-x^{*})\geqq 0,\quad i=1, \ldots ,p\} \nonumber \\&\quad = \max \{w_{i}\mathbf c ^{*}_{i}(y^{*}-x^{*})|\ i=1, \ldots ,p\}, \end{aligned}$$
(12)

and

$$\begin{aligned}&\max \{w_{i}\mathbf c _{i}(y-\bar{x})|\ y\in X, C\in {\varLambda }, C(y-\bar{x})\geqq 0,\quad i=1, \ldots ,p\} \nonumber \\&\quad = \max \{w_{i}\bar{\mathbf{c }}_{i}(\bar{y}-\bar{x})|\ i=1, \ldots ,p\}, \end{aligned}$$
(13)

where \(C^{*}(y^{*}-x^{*})\geqq 0\) and \(\bar{C}(\bar{y}-\bar{x})\geqq 0\). Indeed, \((C^*,y^*)\) and \((\bar{C},\bar{y})\) are optimal solutions of Problems (12) and (13), respectively. By considering (11) and the fact that \(\{C\in {\varLambda }|\ C(\bar{y}-\bar{x})\geqq 0\}\subseteq \{C\in {\varLambda }|\ C(\bar{y}-{x}^{*})\geqq 0\}\), it can be written that:

$$\begin{aligned}&\max \{w_{i}\mathbf c _{i}(y-x^{*})|\ y\in X, C\in {\varLambda }, C(y-x^{*})\geqq 0,\quad i=1, \ldots ,p\} \\&\quad = \max \{w_{i}\mathbf c _{i}(y^{*}-x^{*})|\ C\in {\varLambda }, C(y^{*}-x^{*})\geqq 0,\quad i=1, \ldots ,p\} \\&\quad \ge \max \{w_{i}\mathbf c _{i}(\bar{y}-x^{*})|\ C\in {\varLambda }, C(\bar{y}-x^{*})\geqq 0,\quad i=1, \ldots ,p\} \\&\quad >\max \{w_{i}\mathbf c _{i}(\bar{y}-\bar{x})|\ C\in {\varLambda }, C(\bar{y}-x^{*})\geqq 0,\quad i=1, \ldots ,p\} \\&\quad \ge \max \{w_{i}\mathbf c _{i}(\bar{y}-\bar{x})|\ C\in {\varLambda }, C(\bar{y}-\bar{x})\geqq 0,\quad i=1, \ldots ,p\} \\&\quad =\max \{w_{i}\mathbf c _{i}({y}-\bar{x})|\ y\in X, C\in {\varLambda }, C({y}-\bar{x})\geqq 0,\quad i=1, \ldots ,p\}. \end{aligned}$$

Consequently,

$$\begin{aligned}&\max \{w_{i}\mathbf c _{i}({y}-{x^{*}})|\ y\in X, C\in {\varLambda }, C({y}-{x^{*}})\geqq 0,\quad i=1, \ldots ,p\}\\&\quad > \max \{w_{i}\mathbf c _{i}({y}-\bar{x})|\ y\in X, C\in {\varLambda }, C({y}-\bar{x})\geqq 0,\quad i=1, \ldots ,p\}, \end{aligned}$$

which is a contradiction to the fact that \(x^*\) is an optimal solution of Problem (8). \(\square \)

Theorem 6

If \(x^*\) is a unique optimal solution of Problem (8) with \(V^*=w_{k}\mathbf c _{k}^{*}(y^*-x^*)>0\), then \(x^*\in P_{E}\).

Proof

By considering Theorem 2, the proof is similar to that of Theorem 5. \(\square \)

4 An algorithm

In this section, we propose an algorithm for solving Problem (8). To this end, Problem (8) is first transformed into an optimization problem with an infinite number of constraints by introducing a new variable \(\sigma \) as follows:

$$\begin{aligned}&\min \quad \sigma , \end{aligned}$$
(14)
$$\begin{aligned}&s.t. \quad \max _{1\le i \le p}w_{i}\mathbf c _{i}(y-x)\le \ \sigma ,\quad \forall y\in X,\quad \forall C\in {\varPsi } \quad \text{ if }\ C(y-x)\geqq 0,\qquad \end{aligned}$$
(15)
$$\begin{aligned} x \in X=\{x|Ax\le b,x\ge 0 \},\quad \sigma \ge 0. \end{aligned}$$
(16)

This problem belongs to the class of semi-infinite programming problems (Goberna and López 2002; López and Still 2007); however, due to its very specific structure, it is more convenient to investigate it directly.

A useful way to find a solution of Problems (14)–(16) is to solve a series of the following relaxed versions of the problem:

$$\begin{aligned}&\min \quad \sigma , \end{aligned}$$
(17)
$$\begin{aligned}&s.t. \quad \max _{1\le i \le p}w_{i}\mathbf c _{i}^{h}(y^{h}-x)\le \ \sigma ,\quad h=1, \ldots ,k \quad \text{ if }\ C^h(y^{h}-x)\geqq 0, \end{aligned}$$
(18)
$$\begin{aligned} x \in X,\ \sigma \ge 0, \end{aligned}$$
(19)

where \(y^h\) and \(C^h\), \(h=1, \ldots ,k\), are some special choices in X and \({\varPsi }\), respectively.

Proposition 1

Let \((x^{k},\sigma ^{k})\) be an optimal solution of Problems (17)–(19) that is feasible for Problems (14)–(16). Then it is optimal for Problems (14)–(16), and \(x^k\) is an optimal solution of the original Problem (8).

Proof

Since constraint (15) implies (18), Problems (17)–(19) are a relaxation of Problems (14)–(16); hence the optimal value of the former is a lower bound on that of the latter, and a feasible point attaining it is optimal. \(\square \)

Let \((x^{k},\sigma ^{k})\) be an optimal solution of Problems (17)–(19). If \((x^{k},\sigma ^{k})\) is feasible for Problems (14)–(16), then \(x^k\) is an optimal solution of the original Problem (8) by Proposition 1. Otherwise, if \((x^{k},\sigma ^{k})\) is not feasible for Problems (14)–(16), then at least one of the infinitely many constraints (15) is violated by \((x^{k},\sigma ^{k})\). In this case, the most violated constraint (i.e., the constraint giving the maximum regret of \(x^k\)) is generated and added to the constraints of Problems (17)–(19). Then the updated problem, with \(k:=k+1\), is solved. The feasibility test (i.e., whether the optimal solution \((x^{k},\sigma ^{k})\) of Problems (17)–(19) is feasible for Problems (14)–(16)) and the generation of the most violated constraint can be accomplished as follows:

  • If \(\max _{1\le i\le p}w_{i}\mathbf c _{i}(y-x^{k})\le \sigma ^k\) for all \(y\in X\) and all \(C\in {\varPsi }\) with \(C(y-x^{k})\geqq 0\), then \((x^{k},\sigma ^{k})\) is feasible for constraint (15).

  • Otherwise, by solving \(\max \{w_{i}\mathbf c _{i}(y-x^k)| y\in X, C\in {\varPsi }, C(y-x^k)\geqq 0,i=1, \ldots ,p \}\), the most violated constraint can be obtained. The new constraint is added to the k constraints in (18) to form the updated problem.

Algorithm 4.1:

  • Input An instance of an interval MOLP problem and the weights of objective functions \(w_i\), \(i=1, \ldots ,p\).

  • Step 1 Solve the linear programming problems \(\max _{x\in X}\mathbf c _{i}^{u}x\) where \(\mathbf c _{i}^{u}=(c_{i1}^{u}, \ldots ,c_{in}^{u})\), \(i=1, \ldots ,p\). Suppose an optimal solution of the ith problem is \(x^{i*}\), \(i=1, \ldots ,p\), and \(\mathbf c _{i_0}^{u}x^{i_{0}*}=\max _{1\le i\le p}\mathbf c _{i}^{u}x^{i*}\). Then choose \(y^{1}\) and \(C^1\) as \(x^{i_{0}*}\) and \(C^{u}=(\mathbf c _{1}^{u}, \ldots ,\mathbf c _{p}^{u})^t\), respectively.

  • Step 2 Set \(k=2\), \(\sigma ^1=0\) and \(x^1=y^1\).

  • Step 3 Solve the following problem:

    $$\begin{aligned} \max \left\{ w_{i}\mathbf c _{i}(y-x^{k-1})| y\in X, C\in {\varPsi }, C(y-x^{k-1})\geqq 0,\quad i=1, \ldots ,p \right\} .\nonumber \\ \end{aligned}$$
    (20)

    Suppose \((y^k,C^k)\) and \({\varPhi }^{k-1}\) are an optimal solution and the optimal value of Problem (20), respectively.

  • Step 4 If \({\varPhi }^{k-1}\le \sigma ^{k-1}\), then stop. In this case, \(x^{k-1}\) is an optimal solution of Problem (8).

  • Step 5 Solve the following problem:

    $$\begin{aligned}&\min \quad \sigma , \nonumber \\&s.t. \quad \max _{1\le i\le p}w_{i}\mathbf c _{i}^{h}(y^{h}-x)\le \sigma ,\quad h=1, \ldots ,k,\quad \text{ if }\ C^{h}(y^h-x)\geqq 0, \nonumber \\&x \in X,\ \sigma \ge 0. \end{aligned}$$
    (21)

    Let \((x^k,\sigma ^k)\) be an optimal solution of Problem (21). Set \(k:=k+1\) and return to Step 3.

  • Output An optimal solution of Problem (8).
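Under strong simplifying assumptions (X discretized to a grid, Steps 1, 3 and 5 solved by enumeration over the vertex matrices of \({\varPsi }\) rather than by the linear and mixed integer programs discussed below), the relaxation loop of Algorithm 4.1 can be sketched as follows. On the data of Example 1 it stops with optimal value 0, in agreement with Theorem 3:

```python
import numpy as np
from itertools import product

# Example 1 data: z1 = x1 + [0,1]x2, z2 = -x1 + [0,1]x2, X = unit square (discretized)
C_l = np.array([[1.0, 0.0], [-1.0, 0.0]])
C_u = np.array([[1.0, 1.0], [-1.0, 1.0]])
w = np.array([1.0, 1.0])
grid = [np.array([i / 10, j / 10]) for i in range(11) for j in range(11)]
Cs = [C_l + np.array(m, float).reshape(2, 2) * (C_u - C_l) for m in product([0, 1], repeat=4)]

def step3(x):
    """Problem (20): most violated cut, i.e. max modified regret of x (by enumeration)."""
    best, cut = 0.0, (x, Cs[0])
    for C in Cs:
        for y in grid:
            d = C @ (y - x)
            if np.all(d >= -1e-12) and float(np.max(w * d)) > best:
                best, cut = float(np.max(w * d)), (y, C)
    return best, cut

def step5(cuts):
    """Problem (21): minimize the relaxed regret over the grid, given the stored cuts."""
    def sigma_of(x):
        vals = [float(np.max(w * (C @ (y - x))))
                for y, C in cuts if np.all(C @ (y - x) >= -1e-12)]
        return max([0.0] + vals)
    x = min(grid, key=sigma_of)
    return x, sigma_of(x)

# Step 1 (simplified): start from a maximizer of the upper-bound objectives
x, sigma, cuts = max(grid, key=lambda p: float(np.max(C_u @ p))), 0.0, []
for _ in range(50):
    Phi, cut = step3(x)                 # Step 3
    if Phi <= sigma + 1e-12:            # Step 4: stop, x solves the discretized (8)
        break
    cuts.append(cut)                    # add the most violated constraint
    x, sigma = step5(cuts)              # Step 5
```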

In Algorithm 4.1, after solving p linear programming problems, \(y^1\) and \(C^1\) are easily determined in Step 1. To solve Problem (20) in Step 3, we suggest solving p mixed integer programming problems when X is nonempty and bounded. By Hladík (2012) and Hladík (2013), the \(i\hbox {th}\) problem of (20),

$$\begin{aligned} \max \left\{ w_{i}\mathbf c _{i}(y-x^{k-1})| y\in X, C\in {\varPsi }, C(y-x^{k-1})\geqq 0\right\} , \end{aligned}$$

is equivalent to

$$\begin{aligned}&\max \left\{ w_{i}\mathbf c _{i}^{c}(y-x^{k-1})+w_{i}\mathbf c _{i}^{\Delta }|y-x^{k-1}|:\ y\in X,C^{c}(y-x^{k-1})\right. \nonumber \\&\quad \qquad \left. +\,C^{\Delta }|y-x^{k-1}|\geqq 0\right\} , \end{aligned}$$
(22)

where \(\mathbf c _{i}^{c}=\frac{1}{2}(\mathbf c _{i}^{l}+\mathbf c _{i}^{u})\) and \(\mathbf c _{i}^{\Delta }=\frac{1}{2}(\mathbf c _{i}^{u}-\mathbf c _{i}^{l})\, (\mathbf c _{i}^{u}=(c_{i1}^{u}, \ldots ,c_{in}^{u}), \mathbf c _{i}^{l}=(c_{i1}^{l}, \ldots ,c_{in}^{l}))\) are the center and the radius of \(\mathbf c _{i}\), respectively. Moreover, \(C^{c}=(\mathbf c _{1}^{c}, \ldots ,\mathbf c _{p}^{c})^t\) and \(C^{\Delta }=(\mathbf c _{1}^{\Delta }, \ldots ,\mathbf c _{p}^{\Delta })^t\). Considering new variables \(u^{i}=|y-x^{k-1}|\), (22) can be rewritten as:

$$\begin{aligned}&\max \left\{ w_{i}\mathbf c _{i}^{c}(y-x^{k-1})+w_{i}\mathbf c _{i}^{\Delta }u^i|\ y\in X,C^{c}(y-x^{k-1})+C^{\Delta }u^i\geqq 0,\right. \\&\quad \qquad \quad \left. u^i=|y-x^{k-1}|\right\} . \end{aligned}$$

Thus, according to the \(i\hbox {th}\) problem, the following model should be solved:

$$\begin{aligned}&\max \quad z^{i}=w_{i}\mathbf{c _{i}^{c}}(y^{i}-x^{k-1})+w_{i}\mathbf{c _{i}^{\Delta }}u^{i}, \nonumber \\&s.t. \quad \mathbf{c _{i}^{c}}(y^{i}-x^{k-1})+\mathbf{c _{i}^{\Delta }}u^{i}\ge 0, \quad i=1, \ldots ,p,\\&u^{i}_{j}\le (y^{i}_{j}-x^{k-1}_{j})+b_{j}^{i}M,\quad j=1, \ldots ,n, \nonumber \\&u^{i}_{j}\le -(y^{i}_{j}-x^{k-1}_{j})+(1-b_{j}^{i})M,\quad j=1, \ldots ,n, \nonumber \\&u^{i}_{j}\ge 0,\quad j=1, \ldots ,n, \nonumber \\&b^{i}_{j}\in \{0,1\},\quad j=1, \ldots ,n, \nonumber \\&y^{i}\in X,\nonumber \end{aligned}$$
(23)

where M is a sufficiently large constant. Since (23) is a maximization problem, \(u^{i}_{j}\), \(j=1, \ldots ,n\), tends to be as large as possible. Whenever \(y^{i}_{j}\ge x^{k-1}_{j}\), the largest value for \(u^{i}_{j}\) is \(y^{i}_{j}-x^{k-1}_{j}\). In this case, \(b_{j}^{i}=0\). On the other hand, when \(y^{i}_{j}< x^{k-1}_{j}\), the largest value for \(u^{i}_{j}\) is \(-(y^{i}_{j}-x^{k-1}_{j})\). In this case, \(b_{j}^{i}=1\). Now, suppose \(z^{i*}\) is the optimal value of Problem (23). Let \(z^{t}=\max _{1\le i\le p}z^{i*}\), then in Step 3 consider \(y^k=y^{t}\), \({\varPhi }^{k-1}=z^{t}\), and \(C^k=(\mathbf c _{1}^{k}, \ldots ,\mathbf c _{p}^{k})^t\) where for every \(i=1, \ldots ,p\), \(j=1, \ldots ,n\),

$$\begin{aligned} c_{ij}^{k}= {\left\{ \begin{array}{ll} c_{ij}^{u} &{} \text {if } y_{j}^{k}-x_{j}^{k-1}\ge 0, \\ c_{ij}^{l} &{} \text {if } y_{j}^{k}-x_{j}^{k-1}< 0. \end{array}\right. } \end{aligned}$$

Actually, the optimal value \({\varPhi }^{k-1}\) of Problem (20) is the maximum value of the modified weighted regret of the feasible solution \(x^{k-1}\); that is, \(\max _{1\le i\le p}w_{i}\mathbf c _{i}^k(y^k-x^{k-1})={\varPhi }^{k-1}\), attained by the above choices of \(C^k\in {\varPsi }\) and \(y^k\in X\).
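That the big-M constraints in (23) indeed yield \(u^{i}_{j}=|y^{i}_{j}-x^{k-1}_{j}|\) at the maximum can be checked for a single coordinate: for each sign of the difference, exactly one choice of the binary admits a nonnegative \(u^{i}_{j}\), whose largest feasible value is the absolute difference. An illustrative sketch (function name ours):

```python
def max_u(delta, M=1e4):
    """Largest u >= 0 with u <= delta + b*M and u <= -delta + (1-b)*M for some b in {0,1}."""
    best = 0.0
    for b in (0, 1):
        hi = min(delta + b * M, -delta + (1 - b) * M)
        if hi >= 0.0:
            best = max(best, hi)
    return best
```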

To solve the \(i\hbox {th}\) problem of (20), the method proposed by Mausser and Laguna (1998) can also be used. However, our suggested method is more efficient because it needs fewer variables.

According to Step 4, if \({\varPhi }^{k-1}\le \sigma ^{k-1}\), then \(x^{k-1}\) is an optimal solution of Problem (8), since all constraints in (15) are satisfied and the minimum objective value is attained. Otherwise, if \({\varPhi }^{k-1}>\sigma ^{k-1}\), then not all constraints in (15) are satisfied by the computed solution \((x^{k-1},\sigma ^{k-1})\). In other words, \((x^{k-1},\sigma ^{k-1})\) is not feasible for Problems (14)–(16). Therefore, Problems (17)–(19) have to be updated. More explicitly, the \(y^k\) and \(C^k\) obtained in Step 3 are added to constraint (18). Thus, the number of constraints in the relaxed version of Problems (14)–(16), i.e., Problems (17)–(19), is increased. The new constraint is

$$\begin{aligned} \max _{1\le i\le p}w_{i}\mathbf c _{i}^{k}(y^{k}-x)\le \sigma ,\quad \text{ if }\ C^{k}(y^k-x)\geqq 0, \end{aligned}$$

which is called the most violated constraint.

Finally, Problem (21) in Step 5 is discussed. In order to solve Problem (21), we suggest solving a mixed integer linear program. To do so, let h be fixed. From (21), we need the following implication:

$$\begin{aligned} \mathbf c _{i}^{h}(y^{h}-x)\geqq 0\ \ \forall \ i=1, \ldots ,p\ \Rightarrow \ w_{i}\mathbf c _{i}^{h}(y^{h}-x)\le \sigma \quad \forall \ i=1, \ldots ,p, \end{aligned}$$
(24)

which is equivalent to

$$\begin{aligned} (\exists \ 1\le i\le p\ : \ \mathbf c _{i}^{h}(y^{h}-x)<0)\ {\vee } \ (w_{i}\mathbf c _{i}^{h}(y^{h}-x)\le \sigma \quad \forall \ i=1, \ldots ,p), \end{aligned}$$
(25)

where \(a{\vee } b\) means a or b. We can rewrite (25) as:

$$\begin{aligned} \left( (\mathbf c _{1}^{h}(y^{h}-x)<0)\ {\vee } \cdots {\vee }\ (\mathbf c _{p}^{h}(y^{h}-x)<0)\right) \ {\vee }\ (w_{i}\mathbf c _{i}^{h}(y^{h}-x)\le \sigma \quad \forall \ i=1, \ldots ,p). \end{aligned}$$
(26)

Next, (26) is linearized by using p binary variables \(b_{i}^{h}\), \(i=1, \ldots ,p\), as follows:

$$\begin{aligned} \mathbf c _{i}^{h}(y^{h}-x) < Kb_{i}^{h},\quad i=1, \ldots ,p, \end{aligned}$$
(27)
$$\begin{aligned} w_{i}\mathbf c _{i}^{h}(y^{h}-x) \le \sigma +K\left( p-\sum _{j=1}^{p}b_{j}^{h}\right) ,\quad i=1, \ldots ,p, \end{aligned}$$
(28)

where K is a sufficiently large constant. If \(b_{i}^{h}=0\) for some \(i\in \{1, \ldots ,p\}\), then \(\mathbf c _{i}^{h}(y^{h}-x)<0\) must hold [by (27)]. When \(b_{i}^{h}=1\) for all \(i=1, \ldots ,p\), then from (28), \(w_{i}\mathbf c _{i}^{h}(y^{h}-x)\le \sigma \), \(i=1, \ldots ,p\), must be satisfied. To get rid of the strict inequalities, (27) can be written as:
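As a sanity check, the disjunction (26) and the big-M system (27)–(28) can be compared by brute force over the binary vectors \(b^h\). The following sketch does so on randomly generated regret values; \(K\) and \(\epsilon \) are assumed constants (big-M constant, small positive tolerance), and the random values are kept bounded away from zero so that the strict inequality and its \(\epsilon \)-shifted form coincide.

```python
# Brute-force check that the big-M constraints (27)-(28) encode disjunction (26):
# for fixed regret values f_i = c_i^h (y^h - x), the system (27)-(28) is
# satisfiable in b in {0,1}^p exactly when (26) holds.
import itertools
import random

random.seed(0)
K, eps, p = 1e6, 1e-9, 3

def disjunction(f, w, sigma):
    # (26): some f_i < 0, or w_i f_i <= sigma for all i
    return any(fi < 0 for fi in f) or all(w[i] * f[i] <= sigma for i in range(p))

def linearization_satisfiable(f, w, sigma):
    # Does some binary vector b satisfy (27)-(28),
    # with (27) written as f_i <= K b_i - eps?
    for b in itertools.product([0, 1], repeat=p):
        ok27 = all(f[i] <= K * b[i] - eps for i in range(p))
        slack = K * (p - sum(b))
        ok28 = all(w[i] * f[i] <= sigma + slack for i in range(p))
        if ok27 and ok28:
            return True
    return False

for _ in range(200):
    # keep |f_i| >= 0.01 >> eps so the eps-shift does not change the logic
    f = [random.choice([-1, 1]) * random.uniform(0.01, 10) for _ in range(p)]
    w = [random.uniform(0.1, 2) for _ in range(p)]
    sigma = random.uniform(0, 10)
    assert disjunction(f, w, sigma) == linearization_satisfiable(f, w, sigma)
```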

$$\begin{aligned} \mathbf c _{i}^{h}(y^{h}-x)\le Kb_{i}^{h}-\epsilon , \end{aligned}$$

where \(\epsilon \) is a sufficiently small positive number. Thus, the final mixed integer linear program for solving Problem (21) in Step 5 of Algorithm 4.1 is:

$$\begin{aligned}&\min \quad \sigma , \nonumber \\&s.t. \quad \mathbf c _{i}^{h}(y^{h}-x)-Kb_{i}^{h}\le -\epsilon ,\quad h=1, \ldots ,k,\quad i=1, \ldots ,p, \end{aligned}$$
(29)
$$\begin{aligned}&w_{i}\mathbf c _{i}^{h}(y^{h}-x)\le \sigma +K\left( p-\sum _{j=1}^{p}b_{j}^{h}\right) ,\quad h=1, \ldots ,k,\quad i=1, \ldots ,p, \\&b_{i}^{h}\in \{0,1\},\quad h=1, \ldots ,k,\quad i=1, \ldots ,p,\\&\quad x \in X,\ \sigma \ge 0. \end{aligned}$$
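A minimal sketch of model (29), assuming SciPy's MILP interface (version 1.9 or later) and the data of Example 2 in Sect. 4.1 after two candidate pairs \((y^1,C^1)\) and \((y^2,C^2)\) have been generated (so \(p=2\), \(h=1,2\), and \(w=(1,1)\), i.e., the weights are absorbed); the values of \(K\) and \(\epsilon \) are assumptions:

```python
# Sketch of the mixed integer program (29) via scipy.optimize.milp.
# Variables: x1, x2, sigma, b_1^1, b_2^1, b_1^2, b_2^2.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

K, eps = 1e4, 1e-6
cuts = [  # (y^h, C^h) pairs generated so far (Example 2, iteration 1)
    (np.array([10.0, 20.0]),  np.array([[2.0, 6.0], [5.0, 2.5]])),
    (np.array([11.5, 19.25]), np.array([[2.0, 2.5], [5.0, 2.0]])),
]

A, ub = [], []
# feasible region X of Example 2
for row, b in [([1, 2], 50), ([-1, 1], 12), ([5, 2], 96), ([2, -2], 22),
               ([1, 0], 16), ([0, 1], 20)]:
    A.append(row + [0] * 5); ub.append(b)

for h, (y, C) in enumerate(cuts):
    for i in range(2):
        # (27) in the form c_i^h (y^h - x) <= K b_i^h - eps
        r27 = [-C[i, 0], -C[i, 1], 0] + [0] * 4
        r27[3 + 2 * h + i] = -K
        A.append(r27); ub.append(-eps - C[i] @ y)
        # (28): w_i c_i^h (y^h - x) <= sigma + K (p - sum_j b_j^h)
        r28 = [-C[i, 0], -C[i, 1], -1] + [0] * 4
        r28[3 + 2 * h] = K; r28[3 + 2 * h + 1] = K
        A.append(r28); ub.append(2 * K - C[i] @ y)

res = milp(c=np.array([0, 0, 1, 0, 0, 0, 0]),           # minimize sigma
           constraints=LinearConstraint(np.array(A, dtype=float),
                                        ub=np.array(ub, dtype=float)),
           integrality=np.array([0, 0, 0, 1, 1, 1, 1]),  # the b's are binary
           bounds=Bounds(lb=[0] * 7,
                         ub=[np.inf, np.inf, np.inf, 1, 1, 1, 1]))
print(res.fun)  # optimal sigma; ~0 here, matching sigma^2 = 0 in Example 2
```

Each time Step 3 produces a new pair \((y^k,C^k)\), it is appended to `cuts`, which mirrors the growth of the relaxed problem described above.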

4.1 Numerical example

In this subsection, to illustrate Algorithm 4.1, a numerical example is given.

Example 2

Consider the following interval MOLP problem:

$$\begin{aligned}&\max \quad z_{1}(x)=[1,2]x_{1}+[2.5,6]x_{2}, \nonumber \\&\max \quad z_{2}(x)=[2,5]x_{1}+[2,2.5]x_{2}, \nonumber \\&s.t. \qquad x\in X, \end{aligned}$$
(30)

where

$$\begin{aligned} X= & {} \{(x_1,x_2)^t |\ x_{1}+2x_{2}\le 50,\ -x_1+x_2\le 12,\ 5x_{1}+2x_{2}\le 96,\ 2x_{1}-2x_{2}\le 22, \\&x_{1}\le 16,\ x_{2}\le 20, x_{1},\ x_{2}\ge 0\}. \end{aligned}$$

Considering \(w=(1,1)\), Algorithm 4.1 proceeds as follows:

Iteration 1:

Step 1 \(y^{1}=(10,20)^t\), \(C^{1}=\left( \begin{array}{cc} 2 &{} 6 \\ 5 &{} 2.5 \\ \end{array} \right) \).

Step 2 \(k=2\), \(\sigma ^{1}=0\) and \(x^1=y^1\).

Step 3 \(y^{2}=(11.5,19.25)^t\), \({\varPhi }^1=6\), \(C^{2}=\left( \begin{array}{cc} 2 &{} 2.5 \\ 5 &{} 2 \\ \end{array} \right) \).

Step 4 Since \({\varPhi }^1=6\nleqslant \sigma ^1=0\), the algorithm must be continued.

Step 5 \(x^2=(11.5,19.25)^t\), \(\sigma ^2=0\). Set \(k=3\) and return to Step 3.

Iteration 2:

Step 3 \(y^{3}=(11.5,19.25)^t\), \({\varPhi }^2=0\), \(C^{3}=\left( \begin{array}{cc} 2 &{} 6 \\ 5 &{} 2.5 \\ \end{array} \right) \).

Step 4 Since \({\varPhi }^2=0\le \sigma ^2=0\), the algorithm terminates.

Therefore, \(x^{k-1}=x^{2}=(11.5,19.25)^t\) is an optimal solution of Problem (8) corresponding to Problem (30). By Theorem 3, \((11.5,19.25)^t\) is a necessarily efficient solution.

5 Special case

In interval MOLP Problem (2), the objective function coefficients are not determined precisely; they are given as intervals. However, sometimes the decision maker prefers to study the problem with fixed values as objective coefficients. For this reason, he or she assigns fixed values, within the intervals, to the coefficients. Accordingly, solving Problem (8) for fixed objective function coefficients is worthwhile. Consequently, we are interested in investigating Problem (31):

$$\begin{aligned} \nu =\min _{x\in X}\max \{w_{i}\mathbf c ^{0}_{i}(y-x)| y\in X, C^{0}(y-x)\geqq 0,\quad i=1, \ldots ,p \}, \end{aligned}$$
(31)

where \(C^0\in {\varPsi }\) is a fixed matrix given by the decision maker for solving the following MOLP:

$$\begin{aligned}&\max \ C^0 x \nonumber \\&\quad s.t.\ \ x\in X. \end{aligned}$$
(32)

It should be noted that the optimal value of Problem (31) is always greater than or equal to zero.

In what follows, we denote the efficient and weak efficient solutions sets of Problem (32) by \(X_{E}\) and \(X_{WE}\), respectively.

Theorem 7

The set of efficient solutions of Problem (32) is nonempty \((X_E\ne \emptyset )\) if and only if the optimal value of Problem (31) is zero.

Proof

It follows directly from Theorem 3. \(\square \)

Corollary 2

The set of efficient solutions of Problem (32) is empty \((X_E=\emptyset )\) if and only if the optimal value of Problem (31) is positive.

Proposition 2

Suppose that the optimal value of Problem (31) is \(\nu ^*=w_{k}\mathbf c ^{0}_{k}(y^{*}-x^{*})=0\), where \(x^*\) is the unique optimal solution. Then \(x^*\) is a strictly efficient solution of Problem (32).

Proof

Suppose that \(x^*\) is not a strictly efficient solution of Problem (32). Then there exists \(\hat{x}\in X\) such that \(\hat{x}\ne x^*\) and \(C^0\hat{x}\geqq C^0 x^*\). Two cases can occur:

  • \(C^0\hat{x}\succeq C^0 x^*\). In this case,

    $$\begin{aligned} \max \{w_{i}\mathbf c ^{0}_{i}(y-x^*)| y\in X, C^{0}(y-x^*)\geqq 0,\quad i=1, \ldots ,p \}>0, \end{aligned}$$

    which is a contradiction.

  • \(C^0\hat{x}= C^0 x^*\). In this case,

    $$\begin{aligned} \max \{w_{i}\mathbf c ^{0}_{i}(y-x^*)| y\in X, C^{0}(y-x^*)\geqq 0,\quad i=1, \ldots ,p \}=w_{k}\mathbf c _{k}^{0}(\hat{x}-x^*)=0. \end{aligned}$$

    Thus, \(\hat{x}\) is an optimal solution of Problem (31), which is a contradiction with the uniqueness of \(x^*\).

\(\square \)

Theorem 8

If \(x^*\) is an optimal solution of Problem (31) with \(\nu ^*=w_{k}\mathbf c ^{0}_{k}(y^{*}-x^{*})>0\), then \(x^*\in X_{WE}\).

Proof

It follows directly from Theorem 5. \(\square \)

In order to solve Problem (31), an algorithm similar to Algorithm 4.1 can be used. The changes are as follows:

  1.

    Step 1 of Algorithm 4.1 should be replaced by: Step 1 Solve the linear programming problems \(\max _{x\in X}\mathbf c ^{0}_{i}x\), \(i=1, \ldots ,p\). Suppose an optimal solution of the \(i\hbox {th}\) problem is \(x^{i*}\), \(i=1, \ldots ,p\), and \(\mathbf c _{t}^{0}x^{t*}=\max _{1\le i\le p}\mathbf c _{i}^{0}x^{i*}\). Then choose \(y^{1}\) as \(x^{t*}\).

  2.

    The problem of Step 3 is:

    $$\begin{aligned} \max \{w_{i}\mathbf c ^{0}_{i}(y-x^{k-1})| y\in X, C^0(y-x^{k-1})\geqq 0,\quad i=1, \ldots ,p \}. \end{aligned}$$
    (33)
  3.

    The problem of Step 5 is:

    $$\begin{aligned}&\min \quad \sigma , \nonumber \\&s.t. \quad \max _{1\le i\le p}w_{i}\mathbf c _{i}^{0}(y^{h}-x)\le \sigma ,\quad h=1, \ldots ,k, \quad \text{ if }\ C^{0}(y^h-x)\geqq 0,\nonumber \\&\quad x \in X,\ \sigma \ge 0. \end{aligned}$$
    (34)

It should be noted that in the modified Algorithm 4.1 for the special case, solving the problem of Step 3 is much easier. In fact, the optimal value of Problem (33) can be obtained by solving p linear programs.

5.1 Numerical example

Problem (31) is used to deal with an MOLP problem in the following example.

Example 3

Consider the following MOLP problem which is taken from Ehrgott (2005):

$$\begin{aligned}&\max \quad \ z_{1}(x)=x_{1}+2x_{2}, \nonumber \\&\max \quad \ z_{2}(x)=x_{1}-2x_{3}, \nonumber \\&\max \quad \ z_{3}(x)=-x_{1}+x_{3}, \nonumber \\&\quad s.t. \qquad x\in X=\left\{ (x_1,x_2,x_3)^t\,|\ x_{1}+x_{2}\le 1,\ x_{2}\le 2,\ x_{1}-x_{2}+x_{3}\le 4,\ x_{i}\ge 0,\quad i=1,2,3 \right\} . \end{aligned}$$
(35)

Considering \(w=(1,1,1)\), the modified Algorithm 4.1 proceeds as follows:

Iteration 1:

Step 1 \(y^{1}=(0,1,5)^t\).

Step 2 \(k=2\), \(\sigma ^{1}=0\) and \(x^1=y^1\).

Step 3 \(y^{2}=(0,1,5)^t\), \({\varPhi }^1=0\).

Step 4 Since \({\varPhi }^1\le \sigma ^1=0\), the algorithm terminates.

After solving Problem (35) via (31), the optimal solution \(x^*=(0,1,5)^t\) is obtained with optimal value zero. By Theorem 7, \(x^*=(0,1,5)^t\) is an efficient solution, as was also shown in Ehrgott (2005).
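Since in the special case the problem of Step 3 reduces to p linear programs, the computation in Example 3 can be reproduced with an off-the-shelf LP solver. A sketch assuming SciPy's linprog, with \(x^*=(0,1,5)^t\) and all weights equal to one:

```python
# Sketch: the three LPs max { w_i c_i^0 (y - x*) | y in X, C^0 (y - x*) >= 0 }
# of Problem (33) for Example 3; their common optimal value 0 gives Phi = 0,
# confirming that x* = (0, 1, 5)^t is optimal for Problem (31).
import numpy as np
from scipy.optimize import linprog

C0 = np.array([[1.0, 2.0, 0.0],    # z1 = x1 + 2 x2
               [1.0, 0.0, -2.0],   # z2 = x1 - 2 x3
               [-1.0, 0.0, 1.0]])  # z3 = -x1 + x3
x_star = np.array([0.0, 1.0, 5.0])

# X: x1 + x2 <= 1, x2 <= 2, x1 - x2 + x3 <= 4, x >= 0
A_X = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [1.0, -1.0, 1.0]])
b_X = np.array([1.0, 2.0, 4.0])

# C^0 (y - x*) >= 0  <=>  -C^0 y <= -C^0 x*
A_ub = np.vstack([A_X, -C0])
b_ub = np.concatenate([b_X, -C0 @ x_star])

values = []
for i in range(3):
    # maximize c_i^0 (y - x*)  <=>  minimize -c_i^0 y, then shift by c_i^0 x*
    res = linprog(c=-C0[i], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    values.append(-res.fun - C0[i] @ x_star)

print(max(values))  # Phi: ~0, so the algorithm terminates at x*
```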

6 Conclusion

This paper focused on multiobjective linear programming problems with interval objective function coefficients. In order to solve such problems, the minimax regret criterion, a credible criterion for dealing with uncertainty, was used. A new model based on the minimax regret criterion was suggested which has desirable properties. One important property of the new model is that it yields a necessarily efficient solution as an optimal one whenever the set of necessarily efficient solutions is nonempty. Moreover, an optimal solution of the model is at least possibly weakly efficient. An algorithm was proposed for obtaining an optimal solution of the new model, and a numerical example was given to illustrate the algorithm and its performance. Finally, the model with fixed objective function coefficients was discussed as a special case.