1 Introduction

Interval linear programming (IvLP) problems are of perennial interest because of their direct relevance to practical modeling and optimization of real-world processes [17]. An interval matrix is defined as

$$\begin{aligned} {{\mathbf {A}}}=[\underline{A},\overline{A}]=\{A\in \mathbb {R}^{m\times n}:\underline{A}\le A \le \overline{A}\}, \end{aligned}$$

where \(\underline{A}, \overline{A}\in {\mathbb {R}^{m\times n}}\) are given matrices with \(\underline{A}\le \overline{A}\), and matrix inequalities are understood componentwise. Similarly, an interval vector is defined as a one-column interval matrix

$$\begin{aligned} {{\mathbf {b}}}=[\underline{b},\overline{b}]=\{b\in {\mathbb {R}^{m}}:\underline{b}\le b \le \overline{b}\}, \end{aligned}$$

where \(\underline{b},\overline{b}\in {\mathbb {R}^{m}}\), and \(\underline{b}\le \overline{b}\). The set of all \(m\)-by-\(n\) interval matrices will be denoted by \(\mathbb {IR}^{m\times n}\) and the set of all \(m\)-dimensional interval vectors by \(\mathbb {IR}^{m}\).

In 2012, Hladík [4] posed an open problem: given \(x^*\in \mathbb {R}^n\), is it an optimal solution for some realization? Some special cases have been discussed. Steuer, Hladík, and Ishibuchi and Tanaka studied LP problems with interval objective function coefficients [4, 8, 9]. Li and Wang [10] and Gabrel et al. [11] studied LP problems with interval right-hand sides. Optimal solutions of IvLP are also discussed in the recent papers [12, 13]. Methods for checking weak optimality of a solution to three types of interval LP problems (with interval right-hand side, with equality constraints, and with inequality constraints) are investigated separately in [14].

Recently, Hladík [15] studied interval linear systems consisting of mixed equations and inequalities with mixed free and sign-restricted variables. He generalized the well-known weak solvability characterizations of Oettli and Prager (for equations) and Gerlach (for inequalities) to a unified framework. Building on the result of [15], in this paper we consider IvLP problems in the general form (see details in the next section) and propose necessary and sufficient conditions for checking weak optimality of a given vector, based on the KKT conditions of linear programming and Hladík's weak solvability characterizations of mixed interval systems.

The remainder of this paper is organized as follows: Sect. 2 provides some preliminary results used in our proofs. Section 3 deals with the problem of checking weak optimality of the IvLP in the general form. Some corollaries are given in Sect. 4.

2 Preliminaries

Let us introduce some notation. The \(i\)th row of a matrix \( A\in \mathbb {R}^{m\times n}\) is denoted by \(A_{i,\cdot }\) and the \(j\)th column by \(A_{\cdot ,j}\). By

$$\begin{aligned} A_{c}=\frac{1}{2}(\underline{A}+\overline{A}),\quad A_{\Delta }=\frac{1}{2}(\overline{A}-\underline{A}), \end{aligned}$$

we denote the center and the radius matrices of \({{\mathbf {A}}}\), respectively. Similarly, the center and radius vectors of \({{\mathbf {b}}}\) are defined as

$$\begin{aligned} b_{c}=\frac{1}{2}(\underline{b}+\overline{b}),\quad b_{\Delta }=\frac{1}{2}(\overline{b}-\underline{b}), \end{aligned}$$

respectively. Let \(e=(1, \ldots , 1)^{T}\) be the \(m\)-dimensional vector of all ones and, for a given \(y\in \mathbb {R}^m\), let

$$\begin{aligned} T_y=diag(y_1, \ldots , y_m) \end{aligned}$$

denote the corresponding diagonal matrix.

Let \({{\mathbf {A}}}\in \mathbb {IR}^{m_1 \times n_1}\), \({{\mathbf {B}}}\in \mathbb {IR}^{m_1 \times n_2}\), \({{\mathbf {C}}}\in \mathbb {IR}^{m_2 \times n_1}\), \({{\mathbf {D}}}\in \mathbb {IR}^{m_2 \times n_2}\), \({{\mathbf {b}}}\in \mathbb {IR}^{m_1}\), \({{\mathbf {d}}}\in \mathbb {IR}^{m_2}\), and let \({{\mathbf {c}^{\mathbf {1}}}} \in \mathbb {IR}^{n_1}\), \({{\mathbf {c}^{\mathbf {2}}}} \in \mathbb {IR}^{n_2}\) be interval row vectors, where \(m_1+m_2=m\) and \(n_1+n_2=n\). Consider the IvLP problem in the general form

$$\begin{aligned}&\text {min}~{{\mathbf {c}^{\mathbf {1}}}}x^1 + {{\mathbf {c}^{\mathbf {2}}}}x^2\end{aligned}$$
(1a)
$$\begin{aligned}&{\left\{ \begin{array}{ll} \text {s.t. } &{} {{\mathbf {A}}}x^1 + {{\mathbf {B}}}x^2 = {{\mathbf {b}}}, \\ &{} {{\mathbf {C}}}x^1 + {{\mathbf {D}}}x^2 \le {{\mathbf {d}}},\\ &{} \quad x^1\ge 0 \end{array}\right. } \end{aligned}$$
(1b)

By (1) we understand the family of all LP problems

$$\begin{aligned}&\text {min}~{c^1}x^1 + {c^2}x^2\end{aligned}$$
(2a)
$$\begin{aligned}&{\left\{ \begin{array}{ll} \text {s.t.}&{} {A}x^1 + {B}x^2 = {b}, \\ &{} {C}x^1 + {D}x^2 \le {d},\\ &{}\quad x^1\ge 0 \end{array}\right. } \end{aligned}$$
(2b)

with data satisfying \(A\in {{\mathbf {A}}}\), \(B\in {{\mathbf {B}}}\), \(C\in {{\mathbf {C}}}\), \(D\in {{\mathbf {D}}}\), \(b\in {{\mathbf {b}}}\), \(d\in {{\mathbf {d}}}\), \(c^1 \in {{\mathbf {c}^{\mathbf {1}}}}\), and \({c^2}\in {\mathbf {c}}^{\mathbf {2}}\). A scenario, or realization, means a concrete setting of (2).

Definition 2.1

A vector \(x\) is called a weak feasible solution of the IvLP problem (1) if it satisfies (2b), for some \(A\in {{\mathbf {A}}}, B\in {{\mathbf {B}}}\), \(C\in {{\mathbf {C}}}, D\in {{\mathbf {D}}}\), \(b\in {{\mathbf {b}}}\) and \(d\in {{\mathbf {d}}}\).

Definition 2.2

Let \(x\) be a weak feasible solution to (1). It is called weakly optimal, or a weak optimal solution to (1), if it is an optimal solution for some scenario of (2).

For interval systems of equations \({{\mathbf {A}}}x={{\mathbf {b}}}\), the set of all weak feasible solutions is described by the Oettli–Prager theorem: \(|{A_c}x-b_c| \le A_\Delta |x|+b_\Delta \). For interval inequalities \({{\mathbf {A}}}x \le {{\mathbf {b}}}\), Gerlach's theorem gives the description \(A_cx\le A_\Delta |x|+\overline{b}\). The following results of Hladík [15] characterize weak solvability of mixed systems of interval equations and inequalities.
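As a quick illustration, both membership tests amount to a handful of componentwise comparisons. Below is a minimal Python/numpy sketch (the helper functions and their names are ours, not code from [15]); the interval data are passed via the center \(A_c\), radius \(A_\Delta \) and the vector bounds:

```python
import numpy as np

def oettli_prager(Ac, Ad, bc, bd, x):
    """Weak solvability of A x = b: |Ac x - bc| <= Ad |x| + bd componentwise."""
    return bool(np.all(np.abs(Ac @ x - bc) <= Ad @ np.abs(x) + bd))

def gerlach(Ac, Ad, b_up, x):
    """Weak solvability of A x <= b: Ac x <= Ad |x| + b_upper componentwise."""
    return bool(np.all(Ac @ x <= Ad @ np.abs(x) + b_up))
```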

Theorem 2.1

Hladík [15]. A vector \(x=\begin{pmatrix} {x}^1 \\ {x}^2 \end{pmatrix}\in \mathbb {R}^{n} \) is a weak feasible solution of (1) if and only if it satisfies

$$\begin{aligned} \underline{A}x^1 + B_cx^2&\le B_\Delta |x^2| + \overline{b},\nonumber \\ -\overline{A}x^1 - B_cx^2&\le B_\Delta |x^2| - \underline{b},\nonumber \\ \underline{C}x^1 + D_cx^2&\le D_\Delta |x^2| + \overline{d},\nonumber \\ x^1&\ge 0, \end{aligned}$$
(3)

where \(x^1 \in \mathbb {R}^{n_1}, x^2 \in \mathbb {R}^{n_2}\), and \(n_1+n_2=n\).
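Conditions (3) can likewise be verified by direct evaluation. A minimal sketch under our own naming conventions (a hypothetical helper, assuming the interval data are given through the bounds, centers and radii appearing in (3)):

```python
import numpy as np

def weak_feasible(A_lo, A_up, Bc, Bd, b_lo, b_up, C_lo, Dc, Dd, d_up, x1, x2):
    """Check conditions (3) of Theorem 2.1 componentwise."""
    if np.any(x1 < 0):
        return False
    abs_x2 = np.abs(x2)
    return bool(np.all(A_lo @ x1 + Bc @ x2 <= Bd @ abs_x2 + b_up) and
                np.all(-A_up @ x1 - Bc @ x2 <= Bd @ abs_x2 - b_lo) and
                np.all(C_lo @ x1 + Dc @ x2 <= Dd @ abs_x2 + d_up))
```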

Theorem 2.2

Hladík [15]. Let \(x=\begin{pmatrix} {x}^1 \\ {x}^2 \end{pmatrix}\) be a weak feasible solution to (1). Then \(x\) solves a realization of (1b) with

$$\begin{aligned} \begin{array}{lllllllllll} A&{}=&{}A_c-T_uA_\Delta ,&{}\quad B&{}=&{}B_c-T_uB_\Delta T_s,&{}\quad b&{}=&{}b_c+T_ub_\Delta ,\\ C&{}=&{}\underline{C},&{}\quad D&{}=&{}D_c-D_\Delta T_s,&{}\quad d&{}=&{}\overline{d},\\ \end{array} \end{aligned}$$
(4)

where \(s=sgn(x^2)\) and \(u\in \mathbb {R}^{m_1}\), \(|u|\le e\), is defined as

$$\begin{aligned} u_i= {\left\{ \begin{array}{ll} \frac{\left( A_cx^1+B_cx^2-b_c\right) _i}{\left( A_\Delta x^1+B_\Delta |x^2|+b_\Delta \right) _i} &{} \mathrm{if}\,(A_\Delta x^1+B_\Delta |x^2|+b_\Delta )_i>0 \\ 1 &{} \mathrm{otherwise}\\ \end{array}\right. },\,i = 1, \ldots , m_1. \end{aligned}$$
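The scenario (4) is straightforward to assemble once \(u\) and \(s\) have been computed. Here is a minimal numpy sketch (our own helper, assuming \(x=(x^1,x^2)\) has already passed the test (3)):

```python
import numpy as np

def realization(Ac, Ad, Bc, Bd, bc, bd, C_lo, Dc, Dd, d_up, x1, x2):
    """Build the realization (4) of (1b) that x = (x1, x2) satisfies (Theorem 2.2)."""
    s = np.sign(x2)                                   # s = sgn(x^2)
    num = Ac @ x1 + Bc @ x2 - bc
    den = Ad @ x1 + Bd @ np.abs(x2) + bd
    u = np.where(den > 0, num / np.where(den > 0, den, 1.0), 1.0)
    Tu, Ts = np.diag(u), np.diag(s)
    A = Ac - Tu @ Ad
    B = Bc - Tu @ Bd @ Ts
    b = bc + Tu @ bd
    C, D, d = C_lo, Dc - Dd @ Ts, d_up
    return A, B, C, D, b, d
```

We take \(sgn(0)=0\) here (the value of `np.sign` at zero), which is immaterial for feasibility since the corresponding components of \(x^2\) vanish.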

In the next section, we propose a polynomial-time method for checking weak optimality of a given vector.

3 Main results

Theorem 3.1

Let \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\in \mathbb {R}^n\) be a weak feasible solution of (1); then, by Theorem 2.2, \(\bar{x}\) is a feasible solution of the realization of the interval system (1b) in which \(A, B, C, D, b, d\) are given by (4). Write \(\bar{x}^1=(\bar{x}^1_1,\ldots , \bar{x}^1_{n_1})^T\) and \(\bar{x}^2=(\bar{x}^2_1,\ldots ,\bar{x}^2_{n_2})^T\), and assume that, for some permutation \(k_1,\ldots ,k_{n_1}\) of \(1,\ldots ,n_1\),

$$\begin{aligned} \left\{ \begin{array}{l} \bar{x}^1_{k_i}>0 \quad i=1,\ldots ,p\\ \bar{x}^1_{k_i}=0 \quad i=p+1,\ldots ,n_1. \\ \end{array} \right. \end{aligned}$$

Denote by

$$\begin{aligned} F=\left\{ r_j|j=1,\ldots ,q, C_{r_j, \cdot }\bar{x}^1+D_{r_j, \cdot }\bar{x}^2 < d_{r_j} \right\} . \end{aligned}$$
(5)

Here \(q=|F|\) and \(r_1,\ldots ,r_{m_2}\) is a permutation of \(1,\ldots ,m_2\); thus \(\bar{x}\) satisfies the inequality constraints indexed by \(r_{q+1},\ldots ,r_{m_2}\) with equality. Then \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\) is a weak optimal solution to (1) if and only if the linear system

$$\begin{aligned} y^1A_{\cdot ,k_i}+y^2C_{\cdot ,k_i}&=c^1_{k_i},\quad i=1,\ldots ,p, \end{aligned}$$
(6a)
$$\begin{aligned} y^1A_{\cdot ,k_i}+y^2C_{\cdot ,k_i}&\le c^1_{k_i},\quad i=p+1,\ldots ,n_1, \end{aligned}$$
(6b)
$$\begin{aligned} y^1B+y^2D&=c^2, \end{aligned}$$
(6c)
$$\begin{aligned} y^2_{r_j}&=0,\quad j=1,\ldots ,q, \end{aligned}$$
(6d)
$$\begin{aligned} y^2_{r_j}&\le 0,\quad j=q+1,\ldots ,m_2, \end{aligned}$$
(6e)
$$\begin{aligned} \underline{c^1}\le c^1&\le \overline{c^1}, \end{aligned}$$
(6f)
$$\begin{aligned} \underline{c^2}\le c^2&\le \overline{c^2}, \end{aligned}$$
(6g)

is solvable, where \(\underline{c^k}\) and \(\overline{c^k}\) (\(k=1,2\)) are the lower and upper bounds of the interval vector \({\mathbf {c}^{\mathbf {k}}}\), respectively. Here \(c^k_i\) is the \(i\)-th element of the vector \(c^k\), and \(y^1=(y^1_1,\ldots ,y^1_{m_1}) \in \mathbb {R}^{m_1}\), \(y^2=(y^2_1,\ldots ,y^2_{m_2})\in \mathbb {R}^{m_2}\) are row vectors.

Proof

“If”: Assume that the linear system (6) is solvable, and let \(\bar{y}=(\bar{y}^1,\bar{y}^2)\), \(c^1\in {\mathbf {c}^{\mathbf {1}}}\) and \(c^2 \in {\mathbf {c}^{\mathbf {2}}}\) be a solution to (6). We show that \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\) is a weak optimal solution to (1). From (6a) we have

$$\begin{aligned} \bar{y}^1(A_{\cdot ,k_1},A_{\cdot ,k_2},\ldots ,A_{\cdot ,k_p})+\bar{y}^2(C_{\cdot ,k_1},C_{\cdot ,k_2},\ldots ,C_{\cdot ,k_p})=\left( c^1_{k_1},c^1_{k_2},\ldots ,c^1_{k_p}\right) . \end{aligned}$$

Consequently,

$$\begin{aligned}&\bar{y}^1(A_{\cdot ,k_1},A_{\cdot ,k_2},\ldots ,A_{\cdot ,k_p}) \begin{pmatrix} \bar{x}^1_{k_1}\\ \bar{x}^1_{k_2}\\ \vdots \\ \bar{x}^1_{k_p} \end{pmatrix}+\bar{y}^2(C_{\cdot ,k_1},C_{\cdot ,k_2},\ldots ,C_{\cdot ,k_p}) \begin{pmatrix} \bar{x}^1_{k_1}\\ \bar{x}^1_{k_2}\\ \vdots \\ \bar{x}^1_{k_p} \end{pmatrix}\\&\quad =(c^1_{k_1},c^1_{k_2},\ldots ,c^1_{k_p})\begin{pmatrix} \bar{x}^1_{k_1}\\ \bar{x}^1_{k_2}\\ \vdots \\ \bar{x}^1_{k_p} \end{pmatrix} \end{aligned}$$

or

$$\begin{aligned} \bar{y}^1\sum \limits _{i=1}^{p}A_{\cdot ,k_i}\bar{x}^1_{k_i}+\bar{y}^2\sum \limits _{i=1}^{p}C_{\cdot ,k_i}\bar{x}^1_{k_i}=\sum \limits _{i=1}^{p}c^1_{k_i}\bar{x}^1_{k_i}. \end{aligned}$$
(7)

Since \(\bar{x}^1_{k_{p+1}}=\bar{x}^1_{k_{p+2}}=\cdots =\bar{x}^1_{k_{n_1}}=0\), we get

$$\begin{aligned} A\bar{x}^1=\sum \limits _{i=1}^{n_1}A_{\cdot ,k_i}\bar{x}_{k_i}^1=\sum \limits _{i=1}^{p}A_{\cdot ,k_i}\bar{x}_{k_i}^1,\,\,C\bar{x}^1=\sum \limits _{i=1}^{n_1}C_{\cdot ,k_i}\bar{x}_{k_i}^1=\sum \limits _{i=1}^{p}C_{\cdot ,k_i}\bar{x}_{k_i}^1. \end{aligned}$$
(8)

Hence from (7), (8), we obtain

$$\begin{aligned} \bar{y}^1A\bar{x}^1+\bar{y}^2C\bar{x}^1 = c^1\bar{x}^1. \end{aligned}$$
(9)

From (6c) we have

$$\begin{aligned} \bar{y}^1B+\bar{y}^2D=c^2. \end{aligned}$$

So

$$\begin{aligned} \bar{y}^1B\bar{x}^2+\bar{y}^2D\bar{x}^2=c^2\bar{x}^2. \end{aligned}$$
(10)

Adding (9) and (10), we obtain

$$\begin{aligned} \bar{y}^1({A\bar{x}^1+B\bar{x}^2})+\bar{y}^2({C\bar{x}^1+D\bar{x}^2}) = c^1\bar{x}^1+c^2\bar{x}^2. \end{aligned}$$
(11)
  (a)

    If the set \(F\) defined by (5) is empty, i.e., \(F=\{r_j | j=1,\ldots ,q, C_{r_j, \cdot } \bar{x} ^1+D_{r_j, \cdot } \bar{x} ^2 < d _{r_j}\} = \emptyset \), then

    $$\begin{aligned} C{\bar{x} ^1}+D {\bar{x} ^2}=d. \end{aligned}$$
    (12)
    (a.1)

      Then \(\bar{x}^1,\bar{x}^2\) solve the system

      $$\begin{aligned} Cx^1+Dx ^2\le d, x^1 \ge 0 \end{aligned}$$
      (13)
    (a.2)

      Clearly \(\bar{x}^1,\bar{x}^2\) solve the system

      $$\begin{aligned} A x^1+B x^2 = b, x^1\ge 0 \end{aligned}$$
      (14)

      since \(\bar{x}^1,\bar{x}^2\) is a weak feasible solution to (1).

    (a.3)

      Obviously, \(\bar{y}=(\bar{y}^1,\bar{y}^2)\) solves the system

      $$\begin{aligned} y^1A+y^2C&\le c^1,\nonumber \\ y^1B+y^2D&= c^2,\nonumber \\ y^2&\le 0, \end{aligned}$$
      (15)

      since \(\bar{y}=(\bar{y}^1,\bar{y}^2)\) satisfies the system (6).

From (11), (12) and (14) we have

$$\begin{aligned} \bar{y}^1{b}+\bar{y}^2{d}=c^1\bar{x}^1+c^2\bar{x}^2. \end{aligned}$$
(16)

Conditions (13)–(15) together with (16) constitute the KKT conditions for the LP problem (2) with this scenario. Thus \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\) is an optimal solution to this scenario of (2) and hence a weak optimal solution to (1).

  (b)

    If \(F=\{r_j|j=1,\ldots ,q, C_{r_j, \cdot }\bar{x}^1+D_{r_j,\cdot }\bar{x}^2< d_{r_j}\} \ne \emptyset \), then (6d) gives \(\bar{y}^2_{r_j}=0\), \(j=1,\ldots ,q\). Moreover, by the definition of \(F\),

    $$\begin{aligned} C_{r_j, \cdot }\bar{x}^1+D_{r_j,\cdot }\bar{x}^2=d_{r_j},\quad j=q+1,\ldots ,m_2. \end{aligned}$$
    (b.1)

      Note that \((C \bar{x}^1+D \bar{x}^2)_k= C_{k, \cdot }\bar{x}^1+D_{k, \cdot } \bar{x}^2, k=1,\ldots ,m_2\). Thus for \(k\in F\), we have

      $$\begin{aligned} (C \bar{x}^1+D \bar{x}^2)_k=C_{k, \cdot }\bar{x}^1+D_{k, \cdot } \bar{x}^2<d_k. \end{aligned}$$

For \(k\notin F\), we have

$$\begin{aligned} (C \bar{x}^1+D \bar{x}^2)_k=C_{k, \cdot }\bar{x}^1+D_{k, \cdot } \bar{x}^2 =d_k. \end{aligned}$$

Clearly \(\bar{x}^1,\bar{x}^2\) solve the system

$$\begin{aligned} C x^1+D x^2\le d,\quad x^1\ge 0. \end{aligned}$$
(17)

Now we prove that

$$\begin{aligned} \bar{y}^2(C\bar{x}^1+D\bar{x}^2)=\bar{y}^2d \end{aligned}$$
(18)

In fact,

$$\begin{aligned} \bar{y}^2(C\bar{x}^1 +D\bar{x} ^2)&= \left( \bar{y}^2_{r_1},\ldots ,\bar{y}^2_{r_q},\bar{y}^2_{r_{q+1}},\ldots ,\bar{y}^2_{r_{m_2}}\right) \begin{pmatrix} C_{ r_1,\cdot }\bar{x}^1+D_{r_1,\cdot }\bar{x}^2\\ \vdots \\ C_{ r_q,\cdot }\bar{x}^1+D_{r_q,\cdot }\bar{x}^2\\ C_{r_{q+1},\cdot }\bar{x}^1+D_{r_{q+1},\cdot }\bar{x}^2\\ \vdots \\ C_{r_{m_2},\cdot }\bar{x}^1+D_{r_{m_2},\cdot }\bar{x}^2\\ \end{pmatrix}\\&= \left( 0,\ldots ,0,\bar{y}^2_{r_{q+1}},\ldots ,\bar{y}^2_{r_{m_2}}\right) \begin{pmatrix} d_{r_1}\\ \vdots \\ d_{r_q}\\ d_{r_{q+1}}\\ \vdots \\ d_{r_{m_2}}\\ \end{pmatrix} =\bar{y}^2d, \end{aligned}$$

where the second equality uses \(\bar{y}^2_{r_j}=0\) for \(j=1,\ldots ,q\) (so the first \(q\) entries of the column vector are immaterial) and \(C_{r_j,\cdot }\bar{x}^1+D_{r_j,\cdot }\bar{x}^2=d_{r_j}\) for \(j=q+1,\ldots ,m_2\).
  (b.2)

    Clearly, \(\bar{x}^1,\bar{x}^2\) solve the system

    $$\begin{aligned} A x^1+B x^2 = b, x^1\ge 0, \end{aligned}$$
    (19)

    since \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\) is a weak feasible solution to (1).

  (b.3)

    Obviously, \(\bar{y}=(\bar{y}^1,\bar{y}^2)\) solves the system (15), since \(\bar{y}=(\bar{y}^1,\bar{y}^2)\) satisfies the system (6).

From (11), (18) and (19) we know that (16) holds. Clearly, conditions (15)–(17) and (19) together with (16) constitute the KKT conditions for the LP problem (2) with this scenario. Thus \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\) is an optimal solution to this scenario of (2) and hence a weak optimal solution to (1).

“Only if”: Assume that \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\) is a weak optimal solution to (1). Thus there exist coefficient matrices \(A, B, C, D\) and vectors \(b, d\) defined by (4), together with \(c^1 \in {\mathbf {c}^{\mathbf {1}}}\) and \(c^2 \in {\mathbf {c}^{\mathbf {2}}}\), such that \(\bar{x}=\begin{pmatrix} {\bar{x}}^1 \\ {\bar{x}}^2 \end{pmatrix}\) is an optimal solution to the LP problem (2).

By the duality theory of linear programming, there exists \(\bar{y}=(\bar{y}^1,\bar{y}^2)\) that solves the dual LP problem

$$\begin{aligned} \mathrm{max}\,\,{y}^1 b+{y}^2 d&\quad \nonumber \\ \mathrm{s.t.}\,{y}^1 A+{y}^2 C&\le c^1\nonumber \\ {y}^1 B+{y}^2 D&= c^2 \nonumber \\ {y}^2&\le 0. \end{aligned}$$
(20)

and, by complementary slackness, it holds that

$$\begin{aligned} (c^1-\bar{y}^1A-\bar{y}^2C)\bar{x}^1&=0 \end{aligned}$$
(21)
$$\begin{aligned} \bar{y}^2(C\bar{x}^1+D\bar{x}^2-d)&=0 \end{aligned}$$
(22)

Since \(\bar{x}^1\ge 0\) and, by (20), \(c^1-\bar{y}^1A-\bar{y}^2C\ge 0\), condition (21) forces \((c^1-\bar{y}^1A-\bar{y}^2C)_{k_i}=0\) for \(i=1,\ldots ,p\), which is (6a); similarly, (22) forces \(\bar{y}^2_{r_j}=0\) for \(r_j\in F\), which is (6d). The constraints of (20) give (6b), (6c) and (6e), and conditions (6f), (6g) hold because \(c^1\in {\mathbf {c}^{\mathbf {1}}}\) and \(c^2\in {\mathbf {c}^{\mathbf {2}}}\). So \(\bar{y}=(\bar{y}^1,\bar{y}^2)\), together with \(c^1\) and \(c^2\), solves system (6).

This completes the proof of the theorem. \(\square \)

Remark

Obviously, Theorem 3.1 can be slightly simplified, since condition (6) is equivalent to

$$\begin{aligned} \underline{c^1}_{k_i}\le y^1A_{\cdot ,k_i}+y^2C_{\cdot ,k_i}&\le \overline{c^1}_{k_i},\quad i=1,\ldots ,p,\nonumber \\ y^1A_{\cdot ,k_i}+y^2C_{\cdot ,k_i}&\le \overline{c^1}_{k_i},\quad i=p+1,\ldots ,n_1,\nonumber \\ \underline{c^2}\le y^1B+y^2D&\le \overline{c^2},\nonumber \\ y^2_{r_j}&=0,\quad j=1,\ldots ,q,\nonumber \\ y^2_{r_j}&\le 0,\quad j=q+1,\ldots ,m_2, \end{aligned}$$
(23)

where the variables \(c^1, c^2\) in (6) are eliminated.

Theorem 3.1 presents necessary and sufficient conditions for checking weak optimality of a given weak feasible solution. The resulting method is simple to implement and runs in polynomial time, since it only requires checking solvability of the linear system (6). Thus, the open problem posed by Hladík [4] is completely solved: given \(x^*\in \mathbb {R}^n\), we first check weak feasibility by conditions (3) of Theorem 2.1 and then check weak optimality by conditions (6) of Theorem 3.1.
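To make the procedure concrete, the following sketch (our own code, with hypothetical names such as `weakly_optimal`; it is not part of the paper) casts the solvability of system (6) as an LP feasibility problem in the unknowns \((y^1,y^2,c^1,c^2)\) and passes it to SciPy's `linprog`. The arguments `A, B, C, D, d` are the realization (4) computed for \(\bar{x}\), and a small tolerance replaces the exact comparisons in (5) and in \(\bar{x}^1_k>0\) for floating-point data:

```python
import numpy as np
from scipy.optimize import linprog

def weakly_optimal(A, B, C, D, d, c1_lo, c1_up, c2_lo, c2_up, x1, x2, tol=1e-9):
    """Feasibility test for system (6), given the realization (A, B, C, D, b, d) from (4)."""
    m1, n1 = A.shape
    m2, n2 = D.shape
    P = np.where(x1 > tol)[0]                   # indices k_1, ..., k_p  (x1_k > 0)
    Z = np.where(x1 <= tol)[0]                  # indices k_{p+1}, ..., k_{n_1}
    strict = C @ x1 + D @ x2 < d - tol          # rows belonging to the set F of (5)

    nv = m1 + m2 + n1 + n2                      # unknowns z = (y1, y2, c1, c2)
    def col_row(k):
        """Coefficient row encoding y1*A[:,k] + y2*C[:,k] - c1[k]."""
        r = np.zeros(nv)
        r[:m1], r[m1:m1 + m2], r[m1 + m2 + k] = A[:, k], C[:, k], -1.0
        return r

    # (6a) and (6c) as equalities, (6b) as inequalities (<= 0).
    A_eq = np.vstack([col_row(k) for k in P] +
                     [np.hstack([B.T, D.T, np.zeros((n2, n1)), -np.eye(n2)])])
    b_eq = np.zeros(A_eq.shape[0])
    A_ub = np.vstack([col_row(k) for k in Z]) if len(Z) else None
    b_ub = np.zeros(len(Z)) if len(Z) else None
    # (6d)-(6g) as variable bounds: y1 free, y2 = 0 on F and <= 0 off F, c within its interval.
    bounds = ([(None, None)] * m1 +
              [(0.0, 0.0) if strict[i] else (None, 0.0) for i in range(m2)] +
              list(zip(c1_lo, c1_up)) + list(zip(c2_lo, c2_up)))

    res = linprog(np.zeros(nv), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.success                          # True iff (6) is solvable
```

In exact arithmetic the tolerance is zero; `tol` merely guards against floating-point noise when forming the index sets, and any LP solver could replace `linprog` here.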

4 Some corollaries

For convenience, IvLP problems are usually divided into three canonical forms: Types A, B and C [1, 3]. Several important corollaries, tailored to IvLP problems of Types A, B and C respectively, are obtained by letting some of the constraints or coefficient interval matrices (vectors) in problem (1) be zero.

4.1 Type A

Consider the IvLP problem in the form of Type A:

$$\begin{aligned} \mathrm{min}~{\mathbf {c}}x ~~~~\nonumber \\ \mathrm{s.t.}~~{{\mathbf {A}}}x={{\mathbf {b}}},\nonumber \\ x\ge 0. \end{aligned}$$
(24)

A method for checking weak optimality of a given vector for problem (24) is obtained by letting \({{\mathbf {B}}}, {{\mathbf {C}}}, {{\mathbf {D}}}, {{\mathbf {d}}}, {{\mathbf {c}^{\mathbf {2}}}}\) be zeros and taking \({{\mathbf {A}}} \in \mathbb {IR}^{m\times n}\), \({{\mathbf {b}}} \in \mathbb {IR}^m\), \({{\mathbf {c}^{\mathbf {1}}}}={\mathbf {c}}\in \mathbb {IR}^n\) in problem (1). In fact, for a given vector \(x\in \mathbb {R}^n\), we first check whether it is weakly feasible by the conditions \(\underline{A}x\le \bar{b}\), \(-\bar{A}x\le -\underline{b}\), \(x\ge 0\). Then from (4) we know that if \(x\) is weakly feasible, it solves a realization of \({{\mathbf {A}}}x={{\mathbf {b}}}, x\ge 0\) with

$$\begin{aligned} A=A_c - T_u A_\Delta ,\,b= b_c + T_u b_\Delta , \end{aligned}$$
(25)

where \(|u|\le e, u \in \mathbb {R}^m\), and \(u_i={\left\{ \begin{array}{ll} \frac{(A_cx-b_c)_i}{(A_\Delta x+b_\Delta )_i}, &{} \text {if} ~(A_\Delta x+b_\Delta )_i>0, \\ 1, &{} \text {otherwise}.\\ \end{array}\right. }\,i=1,\ldots ,m\).

Theorem 3.1 yields the following statement.

Corollary 4.1

Let \(x\in \mathbb {R}^n\) be a weak feasible solution to (24) and let \(A, b\) be obtained from (25). Assume that, for some permutation \(k_1,\ldots ,k_n\) of \(1,\ldots ,n\),

$$\begin{aligned} \left\{ \begin{array}{l} x_{k_i}>0,\quad i=1,\ldots ,p,\\ x_{k_i}=0,\quad i=p+1,\ldots ,n.\\ \end{array}\right. \end{aligned}$$

Then \(x\) is a weak optimal solution to (24) if and only if the linear system

$$\begin{aligned} yA_{\cdot , k_i}&=c_{k_i},\quad i=1,\ldots ,p \nonumber \\ yA_{\cdot , k_i}&\le c_{k_i}, \quad i=p+1,\ldots ,n\nonumber \\ \underline{c} \le c&\le \bar{c}, \end{aligned}$$
(26)

is solvable, where \(\underline{c},\bar{c}\) are the lower and upper bounds of the interval vector \({\mathbf {c}}\), respectively. Here \(c_k\) is the \(k\)-th element of vector \(c\), and \(y=(y_1,\ldots ,y_m) \in \mathbb {R}^m\) is a row vector. Similar results can be obtained for IvLP problems of Types B and C.
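For Type A the test simplifies considerably. A compact sketch of checking (26), under the same conventions as before (again our own code, with `A` taken from (25) and `c_lo`, `c_up` the bounds of \({\mathbf {c}}\)):

```python
import numpy as np
from scipy.optimize import linprog

def weakly_optimal_type_A(A, c_lo, c_up, x, tol=1e-9):
    """Feasibility test for system (26); unknowns are z = (y, c)."""
    m, n = A.shape
    pos = x > tol                                  # x_k > 0 -> equality rows of (26)
    rows = np.hstack([A.T, -np.eye(n)])            # row k encodes y A_{.,k} - c_k
    A_eq = rows[pos] if pos.any() else None
    b_eq = np.zeros(int(pos.sum())) if pos.any() else None
    A_ub = rows[~pos] if (~pos).any() else None
    b_ub = np.zeros(int((~pos).sum())) if (~pos).any() else None
    bounds = [(None, None)] * m + list(zip(c_lo, c_up))
    res = linprog(np.zeros(m + n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.success
```

The call returns `True` exactly when (26) is solvable, i.e., when \(x\) is weakly optimal for (24).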

4.2 Type B

Consider the IvLP problem in the form of Type B:

$$\begin{aligned} \mathrm{min}~~{\mathbf {c}}x ~~~~~\nonumber \\ \mathrm{s.t.}~{{\mathbf {A}}}x\le {{\mathbf {b}}}, \end{aligned}$$
(27)

For a given vector \(x\in \mathbb {R}^n\), we first check whether it is a weak solution to \({{\mathbf {A}}}x\le {{\mathbf {b}}}\) by the condition \(A_cx - A_\Delta |x| \le \bar{b}\). Then from (4) we know that if \(x\) is such a weak solution, it solves a realization of \({{\mathbf {A}}}x \le {{\mathbf {b}}}\) with

$$\begin{aligned} A=A_c-A_{\Delta }T_{sgn(x)},\quad b=\bar{b}. \end{aligned}$$
(28)

Theorem 3.1 yields the following statement.

Corollary 4.2

Let \(x\in \mathbb {R}^n\) be a weak feasible solution to (27) and let \(A, b\) be obtained from (28). Denote by

$$\begin{aligned} F=\{r_j|j=1,\ldots ,q, A_{r_j, \cdot } x < b_{r_j}\}. \end{aligned}$$

Then \(x\) is a weak optimal solution of (27) if and only if the linear system

$$\begin{aligned} yA&=c, \nonumber \\ y_{r_j}&= 0, \quad j=1,\ldots ,q \nonumber \\ y_{r_j}&\le 0,\quad j=q+1,\ldots ,m\nonumber \\ \underline{c}\le c&\le \bar{c}, \end{aligned}$$
(29)

is solvable, where \(\underline{c},\bar{c}\) are the lower and upper bounds of interval vector \({\mathbf {c}}\) respectively. Here \(c_{k}\) is the \(k\)-th element of vector \(c\) and \(y=(y_1,\ldots ,y_m) \in \mathbb {R}^m\) is a row vector.

4.3 Type C

Consider the IvLP problem in the form of Type C:

$$\begin{aligned} \mathrm{min}~~{\mathbf {c}}x ~~~~~\nonumber \\ \mathrm{s.t.}~{{\mathbf {A}}}x\le {{\mathbf {b}}},\nonumber \\ x\ge 0. \end{aligned}$$
(30)

For a given vector \(x\in \mathbb {R}^n\), we first check whether it is a weak feasible solution to \({{\mathbf {A}}}x\le {{\mathbf {b}}}, x\ge 0\) by the conditions \(\underline{A}x \le \bar{b}\), \(x \ge 0\). From (4) we know that if \(x\) is weakly feasible, then it solves a realization of \({{\mathbf {A}}}x\le {{\mathbf {b}}}, x\ge 0\) with

$$\begin{aligned} A=\underline{A},\quad \,b=\bar{b}. \end{aligned}$$
(31)

Theorem 3.1 yields the following statement.

Corollary 4.3

Let \(x\in \mathbb {R}^n\) be a weak feasible solution to (30) and let \(A, b\) be obtained from (31). Assume that, for some permutation \(k_1,\ldots ,k_n\) of \(1,\ldots ,n\),

$$\begin{aligned} \left\{ \begin{array}{l} x_{k_i}>0, \quad i=1,\ldots ,p,\\ x_{k_i} = 0, \quad i=p+1,\ldots ,n.\\ \end{array} \right. \end{aligned}$$

Denote by

$$\begin{aligned} F=\{r_j|j=1,\ldots ,q, A_{r_j, \cdot } x < b_{r_j}\}. \end{aligned}$$

Then \(x\) is a weak optimal solution of (30) if and only if the linear system

$$\begin{aligned} yA_{\cdot , k_i}&= c_{k_i},\quad i=1,\ldots ,p \nonumber \\ yA_{\cdot , k_i}&\le c_{k_i},\quad i=p+1,\ldots ,n \nonumber \\ y_{r_j}&= 0,\quad j=1,\ldots ,q \nonumber \\ y_{r_j}&\le 0,\quad j=q+1,\ldots ,m \nonumber \\ \underline{c} \le c&\le \bar{c}, \end{aligned}$$
(32)

is solvable, where \(\underline{c}\) and \(\bar{c}\) are the lower and upper bounds of interval vector \({\mathbf {c}}\), respectively. Here \(c_{k}\) is the \(k\)th element of vector \(c\), and \(y=(y_1,\ldots ,y_m) \in \mathbb {R}^m\) is a row vector.

By the remark at the end of Sect. 3, the conditions (26), (29) and (32) can be transformed into equivalent forms, in which the variable \(c\) is eliminated, in a similar way.

5 Conclusion

We presented efficient methods for checking weak optimality of IvLP problems in the general form. It is known that there are different kinds of optimality concepts for IvLP. Recently, methods to check \(({\mathbf {b}}^{\exists }, {\mathbf {A}}^{\forall },\mathbf {c}^{\exists })\)-optimality, \(({\mathbf {A}}^{\forall }, {\mathbf {b}}^{\exists },\mathbf {c}^{\exists })\)-optimality and \(({\mathbf {b}}^{\exists }, {\mathbf {A}}^{\forall },\mathbf {c}^{\forall })\)-optimality of given vectors for Type A IvLP have been developed [16], based on the tangent cone characterization of the feasible region of IvLP. Clearly, describing characterizations of solutions to interval linear systems is an important step in investigating the optimality of IvLP problems. Some new formulations of solvability and feasibility of interval linear systems are discussed in [17]. Mixed integer programming approaches for checking strong solvability and feasibility of interval linear equations are discussed in [18]. These results are helpful for developing new methods for studying the optimality of IvLP. On this issue, other useful tools are interval Farkas-type theorems; for new developments on interval Farkas-type theorems, we refer the reader to [19, 20], among others.