1 Introduction

The famous theorem (often called a lemma) proved by Julius Farkas 111 years ago [1] asserts that a system of linear equations

$$\begin{aligned} Ax=b \end{aligned}$$
(1)

(with \(A\in {\mathbb{R }}^{m\times n}\) and \(b\in \mathbb{R }^m\)) has a nonnegative solution if and only if for each \(p\in \mathbb{R }^m, A^Tp\ge 0\) implies \(b^Tp\ge 0\). Several consequences for systems of types other than (1) can be drawn from this result. For instance, since a system of linear inequalities

$$\begin{aligned} Ax\le b \end{aligned}$$
(2)

has a solution if and only if the system of linear equations

$$\begin{aligned} Ax_1-Ax_2+x_3=b \end{aligned}$$

has a nonnegative solution, from the Farkas theorem we obtain that (2) is solvable if and only if for each \(p\ge 0, A^Tp=0\) implies \(b^Tp\ge 0\).

Rohn [4] and recently independently Karademir and Prokopyev [3] (see also [6]) formulated a Farkas-type theorem for interval linear equations. Given an \(m\times n\) interval matrix \({{{{\varvec{A}}}}}=[{\underline{A}},{\overline{A}}]=[A_c-\Delta ,A_c+\Delta ]=\{\, A \mid A_c-\Delta \le A\le A_c+\Delta \,\}\) (where \(A_c=({\underline{A}}+{\overline{A}})/2\) and \(\Delta =({\overline{A}}-{\underline{A}})/2\) are the midpoint matrix and the radius matrix of \({{{{\varvec{A}}}}}\), respectively) and an interval \(m\)-vector \({{{{\varvec{b}}}}}={[\underline{b},\overline{b}]}=[b_c-\delta ,b_c+\delta ]=\{\, b \mid b_c-\delta \le b\le b_c+\delta \,\}\) (where again \(b_c=({\underline{b}}+{\overline{b}})/2\) and \(\delta =({\overline{b}}-{\underline{b}})/2\)), they proved that the system

$$\begin{aligned} Ax=b \end{aligned}$$

has a nonnegative solution for each \(A\in {{{{\varvec{A}}}}}, b\in {{{{\varvec{b}}}}}\) if and only if for each \(p, A_c^Tp+\Delta ^T|p|\ge 0\) implies \(b_c^Tp-\delta ^T|p|\ge 0\). However, the presence of absolute values makes this result practically useful only for small values of \(m\): checking this Farkas-type condition is NP-hard [4].

In this paper we formulate a Farkas-type condition for interval linear inequalities. Given an interval matrix \({{{{\varvec{A}}}}}\) and an interval vector \({{{{\varvec{b}}}}}\) as above, we are interested in solvability of each system

$$\begin{aligned} Ax\le b \end{aligned}$$
(3)

with data satisfying

$$\begin{aligned} A\in {{{{\varvec{A}}}}},\ b\in {{{{\varvec{b}}}}}. \end{aligned}$$
(4)

This type of solvability is called strong solvability of a formally written system of interval linear inequalities \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\); see Chapter 2 in [2] for a survey of results. In Theorem 1 we prove a Farkas-type condition for strong solvability. We then use it to obtain another proof of the result by Rohn and Kreslová [7] saying that if \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is strongly solvable, then all the systems (3), (4) have a common solution, called a strong solution (it could also be termed a “universal” solution) of \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\). As the main result of this paper we give in Theorem 4 four alternative descriptions of the set of strong solutions, and in the concluding Sect. 5 we show an interconnection between strong solvability of interval linear equations and that of interval linear inequalities.

2 The Farkas-type theorem

We have the following Farkas-type characterization.

Theorem 1

A system \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is strongly solvable if and only if for each \(p\ge 0, {\underline{A}}^Tp\le 0 \le {\overline{A}}^Tp\) implies \({\underline{b}}^Tp\ge 0.\)

Proof

“If”: Suppose that for each \(p\ge 0, {\underline{A}}^Tp\le 0 \le {\overline{A}}^Tp\) implies \({\underline{b}}^Tp\ge 0\). Assume to the contrary that some system (3) with data satisfying (4) is not solvable. Then, by the above-quoted Farkas condition for (3), there exists a vector \(p_0\ge 0\) such that \(A^Tp_0=0\) and \(b^Tp_0<0\). But from \({\underline{A}}\le A\le {\overline{A}}\) and \(p_0\ge 0\) we obtain \({\underline{A}}^Tp_0\le A^Tp_0\le {\overline{A}}^Tp_0\), hence \({\underline{A}}^Tp_0\le 0 \le {\overline{A}}^Tp_0\). Thus our assumption implies \({\underline{b}}^Tp_0\ge 0\), but we also have \({\underline{b}}^Tp_0\le b^Tp_0<0\), a contradiction.

“Only if”: Let each system (3) with data satisfying (4) be solvable. Assume to the contrary that the Farkas-type condition does not hold, i.e., that there exists a vector \(p_1\ge 0\) such that

$$\begin{aligned} {\underline{A}}^Tp_1\le 0 \le {\overline{A}}^Tp_1 \end{aligned}$$

and

$$\begin{aligned} {\underline{b}}^Tp_1<0. \end{aligned}$$
(5)

For each \(i=1,\ldots ,n\) define a real function of one real variable by

$$\begin{aligned} f_i(t)=(t{\underline{A}}_{\bullet i}+(1-t){\overline{A}}_{\bullet i})^Tp_1. \end{aligned}$$

Then \(f_i(0)=({\overline{A}}_{\bullet i})^Tp_1=({\overline{A}}^Tp_1)_i\ge 0\) and \(f_i(1)=({\underline{A}}_{\bullet i})^Tp_1=({\underline{A}}^Tp_1)_i\le 0\), hence by continuity there exists a \(t_i\in [0,1]\) such that \(f_i(t_i)=0\). Now define \(A\) columnwise by

$$\begin{aligned} A_{\bullet i}=t_i{\underline{A}}_{\bullet i}+(1-t_i){\overline{A}}_{\bullet i} \qquad (i=1,\ldots ,n). \end{aligned}$$

Then \(A_{\bullet i}\), as a convex combination of \({\underline{A}}_{\bullet i}\) and \({\overline{A}}_{\bullet i}\), belongs to \([{\underline{A}}_{\bullet i},\,{\overline{A}}_{\bullet i}]\) for each \(i\), hence \(A\in {{{{\varvec{A}}}}}\). Moreover, from the definition of \(t_i\) we have \((A^Tp_1)_i=(A_{\bullet i})^Tp_1=f_i(t_i)=0\) for each \(i\), hence \(A^Tp_1=0\) which together with (5) implies that the system \(Ax\le {\underline{b}}\) has no solution in contradiction to our assumption. \(\square \)
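Since each \(f_i\) is affine in \(t\), the zero \(t_i\) can even be written in closed form instead of merely invoking continuity. The following sketch (with made-up data, not taken from the paper) carries out the columnwise construction of \(A\) from the proof and verifies \(A^Tp_1=0\).

```python
import numpy as np

# Made-up data satisfying the hypotheses of the "only if" part:
# p1 >= 0 with A_lo^T p1 <= 0 <= A_hi^T p1.
A_lo = np.array([[-1.0, 0.0], [0.0, -2.0]])
A_hi = np.array([[1.0, 1.0], [1.0, 0.0]])
p1 = np.array([1.0, 1.0])
assert np.all(A_lo.T @ p1 <= 0) and np.all(A_hi.T @ p1 >= 0)

n = A_lo.shape[1]
A = np.empty_like(A_lo)
for i in range(n):
    hi = A_hi[:, i] @ p1          # f_i(0) >= 0
    lo = A_lo[:, i] @ p1          # f_i(1) <= 0
    # f_i(t) = t*lo + (1-t)*hi is affine, so its zero is explicit:
    t = hi / (hi - lo) if hi != lo else 0.0
    A[:, i] = t * A_lo[:, i] + (1 - t) * A_hi[:, i]

# A lies in the interval matrix and annihilates p1, as in the proof.
assert np.all(A_lo <= A + 1e-12) and np.all(A <= A_hi + 1e-12)
assert np.allclose(A.T @ p1, 0)
```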

3 Strong solvability of interval linear inequalities

Let us now have a closer look at the Farkas-type condition of Theorem 1. If we write it in a slightly different form: for each \(p\),

$$\begin{aligned} p\ge 0,\,{\overline{A}}^Tp\ge 0,\, -{\underline{A}}^Tp\ge 0 \mathrm{\ implies\ }{\underline{b}}^Tp\ge 0, \end{aligned}$$

we can see that it is just the Farkas condition for nonnegative solvability of the system

$$\begin{aligned} {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}. \end{aligned}$$
(6)

In this way we have found another proof of a theorem by Rohn and Kreslová [7], which, unlike the Farkas condition, is formulated in primal terms:

Theorem 2

A system of interval linear inequalities \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is strongly solvable if and only if the system (6) has a nonnegative solution.

This result has a nontrivial consequence which is also contained in [7]: namely, the existence of strong solutions.

Theorem 3

If a system of interval linear inequalities \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is strongly solvable, then there exists an \(x_0\) such that

$$\begin{aligned} Ax_0\le b \end{aligned}$$
(7)

holds for each \(A\in {{{{\varvec{A}}}}}\) and \(b\in {{{{\varvec{b}}}}}\).

Proof

According to Theorem 2, strong solvability of \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) implies nonnegative solvability of (6). If \(x_1\ge 0\) and \(x_2\ge 0\) solve (6), then for each \(A\in {{{{\varvec{A}}}}}\) and \(b\in {{{{\varvec{b}}}}}\) we have \(A(x_1-x_2) \le {\overline{A}}x_1-{\underline{A}}x_2 \le {\underline{b}}\le b\), so that \(x_0=x_1-x_2\) possesses the required property. \(\square \)

In order to convert solutions of (6) into complementary ones, we need the following auxiliary result. The minimum of two vectors is understood entrywise.

Proposition 1

If \(x_1, x_2\) is a nonnegative solution of (6), then for each \(d\) with

$$\begin{aligned} 0\le d\le \min \{x_1,x_2\}, \end{aligned}$$

\(x^{\prime }_1=x_1-d, x^{\prime }_2=x_2-d\) is also a nonnegative solution of (6).

Proof

Since \(d\le \min \{x_1,x_2\}\le x_1\), we have \(x^{\prime }_1\ge 0\), and similarly \(x^{\prime }_2\ge 0\). Next, \({\overline{A}}x^{\prime }_1-{\underline{A}}x^{\prime }_2={\overline{A}}x_1-{\underline{A}}x_2+({\underline{A}}-{\overline{A}})d\le {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}\), so that \(x^{\prime }_1, x^{\prime }_2\) solve (6).

\(\square \)
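A quick numerical check of Proposition 1 (with made-up data, for illustration only): shifting a nonnegative solution of (6) by any \(d\) with \(0\le d\le \min \{x_1,x_2\}\) preserves feasibility, and the maximal shift \(d=\min \{x_1,x_2\}\) yields a complementary pair.

```python
import numpy as np

# Made-up interval data and a nonnegative solution of (6).
A_lo = np.array([[1.0, 0.0], [0.0, 1.0]])
A_hi = np.array([[2.0, 0.5], [0.5, 2.0]])
b_lo = np.array([5.0, 5.0])

x1 = np.array([1.0, 1.0])
x2 = np.array([2.0, 0.5])
assert np.all(A_hi @ x1 - A_lo @ x2 <= b_lo)    # (6) holds

d = np.minimum(x1, x2)           # the largest admissible shift
x1p, x2p = x1 - d, x2 - d
assert np.all(x1p >= 0) and np.all(x2p >= 0)
assert np.all(A_hi @ x1p - A_lo @ x2p <= b_lo)  # still solves (6)
assert x1p @ x2p == 0            # the shifted pair is complementary
```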

A vector \(x_0\) satisfying (7) for each \(A\in {{{{\varvec{A}}}}}, b\in {{{{\varvec{b}}}}}\) is called a strong solution of \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\). Denote by \({{{\varvec{X}}}}_S({{{{\varvec{A}}}}},{{{{\varvec{b}}}}})\) the set of strong solutions. For \(x=(x_i)_{i=1}^n\) let

$$\begin{aligned} x^+&= (\max \{ x_i,0\})_{i=1}^n, \\ x^-&= (\max \{-x_i,0\})_{i=1}^n, \end{aligned}$$

and

$$\begin{aligned} |x|=(|x_i|)_{i=1}^n, \end{aligned}$$

Then \(x=x^+-x^-, |x|=x^++x^-, (x^+)^Tx^-=0, x^+\ge 0\) and \(x^-\ge 0\). The following theorem gives several alternative descriptions of the set of strong solutions.

Theorem 4

We have

$$\begin{aligned} {{{\varvec{X}}}}_S({{{{\varvec{A}}}}},{{{{\varvec{b}}}}})&= \{\, x_1-x_2 \mid {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}},\,x_1\ge 0,\,x_2\ge 0 \,\} \end{aligned}$$
(8)
$$\begin{aligned}&= \{\, x \mid {\overline{A}}x^+-{\underline{A}}x^-\le {\underline{b}} \,\} \end{aligned}$$
(9)
$$\begin{aligned}&= \{\, x \mid A_cx +\Delta |x|\le {\underline{b}} \,\} \end{aligned}$$
(10)
$$\begin{aligned}&= \{\, x \mid A_cx +\Delta t\le {\underline{b}},\,-t\le x\le t \,\}. \end{aligned}$$
(11)

Proof

The equality (8) was proved in [7, Prop. 1]. Denote by \(X_1, X_2, X_3, X_4\) the right-hand side sets in (8)–(11), respectively. We shall prove that \(X_1\subseteq X_2\subseteq X_3\subseteq X_4\subseteq X_1\).

“\(X_1\subseteq X_2\)”: Let \(x_1\ge 0, x_2\ge 0\) satisfy \({\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}\). Put \(d=\min \{x_1,x_2\}\) (entrywise) and \(x=x_1-x_2\). Then \(d\ge 0, x^+=x_1-d, x^-=x_2-d\), and \({\overline{A}}x^+-{\underline{A}}x^- \le {\underline{b}}\) by Proposition 1, which proves that \(X_1\subseteq X_2\).

“\(X_2\subseteq X_3\)”: From \(x=x^+-x^-, |x|=x^++x^-\) we have \(x^+=(|x|+x)/2, x^-=(|x|-x)/2\). Substituting these quantities into \({\overline{A}}x^+-{\underline{A}}x^-\le {\underline{b}}\) leads to \(A_cx +\Delta |x|\le {\underline{b}}\).

“\(X_3\subseteq X_4\)”: If \(A_cx +\Delta |x|\le {\underline{b}}\), then for \(t=|x|\) we have \(A_cx +\Delta t\le {\underline{b}}\) and \(-t\le x\le t\).

“\(X_4\subseteq X_1\)”: If \(A_cx +\Delta t\le {\underline{b}}\) and \(-t\le x\le t\), then \(|x|\le t\) and nonnegativity of \(\Delta \) implies \(A_cx+\Delta |x|\le A_cx+\Delta t\le {\underline{b}}\). Substituting \(x=x^+-x^-\) and \(|x|=x^++x^-\), we obtain \({\overline{A}}x^+-{\underline{A}}x^-\le {\underline{b}}\), hence \(x_1=x^+, x_2=x^-\) satisfy \(x_1-x_2=x, x_1\ge 0, x_2\ge 0\) and \({\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}\). \(\square \)
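The equivalence of the descriptions (9)–(11) is easy to exercise numerically. The following sketch (data made up for illustration) implements the three membership tests and checks that they agree, taking \(t=|x|\) in (11).

```python
import numpy as np

# Made-up interval data: A = [A_c - Delta, A_c + Delta], right-hand
# side lower bound b_lo.
A_c = np.array([[1.0, 0.0], [0.0, 1.0]])
Delta = np.array([[0.1, 0.0], [0.0, 0.1]])
A_lo, A_hi = A_c - Delta, A_c + Delta
b_lo = np.array([1.0, -1.0])

def in_X2(x):
    """Membership test via (9): Ahi x+ - Alo x- <= b_lo."""
    xp, xm = np.maximum(x, 0), np.maximum(-x, 0)
    return bool(np.all(A_hi @ xp - A_lo @ xm <= b_lo))

def in_X3(x):
    """Membership test via (10): A_c x + Delta |x| <= b_lo."""
    return bool(np.all(A_c @ x + Delta @ np.abs(x) <= b_lo))

def in_X4(x, t=None):
    """Membership test via (11); t = |x| is always an admissible choice."""
    t = np.abs(x) if t is None else t
    return bool(np.all(A_c @ x + Delta @ t <= b_lo) and np.all(np.abs(x) <= t))

for x in [np.array([0.0, -2.0]), np.array([0.5, -1.5]), np.array([2.0, 0.0])]:
    assert in_X2(x) == in_X3(x) == in_X4(x)
```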

In the proof we have relied on the equality (8) proved in [7]. But we can also pursue another path starting from the following general-purpose theorem.

Theorem 5

Let \({{{{\varvec{A}}}}}=[A_c-\Delta ,A_c+\Delta ]\) be an \(m\times n\) interval matrix, \({{{{\varvec{b}}}}}=[b_c-\delta ,b_c+\delta ]\) an interval \(m\)-vector, and let \(x\in \mathbb{R }^n\). Then we have

$$\begin{aligned} \{\, Ax-b \mid A\in {{{{\varvec{A}}}}},\,b\in {{{{\varvec{b}}}}} \,\}=[A_cx-b_c-(\Delta |x|+\delta ),\,A_cx-b_c+(\Delta |x|+\delta )]. \end{aligned}$$
(12)

Proof

According to [2, Prop. 2.27], we have

$$\begin{aligned} \{\, Ax \mid A\in {{{{\varvec{A}}}}} \,\}=[A_cx-\Delta |x|,\,A_cx+\Delta |x|]. \end{aligned}$$

Then it is sufficient to apply this result to the augmented interval matrix \({{{{\varvec{A}}}}}^{\prime }=({{{{\varvec{A}}}}}\mid -{{{{\varvec{b}}}}})\) and to the augmented vector \(x^{\prime }=(x^T \mid 1)^T\) to obtain (12). \(\square \)

Now, if \(x\) is a strong solution of \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\), then \(Ax-b\le 0\) for each \(A\in {{{{\varvec{A}}}}}, b\in {{{{\varvec{b}}}}}\). This is equivalent to nonpositivity of the upper bound of the interval on the right-hand side of (12), which is the case if and only if \(A_cx+\Delta |x|\le {\underline{b}}\). This proves (10), and the other three descriptions can be proved in the same way as in the main proof.
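The range formula (12) lends itself to a Monte-Carlo sanity check (purely illustrative, with made-up data): every residual \(Ax-b\) sampled from the interval data must lie in the stated interval.

```python
import numpy as np

# Made-up interval data and a fixed x.
rng = np.random.default_rng(0)
A_c = np.array([[1.0, -2.0], [0.5, 3.0]])
Delta = np.array([[0.2, 0.1], [0.0, 0.3]])
b_c = np.array([1.0, -1.0])
delta = np.array([0.5, 0.2])
x = np.array([2.0, -1.0])

# Midpoint and radius of the residual interval per (12).
center = A_c @ x - b_c
radius = Delta @ np.abs(x) + delta

inside = True
for _ in range(1000):
    # Sample A in [A_c - Delta, A_c + Delta] and b in [b_c - delta, b_c + delta].
    A = A_c + Delta * rng.uniform(-1, 1, size=A_c.shape)
    b = b_c + delta * rng.uniform(-1, 1, size=b_c.shape)
    r = A @ x - b
    inside &= bool(np.all(center - radius - 1e-12 <= r)
                   and np.all(r <= center + radius + 1e-12))
assert inside
```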

As a direct consequence of (11) we obtain the next result. Since the terminology varies, we mention explicitly that a polytope may be bounded or unbounded.

Corollary 1

The set \({{{\varvec{X}}}}_S({{{{\varvec{A}}}}},{{{{\varvec{b}}}}})\) is a convex polytope.

Proof

The above set \(X_4\), described by a set of linear inequalities (11), is a convex polytope in \({\mathbb{R }}^{2n}\), hence \({{{\varvec{X}}}}_S({{{{\varvec{A}}}}},{{{{\varvec{b}}}}})\), as an \(x\)-projection of it, is a convex polytope as well. \(\square \)

4 Computation of a strong solution

The inequalities in (9), (10) allow one to check directly whether a given \(x\) is a strong solution, whereas the description in (8) makes it possible to find a strong solution by solving the linear program

$$\begin{aligned} \min \{\, e^Tx_1+e^Tx_2 \mid {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}},\,x_1\ge 0,\,x_2\ge 0 \,\}, \end{aligned}$$
(13)

where \(e=(1,1,\ldots ,1)^T\in \mathbb{R }^n\). Indeed, solve (13); if an optimal solution \(x_1^*,x_2^*\) is found, then \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is strongly solvable and \(x=x_1^*-x_2^*\) is a strong solution of it; if (13) is infeasible, then \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is not strongly solvable (the problem (13) cannot be unbounded since its objective is nonnegative).
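This procedure can be sketched as follows (illustrative only; the interval data are made up, and `scipy.optimize.linprog` is used as an off-the-shelf LP solver, which the paper of course does not prescribe). The recovered \(x=x_1^*-x_2^*\) is verified to be a strong solution via (10), and the complementarity of Theorem 6 below is also checked.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up interval data for A x <= b.
A_c = np.array([[1.0, 0.0], [0.0, 1.0]])
Delta = np.array([[0.1, 0.0], [0.0, 0.1]])
A_lo, A_hi = A_c - Delta, A_c + Delta
b_lo = np.array([-1.0, 2.0])

m, n = A_c.shape
c = np.ones(2 * n)                       # objective e^T x1 + e^T x2
A_ub = np.hstack([A_hi, -A_lo])          # Ahi x1 - Alo x2 <= b_lo
res = linprog(c, A_ub=A_ub, b_ub=b_lo, bounds=[(0, None)] * (2 * n))
assert res.status == 0                   # feasible => strongly solvable

x1, x2 = res.x[:n], res.x[n:]
x = x1 - x2                              # a strong solution
assert np.all(A_c @ x + Delta @ np.abs(x) <= b_lo + 1e-8)   # check via (10)
assert abs(x1 @ x2) < 1e-8               # complementarity (Theorem 6)
```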

Finally we show that the very form of (6) enforces complementarity of any optimal solution.

Theorem 6

Each optimal solution \(x_1^*, x_2^*\) of (13) satisfies \((x_1^*)^Tx_2^*=0\).

Proof

Assume to the contrary that some optimal solution \(x_1^*, x_2^*\) of (13) satisfies \((x_1^*)^Tx_2^*>0\). Then \((x_1^*)_i(x_2^*)_i>0\) for some \(i\). Set \(d=\min \{x_1^*,x_2^*\}\); then \(d\ge 0\) and \(d_i>0\), and by Proposition 1, \(x_1^*-d, x_2^*-d\) is a feasible solution of (13) whose objective value \(e^T(x_1^*-d)+e^T(x_2^*-d)=e^Tx_1^*+e^Tx_2^*-2e^Td<e^Tx_1^*+e^Tx_2^*\) is less than the optimal value, a contradiction. \(\square \)

5 Strong solvability of interval linear equations and inequalities

For each \(y\in \{-1,1\}^m\) (a \(\pm 1\)-vector) denote by \(T_y=\mathrm{diag}(y)\) the \(m\times m\) diagonal matrix with diagonal vector \(y\). Given an \(m\times n\) interval matrix \({{{{\varvec{A}}}}}\), for each \(y\in \{-1,1\}^m\) let

$$\begin{aligned} {{{{\varvec{A}}}}}_y=\{\, T_yA \mid A\in {{{{\varvec{A}}}}} \,\}. \end{aligned}$$
(14)

It can be easily seen that if \({{{{\varvec{A}}}}}=[A_c-\Delta ,A_c+\Delta ]\), then

$$\begin{aligned} {{{{\varvec{A}}}}}_y=[T_yA_c-\Delta ,\,T_yA_c+\Delta ], \end{aligned}$$

i.e., \({{{{\varvec{A}}}}}_y\) is again an interval matrix. The same notation also applies to interval vectors, which are special cases of interval matrices.

Given an \(m\times n\) interval matrix \({{{{\varvec{A}}}}}\) and an interval \(m\)-vector \({{{{\varvec{b}}}}}\), the system of interval linear equations \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) is called strongly solvable [2] if each system

$$\begin{aligned} Ax=b \end{aligned}$$

with data satisfying

$$\begin{aligned} A\in {{{{\varvec{A}}}}},\,b\in {{{{\varvec{b}}}}}\end{aligned}$$

is solvable.

The following theorem establishes an interconnection between strong solvability of interval linear equations and inequalities. It shows that with each strongly solvable system of interval linear equations \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) with an \(m\times n\) interval matrix \({{{{\varvec{A}}}}}\) we may associate \(2^m\) strongly solvable systems of interval linear inequalities.

Theorem 7

A system \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) is strongly solvable if and only if \({{{{\varvec{A}}}}}_y^{}x\le {{{{\varvec{b}}}}}_y^{}\) is strongly solvable for each \(y\in \{-1,1\}^m\).

Proof

“Only if”: Let \(y\in \{-1,1\}^m, A^{\prime }\in {{{{\varvec{A}}}}}_y^{}\) and \(b^{\prime }\in {{{{\varvec{b}}}}}_y^{}\). In view of (14), \(A^{\prime }=T_yA\) and \(b^{\prime }=T_yb\) for some \(A\in {{{{\varvec{A}}}}}\) and \(b\in {{{{\varvec{b}}}}}\). Since \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) is strongly solvable, \(Ax=b\) and thus also \(T_yAx=T_yb\) and \(T_yAx\le T_yb\) are solvable, so that \({{{{\varvec{A}}}}}_y^{} x\le {{{{\varvec{b}}}}}_y^{}\) is strongly solvable.

“If”: Conversely, let each \({{{{\varvec{A}}}}}_y^{}x\le {{{{\varvec{b}}}}}_y^{}, y\in \{-1,1\}^m\), be strongly solvable, and let \(x_y\) be a strong solution of it (which exists by Theorem 3), so that

$$\begin{aligned} T_yAx_y\le T_yb, \end{aligned}$$

which we can write as

$$\begin{aligned} T_y(b-Ax_y)\ge 0, \end{aligned}$$
(15)

holds for each \(A\in {{{{\varvec{A}}}}}, b\in {{{{\varvec{b}}}}}\). Take some fixed \(A\in {{{{\varvec{A}}}}}\) and \(b\in {{{{\varvec{b}}}}}\). Then (15) shows that the residual set

$$\begin{aligned} \{\, b-Ax \mid x\in \mathbb{R }^n \,\} \end{aligned}$$

intersects all orthants, hence Theorem 3 in [5] implies that \(Ax=b\) has a solution. Since \(A\in {{{{\varvec{A}}}}}\) and \(b\in {{{{\varvec{b}}}}}\) were arbitrary, the system \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) is strongly solvable, which was to be proved. \(\square \)
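Theorem 7 thus reduces strong solvability of \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) to \(2^m\) instances of the linear program (13). The following sketch (illustrative only, with made-up \(1\times 1\) data, so that every \(A\in [1.9,\,2.1]\) is nonsingular and each \(Ax=b\) is trivially solvable) enumerates all sign vectors \(y\) and checks feasibility of (13) for each sign-flipped system.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Made-up interval data: A = [1.9, 2.1], b = [0.9, 1.1].
A_c = np.array([[2.0]])
Delta = np.array([[0.1]])
b_c = np.array([1.0])
delta = np.array([0.1])
m, n = A_c.shape

def strongly_solvable_ineq(Ac, D, bc, d):
    """Feasibility of (13) for the system [Ac - D, Ac + D] x <= [bc - d, bc + d]."""
    A_ub = np.hstack([Ac + D, -(Ac - D)])        # Ahi x1 - Alo x2 <= b_lo
    res = linprog(np.ones(2 * n), A_ub=A_ub, b_ub=bc - d,
                  bounds=[(0, None)] * (2 * n))
    return res.status == 0

# Enumerate all 2^m sign vectors y; radii are unchanged by T_y.
all_ok = all(
    strongly_solvable_ineq(np.diag(y) @ A_c, Delta, np.diag(y) @ b_c, delta)
    for y in itertools.product([-1, 1], repeat=m)
)
assert all_ok   # by Theorem 7, A x = b is strongly solvable
```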

This theorem offers some answer to the question of why checking strong solvability of interval linear inequalities is essentially easier than checking that of interval linear equations: in the former case only one linear program (13) needs to be solved, whereas \(2^m\) of them are required in the latter case.