Abstract
We describe a Farkas-type condition for strong solvability of interval linear inequalities. The result is used to derive several descriptions of the set of strong solutions and to show that this set forms a convex polytope.
1 Introduction
The famous theorem (often called a lemma) proved by Julius Farkas 111 years ago [1] asserts that a system of linear equations
\[ Ax=b \qquad (1) \]
(with \(A\in\mathbb{R}^{m\times n}\) and \(b\in\mathbb{R}^m\)) has a nonnegative solution if and only if for each \(p\in\mathbb{R}^m\), \(A^Tp\ge 0\) implies \(b^Tp\ge 0\). Several consequences for systems of other types than (1) can be drawn from this result. For instance, since a system of linear inequalities
\[ Ax\le b \qquad (2) \]
has a solution if and only if the system of linear equations
\[ Ax_1-Ax_2+z=b \]
has a nonnegative solution (in the variables \(x_1, x_2, z\)), from the Farkas theorem we obtain that (2) is solvable if and only if for each \(p\ge 0\), \(A^Tp=0\) implies \(b^Tp\ge 0\).
Rohn [4] and recently independently Karademir and Prokopyev [3] (see also [6]) formulated a Farkas-type theorem for interval linear equations. Given an \(m\times n\) interval matrix \(\varvec{A}=[{\underline{A}},{\overline{A}}]=[A_c-\Delta ,A_c+\Delta ]=\{\, A \mid A_c-\Delta \le A\le A_c+\Delta \,\}\) (where \(A_c=({\underline{A}}+{\overline{A}})/2\) and \(\Delta =({\overline{A}}-{\underline{A}})/2\) are the midpoint matrix and the radius matrix of \(\varvec{A}\), respectively) and an interval \(m\)-vector \(\varvec{b}=[{\underline{b}},{\overline{b}}]=[b_c-\delta ,b_c+\delta ]=\{\, b \mid b_c-\delta \le b\le b_c+\delta \,\}\) (where again \(b_c=({\underline{b}}+{\overline{b}})/2\) and \(\delta =({\overline{b}}-{\underline{b}})/2\)), they proved that the system
\[ Ax=b \]
has a nonnegative solution for each \(A\in \varvec{A}\), \(b\in \varvec{b}\) if and only if for each \(p\), \(A_c^Tp+\Delta^T|p|\ge 0\) implies \(b_c^Tp-\delta^T|p|\ge 0\). However, the presence of absolute values makes this result practically useful only for small values of \(m\): checking this Farkas-type condition is NP-hard [4].
In this paper we formulate a Farkas-type condition for interval linear inequalities. Given an interval matrix \(\varvec{A}\) and an interval vector \(\varvec{b}\) as above, we are interested in solvability of each system
\[ Ax\le b \qquad (3) \]
with data satisfying
\[ A\in \varvec{A},\quad b\in \varvec{b}. \qquad (4) \]
This type of solvability is called strong solvability of the formally written system of interval linear inequalities \(\varvec{A}x\le \varvec{b}\); see Chapter 2 in [2] for a survey of results. In Theorem 1 we prove a Farkas-type condition for strong solvability, which we then use to obtain another proof of the result by Rohn and Kreslová [7] saying that if \(\varvec{A}x\le \varvec{b}\) is strongly solvable, then all the systems (3), (4) have a common solution, called a strong (it could also be termed "universal") solution of \(\varvec{A}x\le \varvec{b}\). As the main result of this paper we give in Theorem 4 four alternative descriptions of the set of strong solutions, and in the concluding Sect. 5 we show an interconnection between strong solvability of interval linear equations and that of interval linear inequalities.
2 The Farkas-type theorem
We have the following Farkas-type characterization.
Theorem 1
A system \(\varvec{A}x\le \varvec{b}\) is strongly solvable if and only if for each \(p\ge 0\), \({\underline{A}}^Tp\le 0 \le {\overline{A}}^Tp\) implies \({\underline{b}}^Tp\ge 0\).
Proof
“If”: Suppose that for each \(p\ge 0\), \({\underline{A}}^Tp\le 0 \le {\overline{A}}^Tp\) implies \({\underline{b}}^Tp\ge 0\). Assume to the contrary that some system (3) with data satisfying (4) is not solvable. Then according to the above-quoted Farkas condition for (3) there exists a vector \(p_0\ge 0\) such that \(A^Tp_0=0\) and \(b^Tp_0<0\). But from \({\underline{A}}\le A\le {\overline{A}}\) and \(p_0\ge 0\) we obtain \({\underline{A}}^Tp_0\le A^Tp_0\le {\overline{A}}^Tp_0\), hence \({\underline{A}}^Tp_0\le 0 \le {\overline{A}}^Tp_0\). Thus our assumption implies \({\underline{b}}^Tp_0\ge 0\), but we also have \({\underline{b}}^Tp_0\le b^Tp_0<0\), a contradiction.
“Only if”: Let each system (3) with data satisfying (4) be solvable. Assume to the contrary that the Farkas-type condition does not hold, i.e., that there exists a vector \(p_1\ge 0\) such that
\[ {\underline{A}}^Tp_1\le 0 \le {\overline{A}}^Tp_1 \]
and
\[ {\underline{b}}^Tp_1<0. \qquad (5) \]
For each \(i=1,\ldots ,n\) define a real function of one real variable by
\[ f_i(t)=\bigl(t{\underline{A}}_{\bullet i}+(1-t){\overline{A}}_{\bullet i}\bigr)^Tp_1. \]
Then \(f_i(0)=({\overline{A}}_{\bullet i})^Tp_1=({\overline{A}}^Tp_1)_i\ge 0\) and \(f_i(1)=({\underline{A}}_{\bullet i})^Tp_1=({\underline{A}}^Tp_1)_i\le 0\), hence by continuity there exists a \(t_i\in [0,1]\) such that \(f_i(t_i)=0\). Now define \(A\) columnwise by
\[ A_{\bullet i}=t_i{\underline{A}}_{\bullet i}+(1-t_i){\overline{A}}_{\bullet i},\quad i=1,\ldots ,n. \]
Then \(A_{\bullet i}\), as a convex combination of \({\underline{A}}_{\bullet i}\) and \({\overline{A}}_{\bullet i}\), belongs to \([{\underline{A}}_{\bullet i},\,{\overline{A}}_{\bullet i}]\) for each \(i\), hence \(A\in {{{{\varvec{A}}}}}\). Moreover, from the definition of \(t_i\) we have \((A^Tp_1)_i=(A_{\bullet i})^Tp_1=f_i(t_i)=0\) for each \(i\), hence \(A^Tp_1=0\) which together with (5) implies that the system \(Ax\le {\underline{b}}\) has no solution in contradiction to our assumption. \(\square \)
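Since each \(f_i\) is linear in \(t\), the root \(t_i\) can even be written in closed form, so the column construction of the proof is easy to carry out numerically. A minimal Python sketch (numpy assumed; the data \({\underline{A}}\), \({\overline{A}}\), \(p_1\) are hypothetical):

```python
import numpy as np

# Hypothetical data: p1 >= 0 with Al^T p1 <= 0 <= Au^T p1.
Al = np.array([[-1.0, -2.0], [0.0, -1.0]])   # underline(A)
Au = np.array([[ 2.0,  1.0], [1.0,  2.0]])   # overline(A)
p1 = np.array([1.0, 1.0])
assert np.all(Al.T @ p1 <= 0) and np.all(Au.T @ p1 >= 0)

# f_i(t) = (t*Al[:,i] + (1-t)*Au[:,i])^T p1 is linear in t, so its root
# can be computed directly instead of invoking continuity:
fu, fl = Au.T @ p1, Al.T @ p1                # f_i(0) >= 0 and f_i(1) <= 0
t = np.where(fu != fl, fu / (fu - fl), 0.0)  # root t_i in [0, 1]
A = Al * t + Au * (1 - t)                    # columnwise convex combination (t broadcasts over columns)

assert np.all(Al <= A) and np.all(A <= Au)   # A lies in the interval matrix
assert np.allclose(A.T @ p1, 0)              # and satisfies A^T p1 = 0
print("constructed A in [Al, Au] with A^T p1 = 0")
```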
3 Strong solvability of interval linear inequalities
Let us now have a closer look at the Farkas-type condition of Theorem 1. If we write it in a slightly different form: for each \(p\),
\[ \begin{pmatrix} {\overline{A}}^T \\ -{\underline{A}}^T \\ I \end{pmatrix} p\ge 0 \quad\text{implies}\quad {\underline{b}}^Tp\ge 0, \]
we can see that it is just the Farkas condition for nonnegative solvability of the system
\[ {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}. \qquad (6) \]
In this way we have found another proof of a theorem by Rohn and Kreslová [7] formulated, unlike the Farkas condition, in primal terms:
Theorem 2
A system of interval linear inequalities \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is strongly solvable if and only if the system (6) has a nonnegative solution.
This result has a nontrivial consequence which is also contained in [7]: namely, the existence of strong solutions.
Theorem 3
If a system of interval linear inequalities \(\varvec{A}x\le \varvec{b}\) is strongly solvable, then there exists an \(x_0\) such that
\[ Ax_0\le b \qquad (7) \]
holds for each \(A\in \varvec{A}\) and \(b\in \varvec{b}\).
Proof
According to Theorem 2, strong solvability of \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) implies nonnegative solvability of (6). If \(x_1\ge 0\) and \(x_2\ge 0\) solve (6), then for each \(A\in {{{{\varvec{A}}}}}\) and \(b\in {{{{\varvec{b}}}}}\) we have \(A(x_1-x_2) \le {\overline{A}}x_1-{\underline{A}}x_2 \le {\underline{b}}\le b\), so that \(x_0=x_1-x_2\) possesses the required property. \(\square \)
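The proof is constructive, and the conclusion is easy to test numerically. A minimal sketch (Python with numpy; all interval data below are made up for illustration):

```python
import numpy as np

# Hypothetical interval data.
Al = np.array([[1.0, -1.0], [0.0, 1.0]])   # underline(A)
Au = np.array([[2.0,  0.0], [1.0, 2.0]])   # overline(A)
bl = np.array([5.0, 6.0])                  # underline(b)

# x1, x2 >= 0 solving (6):  Au x1 - Al x2 <= bl.
x1 = np.array([1.0, 1.0])
x2 = np.array([0.0, 0.0])
assert np.all(Au @ x1 - Al @ x2 <= bl)

# Then x0 = x1 - x2 should satisfy A x0 <= b for every A in [Al, Au]
# and every b in b (it suffices to check against bl, since bl <= b).
x0 = x1 - x2
rng = np.random.default_rng(0)
for _ in range(1000):
    A = Al + rng.random(Al.shape) * (Au - Al)   # random A in the interval matrix
    assert np.all(A @ x0 <= bl + 1e-9)
print("A x0 <= b held on all sampled systems")
```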
In order to be able to convert solutions of (6) into complementary ones, we need the following auxiliary result. The minimum of two vectors is understood entrywise.
Proposition 1
If \(x_1, x_2\) is a nonnegative solution of (6), then for each \(d\) with
\[ 0\le d\le \min \{x_1,x_2\}, \]
\(x^{\prime }_1=x_1-d, x^{\prime }_2=x_2-d\) is also a nonnegative solution of (6).
Proof
Since \(d\le \min \{x_1,x_2\}\le x_1\), we have \(x^{\prime }_1\ge 0\), and similarly \(x^{\prime }_2\ge 0\). Next, \({\overline{A}}x^{\prime }_1-{\underline{A}}x^{\prime }_2={\overline{A}}x_1-{\underline{A}}x_2+({\underline{A}}-{\overline{A}})d\le {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}\), so that \(x^{\prime }_1, x^{\prime }_2\) solve (6).
\(\square \)
A vector \(x_0\) satisfying (7) for each \(A\in \varvec{A}\), \(b\in \varvec{b}\) is called a strong solution of \(\varvec{A}x\le \varvec{b}\). Denote by \(\varvec{X}_S(\varvec{A},\varvec{b})\) the set of strong solutions. For \(x=(x_i)_{i=1}^n\) let
\[ x^+=(\max \{x_i,0\})_{i=1}^n \]
and
\[ x^-=(\max \{-x_i,0\})_{i=1}^n; \]
then \(x=x^+-x^-\), \(|x|=x^++x^-\), \((x^+)^Tx^-=0\), \(x^+\ge 0\) and \(x^-\ge 0\). The following theorem brings several alternative descriptions of the set of strong solutions.
Theorem 4
We have
\[ \varvec{X}_S(\varvec{A},\varvec{b}) = \{\, x_1-x_2 \mid {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}},\ x_1\ge 0,\ x_2\ge 0 \,\} \qquad (8) \]
\[ = \{\, x \mid {\overline{A}}x^+-{\underline{A}}x^-\le {\underline{b}} \,\} \qquad (9) \]
\[ = \{\, x \mid A_cx+\Delta |x|\le {\underline{b}} \,\} \qquad (10) \]
\[ = \{\, x \mid A_cx+\Delta t\le {\underline{b}},\ -t\le x\le t \text{ for some } t \,\}. \qquad (11) \]
Proof
The equality (8) was proved in [7, Prop. 1]. Denote by \(X_1, X_2, X_3, X_4\) the right-hand side sets in (8)–(11), respectively. We shall prove that \(X_1\subseteq X_2\subseteq X_3\subseteq X_4\subseteq X_1\).
“\(X_1\subseteq X_2\)”: Let \(x_1\ge 0, x_2\ge 0\) satisfy \({\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}\). Put \(d=\min \{x_1,x_2\}\) (entrywise) and \(x=x_1-x_2\). Then \(d\ge 0, x^+=x_1-d, x^-=x_2-d\), and \({\overline{A}}x^+-{\underline{A}}x^- \le {\underline{b}}\) by Proposition 1 which proves that \(X_1\subseteq X_2\).
“\(X_2\subseteq X_3\)”: From \(x=x^+-x^-, |x|=x^++x^-\) we have \(x^+=(|x|+x)/2, x^-=(|x|-x)/2\). Substituting these quantities into \({\overline{A}}x^+-{\underline{A}}x^-\le {\underline{b}}\) leads to \(A_cx +\Delta |x|\le {\underline{b}}\).
“\(X_3\subseteq X_4\)”: If \(A_cx +\Delta |x|\le {\underline{b}}\), then for \(t=|x|\) we have \(A_cx +\Delta t\le {\underline{b}}\) and \(-t\le x\le t\).
“\(X_4\subseteq X_1\)”: If \(A_cx +\Delta t\le {\underline{b}}\) and \(-t\le x\le t\), then \(|x|\le t\) and nonnegativity of \(\Delta \) implies \(A_cx+\Delta |x|\le A_cx+\Delta t\le {\underline{b}}\). Substituting \(x=x^+-x^-\) and \(|x|=x^++x^-\), we obtain \({\overline{A}}x^+-{\underline{A}}x^-\le {\underline{b}}\), hence \(x_1=x^+, x_2=x^-\) satisfy \(x_1-x_2=x, x_1\ge 0, x_2\ge 0\) and \({\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}}\). \(\square \)
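The substitution in the step \(X_2\subseteq X_3\) is an exact algebraic identity: \({\overline{A}}x^+-{\underline{A}}x^- = A_cx+\Delta |x|\). The following sketch confirms this on random hypothetical data (Python, numpy assumed):

```python
import numpy as np

# Hypothetical random interval data.
rng = np.random.default_rng(1)
Ac = rng.normal(size=(3, 2))       # midpoint matrix
Delta = rng.random((3, 2))         # nonnegative radius matrix
Au, Al = Ac + Delta, Ac - Delta    # overline(A), underline(A)

for _ in range(1000):
    x = rng.normal(size=2)
    xp, xm = np.maximum(x, 0), np.maximum(-x, 0)   # x^+, x^-
    lhs9 = Au @ xp - Al @ xm                       # left-hand side of (9)
    lhs10 = Ac @ x + Delta @ np.abs(x)             # left-hand side of (10)
    assert np.allclose(lhs9, lhs10)
print("descriptions (9) and (10) impose the same constraint")
```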
In the proof we have relied on the equality (8) proved in [7]. But we can also pursue another path starting from the following general-purpose theorem.
Theorem 5
Let \(\varvec{A}=[A_c-\Delta ,A_c+\Delta ]\) be an \(m\times n\) interval matrix, \(\varvec{b}=[b_c-\delta ,b_c+\delta ]\) an interval \(m\)-vector, and let \(x\in \mathbb{R}^n\). Then we have
\[ \{\, Ax-b \mid A\in \varvec{A},\ b\in \varvec{b} \,\} = [A_cx-\Delta |x|-{\overline{b}},\ A_cx+\Delta |x|-{\underline{b}}]. \qquad (12) \]
Proof
According to [2, Prop. 2.27], there holds
\[ \{\, Ax \mid A\in \varvec{A} \,\} = [A_cx-\Delta |x|,\ A_cx+\Delta |x|]. \]
Then it is sufficient to apply this result to the augmented interval matrix \({{{{\varvec{A}}}}}^{\prime }=({{{{\varvec{A}}}}}\mid -{{{{\varvec{b}}}}})\) and to the augmented vector \(x^{\prime }=(x^T \mid 1)^T\) to obtain (12). \(\square \)
Now, if \(x\) is a strong solution of \(\varvec{A}x\le \varvec{b}\), then \(Ax-b\le 0\) for each \(A\in \varvec{A}\), \(b\in \varvec{b}\), which is equivalent to nonpositivity of the upper bound of the right-hand side interval in (12), and this is the case if and only if \(A_cx+\Delta |x|\le {\underline{b}}\). This proves (10), and the other three descriptions can be proved in the same way as in the proof of Theorem 4.
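A quick sampling check of (12) on hypothetical data (Python, numpy assumed): every residual \(Ax-b\) stays inside the interval, and the upper endpoint is attained at a sign-pattern vertex matrix together with \(b={\underline{b}}\):

```python
import numpy as np

# Hypothetical random interval data and a fixed x.
rng = np.random.default_rng(2)
Ac = rng.normal(size=(2, 2)); Delta = rng.random((2, 2))
bc = rng.normal(size=2);      delta = rng.random(2)
x = rng.normal(size=2)

lo = Ac @ x - Delta @ np.abs(x) - (bc + delta)   # lower endpoint of (12)
hi = Ac @ x + Delta @ np.abs(x) - (bc - delta)   # upper endpoint of (12)

for _ in range(1000):
    A = Ac + (2 * rng.random((2, 2)) - 1) * Delta   # random A in [Ac-Delta, Ac+Delta]
    b = bc + (2 * rng.random(2) - 1) * delta        # random b in [bc-delta, bc+delta]
    r = A @ x - b
    assert np.all(lo - 1e-9 <= r) and np.all(r <= hi + 1e-9)

# A_ij = Ac_ij + Delta_ij * sign(x_j) together with b = bc - delta attains hi:
assert np.allclose((Ac + Delta * np.sign(x)) @ x - (bc - delta), hi)
print("all sampled residuals stayed inside the interval of (12)")
```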
As a direct consequence of (11) we obtain the next result. As the terminology varies, we mention explicitly that a polytope can be bounded as well as unbounded.
Corollary 1
The set \({{{\varvec{X}}}}_S({{{{\varvec{A}}}}},{{{{\varvec{b}}}}})\) is a convex polytope.
Proof
The set of pairs \((x,t)\) satisfying the linear inequalities in (11) is a convex polytope in \(\mathbb{R}^{2n}\); hence \(\varvec{X}_S(\varvec{A},\varvec{b})\), as its projection onto the \(x\)-variables, is a convex polytope as well. \(\square \)
4 Computation of a strong solution
The inequalities in (9), (10) provide direct checks of whether a given \(x\) is a strong solution, whereas the description (8) makes it possible to find a strong solution by solving the linear program
\[ \text{minimize } e^Tx_1+e^Tx_2 \text{ subject to } {\overline{A}}x_1-{\underline{A}}x_2\le {\underline{b}},\ x_1\ge 0,\ x_2\ge 0, \qquad (13) \]
where \(e=(1,1,\ldots ,1)^T\in \mathbb{R }^n\). Indeed, solve (13); if an optimal solution \(x_1^*,x_2^*\) is found, then \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is strongly solvable and \(x=x_1^*-x_2^*\) is a strong solution of it; if (13) is infeasible, then \({{{{\varvec{A}}}}}x\le {{{{\varvec{b}}}}}\) is not strongly solvable (the problem (13) cannot be unbounded since its objective is nonnegative).
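A sketch of this procedure with scipy's LP solver (assuming scipy is available; the interval data below are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical interval data.
Al = np.array([[1.0, 0.0], [-1.0, 1.0]])   # underline(A)
Au = np.array([[2.0, 1.0], [ 0.0, 2.0]])   # overline(A)
bl = np.array([4.0, -1.0])                 # underline(b)
m, n = Al.shape

# LP (13): minimize e^T x1 + e^T x2  s.t.  Au x1 - Al x2 <= bl,  x1, x2 >= 0.
# linprog's default variable bounds are already (0, +inf), matching (13).
res = linprog(np.ones(2 * n), A_ub=np.hstack([Au, -Al]), b_ub=bl)
if res.success:
    x1, x2 = res.x[:n], res.x[n:]
    x = x1 - x2                            # a strong solution
    print("strong solution:", x)
    print("x1^T x2 =", x1 @ x2)            # complementarity, cf. Theorem 6
else:
    print("the system is not strongly solvable")
```

The objective is nonnegative, so, as noted above, the LP is either infeasible or attains its optimum; unboundedness cannot occur.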
Finally we show that the very form of (6) enforces complementarity of any optimal solution.
Theorem 6
Each optimal solution \(x_1^*, x_2^*\) of (13) satisfies \((x_1^*)^Tx_2^*=0\).
Proof
Assume that it is not so, so that some optimal solution \(x_1^*, x_2^*\) of (13) satisfies \((x_1^*)^Tx_2^*>0\). Then \((x_1^*)_i(x_2^*)_i>0\) for some \(i\). Set \(d=\min \{x_1^*,x_2^*\}\), then \(d\ge 0\) and \(d_i>0\), and by Proposition 1, \(x_1^*-d, x_2^*-d\) is a feasible solution of (13) whose objective value \(e^T(x_1^*-d)+e^T(x_2^*-d)=e^Tx_1^*+e^Tx_2^*-2e^Td<e^Tx_1^*+e^Tx_2^*\) is less than the optimal value, a contradiction. \(\square \)
5 Strong solvability of interval linear equations and inequalities
For each \(y\in \{-1,1\}^m\) (a \(\pm 1\)-vector) denote by \(T_y=\mathrm{diag}(y)\) the \(m\times m\) diagonal matrix with diagonal vector \(y\). Given an \(m\times n\) interval matrix \(\varvec{A}\), for each \(y\in \{-1,1\}^m\) let
\[ \varvec{A}_y=\{\, T_yA \mid A\in \varvec{A} \,\}. \]
It can be easily seen that if \(\varvec{A}=[A_c-\Delta ,A_c+\Delta ]\), then
\[ \varvec{A}_y=[T_yA_c-\Delta ,\ T_yA_c+\Delta ], \qquad (14) \]
i.e., \({{{{\varvec{A}}}}}_y\) is again an interval matrix. The same notation also applies to interval vectors that are special cases of interval matrices.
Given an \(m\times n\) interval matrix \(\varvec{A}\) and an interval \(m\)-vector \(\varvec{b}\), the system of interval linear equations \(\varvec{A}x=\varvec{b}\) is called strongly solvable [2] if each system
\[ Ax=b \]
with data satisfying
\[ A\in \varvec{A},\quad b\in \varvec{b} \]
is solvable.
The following theorem establishes an interconnection between strong solvability of interval linear equations and inequalities. It shows that with each strongly solvable system of interval linear equations \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) with an \(m\times n\) interval matrix \({{{{\varvec{A}}}}}\) we may associate \(2^m\) strongly solvable systems of interval linear inequalities.
Theorem 7
A system \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) is strongly solvable if and only if \({{{{\varvec{A}}}}}_y^{}x\le {{{{\varvec{b}}}}}_y^{}\) is strongly solvable for each \(y\in \{-1,1\}^m\).
Proof
“Only if”: Let \(y\in \{-1,1\}^m, A^{\prime }\in {{{{\varvec{A}}}}}_y^{}\) and \(b^{\prime }\in {{{{\varvec{b}}}}}_y^{}\). In view of (14), \(A^{\prime }=T_yA\) and \(b^{\prime }=T_yb\) for some \(A\in {{{{\varvec{A}}}}}\) and \(b\in {{{{\varvec{b}}}}}\). Since \({{{{\varvec{A}}}}}x={{{{\varvec{b}}}}}\) is strongly solvable, \(Ax=b\) and thus also \(T_yAx=T_yb\) and \(T_yAx\le T_yb\) are solvable, so that \({{{{\varvec{A}}}}}_y^{} x\le {{{{\varvec{b}}}}}_y^{}\) is strongly solvable.
“If”: Conversely, let each \(\varvec{A}_y^{}x\le \varvec{b}_y^{}\), \(y\in \{-1,1\}^m\), have a strong solution \(x_y\), so that
\[ T_yAx_y\le T_yb, \]
which we can write as
\[ T_y(Ax_y-b)\le 0, \qquad (15) \]
holds for each \(A\in \varvec{A}\), \(b\in \varvec{b}\). Take some fixed \(A\in \varvec{A}\) and \(b\in \varvec{b}\). Then (15) shows that the residual set
\[ \{\, Ax-b \mid x\in \mathbb{R}^n \,\} \]
intersects all orthants, hence Theorem 3 in [5] implies that \(Ax=b\) has a solution. Since \(A\in \varvec{A}\) and \(b\in \varvec{b}\) were arbitrary, the system \(\varvec{A}x=\varvec{b}\) is strongly solvable, which was to be proved. \(\square \)
This theorem offers some answer to the question why checking strong solvability of interval linear inequalities is essentially easier than checking that of interval linear equations: in the former case only one linear program (13) needs to be solved, whereas \(2^m\) of them are required in the latter case.
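Theorem 7 thus yields a straightforward, if exponential, algorithm: one LP of the form (13) per sign vector \(y\). A sketch assuming scipy, demonstrated on hypothetical degenerate (zero-radius) data, where strong solvability reduces to plain solvability of \(Ax=b\):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def strongly_solvable_ineq(Al, Au, bl):
    # Theorem 2: [Al, Au] x <= b is strongly solvable iff
    # Au x1 - Al x2 <= bl has a nonnegative solution (LP (13)).
    n = Al.shape[1]
    res = linprog(np.ones(2 * n), A_ub=np.hstack([Au, -Al]), b_ub=bl)
    return res.success

def strongly_solvable_eq(Ac, Delta, bc, delta):
    # Theorem 7: A x = b is strongly solvable iff A_y x <= b_y is
    # strongly solvable for every y in {-1, 1}^m.
    m = Ac.shape[0]
    for y in itertools.product([-1.0, 1.0], repeat=m):
        Ty = np.diag(y)
        # by (14): A_y = [Ty Ac - Delta, Ty Ac + Delta]; underline(b_y) = Ty bc - delta
        if not strongly_solvable_ineq(Ty @ Ac - Delta, Ty @ Ac + Delta, Ty @ bc - delta):
            return False
    return True

Z = np.zeros((2, 2)); z = np.zeros(2)
Ac = np.eye(2)                                    # x = (1, 2) solves Ac x = (1, 2)
print(strongly_solvable_eq(Ac, Z, np.array([1.0, 2.0]), z))
Ac_bad = np.array([[1.0, 0.0], [1.0, 0.0]])       # forces x_1 = 1 and x_1 = 2: unsolvable
print(strongly_solvable_eq(Ac_bad, Z, np.array([1.0, 2.0]), z))
```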
References
[1] Farkas, J.: Theorie der einfachen Ungleichungen. Journal für die Reine und Angewandte Mathematik 124, 1–27 (1902). doi:10.1515/crll.1902.124.1
[2] Fiedler, M., Nedoma, J., Ramík, J., Rohn, J., Zimmermann, K.: Linear Optimization Problems with Inexact Data. Springer, New York (2006)
[3] Karademir, S., Prokopyev, O.A.: A short note on solvability of systems of interval linear equations. Linear Multilinear Algebra 59(6), 707–710 (2011). doi:10.1080/03081087.2010.486403
[4] Rohn, J.: Linear programming with inexact data is NP-hard. Zeitschrift für Angewandte Mathematik und Mechanik 78(Supplement 3), S1051–S1052 (1998). doi:10.1002/zamm.19980781594
[5] Rohn, J.: A residual existence theorem for linear equations. Optim. Lett. 4, 287–292 (2010). doi:10.1007/s11590-009-0160-7
[6] Rohn, J.: Letter to the editor. Linear Multilinear Algebra 61, 697–698 (2013). doi:10.1080/03081087.2012.698617
[7] Rohn, J., Kreslová, J.: Linear interval inequalities. Linear Multilinear Algebra 38, 79–82 (1994). doi:10.1080/03081089508818341
Acknowledgments
The work was supported by institutional support RVO:67985807. The author wishes to thank an anonymous referee whose justified remarks on an earlier version of the manuscript resulted in essential reworking and enhancement of the paper.
Rohn, J. A Farkas-type theorem for interval linear inequalities. Optim Lett 8, 1591–1598 (2014). https://doi.org/10.1007/s11590-013-0675-9