1 Introduction

Over the years, linear programming has become a widely used mathematical tool for modelling and solving practical optimization problems. However, real-world applications are often accompanied by various inaccuracies and measurement errors in the input data, which can impair the results obtained by solving a linear program. To achieve more realistic results, we need to take these inexact or uncertain quantities into account when formulating such a model.

Several different approaches to handling uncertainty and inexactness in optimization have emerged, such as robust optimization, stochastic programming, fuzzy set theory, parametric programming or interval analysis. In this paper, we adopt the approach of interval linear programming, where the input data are given in the form of intervals enclosing the exact coefficient values, and the coefficients vary independently within the given bounds. Interval linear programming can be viewed as a special case of multiparametric programming without any dependencies among the uncertain interval-valued parameters, but rather than focusing on the correspondence between the parameters and the optimal values or solutions, in interval programming we are also interested in the overall properties of the uncertain program. Some of the notions discussed in interval programming are also related to robust optimization [2]. However, unlike the robust approach, interval programming is mainly focused on the solutions of all possible realizations of the uncertain data, and thus we do not necessarily assume the worst-case scenario. While an optimistic approach is also partially introduced in adjustable robust optimization by means of adjustable variables [3, 4, 33], the focus still remains on feasible or optimal solutions that are robust with respect to the realization of the uncertain parameters. Contrary to this approach, we will address the properties of the united sets of feasible and optimal solutions over all possible realizations of the data. Thus, even though both approaches present mathematical tools for dealing with uncertain data in optimization, their respective solution methods and the types of problems solved are quite different. Unlike stochastic or fuzzy programming, we do not make any assumptions about probability distributions or membership functions on the input data; we simply assume that a lower and an upper bound are given for each coefficient.

Interval linear programs have been applied in solving practical problems from various fields, e.g. portfolio selection [8, 18], solid waste management [15, 21] or interval matrix games [20]. There are several different motivations for choosing interval programming over other approaches to optimization under uncertainty. These include the need for guaranteed bounds without stochastic assumptions, representation of parameters with fixed tolerances, discretization of continuous measurements, or insufficient information about the data. The interval approach is also useful for problems where we need to handle a whole set of data at once (see [14]).

There are two main topics studied in interval linear programming: finding the range of the optimal objective values [30] and describing the set of all possible optimal solutions. Further problems related to these topics include formulating conditions for weak and strong feasibility, optimality, boundedness or basis stability of a given interval problem and the computational complexity of testing these properties (see [9] for an overview of the results). This paper is devoted to the latter problem of describing the set of all optimal solutions. From the theoretical point of view, we study the topological, metric and geometric properties of the optimal solution set, which may aid in the design of more efficient algorithms (see also [13, 17] or [27] for applications of similar ideas). Based on one of the theoretical results, we derive a decomposition algorithm for finding an outer approximation of the optimal set. The paper draws on findings acquired during the work on the thesis of the first author [5], which also provides further details on the discussed topics.

Regarding the description of the optimal set, the first algorithms designed to compute guaranteed bounds for optimal vertices and the range of the optimal value used an interval extension of the classical simplex method [16, 22]. Some newer methods use decomposition of the given interval program into submodels, such as the best and the worst case method [1] or the enhanced-interval linear programming model [34]. In analogy to linear programming, the theory of duality can be used to derive a parametric description of the optimal solution set. This idea leads to the exponential-time orthant decomposition method, and will also serve as a basis for the method introduced in this paper. Stronger results can be achieved by considering only a special class of programs, for which the exact solution set can be described. This includes methods exploiting basis stability [12] or solvers for linear programs with an interval objective function [23].

The paper is organized as follows. Section 2 introduces the basic notions used in interval linear algebra and optimization and reviews some of the known results in this area. In Sect. 3 we study various properties of the optimal solution set of an interval linear program, namely its polyhedrality, closedness, convexity, connectedness and boundedness. A new method for computing an approximation of the optimal solution set based on duality and complementary slackness is proposed in Sect. 4. We compare it with the existing orthant decomposition method and show that our method computes the exact optimal set for problems with a fixed coefficient matrix. Furthermore, we prove that finding the interval hull of the optimal set to a linear program with an interval right-hand-side vector is NP-hard. Section 5 provides some final conclusions and ideas for future research.

2 Interval linear programming

Let us first introduce some notation: for a vector \(x = (x_1, \ldots , x_n)^T\), we denote by \(\hbox {diag}(x)\) the diagonal matrix with entries \(x_1, \ldots , x_n\). The inequality relation \(\le \) on the set of real (interval) matrices, as well as the absolute value operator \(\left|\cdot \right|\), is understood entry-wise.

Given two matrices \(\underline{A}, \overline{A} \in \mathbb {R}^{m \times n}\) with \(\underline{A} \le \overline{A}\), we define an interval matrix \(\varvec{A} = [\underline{A}, \overline{A}]\) as the set

$$\begin{aligned} \left\{ A \in \mathbb {R}^{m \times n} {:}\, \underline{A} \le A \le \overline{A} \right\} . \end{aligned}$$

The matrices \(\underline{A}, \overline{A}\) are called the lower and the upper bound of \(\varvec{A}\), respectively. Hereinafter, bold uppercase letters are used to represent interval matrices and bold lowercase letters represent interval vectors and one-dimensional intervals. We denote by \(\mathbb {IR}^{m \times n}\) the set of all interval m-by-n matrices. For simplicity, we write \(\mathbb {IR}^{m}\) instead of \(\mathbb {IR}^{m \times 1}\) to denote the set of real interval vectors of dimension m, and \(\mathbb {IR}\) for the set of real intervals. Alternatively, we can specify an interval matrix in terms of the center and radius, which are in the following relation with the lower and upper bounds:

$$\begin{aligned} A_c = \frac{1}{2} \left( \overline{A} + \underline{A}\right) ,\qquad A_\Delta = \frac{1}{2} \left( \overline{A} - \underline{A}\right) . \end{aligned}$$
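
For instance, using NumPy (the language in which the experiments in Sect. 4.4 were implemented), the conversion between the two representations is immediate; the matrix below is arbitrary and serves only as an illustration:

```python
import numpy as np

# Bounds of an arbitrary 2x2 interval matrix A = [A_lower, A_upper].
A_lower = np.array([[1.0, -2.0], [0.0, 3.0]])
A_upper = np.array([[2.0,  0.0], [1.0, 3.0]])

A_c = (A_upper + A_lower) / 2      # center matrix
A_delta = (A_upper - A_lower) / 2  # radius matrix (entry-wise non-negative)
```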

When approximating a (bounded) set \(M \subseteq \mathbb {R}^n\) by means of interval methods, we often aim to compute an interval vector enclosing the set. The tightest possible approximation of this type is the interval hull, which is defined as the interval vector

$$\begin{aligned} \square M = \bigcap \left\{ \varvec{x} \in \mathbb {IR}^n{:}\, M \subseteq \varvec{x} \right\} . \end{aligned}$$

Since computing the exact interval hull may be difficult, a weaker approximation is often sufficient for practical purposes. Such an approximation can be given in the form of an interval enclosure, which is any interval vector \(\varvec{x} \in \mathbb {IR}^n\) satisfying \(M \subseteq \varvec{x}\).
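
For a finite set of points, the interval hull reduces to entry-wise minima and maxima; for general sets, it amounts to minimizing and maximizing each coordinate over the set. A minimal sketch of the finite case (in NumPy, with illustrative data):

```python
import numpy as np

# A finite set M of points in the plane, one point per row.
M = np.array([[0.0, 1.0], [2.0, -1.0], [1.0, 0.5]])

# The interval hull of a finite point set is taken entry-wise.
hull_lower = M.min(axis=0)   # (0.0, -1.0)
hull_upper = M.max(axis=0)   # (2.0,  1.0)
# Any interval vector x with x_lower <= hull_lower and hull_upper <= x_upper
# (entry-wise) is an enclosure of M.
```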

Let an interval matrix \(\varvec{A} \in \mathbb {IR}^{m \times n}\) and interval vectors \(\varvec{b} \in \mathbb {IR}^m, \varvec{c} \in \mathbb {IR}^n\) be given. Unless specified otherwise, the term interval linear program (ILP) will refer to a family of linear programs in the form

$$\begin{aligned} \hbox {minimize}\quad&c^T x \nonumber \\ \hbox {subject to}\quad&Ax = b, x \ge 0, \end{aligned}$$
(1)

with \(A \in \varvec{A}, b \in \varvec{b}, c \in \varvec{c}\). For the sake of brevity, we will write ILP (1) shortly as

$$\begin{aligned} \hbox {minimize}\quad&\varvec{c}^T x \nonumber \\ \hbox {subject to}\quad&\varvec{A}x = \varvec{b}, x \ge 0. \end{aligned}$$
(2)

A particular linear program in such a family is called a scenario. If some of the coefficients are fixed real values, we will also use the term “scenario” to refer to a choice of the remaining interval coefficients.

A vector \(x^* \in \mathbb {R}^n\) is said to be a (weakly) feasible solution to an interval linear system \(\varvec{A} x = \varvec{b}\), if there exist \(A \in \varvec{A}\) and \(b \in \varvec{b}\) such that \(Ax^*=b\). Moreover, \(x^*\) is said to be (weakly) optimal for an ILP determined by the triplet \((\varvec{A}, \varvec{b}, \varvec{c})\), if it is optimal for some scenario (Abc). Hereinafter, the term optimal solution set refers to the set of all weakly optimal solutions, and is denoted by \(\mathcal {S}\), i.e.

$$\begin{aligned} \mathcal {S}(\varvec{A}, \varvec{b}, \varvec{c}) = \{ x \in \mathbb {R}^n{:}\, x \text { is an optimum of } (1) \text { for some } A \in \varvec{A}, b \in \varvec{b}, c \in \varvec{c} \}. \end{aligned}$$

The following well-known theorem by Oettli and Prager [27] provides a convenient characterization of the feasible set of an interval linear system: Given \(\varvec{A} \in \mathbb {IR}^{m\times n}\) and \(\varvec{b} \in \mathbb {IR}^m\), a vector \(x \in \mathbb {R}^n\) is a weak solution to the system \(\varvec{A} x = \varvec{b}\) if and only if it satisfies

$$\begin{aligned} \left|A_c x - b_c \right| \le A_\Delta \left|x \right| + b_\Delta . \end{aligned}$$

A similar characterization due to Gerlach [6] describes the feasible set of inequality systems: A vector \(x \in \mathbb {R}^n\) is a weak solution to the system \(\varvec{A} x \le \varvec{b}\) if and only if it satisfies

$$\begin{aligned} A_c x \le A_\Delta \left|x \right| + \overline{b}. \end{aligned}$$

These descriptions also show an important geometric property: the feasible set forms a convex polyhedron when restricted to one orthant. Thus, if we impose non-negativity on the variables, the characterization of weak feasibility reduces to a linear system.
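
The Oettli–Prager condition is straightforward to evaluate for a candidate point. The following sketch (the helper name is ours, introduced for illustration) tests whether a given vector is a weak solution:

```python
import numpy as np

def is_weak_solution(x, A_lower, A_upper, b_lower, b_upper):
    """Oettli-Prager test: is x a weak solution of the system Ax = b?"""
    A_c, A_d = (A_upper + A_lower) / 2, (A_upper - A_lower) / 2
    b_c, b_d = (b_upper + b_lower) / 2, (b_upper - b_lower) / 2
    return bool(np.all(np.abs(A_c @ x - b_c) <= A_d @ np.abs(x) + b_d))
```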

Using strong duality in linear programming, we can also obtain a straightforward characterization of the optimal solution set by the parametric system

$$\begin{aligned}&Ax=b,\ x \ge 0, \end{aligned}$$
(3a)
$$\begin{aligned}&A^T y \le c,\end{aligned}$$
(3b)
$$\begin{aligned}&x^T(c-A^Ty) = 0,\end{aligned}$$
(3c)
$$\begin{aligned}&A \in \varvec{A}, b \in \varvec{b}, c \in \varvec{c}. \end{aligned}$$
(3d)

Constraints in system (3) ensure that a pair of optimal primal-dual solutions satisfies primal feasibility (3a), dual feasibility (3b) and complementary slackness (3c) for some scenario in the ILP. Alternatively, the complementary slackness constraint can also be stated in the form \(c^T x = b^T y\).

Note that while it is possible to rewrite an equation-constrained linear program into inequalities or to impose non-negativity on free variables, such a transformation is not always applicable in the case of interval linear programs due to the so-called dependency problem. Therefore, different types of ILPs may need to be treated separately when studying their properties. Other commonly used forms in which the feasible set of an interval linear program is given include the inequality-constrained interval systems \(\varvec{A}x \le \varvec{b}\) and \(\varvec{A}x \le \varvec{b}, x \ge 0\) or mixed systems.

3 Properties of the optimal solution set

Determining the exact optimal solution set of an interval linear program may be difficult in general because of its complex structure. In this section, we study the properties of the optimal set in order to identify the sources of complexity in describing it. A better understanding of these basic properties is essential for designing more efficient and accurate algorithms for computing a description of the optimal set. We are also interested in formulating sufficient conditions under which some stronger properties hold; such conditions identify classes of programs for which the optimal set might be easier to describe.

Problem formulation: We study the following topological and geometric properties, to aid in designing methods for finding a description of the optimal set:

  • Convexity, polyhedrality (see Sects. 3.1 and 3.3): A convex set can be described by means of classical linear programming. More generally, polyhedrality can lead to techniques based on decomposition into a number of linear programming subproblems.

  • Closedness (see Sect. 3.2): if a set is closed, we are able to utilize its limit points or its boundary, which belong to the set itself.

  • Connectedness (see Sect. 3.3): in a connected set, it is sufficient to inspect a single component to ensure that all possible solutions were considered.

  • Boundedness (see Sect. 3.4): a bounded set can be approximated by a finitely large enclosure, thus reducing the examined search space. Boundedness also guarantees that a search in any given direction inside the set eventually reaches a point outside the set, so it can be terminated.

First of all, it is easy to see that the optimal solution set of an interval linear program is not a convex polyhedron, in general, even if the feasible set is convex. This is illustrated by the following example of an ILP with a connected, but non-convex optimal set.

Example 1

Consider the interval linear program

$$\begin{aligned} \hbox {minimize}\quad&[0,1]\, x_1 + x_2 \nonumber \\ \hbox {subject to}\quad&x_1 + x_2 \ge 2,\ x_2 \ge 1,\ x_1 \ge 0. \end{aligned}$$
(4)

The optimal solution set for the scenario determined by the objective \(0x_1 + x_2\) is the ray \((1+t,1)\) with \(t \ge 0\). For the scenario \(1x_1 + x_2\), we have the optimal set

$$\begin{aligned} \mathcal {M}(A,b) \cap \{(x_1, x_2) \in \mathbb {R}^2{:}\, x_1 + x_2 = 2\}, \end{aligned}$$

where \(\mathcal {M}(A,b)\) denotes the feasible set of (4). For any other scenario \(\alpha x_1 + x_2\) with \(0< \alpha < 1\), there is a unique optimal solution in the vertex (1, 1). Obviously, this set is non-convex (see Fig. 1). Also note that the (weakly) optimal set \(\mathcal {S}\) is different from the set of solutions that remain optimal under all perturbations of the data, which here consists only of the point (1, 1).

Fig. 1 The feasible set (gray) and the set of optimal solutions (thick black) of ILP (4)

3.1 Polyhedrality

Let us now consider a special class of interval linear programs in form (2) with a fixed coefficient matrix A. Theorem 1 shows that in this case the optimal solution set \(\mathcal {S}\) is formed by a union of finitely many convex polyhedra.

Theorem 1

The set of optimal solutions of the interval linear program

$$\begin{aligned} \hbox {minimize}\quad&\varvec{c}^T x \nonumber \\ \hbox {subject to}\quad&Ax = \varvec{b}, x \ge 0 \end{aligned}$$
(5)

is a union of at most \(2^n\) convex polyhedra.

Proof

The optimal solution set of (5) can be described by the parametric system (3) with a fixed coefficient matrix A. Moreover, we can view the parameters b and c as variables (with given lower and upper bounds), thus forming a non-linear program. Note that this transformation does not change the set of feasible solution vectors x and y. We obtain the following system:

$$\begin{aligned}&Ax=b,\ x \ge 0,\nonumber \\&A^T y \le c,\nonumber \\&x^T(c-A^Ty) = 0,\nonumber \\&\underline{b} \le b \le \overline{b},\ \underline{c} \le c \le \overline{c}. \end{aligned}$$
(6)

Since A is a fixed real matrix, the only non-linear constraint is \(x^T(c-A^Ty) = 0\). In order to deal with the non-linearity, we introduce an auxiliary variable \(z = c - A^T y\). Taking into account the non-negativity conditions on x and z, the complementary slackness constraint can be equivalently restated as

$$\begin{aligned} \forall i \in \{1, \ldots , n\}: x_i = 0 \vee z_i = 0. \end{aligned}$$

This restatement leads to \(2^n\) linear programs obtained by replacing \(x^T z = 0\) with a collection of constraints in the form \(x_i = 0\) or \(z_i = 0\) for each index i. Therefore, the feasible set of (6) is a union of \(2^n\) convex polyhedra. The projection, which maps solutions of (6) onto the x-variable, preserves convexity and polyhedrality. Thus, the set of optimal solutions of (5) is also a union of \(2^n\) convex polyhedra. \(\square \)

Note that the proof of Theorem 1 can also be generalized to problems with linear dependencies between the objective function coefficients and the right-hand-side vector. Considering the dual problem as primal, we can derive an analogous result for inequality-constrained interval programs (and similarly for other types of ILPs).

Corollary 1

The set of optimal solutions of the interval linear program

$$\begin{aligned} \hbox {minimize}\quad&\varvec{c}^T x \\ \hbox {subject to}\quad&Ax \le \varvec{b} \end{aligned}$$

is a union of at most \(2^m\) convex polyhedra.

3.2 Closedness

Let us denote by \(d(x,y)\) the Euclidean distance between points \(x,y \in \mathbb {R}^n\). A set \(M \subseteq \mathbb {R}^n\) is said to be open, if for every \(x \in M\) there exists a real number \(\varepsilon > 0\), such that every \(y \in \mathbb {R}^n\) satisfying \(d(x,y) < \varepsilon \) belongs to M. A set \(M \subseteq \mathbb {R}^n\) is said to be closed, if the complement of M is open. Note that the properties of being open and closed are not mutually exclusive, i.e. a set may be both open and closed. Moreover, a set may also be neither open nor closed.

Before we proceed with examining closedness of the optimal set, let us first state some continuity properties, which will be used later. Proofs of the following lemmas can be found in [26].

Lemma 1

([26, §19]) Let a function \(f:\mathbb {R}^n \rightarrow \mathbb {R}^m\) be defined by the rule

$$\begin{aligned} f(x) = (f_1(x), \ldots , f_m(x)), \end{aligned}$$

where \(f_i :\mathbb {R}^n \rightarrow \mathbb {R}\). Then the function f is continuous if and only if each component function \(f_i\) is continuous.

Lemma 2

([26, §18]) Let \(X, Y\) be topological spaces and let \(f :X \rightarrow Y\) be given. The function f is continuous if and only if for every closed set \(M \subseteq Y\), the preimage \(f^{-1}(M)\) is closed in X.

Lemma 3

([26, §26]) Let \(X, Y\) be topological spaces and assume Y is compact. Consider the projection \(\pi _X :X \times Y \rightarrow X\). If \(M \subseteq X \times Y\) is a closed set, then \(\pi _X(M)\) is a closed subset of X.

For the statement of the upcoming Theorem 2, we assume an ILP is given in the form (2). However, the proof of the theorem can be directly generalized to other types of ILPs as well.

Theorem 2

Assume the set of optimal solutions of the dual interval problem

$$\begin{aligned} \hbox {maximize}\quad&\varvec{b}^T y \\ \hbox {subject to}\quad&\varvec{A}^T y \le \varvec{c} \end{aligned}$$

is bounded. Then the set of optimal solutions \(\mathcal {S}\) is closed.

Proof

Let us describe the set of all optimal solutions of (2) by the non-linear system

$$\begin{aligned}&Ax=b,\ x \ge 0,\nonumber \\&A^T y \le c,\nonumber \\&c^T x = b^T y,\nonumber \\&\underline{A} \le A \le \overline{A},\; \underline{b} \le b \le \overline{b},\; \underline{c} \le c \le \overline{c}. \end{aligned}$$
(7)

We will now prove that each of the constraints describes a closed set. Since the feasible set \(\mathcal {M}\) of system (7) is an intersection of the sets defined by the individual constraints, this argument will also yield its closedness. Let us consider the function \(g(A,x,b) := Ax-b\). All components of g are polynomials of the form

$$\begin{aligned} \sum _{k=1}^{n} A_{ik}x_k - b_i \end{aligned}$$

for \(i \in \{1, \ldots , m\}\) and therefore continuous functions. By Lemma 1, the function g is also continuous. The set of all triplets \((A,x,b)\) satisfying \(Ax = b\), or equivalently \(Ax - b = 0\), can also be viewed as the preimage of \(\{0\}\) under g. Using Lemma 2 and the fact that the set \(\{0\}\) is closed, we can also infer closedness of the set

$$\begin{aligned} \left\{ (A,x,b) \in \mathbb {R}^{m\times n + n + m}{:}\, Ax = b \right\} , \end{aligned}$$

and of the set of all quintuples \((A,b,c,x,y)\) satisfying \(Ax=b\). By a similar argument, we can also prove closedness of the sets defined by the constraints \(A^T y \le c\) and \(c^T x = b^T y\). It is easy to see that the remaining inequality constraints in (7) also define closed sets, so the feasible set \(\mathcal {M}\) is closed.

The set of all dual optimal solutions is bounded by assumption, and thus we can restrict the space of dual variables y to a compact subset Y of \(\mathbb {R}^{m}\) (e.g. an interval enclosure of it). Consider the set \(\mathcal {M}\) as a subset of the space

$$\begin{aligned} Z\, {:=} \,\varvec{A} \times \varvec{b} \times \varvec{c} \times \mathbb {R}^{n} \times Y \end{aligned}$$

and let \(\pi _x :Z \rightarrow \mathbb {R}^{n}\) denote the projection into the space of primal variables x. Since \(\varvec{A}, \varvec{b}, \varvec{c}\) and Y are compact, their product is also compact and \(\pi _x\) is a closed map by Lemma 3. Therefore, the set \(\pi _x(\mathcal {M}) = \mathcal {S}\) is also closed. \(\square \)

A simple sufficient condition ensuring that the assumption in Theorem 2 is satisfied is that the set of dual feasible solutions is also bounded. Boundedness of the optimal solution set will be further discussed in Sect. 3.4.

The following theorem states another sufficient condition for closedness of the optimal solution set, based on the results presented in Sect. 3.1.

Theorem 3

The set of optimal solutions of an interval linear program with a real coefficient matrix A is closed.

Proof

By Theorem 1 and Corollary 1, the optimal solution set of an ILP with a real coefficient matrix is formed by a finite union of convex polyhedra. Employing the fact that a finite union of closed sets is a closed set itself finishes the proof of the theorem. \(\square \)

3.3 Convexity and connectedness

Recall that a set is called convex, if for every pair of points from the set, the line segment joining the points is also contained within the set. In linear programming, convexity of the optimal set can be exploited to describe all optimal solutions. However, the optimal solution set of an interval linear program is only convex under some additional assumptions. One of the sufficient conditions for convexity can be formulated using the concept of basis stability.

Let us first introduce some terminology. An interval matrix \(\varvec{A} \in \mathbb {IR}^{n \times n}\) is said to be regular, if each real matrix \(A \in \varvec{A}\) is non-singular, otherwise it is said to be singular. Given an index set \(B \subseteq \{1, \ldots , n\}\), the symbol \(\varvec{A}_B\) denotes the submatrix of an interval matrix \(\varvec{A} \in \mathbb {IR}^{m \times n}\) formed by the columns of \(\varvec{A}\) corresponding to the indices in B. Further, let a linear program be given by the triplet (Abc). The index set \(B \subseteq \{1, \ldots , n\}\) is said to be a basis, if the matrix \(A_B\) is non-singular. A basis B is called feasible, if \(A_B^{-1} b \ge 0\) holds and it is called optimal, if it is feasible and for \(N = \{1, \ldots , n\} \backslash B\) we have

$$\begin{aligned} c_N^T - c_B^T A_B^{-1} A_N \ge 0^T. \end{aligned}$$

Given a basis B, an ILP is said to be B-stable, if B is an optimal basis for each scenario of the ILP. Furthermore, it is called unique B-stable if it is B-stable and the optimal solution in each scenario is unique. The following characterization given by Hladík [9], which is obtained by applying the Oettli–Prager theorem to the interval system \(\varvec{A}_B x_B = \varvec{b}\), summarizes the importance of (unique) basis stability in interval linear programming: If there exists a basis \(B \subseteq \{1, \ldots , n\}\), such that ILP (2) is unique B-stable, then the optimal solution set \(\mathcal {S}\) can be described by the linear system

$$\begin{aligned}&\underline{A}_B x_B \le \overline{b},\quad \overline{A}_B x_B \ge \underline{b},\nonumber \\&x_B \ge 0,\quad x_N = 0. \end{aligned}$$
(8)

Furthermore, if the ILP is B-stable, then each solution in the set described by (8) is optimal for some scenario, and conversely, each scenario has at least one optimal solution contained in this set.
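
Under unique B-stability, testing whether a given point is optimal thus reduces to checking the linear system (8). A minimal sketch (assuming unique B-stability of the ILP has already been verified; the function name is ours):

```python
import numpy as np

def in_optimal_set(x, B, A_lower, A_upper, b_lower, b_upper, tol=1e-9):
    """Membership test for the optimal set via system (8), where B is the
    list of basic indices of a unique B-stable ILP and x is a NumPy vector."""
    n = len(x)
    N = [j for j in range(n) if j not in B]
    xB = x[B]
    return bool(np.all(A_lower[:, B] @ xB <= b_upper + tol)
                and np.all(A_upper[:, B] @ xB >= b_lower - tol)
                and np.all(xB >= -tol)
                and np.all(np.abs(x[N]) <= tol))
```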

We proceed by studying the property of (path-)connectedness, which is a weakening of convexity. A set \(M \subseteq \mathbb {R}^n\) is said to be path-connected, if for every \(x,y \in M\) there exists a continuous function (path) \(f :[0, 1] \rightarrow M\) with \(f(0) = x\) and \(f(1) = y\). The set M is said to be connected, if for each pair of sets \(X, Y \subseteq \mathbb {R}^n\) with \(M = X \cup Y\) and \(X \cap Y = \emptyset \), which are open in the subset topology induced on M, it holds that \(X = \emptyset \) or \(Y = \emptyset \).

In general, the feasible set (and thus also the optimal set) of an interval linear program may be disconnected. However, even if the feasible set is connected, it is still possible for the optimal set to be disconnected, as shown in the following example.

Example 2

Consider the inequality-constrained problem

$$\begin{aligned} \hbox {minimize}\quad&-x_2 \nonumber \\ \hbox {subject to}\quad&[-1,1]\, x_1 + x_2 \le 0,\ x_2 \le 1. \end{aligned}$$
(9)

For the scenario involving the constraint \(0x_1 + x_2 \le 0\), the set of optimal solutions is formed by the line \(x_2 = 0\). Further, consider a scenario with the constraint \(\alpha x_1 + x_2 \le 0\) for \(\alpha \ne 0\). If we take the union of all optimal sets for \(\alpha > 0\), we obtain the ray \((-1-t, 1)\) with \(t \ge 0\). For \(\alpha < 0\), we have the united optimal set \((1+t, 1)\) with \(t \ge 0\). The overall optimal solution set of the interval program, which is formed by the union of the two rays and the line, is (path-)disconnected (see Fig. 2).

Fig. 2 The union of all feasible sets (gray) and the set of all optimal solutions (thick black) of ILP (9)

Note that every path-connected set is also connected, but the converse does not hold in general. Theorem 4 states a sufficient condition for path-connectedness of the optimal solution set based on basis stability.

Theorem 4

If there exists a basis \(B \subseteq \{1, \ldots , n\}\), such that ILP (2) is B-stable, then the optimal solution set \(\mathcal {S}\) is path-connected.

Proof

Let B be a basis, which is optimal for each scenario of the ILP and let \(\mathcal {S}(B)\) denote the set of all optimal basic solutions with the basis B. Furthermore, let \(x_1, x_2 \in \mathcal {S}\) be arbitrary solutions optimal for some scenarios \((A_1, b_1, c_1)\) and \((A_2, b_2, c_2)\), respectively.

Since the problem is B-stable, there exist basic solutions \(x^B_1, x^B_2 \in \mathcal {S}(B)\), which are optimal for the scenarios \((A_1, b_1, c_1)\) and \((A_2, b_2, c_2)\). From the theory of linear programming, we know that the optimal solution set of a fixed scenario is convex, and therefore also path-connected. Thus, there exists a path \(p_1:[0,1] \rightarrow \mathbb {R}^n\) with \(p_1(0) = x_1\) and \(p_1(1) = x_1^B\) and also a path \(p_2\) connecting \(x_2^B\) to \(x_2\). By the description stated in (8), the set \(\mathcal {S}(B)\) is convex, which implies that there also exists a path connecting \(x_1^B\) to \(x_2^B\). Using transitivity of the path-connectedness relation, we obtain a path \(p_3\) from \(p_3(0) = x_1\) to \(p_3(1) = x_2\). \(\square \)

A similar result can also be achieved for other types of problems: even if we drop the non-negativity constraint, the set \(\mathcal {S}(B)\) still remains path-connected (even though not necessarily convex) and the statement of the theorem holds.

Let us now return to the class of interval linear programs with a fixed coefficient matrix. For such problems, Theorem 5 states a general result regarding connectedness of the optimal set, which is a consequence of the continuity properties of linear programs proved by Meyer [24] (the result can also be extended to programs in other forms). Recall that a correspondence \(f:X \rightarrow 2^Y\) is called upper hemicontinuous at \(x \in X\), if for every open neighborhood U of f(x) there exists an open neighborhood V of x such that \(f(z) \subseteq U\) holds for all \(z \in V\).

Theorem 5

The optimal solution set of the interval linear program

$$\begin{aligned} \hbox {minimize}\quad&\varvec{c}^T x \nonumber \\ \hbox {subject to}\quad&Ax = \varvec{b}, x \ge 0 \end{aligned}$$
(10)

is connected.

Proof

Let us define the following sets:

$$\begin{aligned} \mathcal {B}&= \{ b \in \mathbb {R}^m{:}\, Ax=b,\quad x \ge 0 \text { for some } x \in \mathbb {R}^n\},\\ \mathcal {C}&= \{ c \in \mathbb {R}^n{:}\, A^T y \le c \quad \text {for some } y \in \mathbb {R}^m\}. \end{aligned}$$

Given two parameter vectors \(b \in \varvec{b}\) and \(c \in \varvec{c}\), the corresponding scenario of the interval program has an optimal solution if and only if \((b,c) \in \mathcal {B} \times \mathcal {C}\). We observe that the set \(\mathcal {B}\times \mathcal {C}\), as well as the set \((\mathcal {B}\cap \varvec{b})\times (\mathcal {C}\cap \varvec{c})\), is convex and thus also connected.

By [24, Theorem 2] we have that the optimal set correspondence \(\mathcal {S}_A\) from \(\mathcal {B}\times \mathcal {C}\) to the power set of \(\mathbb {R}^n\) is upper hemicontinuous. Since the image of a connected set under an upper hemicontinuous connected-valued correspondence is also connected [7], it only remains to show that the optimal set correspondence \(\mathcal {S}_A\) is connected-valued. However, this is clear from the fact that the optimal set of a fixed linear program is always a convex polyhedron. \(\square \)

3.4 Boundedness

A set \(M \subseteq \mathbb {R}^n\) is said to be bounded, if there exists a point \(s \in \mathbb {R}^n\) and a real number \(r > 0\), such that every \(x \in M\) satisfies \(d(x, s) < r\).

A sufficient condition for boundedness of the optimal solution set of an interval linear program was formulated by Mostafaee, Hladík and Černý [25], based on the results on continuity of some set-valued functions in linear programming proved by Wets [32]. They showed that if the property

$$\begin{aligned}&\{x \in \mathbb {R}^n{:}\, Ax = 0, c^T x \le 0, x \ge 0 \} = \{0\},\\&\{y \in \mathbb {R}^m{:}\, A^T y \le 0, b^T y \ge 0\} = \{0\}, \end{aligned}$$

holds for every \(A \in \varvec{A}, b \in \varvec{b}, c \in \varvec{c}\), then the optimal solution set \(\mathcal {S}\) is bounded.
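
For a single fixed scenario (A, b, c), both conditions can be verified by linear programming: a polyhedral cone is trivial if and only if no coordinate can be made non-zero on its intersection with a unit box. The sketch below (using scipy.optimize.linprog; the function name is ours) checks this necessary part of the condition, while verifying it over all scenarios of the ILP is a separate problem:

```python
import numpy as np
from scipy.optimize import linprog

def cones_trivial(A, b, c, tol=1e-9):
    """Check the two recession-cone conditions for one scenario (A, b, c)."""
    m, n = A.shape
    # {x >= 0 : Ax = 0, c^T x <= 0} = {0} iff max e^T x over the unit-box
    # intersection is 0 (the LP is always feasible, since x = 0 qualifies).
    res = linprog(-np.ones(n), A_ub=c.reshape(1, -1), b_ub=[0.0],
                  A_eq=A, b_eq=np.zeros(m), bounds=[(0, 1)] * n)
    if res.fun < -tol:           # linprog minimizes, so res.fun = -max e^T x
        return False
    # {y : A^T y <= 0, b^T y >= 0} = {0} iff every coordinate y_i is forced
    # to 0 on the cone's intersection with the box [-1, 1]^m.
    M_ub = np.vstack([A.T, -b.reshape(1, -1)])
    rhs = np.zeros(n + 1)
    for i in range(m):
        for sign in (1.0, -1.0):
            obj = np.zeros(m)
            obj[i] = -sign       # maximize sign * y_i
            res = linprog(obj, A_ub=M_ub, b_ub=rhs, bounds=[(-1, 1)] * m)
            if res.fun < -tol:
                return False
    return True
```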

We continue by studying the decision problem of checking boundedness of the optimal solution set from a complexity-theoretic point of view. First, let us review a theorem proved by Rohn [28] establishing a relationship between boundedness of the feasible set of a square interval system and regularity of the coefficient matrix. He proved that for an interval system \(\varvec{A}x = \varvec{b}\) with the feasible set \(\mathcal {M}(\varvec{A},\varvec{b})\) and a coefficient matrix \(\varvec{A} \in \mathbb {IR}^{n \times n}\) containing at least one non-singular matrix, the following assertions are equivalent:

  1. \(\varvec{A}\) is regular,

  2. \(\mathcal {M}(\varvec{A},\varvec{b})\) is bounded for some \(\varvec{b} \in \mathbb {IR}^n\),

  3. \(\mathcal {M}(\varvec{A},\varvec{b})\) is bounded for each \(\varvec{b} \in \mathbb {IR}^n\).

For the purposes of Theorem 6, we consider an inequality-constrained interval linear program. The proof exploits the fact that a feasibility problem can be formulated as an optimization problem with a constant objective function, thus, the result also holds for testing boundedness of the feasible set of interval systems.

Theorem 6

The problem of checking boundedness of the optimal set for an interval linear program in the form

$$\begin{aligned} \hbox {minimize}\quad&\varvec{c}^T x \nonumber \\ \hbox {subject to}\quad&\varvec{A}x \le \varvec{b} \end{aligned}$$
(11)

is co-NP-hard.

Proof

To prove the desired result, we need to construct a polynomial-time reduction from a decision problem, which is already known to be co-NP-hard, to the problem of checking boundedness of the optimal set. From the proof of [31, Theorem 2.33], we have that testing regularity of interval matrices is co-NP-hard on the set of matrices in the form \([A-ee^T, A+ee^T]\) with a non-negative positive definite rational matrix A.

Since a positive definite matrix A has positive eigenvalues, it is also non-singular. Thus, the matrix \(\varvec{A} = [A-ee^T, A+ee^T]\) contains a non-singular matrix, namely the central matrix \(A^c = A\). This allows us to use the characterization of regularity of an interval matrix given by Rohn [28].

Let us set \(b=0\). Then, \(\varvec{A}\) is regular if and only if the feasible set of the interval system \(\varvec{A} x = 0\) is bounded. Using the results of [19], we can split the equation constraint into two independent constraints \(\varvec{A_1} x \le 0, \varvec{A_2} x \ge 0\) with \(\varvec{A_1} = \varvec{A_2} = \varvec{A}\), while preserving the same feasible set. Therefore, the interval matrix \(\varvec{A}\) is regular if and only if the optimal solution set of the interval linear program

$$\begin{aligned} \hbox {minimize}\quad&0^T x \nonumber \\ \hbox {subject to}\quad&\varvec{A_1} x \le 0,\ \varvec{A_2} x \ge 0 \end{aligned}$$
(12)

is bounded. The reduction shows that there exists a class of problems in the form (12), for which checking boundedness of the optimal solution set is at least as hard as testing regularity of interval matrices. This implies that checking boundedness of the optimal set is co-NP-hard on the class of programs of type (11), since this is a more general decision problem. \(\square \)

4 Approximating the optimal solution set

Since the shape and structure of the optimal solution set of an ILP may be very complicated, it is often difficult to determine it precisely. Therefore, finding a tight approximation (e.g. an interval enclosure) of the optimal set is also desirable. In this section we present two decomposition methods, which can be used to compute an approximation by means of a union of convex polyhedra or to enclose the optimal set by an interval box. The main focus is on interval linear programs with intervals occurring only in the objective function and the right-hand-side vector. This class of ILPs includes a wide range of practical problems in which the coefficient matrix represents a graph (e.g. uncertain transportation or minimum-cost flow problems).

Following the convention of [29], we say that a problem “\(\text {maximize } f(x)\) subject to \(x\in X\)” is NP-hard, if the corresponding decision problem

$$\begin{aligned} \text {``Is } f(x) \ge r \text { for some } x \in X \text {?''} \end{aligned}$$

with r rational is NP-hard. While the restriction to programs with a fixed matrix may seem like a strong assumption, some of the basic problems related to interval optimization remain difficult. For example, computing the upper bound of the optimal value range of ILP (5) with a fixed matrix is still NP-hard [29]. The lower bound can be found in polynomial time; however, this is also true for the general case [30]. Theorem 7 proves that computing the interval hull of the optimal set is also an NP-hard problem, even for the special class of ILPs with a fixed coefficient matrix and a fixed objective vector.

Theorem 7

Let \(\mathcal {S}(A,\varvec{b},c)\) denote the optimal solution set of an ILP in the form

$$\begin{aligned} \hbox {minimize}\quad&c^T x \\ \hbox {subject to}\quad&Ax = \varvec{b}, x \ge 0. \end{aligned}$$

Then, the problem

$$\begin{aligned} \hbox {maximize}\quad&x_i \\ \hbox {subject to}\quad&x \in \mathcal {S}(A,\varvec{b},c) \end{aligned}$$

for \(i \in \{1, \ldots , n\}\) is NP-hard.

Proof

Let us denote by \(f(A,b,c)\) the optimal value \(\inf \,\{ c^T x{:}\, Ax = b, x \ge 0\}\) of a given linear program. From [29, Theorem 6.1] we have that computing the worst optimal value \(\overline{f} = \sup \,\{f(A,b,c): b \in \varvec{b}\}\) of an ILP is NP-hard on the class of problems in the form

$$\begin{aligned} \hbox {minimize}\quad&e^T x_1 + e^T x_2 \nonumber \\ \hbox {subject to}\quad&D x_1 - D x_2 = [-e, e],\nonumber \\&x_1, x_2 \ge 0 \end{aligned}$$
(13)

with a positive definite matrix D and \(e = (1, \ldots , 1)^T\). Note that the value \(\overline{f}\) is finite for ILP (13), since it is strongly bounded and feasible. Let us now formulate a similar program as follows:

$$\begin{aligned} \hbox {minimize}\quad&z \nonumber \\ \hbox {subject to}\quad&z = e^T x_1 + e^T x_2, \nonumber \\&D x_1 - D x_2 = [-e, e],\nonumber \\&x_1, x_2, z \ge 0. \end{aligned}$$
(14)

Since both variables \(x_1, x_2\) are non-negative in program (13), the objective value \(e^T x_1 + e^T x_2\) represented by the variable z in program (14) is non-negative as well. This implies that both programs are equivalent (with respect to the range of optimal values). Therefore, the maximal value of z optimal for some scenario of program (14) is equal to the worst objective value \(\overline{f}\) of program (13), which is NP-hard to compute. \(\square \)

4.1 Orthant decomposition

First, we review the so-called orthant decomposition, which is based on an interval relaxation of the parametric description of the optimal solution set given in (7). By breaking dependencies between the coefficients, we obtain the interval linear system

$$\begin{aligned} \varvec{A_1} x = \varvec{b_1},\, x \ge 0,\, \varvec{A_2}^T y \le \varvec{c_1},\, \varvec{c_2}^T x = \varvec{b_2}^T y, \end{aligned}$$
(15)

with \(\varvec{A_1} = \varvec{A_2} = \varvec{A}\), \(\varvec{b_1} = \varvec{b_2} = \varvec{b}\) and \(\varvec{c_1} = \varvec{c_2} = \varvec{c}\). Next, we apply the results on weak solvability of mixed interval systems [11] derived from the theorems of Oettli–Prager and Gerlach. Note that the non-negativity of primal variables allows for a simplification of the primal feasibility constraint, using the definitions of center and radius of an interval matrix. Thus, it is possible to rewrite system (15) as

$$\begin{aligned}&c_c^T x - b_c^T y \le c_\Delta ^T x + b_\Delta ^T \left|y \right|,\nonumber \\&c_c^T x - b_c^T y \ge -c_\Delta ^T x - b_\Delta ^T \left|y \right|,\nonumber \\&\underline{A}x \le \overline{b}, -\overline{A}x \le -\underline{b}, x\ge 0,\nonumber \\&A_c^T y - A_\Delta ^T \left|y \right| \le \overline{c}. \end{aligned}$$
(16)

Let us now define some related terms: Given \(s \in \{\pm 1\}^n\), an orthant defined by s is the set \(\{x \in \mathbb {R}^n{:}\, \text {diag}(s)x \ge 0\}\). The vector s is called the signature of the orthant.

It is easy to see that restricting the dual variables y in system (16) to a single orthant yields a linear system, since we can directly express the absolute value using the signature of the orthant. Thus, we can proceed by searching each of the \(2^m\) orthants and solving the respective linear systems. It is possible to exclude some orthants from search by first computing a quicker and looser enclosure of the optimal set and then searching the non-empty orthants to tighten it. For a fixed signature \(s \in \{\pm 1\}^m\) we can formulate the description as follows:

$$\begin{aligned}&c_c^T x - b_c^T y \le c_\Delta ^T x + b_\Delta ^T \hbox {diag}(s)y,\nonumber \\&c_c^T x - b_c^T y \ge -c_\Delta ^T x - b_\Delta ^T \hbox {diag}(s)y,\nonumber \\&\underline{A}x \le \overline{b},\, -\overline{A}x \le -\underline{b},\, x\ge 0,\nonumber \\&A_c^T y - A_\Delta ^T \hbox {diag}(s)y \le \overline{c},\nonumber \\&\hbox {diag}(s)y \ge 0. \end{aligned}$$
(17)

Note that for a program in the form \(\varvec{A}x \le \varvec{b}\) we decompose the corresponding system with respect to the unrestricted primal variables into \(2^n\) orthants. A program in the form \(\varvec{A}x \le \varvec{b}, x \ge 0\) does not need to be decomposed at all, since both primal and dual variables are constrained to one orthant. Therefore, in this case, the problem of approximating the optimal solution set via an interval relaxation reduces to solving a single linear system.

As presented so far, the method provides an approximation of the optimal set by means of a set of convex polyhedra. In order to obtain an interval enclosure, we can compute the interval hull of the union of the feasible sets described by (17). This can be done by finding the minimal and maximal value of each primal variable \(x_i\) over the union of the feasible sets.
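
To make the procedure concrete, the following sketch (our illustration in Python with NumPy and scipy.optimize.linprog, not the implementation used in Sect. 4.4) computes an interval enclosure of the optimal set of (2) by solving system (17) in every orthant and optimizing each primal variable over the resulting polyhedra:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def orthant_decomposition_enclosure(A_lo, A_up, b_lo, b_up, c_lo, c_up):
    """Enclosure of the optimal set of min c^T x s.t. Ax = b, x >= 0,
    based on the interval relaxation (17); an enclosure, not the exact
    interval hull of the optimal set in general."""
    A_c, A_d = (A_up + A_lo) / 2, (A_up - A_lo) / 2
    b_c, b_d = (b_up + b_lo) / 2, (b_up - b_lo) / 2
    c_c, c_d = (c_up + c_lo) / 2, (c_up - c_lo) / 2
    m, n = A_c.shape
    lower, upper = np.full(n, np.inf), np.full(n, -np.inf)
    bounds = [(0, None)] * n + [(None, None)] * m         # variables (x, y)
    for s in itertools.product([-1.0, 1.0], repeat=m):    # 2^m signatures
        s = np.array(s)
        A_ub = np.vstack([
            np.concatenate([c_c - c_d, -b_c - b_d * s]),       # optimality (<=)
            np.concatenate([-c_c - c_d, b_c - b_d * s]),       # optimality (>=)
            np.hstack([A_lo, np.zeros((m, m))]),               # A_lower x <= b_upper
            np.hstack([-A_up, np.zeros((m, m))]),              # -A_upper x <= -b_lower
            np.hstack([np.zeros((n, n)), A_c.T - A_d.T * s]),  # dual feasibility
            np.hstack([np.zeros((m, n)), -np.diag(s)]),        # y in the orthant of s
        ])
        b_ub = np.concatenate([[0.0, 0.0], b_up, -b_lo, c_up, np.zeros(m)])
        for i in range(n):
            for sign in (1.0, -1.0):                # minimize and maximize x_i
                obj = np.zeros(n + m)
                obj[i] = sign
                res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
                if res.status == 0:
                    if sign > 0:
                        lower[i] = min(lower[i], res.x[i])
                    else:
                        upper[i] = max(upper[i], res.x[i])
                elif res.status == 3:               # unbounded direction
                    if sign > 0:
                        lower[i] = -np.inf
                    else:
                        upper[i] = np.inf
    return lower, upper
```

Orthants whose linear system is infeasible contribute nothing to the enclosure, which corresponds to the pruning of empty orthants mentioned above.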

The exponential-time orthant decomposition method can also be modified into a polynomial iterative contractor [10] providing a looser, but faster approximation. The contractor uses a linear approximation of the absolute value function instead of expressing it exactly in every orthant.

4.2 Decomposition by complementarity

Let us now introduce a new method for approximating the optimal solution set of an ILP based on complementary slackness in linear programming. We also show that this approach provides an exact description of the optimal set for a special class of interval linear programs with a fixed coefficient matrix.

Using the idea developed in the proof of Theorem 1, we consider the parametric description of the optimal set given in (3) as a non-linear system. We know that the complementary slackness condition \(x^T (c-A^T y) = 0\) is satisfied if and only if \(x_i = 0\) or \((c-A^T y)_i = 0\) holds for each index \(i \in \{1, \ldots , n\}\). This implies that for a fixed subset \(I \subseteq \{1, \ldots , n\}\) with \(x_i = 0\) for \(i \in I\), we only need to consider the primal and dual feasibility conditions with the remaining equation constraints from the complementary slackness condition to obtain the corresponding subset of optimal solutions. In other words, we need to solve the \(2^n\) problems in the form

$$\begin{aligned}&Ax=b,\ x \ge 0,\nonumber \\&A^T y \le c,\nonumber \\&x_i = 0 \text { for } i \in I,\quad (A^T y)_j = c_j \text { for } j \notin I,\nonumber \\&A \in \varvec{A},\ b \in \varvec{b},\ c \in \varvec{c}. \end{aligned}$$
(18)

Consider now a special class of interval linear programs, in which the entries of the matrix A are only degenerate intervals. In this case, the value of the variable A is fixed, thus reducing system (18) to a linear problem. Therefore, we can directly obtain the exact optimal set of a linear program with interval objective and right-hand side by solving \(2^n\) linear subproblems. Similarly, we can also compute the exact interval hull of the optimal solution set. In fact, the proposed algorithm can be directly applied to a more general class of programs, for which the parameters b and c belong to an easily characterizable set, e.g. a polyhedron.
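
A sketch of this special case follows (our illustration using scipy.optimize.linprog): with A fixed, the parameters b and c become additional LP variables restricted to their bounds, and optimizing each \(x_i\) over the \(2^n\) linear subproblems (18) yields the interval hull of the optimal set, provided each subproblem is bounded:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def complementarity_hull_fixed_A(A, b_lo, b_up, c_lo, c_up):
    """Interval hull of the optimal set of min c^T x s.t. Ax = b, x >= 0
    with a fixed matrix A, via the 2^n subproblems (18)."""
    m, n = A.shape
    dim = n + m + m + n                      # variables (x, y, b, c)
    lower, upper = np.full(n, np.inf), np.full(n, -np.inf)
    for I in itertools.product([True, False], repeat=n):
        # Equality block Ax - b = 0; complementary slackness rows are
        # appended as equalities ((A^T y)_j = c_j) or inequalities (<=).
        eq_rows = [np.hstack([A, np.zeros((m, m)), -np.eye(m), np.zeros((m, n))])]
        ub_rows = []
        for j in range(n):
            row = np.zeros(dim)
            row[n:n + m] = A[:, j]           # (A^T y)_j ...
            row[n + m + m + j] = -1.0        # ... minus c_j
            (ub_rows if I[j] else eq_rows).append(row)
        A_eq = np.vstack(eq_rows)
        b_eq = np.zeros(A_eq.shape[0])
        A_ub = np.vstack(ub_rows) if ub_rows else None
        b_ub = np.zeros(len(ub_rows)) if ub_rows else None
        bounds = ([(0.0, 0.0) if I[j] else (0.0, None) for j in range(n)]  # x
                  + [(None, None)] * m                                     # y
                  + list(zip(b_lo, b_up)) + list(zip(c_lo, c_up)))         # b, c
        for i in range(n):
            for sign in (1.0, -1.0):
                obj = np.zeros(dim)
                obj[i] = sign
                res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
                if res.status == 0:
                    if sign > 0:
                        lower[i] = min(lower[i], res.x[i])
                    else:
                        upper[i] = max(upper[i], res.x[i])
    return lower, upper
```

On ILP (20) in Sect. 4.3, for example, this sketch enumerates the four index sets and recovers the hull \([0,1] \times [0,1]\) of the exact optimal set.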

The method can also be extended to the general case. For a problem with an interval coefficient matrix \(\varvec{A}\), system (18) remains non-linear. In order to simplify the problem, we can formulate an interval relaxation of the system by breaking the dependencies between multiple occurrences of the variable A. After breaking the dependencies, we obtain an interval linear system, whose weakly feasible set can be described by the Oettli–Prager theorem. Moreover, since we have fixed the values of some of the primal variables to 0, the corresponding columns of the system \(Ax=b\) contribute 0 regardless of the coefficient values. Therefore, the only relaxed dependencies are those for indices \(j \notin I\).

Furthermore, let us examine the constraints

$$\begin{aligned} \begin{aligned}&(\varvec{A}^T y)_i \le \varvec{c}_i,&\qquad \text {for } i \in I,\\&(\varvec{A}^T y)_j = \varvec{c}_j,&\qquad \text {for } j \notin I. \end{aligned} \end{aligned}$$
(19)

Since the dual variables do not appear in other constraints of the interval relaxation of (18), we can treat (19) as an independent subsystem. As we are only interested in the optimal solution set, which is formed by the projection of the feasible set onto the primal variables, it is sufficient to test weak feasibility of system (19). If there are no feasible solutions, then the interval relaxation of (18) is also strongly infeasible. Otherwise we can fix a feasible scenario and solve the remaining primal constraints. Moreover, if the subproblem (19) is infeasible for some index set I, then it is also infeasible for all subsets of I and we do not need to check them.

Unfortunately, testing weak feasibility is also difficult due to the fact that the variables in system (19) are unrestricted. For small problems, we can use orthant decomposition on the subproblem and test weak feasibility directly. However, it is also possible to simplify the test by checking some sufficient and necessary conditions for weak feasibility of an interval system. A basic sufficient condition for weak feasibility of (19) is feasibility of the central system

$$\begin{aligned} \begin{aligned}&(A_c^T y)_i \le (c_c)_i,&\qquad \text {for } i \in I,\\&(A_c^T y)_j = (c_c)_j,&\qquad \text {for } j \notin I. \end{aligned} \end{aligned}$$

On the other hand, the well-known Farkas lemma implies that system (19) is strongly infeasible if and only if the system

$$\begin{aligned} \begin{aligned} \varvec{A}_J p + \varvec{A}_I q&= 0,\\ \left( \varvec{c}^T\right) _J p + \left( \varvec{c}^T\right) _I q&\le -1,\\ q&\ge 0 \end{aligned} \end{aligned}$$

is strongly feasible, where \(J = \{1, \ldots , n\}\backslash I\) and the subscript denotes restriction of a matrix to the columns indexed by the given set. A sufficient condition for testing strong feasibility of an interval system, as well as the theorem of alternatives for mixed linear systems derived from the Farkas lemma, can be found in [11].
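
For instance, the central-system condition amounts to a single LP feasibility check; a minimal sketch (with a zero objective in scipy.optimize.linprog; the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def central_system_feasible(A_c, c_c, I):
    """Sufficient condition for weak feasibility of subsystem (19):
    feasibility of the central scenario, tested as an LP with zero objective."""
    m, n = A_c.shape
    in_I = np.zeros(n, dtype=bool)
    in_I[list(I)] = True
    A_ub, b_ub = A_c.T[in_I], c_c[in_I]      # (A_c^T y)_i <= (c_c)_i for i in I
    A_eq, b_eq = A_c.T[~in_I], c_c[~in_I]    # (A_c^T y)_j  = (c_c)_j for j not in I
    res = linprog(np.zeros(m),
                  A_ub=A_ub if len(b_ub) else None,
                  b_ub=b_ub if len(b_ub) else None,
                  A_eq=A_eq if len(b_eq) else None,
                  b_eq=b_eq if len(b_eq) else None,
                  bounds=[(None, None)] * m)
    return res.status == 0
```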

This leads to a general method for approximating the optimal solution set of an ILP, which is exponential in the number of variables. Similarly, an ILP in the form \(\varvec{A}x \le \varvec{b}\) or \(\varvec{A}x \le \varvec{b}, x \ge 0\) can be approached by decomposition into \(2^m\) subproblems analogous to system (18). In this case, the dual variables are non-positive and weak feasibility of the dual system can be tested efficiently (however, when solving problems of type \(\varvec{A}x \le \varvec{b}\) we have to deal with unrestricted primal variables).

4.3 Comparison of the methods

We have already seen that when the coefficient matrix of an interval linear program is fixed, decomposition by complementarity yields the exact optimal solution set or its interval hull. Example 3 shows that this is not true for the orthant decomposition method, which may return an overestimated result even in this special case.

Example 3

Consider the interval program

$$\begin{aligned} \hbox {minimize}\quad&x_1 \\ \hbox {subject to}\quad&x_1-x_2 = [-1,1],\\&x_1\ge 0,\ x_2 \ge 0. \end{aligned}$$
(20)

When using the orthant decomposition method, we approximate the optimal solution set of (20) by the union of feasible sets of linear systems in the form

$$\begin{aligned}&x_1 \le sy,\, x_1 \ge -sy,\\&x_1-x_2 \le 1,\, -x_1+x_2 \le 1,\, x_1 \ge 0,\,x_2 \ge 0,\\&y \le 1,\,-y \le 0,\\&sy \ge 0, \end{aligned}$$

with \(s \in \{-1,1\}\). For the choice \(s = -1\), we have \(y = 0\) and the feasible set of x-solutions is formed by all pairs \((x_1, x_2)\) with \(x_1 = 0\) and \(x_2 \in [0,1]\). In the case of \(s = 1\), we obtain the set described by \(x_1 \in [0,1]\), \(x_2 \ge 0\) and \(x_1 - x_2 \in [-1,1]\). Due to the dependency problem, the approximation also contains solutions that are not optimal for the original ILP (see Fig. 3). Even if we only consider the interval enclosure of the optimal set generated by orthant decomposition, it is still an overestimation of the exact interval hull.

Fig. 3 Feasible set (light gray) and optimal set (thick black) of ILP (20) and its approximation obtained by orthant decomposition (dark gray)

We proceed with an example of a simple linear program with an interval coefficient matrix. In this case, neither of the presented methods can guarantee an exact result in the form of an interval hull. Example 4 shows that at least for some problems, decomposition by complementarity computes a tighter approximation of the optimal set than the orthant decomposition.

Example 4

Consider the interval program

$$\begin{aligned} \hbox {minimize}\quad&-x_2 \\ \hbox {subject to}\quad&x_1+[1,2]x_2 = 1,\\&x_1,\ x_2 \ge 0. \end{aligned}$$
(21)

By applying the interval relaxation approach used in orthant decomposition, we obtain an approximation of the optimal set described by the system

$$\begin{aligned} -x_2 - y&= 0, \end{aligned}$$
(22a)
$$\begin{aligned} x_1+x_2&\le 1,\end{aligned}$$
(22b)
$$\begin{aligned} -x_1-2x_2&\le -1,\end{aligned}$$
(22c)
$$\begin{aligned} x_1, x_2&\ge 0,\end{aligned}$$
(22d)
$$\begin{aligned} y - 0\left|y \right|&\le 0,\end{aligned}$$
(22e)
$$\begin{aligned} 1.5y - 0.5\left|y \right|&\le -1. \end{aligned}$$
(22f)

Since constraint (22e) reads \(y \le 0\), we only need to examine a single orthant and no decomposition is needed. Moreover, constraint (22f) can then be rewritten as \(y \le -\frac{1}{2}\), which combined with (22a) yields \(x_2 \ge \frac{1}{2}\). The remaining constraints (22b), (22c) and (22d) only restrict the approximation to the feasible region. The resulting set is depicted in Fig. 4.

Fig. 4 Feasible set (light gray) and optimal set (thick black) of ILP (21) and its approximation obtained by orthant decomposition (dark gray)

Let us now obtain an approximation of the optimal set using decomposition by complementarity. First of all, we see that the vector (0, 0) is not even feasible and thus setting \(I = \{1,2\}\) yields no solutions. If we set \(x_1 = 0\), then we need to check feasibility of the dual system \([1,2]y = - 1, y \le 0\). Clearly, the system is feasible and adds to the approximation all vectors satisfying \(x_1 = 0\) and \([1,2]x_2 = 1\). Setting \(x_2 = 0\), we obtain the infeasible dual system \([1,2]y \le -1, y = 0\) and this also implies infeasibility of the system corresponding to \(I = \emptyset \). The resulting approximation is the interval vector \(\left( 0, \left[ \frac{1}{2},1\right] \right) \), which is equal to the exact optimal solution set.

4.4 Computational experiments

We have performed an experiment to assess the advantage of using decomposition by complementarity on problems with a fixed coefficient matrix, as well as the drawback of its worse theoretical time complexity compared to orthant decomposition. Both methods were implemented in Python 3.5 using the Gurobi 7.0.2 solver for linear programming. The experiment was carried out on a computer with 4 GB RAM and an Intel Core i5-2410M (2.30 GHz) processor.

Table 1 Comparison of decomposition by complementarity and orthant decomposition on instances with a fixed coefficient matrix

We have used a data set of instances split into groups of 30, based on the number of variables and constraints. The entries of the central vectors \(b_c\), \(c_c\) and the matrix A were chosen randomly from the interval \([-1000, 1000]\), and entries of the radial vectors \(b_\Delta , c_\Delta \) from [0, 100] with uniform distribution. Performance of the methods was measured based on tightness of the generated interval enclosure and average time needed to solve an instance.
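
For reference, the instance generator can be sketched as follows (our reconstruction of the setup described above; the exact code used in the experiments is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng()

def random_instance(m, n):
    """Random ILP data with a fixed matrix A and interval vectors b, c,
    given by centers in [-1000, 1000] and radii in [0, 100]."""
    A = rng.uniform(-1000, 1000, size=(m, n))
    b_c, c_c = rng.uniform(-1000, 1000, size=m), rng.uniform(-1000, 1000, size=n)
    b_d, c_d = rng.uniform(0, 100, size=m), rng.uniform(0, 100, size=n)
    return A, b_c - b_d, b_c + b_d, c_c - c_d, c_c + c_d
```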

Table 1 provides an overview of the results obtained in the experiment. The column labelled “Ties” shows the number of instances for which both methods computed the same (optimal) interval enclosure, while the column “Won by Compl” shows the number of instances for which the orthant decomposition returned an overestimated result. Average computation times (in seconds) of the orthant decomposition (Orth) and decomposition by complementarity (Compl) on a single instance of given dimensions are also presented. Note that both methods could easily be parallelized, providing substantial room for speedup.

We can observe that the advantage of decomposition by complementarity in tightness of approximation appears mainly for underdetermined problems with \(m < n\), which naturally also exhibit the largest difference between the computation times of the two methods. For square systems, orthant decomposition was able to compute the exact interval hull in all tested instances, but it also lost its time advantage over complementarity decomposition.

Table 2 Comparison of decomposition by complementarity and orthant decomposition on general instances

Table 2 summarizes the results obtained in a comparison of the two methods on general problems with an interval matrix. For the decomposition by complementarity, the basic variant with interval relaxation and an exponential-time condition for checking dual weak feasibility was used. In this test, each of the algorithms was able to find a tighter enclosure on some of the instances. Note that even in the case of a tie, the two enclosures found by the algorithms need not be exactly the same—they can also be incomparable with respect to inclusion.

The computational experiments confirm the advantage of the decomposition by complementarity when aiming to find the exact optimal solution set in the special class of programs with a fixed coefficient matrix. While orthant decomposition may be faster, depending on the size of the program, it was not able to obtain the optimal result in a significant number of tested instances. In the general case, each of the methods provided a better result on some of the instances, and in many cases the results obtained by both methods were similar regarding the quality of the computed approximation.

Recall that both of the tested algorithms require exponential time in the worst case, which is justified by the NP-hardness of the considered problem (even in the special case with a fixed matrix). The computation times also indicate that directly using these decomposition methods to obtain exact results, or tight approximations in the general case, may be intractable for larger programs. In such cases, the idea of the proposed decomposition by complementarity can serve as a basis for designing a polynomial-time algorithm or heuristic that obtains an approximate solution, or it can be applied to solve smaller subprograms to optimality.

5 Conclusion

We have studied fundamental properties of the optimal solution set of an interval linear program and formulated sufficient conditions for its closedness, boundedness and connectedness. For the special class of problems with a fixed coefficient matrix, we have shown that the optimal set is connected and polyhedral. Regarding the theoretical complexity of testing the studied properties, we have proved that the problem of checking boundedness of the optimal set is co-NP-hard for inequality-constrained interval programs.

Further, we have also addressed the problem of computing an approximation of the optimal solution set. Aiming for a tight enclosure, we have presented a new decomposition method based on complementary slackness. Our method can be used to find the exact optimal solution set for problems, in which intervals only occur in the right-hand-side vector and the objective function. We have also proved that the problem of computing the interval hull of the optimal set remains NP-hard even in this special case, which justifies the use of an exponential-time algorithm. Finally, we have performed computational experiments to evaluate the advantage of our method over the existing orthant decomposition method on problems with a fixed coefficient matrix and the time sacrificed in order to obtain a tight approximation.

Possible directions for future research include strengthening the theoretical results regarding the properties of the optimal set, namely its polyhedrality and closedness. From an algorithmic standpoint, methods providing a tight enclosure of the optimal solution set for general problems are of interest. Such methods can also serve as a basis for deriving faster approximation algorithms useful for practical purposes.