Abstract
A strongly polynomial-time algorithm is proposed for the strict homogeneous linear-inequality feasibility problem in the positive orthant, that is, the problem of finding \(x\in \mathbb {R}^n\) such that \(Ax > 0\), \(x > 0\), for an \(m\times n\) matrix \(A\), \(m\ge n\). The algorithm requires \(O(p)\) iterations and \(O(m^2(n+p))\) arithmetic operations to ensure that the distance between the iterate and the solution is at most \(10^{-p}\). No matrix inversion is needed. An extension to the non-homogeneous linear feasibility problem is presented.
1 Introduction
The linear-inequality feasibility problem has been the object of research at least since Fourier (1824). His elimination method was rediscovered by Motzkin (1936) and analysed by Kuhn (1956), and can be considered the first effective technique for solving that family of problems.
Many years after Fourier (1824), the relaxation and centering methodologies developed to solve linear equations in the seminal papers of Kaczmarz (1937) and Cimmino (1938) were extended to linear inequalities by Agmon (1954), Motzkin and Schoenberg (1954), and Merzlyakov (1963). They represent an important area of both theoretical and applied research. The exponential nature of relaxation methods, in terms of the data, was established by Todd (1979) and Goffin (1982).
Some classes of projection algorithms are well studied. For intermittent projection algorithms, Bauschke and Borwein (1996) proved that the sequence converges linearly to some solution point at a rate that is independent of the starting point. For cyclic projection algorithms, Gubin et al. (1967) and Herman et al. (1978) proved that the sequence converges in norm to some solution point. For block projection algorithms, Censor et al. (1988), and for weighted projection algorithms, Eremin (1969), proved that the sequence converges linearly to some solution point with a rate independent of the starting point. See the definitions of those classes of projection methods in Bauschke and Borwein (1996).
Censor and Elfving (1982) proposed a least-squares algorithm for the linear-inequality feasibility problem, which has a Cimmino-like algorithm as a special case, and proved convergence for both.
Recently, Censor et al. (2012) showed the effectiveness of projection methods for large scale linear inequality feasibility problems. They considered problems up to tens of thousands of unknowns satisfying up to hundreds of thousands of constraints.
Closely related to projection methods, as shown for example by Bauschke and Borwein (1996), subgradient algorithms occupy a special place amongst all the methods discovered, both on their own merits and for their extensions and applications; see Eremin (1969), Polyak (1987) and Shor (1985).
Center methods also play an essential role in feasibility problems. The usual setting is that of general convex sets; nevertheless, the subproblems (local models) are frequently linear feasibility problems. We now quote some classical papers whose theory is based on geometric concepts. Levin (1965) and Newman (1965) considered the center of gravity of a convex body; Khachiyan (1979), in his remarkable paper, Nemirovsky and Yudin (1983), and Shor (1985) considered the ellipsoid method. The papers by Khachiyan (1979) and Nemirovsky and Yudin (1983) also proved polynomial-time complexity. Tarasov et al. (1988) considered the center of the maximum-volume ellipsoid inscribed in the body. The polynomial-time complexity of approximating the maximal inscribed ellipsoid for a polytope is given in Khachiyan and Todd (1993).
As is widely known, the concept of analytic center leads to more efficient algorithms than those considered above. This line begins with Huard and Lieu (1966) and Huard (1967), who considered a generic center of the body that maximizes a distance function. Vaidya (1996) considered the volumetric center, the maximizer of the determinant of the Hessian matrix of the logarithmic barrier function. Goffin et al. (1996) considered a dual column-generation algorithm for solving general convex feasibility problems; in each iteration an approximation of the analytic center of a set given by the intersection of linear inequalities is computed. The authors of Vaidya (1996) and Goffin et al. (1996) obtained the usual convergence and polynomial-time complexity results. The linear feasibility problem is considered by Filipowsky (1995), who proposes “an algorithm that gives an approximate solution of a desired accuracy to a system of linear inequalities specified with approximate data, through the using of knowledge that the actual instance is feasible”. The polynomial-time complexity of that algorithm is a direct consequence of calling a polynomial-time linear programming algorithm in each main iteration.
Ho and Kashyap (1965) considered the strict feasibility problem \(Ax>0\) through the minimization of a quadratic model associated with the feasibility problem, namely \(\min \Vert Ax-b\Vert _2\), where \(b>0\) is conveniently chosen. The authors prove exponential convergence of the algorithm.
Linear feasibility problems have important applications; see, for example, proton therapy planning in Chen et al. (2010), set theoretic estimation in Combettes (1993), image reconstruction in computerized tomography in Herman (2009), radiation therapy in Herman and Chen (2008), and image reconstruction in Herman et al. (1978).
One of the most important results in the endeavour to obtain a strongly polynomial-time algorithm is in Tardos (1986). In another direction, some papers have presented methods whose complexity depends on the condition number of the constraint matrix; see Vavasis and Ye (1996) and Ye (1986).
Chubanov (2012) recently proposed a strongly polynomial-time algorithm which either finds a solution of a linear system \(Ax = b, 0 \le x \le 1\) or decides that the system has no \(0,1\) solution. In Chubanov (2010), a similar result is reached for integer solutions. In both cases, the data are integers. See also Basu et al. (2014) for some extensions of Chubanov’s algorithm. Taking a different approach, for certain classes of polytopes, Barasz and Vempala (2010) obtained a strongly polynomial-time algorithm for linear programming.
Our work contributes to establishing the existence of a strongly polynomial-time algorithm for the restricted case of open sets given by \(Ax>b,\; x>0\). There is thus no immediate generalization to linear programming. The most important theoretical results underpinning our method are Gordan’s and Farkas’ alternative theorems and Banach’s fixed-point theorem. Our approach differs from those of the authors above, particularly in not requiring integer solutions.
This paper is organized as follows: after this introduction, Sect. 2 presents the problem and the general idea of the method. We basically want to solve the feasibility problem through a certain convex minimization problem whose cost function mixes barrier and quadratic terms and depends on some parameters. Sect. 3 describes the method, which uses the Banach fixed-point theorem (see Banach 1922), and Sect. 4 develops its convergence theory. In Sect. 5 the extension to non-homogeneous problems is achieved. The conclusions are given in Sect. 6.
2 Strict homogeneous feasibility and associated problem
This section presents the strict homogeneous and associated convex problems, showing the relation between them. The main purpose of this section is to prove that if an approximate null solution is obtained by our method, then the interior of the set \(V\) in (2.1), which defines the problem, is empty. As an immediate consequence, the complexity of obtaining a null solution is the same as will be proved for the method.
We are concerned with the feasibility problem in the following form: find an interior point of
\(V=\{x\in \mathbb {R}^n : Ax>0,\; x>0\}\),   (2.1)
where the matrix \(A\) is of size \(m\times n\), with rows \(a_i^T\), \(i=1,\ldots ,m\). Note that any feasibility problem \(Ax> 0, \; x\in \mathbb {R}^n\), can be expressed in the above form.
Our aim is to find an interior point of \(V\) or to show that its interior is empty.
2.1 Related problem
Consider the following convex problem:
where \(\rho \) and \(\mu \) are positive parameters. Notice that (2.2) is feasible if there exists \((x,y)>0\) satisfying the equality constraints. Then we have the following:
Lemma 2.1
Only one of the following statements is true:
Proof
For statement 1, we see that if int\((V)=\emptyset \) then one of the following situations holds: \(x>0 \) implies \(Ax\le 0\), or there exists \(x\le 0\) such that \(Ax>0\). Therefore (2.2) is not feasible. Otherwise, statement 2 holds. \(\square \)
Now, let \(s\in \mathbb {R}^m\) be the Lagrange multiplier vector associated with the equalities in (2.2). Then the KKT equations can be written:
Clearly, as (2.2) is a convex problem with linear constraints, these conditions are necessary and sufficient to determine its solution, if one exists. We state the following lemma.
Lemma 2.2
Let \(\rho \) and \(\mu \) be positive parameters. Suppose (2.2) is feasible. Then there exists a dual variable \(s\in \mathbb {R}_{++}^m\) satisfying
which is a (dual) solution to the KKT equations (2.3).
Proof
The existence of \(s\in \mathbb {R}^m\) follows from the KKT theorem for convex problems. Now, the first group of KKT equalities leads to a quadratic equation in \(x_j\), whose positive root (the only solution we are interested in) is
Using that formula, we can express \(a_i^Tx(s)\), \(i=1,\ldots ,m\), explicitly as a function of \(s\); its positivity is implicitly ensured by the positivity of \(s_i\) (and \(y_i\)). Now, from the second equation in (2.3), \(y_i=1/s_i\), and the primal feasibility equality gives
This is a square system in the variable \(s\), as given in (2.4) in the hypothesis. \(\square \)
Remark 2.3
The existence of a solution of problem (2.2) is also constructively ensured through the closed-form expressions of \(x(s)\), given in (2.5), and \(y=Ax(s)\), given in (2.6), valid for any \(s>0\). Uniqueness will be ensured by the convergence theorem for Algorithm DVA in Sect. 4.
Next, the relationship between the feasibility and the optimization problems will be stated. Imposing the first hypothesis on the parameters, for some given small \(\varepsilon \), let
Notice that (2.7) allows (2.2) to be interpreted as a perturbation of the linear version of (2.2).
2.2 Relation between the problems
In this section, we apply two alternative lemmas, one by Gordan (1873), the other by Farkas (1901). From the foregoing discussion, it can be supposed that, in solving the set of equations (2.4) by some algorithm, a vector \(s^*>0\) is produced such that \(x^*>0\) and \(Ax^*>0\), as given by (2.5) and (2.6), respectively. Therefore, only two cases need be considered, with \(\varepsilon \) given as above:
-
(a)
\(2\rho x_i^*> 0\), \(2\rho x_i^*\ne O(\varepsilon )\), for some \(i=1,\ldots ,n\), then we are done.
-
(b)
\(2\rho x_i^*> 0\), \(2\rho x_i^*= O(\varepsilon )\), for all \(i=1,\ldots ,n\).
Clearly, the coefficient \(2\rho \) prevents a false null entry of the solution, if \(\rho \) is large.
Theorem 2.4
Assume that \(2\rho x_i^*> 0\) and \(2\rho x_i^*= O(\varepsilon )\) for all \(i=1,\ldots ,n\), with \(\mu \) satisfying (2.7) and \(\rho >0\) given. Then, in the feasibility problem (2.1), within an \(\varepsilon \) approximation, \(\hbox {int}(V)=\emptyset \). Otherwise, in case (a), any solution of (2.4) yields a positive solution to the feasibility problem.
Proof
From (2.5), due to the hypothesis satisfied by \(\mu \), the term \((A^Ts)_j\le O(\varepsilon )\). Therefore, the following cases have to be considered:
-
i.
\((A^Ts)_j=O(\varepsilon ),\;j=1,\ldots ,n\)
-
ii.
\((A^Ts)_j<0\), for at least some \(j=1,\ldots ,n\), \((A^Ts)_j=O(\varepsilon )\), for the remaining entries.
Now, in order to apply alternative theorems, the above conditions are approximated by:
-
i’.
\((A^Ts)_j =0,\;j=1,\ldots ,n\)
-
ii’.
\(A^Ts=c\le 0\), \(c \ne 0\), for some vector \(c\).
For i\('\), we apply Gordan’s alternative Lemma, which in our notation is written:
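A standard form of Gordan’s theorem (1873), reproduced here for reference in the present notation, is the following alternative:

```latex
% Gordan's alternative theorem: exactly one of (G1), (G2) has a solution.
\begin{array}{ll}
\text{(G1)}\quad & \exists\, x \in \mathbb{R}^n \ \text{such that} \ Ax > 0;\\[2pt]
\text{(G2)}\quad & \exists\, s \in \mathbb{R}^m \ \text{such that} \ A^T s = 0,\ s \ge 0,\ s \ne 0.
\end{array}
```

In case i\('\) the system (G2) is solvable (indeed with \(s>0\)), so \(Ax>0\) has no solution at all, and in particular none with \(x>0\).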
Thus, we can conclude that \({\hbox {int}}(V)=\emptyset \).
In case ii\('\), we apply Farkas’ Lemma:
Clearly, one of the following is true:
-
1.
\(\{Ax< 0,\; c^Tx<0\}\) has some solution: then \(x\) is non-negative, \(Ax< 0\), so \(\hbox {int}(V)=\emptyset .\)
-
2.
\(\{Ax\ge 0,\; c^Tx>0\}\) has some solution. Only \(x<0\) satisfies that condition, so \(\hbox {int}(V)=\emptyset .\)
-
3.
\(\{Ax< 0,\; c^Tx>0\}\) has some solution. Only \(x<0\) satisfies those inequalities, meaning that \(\hbox {int}(V)=\emptyset .\) The proof is complete.\(\square \)
Remark 2.5
Given some dual solution \(\hat{s}>0\), we have, by construction, either \(x(\hat{s})>0\) and \(Ax(\hat{s})>0\), or \(x(\hat{s})=O(\varepsilon )\) and \(Ax(\hat{s})=O(\varepsilon )\); that is, \(\hbox {int}(V)\) is empty within an \(\varepsilon \) approximation, as specified in the above theorem.
3 An iterative Banach procedure
Remark 3.1
From now on, to simplify notation, we drop the index range \(l=1,\ldots ,m\), or proceed similarly, except where necessary for clarity.
3.1 Motivation
As motivation for the algorithm, we introduce a class of methods designed to ensure the main hypotheses needed to apply fixed-point theory. In the proposed algorithm we keep \(s\) in the hypercube \([1,2]^m\). Now, denote
and define the following abstract procedure:
where \(H_i(s_i)\) is some positive function and \(F_i(s)\) is given in (2.4). Then the iteration function \(\Psi _i(s)\) reads:
where \(T_i(s)=H_i(s_i)G_i(s)\).
As will become evident in the next sections, two points determine the main conditions on \(\Psi _i(s)\). The first is the inclusion property:
for \(\delta >0\) small (we cannot ensure that the result holds on the whole hypercube). In fact, the non-obvious property is to show that \(\Psi _i(s)\le 2\) for \(s\in [1,2-\delta ]^m\). The second point is related to the rate of convergence: in particular, we must obtain conditions such that \(\partial \Psi _i(s)/\partial s_i \le \tau <1,\;\;s\in [1,2]^m\).
Let us begin with the latter. From the expression of \(\Psi _i(s)\) above, we must have:
In terms of \(\rho \), this reads:
Then this inequality is solvable for \(\rho >0\) if
Clearly, we must require that the last derivative be negative. A simple choice is to fix \(H_i(s_i)=s_i Q_i(s_i)\), with \(Q_i(s_i)\) a decreasing function.
Now we take the condition \(\Psi _i(s)\le 2\) into account for \(s\in [1,2-\delta ]^m\). Using (3.2), it reads:
or, in terms of \(\rho \)
Let us consider a particular choice, the exponential function \(Q_i(s_i)= e^{-\alpha s_i}\), \(\alpha >0\). Note that the coefficient of \(\rho \) is then \(2-s_i-e^{-\alpha s_i}\). It is easy to see that this is a concave function, so its minimum is attained at one of the endpoints, namely at \(2-\delta \). Therefore the \(\rho \) coefficient is positive if \(\delta >e^{-\alpha (2-\delta )}\), giving the bound \(\alpha >(2-\delta )^{-1}\ln \delta ^{-1}\). Returning to (3.3), we have in this case \(\tau >1-\alpha e^{-\alpha s_i}\). Thus we have, approximately, \(\tau >1+(e/2)\delta \ln \delta \), a rather poor rate of convergence. This shows that the method depends strongly on the choice of \(H_i(s_i)\).
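These bounds are easy to verify numerically. The snippet below (a sketch; the value of \(\delta \) and the safety factors are arbitrary choices) checks that the \(\rho \)-coefficient \(2-s-e^{-\alpha s}\) is positive at the critical endpoint \(s=2-\delta \) exactly when \(\alpha \) exceeds \((2-\delta )^{-1}\ln \delta ^{-1}\):

```python
import math

def rho_coefficient(s: float, alpha: float) -> float:
    """The coefficient 2 - s - exp(-alpha*s) of rho, from the motivation above."""
    return 2.0 - s - math.exp(-alpha * s)

delta = 0.01
alpha_bound = math.log(1.0 / delta) / (2.0 - delta)

# Just above the bound, the coefficient is positive at the critical endpoint...
assert rho_coefficient(2.0 - delta, 1.05 * alpha_bound) > 0
# ...and just below the bound it fails there.
assert rho_coefficient(2.0 - delta, 0.95 * alpha_bound) < 0
# At the other endpoint s = 1 the coefficient, 1 - exp(-alpha), is always positive.
assert rho_coefficient(1.0, alpha_bound) > 0
```

Concavity of \(2-s-e^{-\alpha s}\) (its second derivative is \(-\alpha ^2 e^{-\alpha s}<0\)) is what reduces the positivity check to the two endpoints.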
3.2 Iterative procedure
Define
where \(\alpha \) and \(\beta \) are conveniently chosen and \(F_i(s)\) is given in (2.4).
Dual Variable Algorithm (DVA)
Input:
\(\alpha >0\);
\(\beta >0\);
\(\hbox {tol}\) is the accuracy;
\(1\le s_i^0\le 2, \;i=1,\ldots ,m\) is the initial iterate;
Compute:
\(s_i^{1}=\Psi _i(s^0) \;i=1,\ldots ,m\)
\(t_0:=\Vert s^1-s^0\Vert _\infty \)
begin
for \(k\ge 0\)
\(s_i^{k+1}=\Psi _i(s^k), \;i=1,\ldots ,m\)
\(t_k:=\Vert s^{k+1}-s^k\Vert _\infty \)
if \(t_k\le \hbox {tol}\; t_0\), then stop.
Clearly, the convergence of this algorithm ensures that \(F_i(s^k)\) tends to \(0\), i.e., that we approach a solution of the system (2.4) and hence of problem (2.2).
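The loop above can be sketched as a generic fixed-point iteration with the relative stopping rule \(t_k\le \hbox{tol}\; t_0\). Since the paper’s map \(\Psi \) depends on the problem data through \(F_i(s)\) in (2.4), the driver below substitutes a hypothetical affine contraction on \([1,2]^m\) purely to exercise the loop and the stopping test:

```python
import numpy as np

def dva(psi, s0, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration with the DVA stopping rule t_k <= tol * t_0."""
    s = np.asarray(s0, dtype=float)
    s_next = psi(s)                        # s^1 = Psi(s^0)
    t0 = np.max(np.abs(s_next - s))        # t_0 = ||s^1 - s^0||_inf
    s = s_next
    for _ in range(max_iter):
        s_next = psi(s)                    # s^{k+1} = Psi(s^k)
        t_k = np.max(np.abs(s_next - s))   # t_k = ||s^{k+1} - s^k||_inf
        s = s_next
        if t_k <= tol * t0:                # stop when t_k <= tol * t_0
            break
    return s

# Hypothetical stand-in for Psi: an affine contraction with factor 0.3 that
# maps [1,2]^m into itself (fixed point 12/7 in every coordinate).
s_star = dva(lambda s: 1.2 + 0.3 * s, np.full(5, 1.0))
assert np.allclose(s_star, 12.0 / 7.0)
assert np.all((1.0 <= s_star) & (s_star <= 2.0))
```

For the paper’s algorithm one would replace the stand-in by the map \(\Psi _i\) defined in Sect. 3, built from \(F_i(s)\) and the chosen \(H_i\).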
4 Convergence theory
This section provides the hypotheses necessary to apply the fixed-point theorem, i.e., to show that the variable \(s\) and the iterate \(\Psi (s)\) belong to the same set, and to obtain a convergence rate \(\tau <1\).
Remark 4.1
Consider the following simple estimation which uses the fact that \(s_i\in [1,2]\). From (3.1),
which approximately verifies:
We denote by
Regarding the partial derivatives, we have, for all \(i,l\):
Now, returning to \(\Psi _i(s)\), we can write:
where
4.1 Assuring inclusion
Our aim is to show that, for \(s\in [1,2]^m\), \(\Psi _i(s)\in [1,2]\). Indeed, we are only able to show an approximate inclusion, that is, \(\Psi _i(s)\in [1,2]\) for \(s\in [1,2-\delta ]^m\), for some small \(\delta \).
Proposition 4.2
Let \(\beta _2<\beta <\beta _1\), \(\alpha <\min \{\alpha _1,\alpha _2\}\), \(\rho \ge \max \{\rho _1,\rho _2\}\), with
and
Then \(\ln (\alpha s_i+\beta )<0\), for all \(s_i\in [1,2]\), \(\Psi _i(s)\in [1,2]\), for all \(s\in [1,2-\delta ]^m\).
Proof
Let us first prove that \(\Psi _i(s)\ge 1\). From (4.3), \(\Psi _i(s)\ge 1\) is equivalent to
or, in terms of \(\rho \):
Then, in order for the coefficient of \(\rho \) to be positive, we fix, in the worst case, \(s_i=1\). The remaining term is positive due to the bounds (4.5) and (4.7) of the hypothesis. We can thus obtain the following lower bound for \(\rho \):
Then, with (4.1), we get \(\rho _1\), (4.9).
In the other sense, we have to show that
Similarly to the above, we rewrite this expression in terms of \(\rho \), giving:
To analyse the \(\rho \) coefficient, we observe that its derivative is \(-1+\alpha (\alpha s_i+\beta )^{-1}\) and its second derivative is \(-\alpha ^2(\alpha s_i+\beta )^{-2}\), a negative function. Thus the coefficient is a concave function, and its minimum is attained at one of the endpoints. Therefore we must compare \(1+\ln (\alpha +\beta )\) and \(\delta +\ln [(2-\delta )\alpha +\beta ]\). Clearly, the latter expression is the smaller one for \(\delta \) sufficiently small. It is positive if we impose the bound \(\beta >\beta _2\), (4.6) in the hypothesis. The second upper bound for \(\alpha \) is derived from the compatibility between the lower and upper bounds on \(\beta \). Finally, we can return to (4.11) to get
A simple computation leads to \(\rho _2\), (4.10). \(\square \)
4.2 Convergence rate
Our target is the relationship between successive iterations, that is,
for some \(0<\tau <1\). Then, from the Algorithm DVA we have
Now we are going to analyse the partial derivatives of \(\Psi _i(s)\). We need the following upper bound, which uses (4.1) and (4.2). Straightforward calculations give:
Proposition 4.3
Suppose the hypotheses of Proposition 4.2 hold. Let \(\alpha <1/2\), \(\beta =(\beta _1+\beta _2)/2\), \(\tau >1-\alpha \) (for \(\delta \) sufficiently small), and let \(\rho \) satisfy \(\rho \ge \max \{\rho _3, \rho _4, \rho _5\}\), where
and
Then
Proof
-
(I)
The case \(i=l\)
We first show that \(\partial \Psi _i(s)/\partial s_i\le \tau <1\). Using the formulation (4.3), we have
We want that
or, in terms of \(\rho \),
We then get
and, in the worst case:
Now we use the bounds on \(\alpha \) and \(\beta \). Fixing \(\beta =(\beta _1+\beta _2)/2\) gives, after some calculation:
We can now, for \(\delta \) sufficiently small, take the estimate \(\tau >1-\alpha \). Regarding the bounds for \(\alpha \), we see that \(\lim _{\delta \rightarrow 0}\alpha _2=1\), so we can take \(\alpha <\alpha _1=1/2\).
Returning to (4.16), we obtain
We then use (4.12) to get \(\rho _3\), given in (4.13) in the hypothesis.
Now we take the \(\tau \) condition into account in the opposite direction, that is,
which leads to
Clearly, the coefficient of \(\rho \) is always positive, giving the lower bound \(\rho _4\) (4.14) of the hypothesis.
-
(II)
The case \(i\ne l\)
The condition on \(\tau \) reduces to
which is ensured if we take the lower bound \(\rho _5\), (4.15) of the hypothesis. \(\square \)
4.3 Convergence theorem
Next we present the main result of this section, the Banach fixed-point theorem applied to our iterative procedure. The proof is classical and can be found in many functional-analysis books (see, e.g., Kantorovich and Akilov 1959); it is given here for the sake of completeness. In what follows, we assume the previous results hold.
Theorem 4.4
Let \(s^0\in [1,2]^m\) be given. Then the sequence \(\{s^k\}\) produced by Algorithm DVA converges, and its limit \(s^*\) is unique. We also have the following estimation:
\(\Vert s^k-s^*\Vert _\infty \le \dfrac{\tau ^k}{1-\tau }\,\Vert s^1-s^0\Vert _\infty .\)   (4.17)
Proof
We know that, for each \(i=1,\ldots ,m\), \(k\ge 1\),
Then,
with \(\tau <1\). We now state that, for all \(k\in \{1,2,\ldots \}\), the following is true:
With that aim, we proceed by induction. The statement is true when \(k=1\) because, from (4.19), we have \(\Vert s^{2}-s^1\Vert _\infty \le \tau \Vert s^1-s^0\Vert _\infty .\) Suppose the statement is valid for some \(l\in \{1,2,\ldots \}\). Then
given the inductive assumption. Therefore, the above claim is true.
Let \(\varepsilon >0\). Since \(0<\tau <1\), there exists some sufficiently large \(J\in \{1,2,\ldots \}\) such that
Using property (4.20), for any \(j,k\in \{0,1,\ldots \}\) with \(j>k\ge J\),
Therefore, \(\{s^k\}_{k\ge 0}\) is a Cauchy sequence, its convergence being an immediate consequence of the completeness of the space (a closed box). Hence, we can take \(s^*=\lim _{k\rightarrow \infty }s^k\).
Proceeding now to the uniqueness property, we first show that \(s^*\) is a fixed point of \(\Psi _i(s)\). Note that, for any \(k\in \{0,1,\ldots \}\),
Then, as the sequence is convergent, the term on the right converges to \(0\), and hence
Therefore, simultaneously, as \(k\rightarrow \infty \), so
and \(s^k\rightarrow s^*\). As the limits are unique, it must be true that
In order to demonstrate uniqueness, suppose \(w\) also satisfies \(w_i=\Psi _i(w)\). Then
As \(\tau <1\), we have achieved our aim. \(\square \)
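As a numerical illustration (with a hypothetical affine contraction of factor \(\tau =0.3\) standing in for the paper’s \(\Psi \)), the sketch below checks the a priori estimate \(\Vert s^k-s^*\Vert _\infty \le \tau ^k(1-\tau )^{-1}\Vert s^1-s^0\Vert _\infty \) along the iterates; for an affine map the estimate holds with equality, so a tiny floating-point cushion is allowed:

```python
import numpy as np

tau = 0.3                          # contraction factor of the toy map
psi = lambda s: 1.2 + tau * s      # hypothetical stand-in; fixed point 12/7

s0 = np.full(4, 1.0)
s1 = psi(s0)
t0 = np.max(np.abs(s1 - s0))       # t_0 = ||s^1 - s^0||_inf
s_star = 12.0 / 7.0                # exact fixed point of the toy map

# Check the Banach estimate ||s^k - s*||_inf <= tau^k / (1 - tau) * t_0.
sk = s1
for k in range(1, 30):
    err = np.max(np.abs(sk - s_star))
    bound = tau**k / (1.0 - tau) * t0
    assert err <= bound * (1.0 + 1e-9) + 1e-12   # cushion for rounding
    sk = psi(sk)
```

The cushion is needed only because the affine example saturates the bound; any strict contraction satisfies it with room to spare.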
4.3.1 Complexity
This section presents a complexity result based on a given tolerance \(10^{-p}\), for \(p\ge 1\). Note that the specified tolerance applies to the interval \([1,2]^m\), and is thus independent of the feasibility-problem data; \(\tau \) is also data-independent. Consequently, the number of iterations of the algorithm is data-independent as well.
Theorem 4.5
Let the error between the solutions \(s^*\) and \(s^K\) be \(10^{-p}\), for \(p\ge 1\). Algorithm DVA then produces a solution with at most
iterations and \(O(m^2(n+p))\) arithmetic operations.
Proof
For the first result, given the tolerance \(10^{-p}\) and using (4.17), a bound on the iteration count \(K\) must be computed such that
This produces the estimation given in the hypothesis.
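In detail (a sketch assuming the estimation (4.17) is the standard a priori bound \(\Vert s^K-s^*\Vert _\infty \le \tau ^K(1-\tau )^{-1}t_0\), with \(t_0=\Vert s^1-s^0\Vert _\infty \)):

```latex
% Requiring the a priori bound to fall below the tolerance 10^{-p}
% and taking logarithms (recall 0 < tau < 1, so ln(1/tau) > 0):
\frac{\tau^{K}}{1-\tau}\, t_0 \le 10^{-p}
\;\Longleftrightarrow\;
K \ln\frac{1}{\tau} \ge p \ln 10 + \ln\frac{t_0}{1-\tau}
\;\Longleftrightarrow\;
K \ge \frac{p \ln 10 + \ln\!\left(t_0/(1-\tau)\right)}{\ln(1/\tau)} = O(p).
```

Since \(s^0,s^1\in [1,2]^m\) gives \(t_0\le 1\), and \(\tau \) is data-independent, the right-hand side grows linearly in \(p\) with a constant independent of the problem data.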
We now proceed to determine the total arithmetic complexity.
Clearly, the main fixed computation is the product \(AA^T\) in \(F_i(s)\); therefore, the fixed cost takes \(O(m^2n)\) operations.
At each iteration, the product \(AA^Ts^k\) has a maximum cost of \(O(m^2)\) operations, leading to the estimation given in the hypothesis. \(\square \)
5 Strict non-homogeneous feasibility problem
The problem to consider is:
where \(A\) is an \(m\times n\) matrix and \(b\in \mathbb {R}^m\).
As usual, we can define the following equivalent strict homogeneous problem:
A natural adaptation of the previous related convex problem, considered in Sect. 2, is:
Then, proceeding as in Sect. 2.1 and, as before, designating by \(s\in \mathbb {R}^m\) the Lagrange multipliers associated with the above equality constraints, we obtain:
and
Also, \(1/s_i=y_i\), for \(i=1,\ldots ,m\), which determines the system:
Note that the results of Sect. 3 are immediately applicable.
6 Conclusions
We describe a new algorithm for the strict homogeneous linear-inequality feasibility problem in the positive orthant. It has some desirable and useful properties:
-
the number of iterations depends only on the given error \(10^{-p}\), for some positive integer \(p\);
-
the overall complexity depends only on \(p\) and the dimensions \(m\) and \(n\) of the problem;
-
it is based only on products of matrices and vectors, and is comparable in terms of arithmetic error and computational time with most known algorithms that use matrix inversion;
-
the algorithm is matrix-rank independent;
-
the structure of the method allows parallel procedures to be used.
Our current research is directed towards the general feasibility problem \(Ax=b,\; x\ge 0\).
References
Agmon S (1954) The relaxation method for linear inequalities. Can J Math 6(3):382–392
Banach S (1922) Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fund Math 3(7):133–181
Barasz M, Vempala S (2010) A new approach to strongly polynomial linear programming. In: ICS, Proceedings, Tsinghua University Press, pp 42–48
Basu A, De Loera JA, Junod M (2014) On Chubanov’s method for linear programming. INFORM J Comput 26(2):336–350
Bauschke HH, Borwein JM (1996) On projection algorithms for solving convex problems. SIAM Rev 38:367–426
Censor Y, Altschuler MD, Powlis WD (1988) On the use of Cimmino’s simultaneous projections method for computing a solution of the inverse problem in radiation therapy treatment planning. Inverse Probl 4:607–623
Censor Y, Chen W, Combettes PL, Davidi R, Herman GT (2012) On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Comput Optim Appl 51:1065–1088
Censor Y, Elfving T (1982) New methods for linear inequalities. Linear Algebra Appl 42:199–211
Chen W, Craft D, Madden TM, Zhang K, Kooy HM, Herman GT (2010) A fast optimization algorithm for multi-criteria intensity modulated proton therapy planning. Med Phys 7:4938–4945
Cimmino G (1938) Calcolo approssimato per le soluzioni dei sistemi di equazioni lineari. La Ricerca Scientifica ed il Progresso Tecnico nell’Economia Nazionale 9:326–333. Consiglio Nazionale delle Ricerche, Ministero dell’Educazione Nazionale, Roma
Combettes PL (1993) The foundations of set theoretic estimation. Proc IEEE 81:182–208
Chubanov S (2012) A strongly polynomial algorithm for linear systems having a binary solution. Math Program 134(2):533–570
Chubanov S (2010) A polynomial relaxation-type algorithm for linear programming. http://www.optimization-online.org/DB_FILE/2011/02/2915.pdf
Eremin II (1969) Fejér mappings and convex programming. Sib Math J 10:762–772
Farkas J (1901) Theorie der einfachen Ungleichungen. J Reine Angew Math 124:1–27
Filipowsky S (1995) On the complexity of solving feasible systems of linear inequalities specified with approximate data. Math Program 71:259–288
Fourier JJB (1824) Reported in Analyse des travaux de l’Académie Royale des Sciences. Partie Mathématique, Histoire de l’Académie des Sciences de l’Institut de France 7 (1827) xlvii–lv
Goffin JL (1982) On the non-polynomiality of the relaxation method for a system of inequalities. Math Program 22:93–103
Goffin JL, Luo ZQ, Ye Y (1996) Complexity analysis of an interior cutting plane method for convex feasibility problems. SIAM J Optim 6(3):638–652
Gordan P (1873) Über die Auflösung linearer Gleichungen mit reellen Coefficienten. Math Ann 6:23–28
Gubin LG, Polyak BT, Raik EV (1967) The method of projections for finding the common point of convex sets. Comput Math Math Phys 7(6):1–24
Herman GT (2009) Fundamentals of computerized tomography: image reconstruction from projections, 2nd edn. Springer, London
Herman GT, Chen W (2008) A fast algorithm for solving a linear feasibility problem with application to intensity-modulated radiation therapy. Linear Algebra Appl 428:1207–1217
Herman GT, Lent A, Lutz PH (1978) Relaxation methods for image reconstruction. Commun ACM 21:152–158
Ho Y-C, Kashyap RL (1965) An algorithm for linear inequalities and its applications. IEEE Trans Electron Comput EC-14(5):683–688
Huard P (1967) Resolution of mathematical programming with nonlinear constraints by the method of centers. In: Abadie J (ed) Nonlinear programming. North Holland Publishing Co, Amsterdam, Holland, pp 207–219
Huard P, Lieu BT (1966) La méthode des centres dans un espace topologique. Numer Math 8:56–67
Kantorovich LV, Akilov GP (1959) Functional analysis in normed spaces. Translated from the Russian by Brown DE, ed Robertson AP (1964). Pergamon Press/The Macmillan Co, New York
Kaczmarz S (1937) Angenäherte Auflösung von Systemen linearer Gleichungen. Bull Int Acad Pol Sci Lett, Classe des Sciences Mathématiques et Naturelles, Série A: Sciences Mathématiques, Cracovie, Imprimerie de l’Université, pp 355–357
Khachiyan LG (1979) A polynomial algorithm in linear programming (English translation). Sov Math Doklady 20:191–194
Khachiyan LG, Todd MJ (1993) On the complexity of approximating the maximal inscribed ellipsoid for a polytope. Math Program 61:137–160
Kuhn HW (1956) Solvability and consistency for linear equations and inequalities. Am Math Mon 63:217–232
Levin A (1965) On an algorithm for the minimization of convex functions. Sov Math Doklady 6:286–290
Merzlyakov YI (1963) On a relaxation method of solving systems of linear inequalities. USSR Comput Math Math Phys 2:504–510
Motzkin TS (1936) Beiträge zur Theorie der linearen Ungleichungen. Azriel, Jerusalem
Motzkin TS, Schoenberg IJ (1954) The relaxation method for linear inequalities. Can J Math 6:393–404
Nemirovsky A, Yudin D (1983) Problem complexity and method efficiency in optimization. Wiley-Interscience Series in Discrete Mathematics, New York
Newman DJ (1965) Location of the maximum on unimodal surfaces. J Assoc Comput Mach 12:395–398
Polyak BT (1987) Introduction to optimization. Optimization Software Inc, New York
Shor NZ (1985) Minimization methods for non-differentiable functions. Springer, Berlin Springer Series Computational Mathematics, 3
Tarasov SP, Khachiyan LG, Erlikh I (1988) The method of inscribed ellipsoids. Sov Math Doklady 37:226–230
Tardos E (1986) A strongly polynomial algorithm to solve combinatorial linear programs. Oper Res 34:250–256
Todd MJ (1979) Some remarks on the relaxation method for linear inequalities, Technical report 419. School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY
Vaidya PM (1996) A new algorithm for minimizing a convex function over convex sets. Math Program 73:291–341
Vavasis SA, Ye Y (1996) A primal-dual interior point method whose running time depends only on the constraint matrix. Math Program 74(1):79–120
Ye Y (1986) How partial knowledge helps to solve linear programs. J Complex 12:480–491
Acknowledgments
I am immensely grateful to my wife Sandra and to Mestre Gonçalves for non-technical help, and I would like to thank the anonymous referees for their helpful comments.
Paulo Roberto Oliveira was partially supported by CNPq, Brazil.
Oliveira, P.R. A strongly polynomial-time algorithm for the strict homogeneous linear-inequality feasibility problem. Math Meth Oper Res 80, 267–284 (2014). https://doi.org/10.1007/s00186-014-0480-y
Keywords
- Strict linear-inequality feasibility
- Linear programming
- Strong polynomial method
- Application of non-linear programming to feasibility problems