Abstract
The simplex method is an efficient and widely used LP problem solver. Since it was proposed by George B. Dantzig in 1947, it has dominated this area for more than 60 years.
The basic idea behind the simplex method is quite simple. In geometric words, it moves from a vertex to an adjacent vertex, while improving the objective value, until reaching an optimal vertex. Doing so is based on Theorem 2.3.2, guaranteeing the existence of a basic optimal solution if an optimal solution exists. It seems natural to hunt for an optimal solution among vertices in the feasible region, as the region usually involves infinitely many points but only finitely many vertices (no more than \(C_{n}^{m}\)). So, such a strategy shrinks the hunting scope from the whole feasible region to a finite subset.
The idea may be traced back to as early as Fourier (1823). It was materialized algebraically by Dantzig (1951a). In this chapter, the simplex method will be presented in a tableau form first, and then revised to a more applicable version. Related topics will also be discussed, such as how to get the method started, the finiteness problem and finite pivot rules, and computational complexity. The last section will comment on features of the method.
1 Simplex Tableau
We begin by introducing the so-called “simplex tableau” for problem (1.10). In Sect. 1.6, we already obtained the canonical form (1.11) of its constraint system, without touching the objective function at all. Now we put the objective function in the equation form
where f is called the objective variable, at the bottom of the constraint system. The corresponding tableau is
Then we eliminate the nonzero entries, corresponding to pivot columns (associated with variables \(x_{1},x_{2},x_{3}\)), in its bottom (objective) row. To do so, add − 1 times of the first row to the bottom row first:
then add − 2 times of row 2 to the bottom row:
where the north-west corner is the unit matrix, corresponding to zero entries in the bottom row. Thereby, the tableau offers not only a basic solution
but also a reduced form of the objective function over the feasible region. Note that the solution is a basic feasible solution, associated with the objective value 35∕11, which is equal to the negative of the south-east corner entry.
The same tableau may be obtained otherwise by putting coefficients of the constraint system and of the objective function together to form an initial tableau, then applying the relevant Gauss-Jordan elimination.
Such a tableau is called a simplex tableau, whose general form is shown in Table 3.1.
The associated terms coincide with those of the same name for the canonical form of the system Ax = b (Sect. 1.6):
Variables (components) corresponding to the unit matrix are basic variables (components), and the rest are nonbasic variables (components). The basic and nonbasic index sets
are the basis and nonbasis, respectively. The sets of basic and nonbasic variables are also called basis and nonbasis. The importance of the simplex tableau lies in that it gives a basic solution \(\bar{x}_{B} =\bar{ b};\,\bar{x}_{N} = 0\). If \(\bar{x}_{B} \geq 0\), the solution and tableau are a basic feasible solution and a feasible (simplex) tableau, respectively. If the objective function attains the minimum value over the feasible region, the solution and tableau are said to be a basic optimal solution and an optimal (simplex) tableau.
In addition, \(\bar{z}_{N}\) in the simplex tableau is termed reduced costs (coefficients). The negative of the south-east corner entry gives the corresponding objective value \(\bar{f}\).
Throughout this book, it is stipulated that the bottom row of a simplex tableau always corresponds to the objective function. It will be seen that the f column does not change in the solution process by the simplex method, and hence can be omitted. However, it is indispensable in the context of the “reduced simplex method”, presented in Chap. 15.
2 Simplex Method: Tableau Form
In the previous section, a simplex tableau of the LP problem (1.10) together with the associated basic feasible solution (3.1) was obtained. But it cannot be asserted that the solution is optimal, since the reduced cost of variable x 4 is negative. In fact, as the value of x 4 increases from 0 while the value of x 5 is fixed at 0, the objective function would decrease further, reaching a value lower than the current one.
The new value of x 4 should be as large as possible, so that the associated objective value becomes as low as possible, subject to maintaining nonnegativity of corresponding values of \(x_{1},x_{2},x_{3}\), satisfying
The preceding set of inequalities is equivalent to
whose solution set is
Thereby, \(\bar{x}_{4} = 14/15\) is the largest possible value taken by x 4. Substituting it to (3.2) gives the new feasible solution
corresponding to objective value \(\bar{f} = -7/15\) lower than 35∕11.
The corresponding new simplex tableau is obtained by taking the entry 15∕11 at row 2 and column 4 as the pivot. To this end, first multiply row 2 by 11∕15 to turn the pivot to 1, leading to
Then add \(-2/11\), \(-1/11\) and 43∕11 times of row 2 to rows 1, 3 and 4, respectively, giving the new simplex tableau
which clearly corresponds to the new basic feasible solution (3.3).
As the reduced cost, associated with variable x 5, in the objective row is negative, it still cannot be asserted that the new solution is optimal. As in the previous step, we consider the following set of inequalities to determine the new value to which x 5 can be increased, and an associated pivot:
Since the coefficients of x 5 in the first two inequalities are positive, the corresponding values of x 1 and x 4 remain nonnegative as x 5 increases from 0, while x 2 is fixed at zero. It is therefore only needed to consider the third inequality, associated with the negative coefficient of x 5. Setting x 3 = 0 in the third equation gives \(x_{5} = 11/14\), leading to the basic feasible solution
associated with objective value \(\bar{f} = -4/7\) lower than \(-7/15\).
To obtain the associated simplex tableau, it is only needed to enter x 5 into the basis and drop x 3 from it, by taking the entry 14∕15 at row 3 and column 5 as the pivot. Multiplying row 3 by 15∕14 gives
Then add 2∕15, 19∕15 and 2∕15 times of row 3 to rows 1, 2 and 4, respectively, leading to
where the reduced costs in the bottom row are all nonnegative. As will be proved a little later, it can now be asserted that the corresponding basic feasible solution is optimal, which is just (3.4), and we are done.
Now turn to the general standard LP problem (1.7). Following the preceding example, we describe an iteration by determining a pivot, and then updating the tableau by relevant elementary transformations.
Assume at the current iteration that we are faced with the feasible Tableau 3.1, the right-hand side of which gives the basic feasible solution
associated with the objective value \(\bar{f}\) equal to the negative of the south-east corner entry of the tableau.
Lemma 3.2.1.
If reduced costs are all nonnegative, the feasible simplex tableau is optimal, giving a basic optimal solution.
Proof.
The simplex tableau results from a series of elementary transformations, and hence is equivalent to the original problem. Its bottom row represents the equality
Assume that \(\tilde{x}\) is any feasible solution, associated with objective value \(\tilde{f}\). Substituting it into (3.6) leads to
where the inequality is from \(\bar{z}_{N} \geq 0\) and \(\tilde{x} \geq 0\). Therefore, \(\bar{x}\) is a basic optimal solution. □
The reduced costs are often called check numbers, as their sign can be used to judge the optimality of a simplex tableau. Usually, there are negative check numbers in the tableau.
Lemma 3.2.2.
Assuming that \(\bar{z}_{q} <0\) holds for some q ∈ N, and that
then the LP problem is (lower) unbounded.
Proof.
The simplex tableau is associated with the constraint system
Setting x j = 0, j ∈ N, j ≠ q in the preceding and combining the result with the nonnegativity constraints gives
It is known from (3.7) that the set (3.8) of inequalities holds for all x q = α ≥ 0, associated with the feasible value
which, since \(\bar{z}_{q} <0\), can be arbitrarily low as α increases. Therefore, the problem is lower unbounded. □
If (3.7) does not hold, then the value that the nonbasic variable x q takes on will be restricted by the set (3.8) of inequalities. It is not difficult to verify that the following rule gives the largest possible value α of x q subject to (3.8).
Rule 3.2.1 (Row pivot rule)
Determine a row index p and stepsize α such that
which is often called the minimum-ratio test.
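In code, Rule 3.2.1 amounts to a scan over the positive entries of the pivot column. The following is a minimal sketch; the helper name and the use of exact rational arithmetic are our own choices, not from the text:

```python
from fractions import Fraction

def minimum_ratio_test(b_bar, a_q):
    """Return (p, alpha): the row index attaining min b_bar[i]/a_q[i] over
    entries with a_q[i] > 0, together with the stepsize alpha.  Returns
    (None, None) if no entry of a_q is positive (the unbounded case of
    Lemma 3.2.2)."""
    p, alpha = None, None
    for i, (bi, ai) in enumerate(zip(b_bar, a_q)):
        if ai > 0:
            ratio = Fraction(bi, ai)
            if alpha is None or ratio < alpha:
                p, alpha = i, ratio
    return p, alpha
```

For instance, with right-hand side (15, 12) and pivot column (3, 1), the minimum ratio is 15∕3 = 5 attained at the first row, matching the first iteration of the worked example below.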
Setting x q = α in the equation part of (3.8) gives a new basic feasible solution, i.e.,
Taking \(\bar{a}_{p\,q}\) as the pivot, the according simplex tableau is obtained by multiplying row p by \(1/\bar{a}_{p\,q}\) to convert the pivot to 1, adding \(-\bar{a}_{i\,q}\) times of row p to rows i = 1, …, m, i ≠ p, and adding \(-\bar{z}_{q}\) times of row p to the objective row. Finally, (B, N) is updated by exchanging j p and q, and an iteration is complete.
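The elementary transformations of such an iteration can be sketched as a small routine acting on a tableau stored as a list of rows (a hypothetical helper, again with exact rational arithmetic):

```python
from fractions import Fraction

def pivot(T, p, q):
    """Pivot tableau T (a list of rows; by the book's convention the last
    row is the objective row and the last column the right-hand side) on
    entry T[p][q]: scale row p so the pivot becomes 1, then eliminate the
    other entries of column q by adding multiples of row p."""
    piv = Fraction(T[p][q])
    T[p] = [Fraction(x) / piv for x in T[p]]
    for i in range(len(T)):
        if i != p and T[i][q] != 0:
            factor = T[i][q]
            T[i] = [a - factor * b for a, b in zip(T[i], T[p])]
    return T
```

After the call, column q of the tableau is the pth unit vector, so x q has entered the basis at row p.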
It is seen from (3.9) that the associated objective value decreases strictly if α > 0; no real decrement is made if α = 0.
Definition 3.2.1.
If some component of \(\bar{b}\) is equal to zero, then the associated basic feasible solution (or tableau) is degenerate.
An LP problem is said to be nondegenerate if all its basic solutions are nondegenerate.
In the degenerate case, the stepsize α defined by (3.10) could vanish, and hence the objective function remains unchanged (see (3.9)). That is to say, the associated “new solution” (3.11) is actually the same as the old one, although the basis is changed.
In general, there are multiple choices for q, as any q with negative \(\bar{z}_{q}\) is eligible to be chosen. Dantzig’s original minimum reduced cost rule is as follows.
Rule 3.2.2 (Column pivot rule)
Select a column index q such that
Thus, this rule selects the column with the most negative reduced cost as the pivot column. For a unit increment of the nonbasic variable x q , this choice leads to the largest amount of decrease in the objective value.
The overall steps are summarized in the following algorithm (Dantzig 1947).
Algorithm 3.2.1 (Simplex algorithm: tableau form).
Initial: a feasible simplex tableau of the form Table 3.1. This algorithm solves the standard LP problem (1.7).
-
1.
Determine a pivot column index \(q \in \arg \min _{j\in N}\bar{z}_{j}\).
-
2.
Stop if \(\bar{z}_{q} \geq 0\).
-
3.
Stop if \(I =\{ i = 1,\ldots,m\ \vert \ \bar{a}_{i\,q}> 0\} = \varnothing\).
-
4.
Determine a pivot row index \(p \in \arg \min _{i\in I}\bar{b}_{i}/\bar{a}_{iq}\).
-
5.
Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.
-
6.
Go to Step 1.
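The steps above can be sketched as a single Python routine. The function name and the dense list-of-lists tableau layout are our own choices, and exact rational arithmetic sidesteps rounding issues; this is an illustration, not a production implementation:

```python
from fractions import Fraction

def simplex_tableau(T, basis):
    """A sketch of Algorithm 3.2.1 (tableau form).  T is an (m+1) x (n+1)
    list of lists: rows 0..m-1 are constraint rows, row m is the objective
    row (reduced costs, with -f in the last entry), column n is the
    right-hand side.  basis lists the m column indices of the unit matrix.
    Returns ('optimal', x, f) or ('unbounded', None, None)."""
    m, n = len(T) - 1, len(T[0]) - 1
    T = [[Fraction(x) for x in row] for row in T]
    while True:
        # Steps 1-2: Dantzig's rule -- most negative reduced cost.
        q = min(range(n), key=lambda j: T[m][j])
        if T[m][q] >= 0:
            x = [Fraction(0)] * n
            for i, j in enumerate(basis):
                x[j] = T[i][n]
            return 'optimal', x, -T[m][n]
        # Steps 3-4: minimum-ratio test over positive pivot-column entries.
        rows = [i for i in range(m) if T[i][q] > 0]
        if not rows:
            return 'unbounded', None, None
        p = min(rows, key=lambda i: T[i][n] / T[i][q])
        # Step 5: pivot on (p, q) by elementary row operations.
        piv = T[p][q]
        T[p] = [x / piv for x in T[p]]
        for i in range(m + 1):
            if i != p and T[i][q] != 0:
                factor = T[i][q]
                T[i] = [a - factor * b for a, b in zip(T[i], T[p])]
        basis[p] = q
```

As a usage illustration on a made-up problem (not the chapter's example): min −3x 1 − 5x 2 subject to x 1 ≤ 4, 2x 2 ≤ 12, 3x 1 + 2x 2 ≤ 18 in standard form with slacks x 3, x 4, x 5 reaches the optimal value −36 at x 1 = 2, x 2 = 6.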
Theorem 3.2.1.
Under the nondegeneracy assumption, the simplex algorithm terminates either at
-
(i)
Step 2, generating a basic optimal solution; or at
-
(ii)
Step 3, detecting lower unboundedness of the problem.
Proof.
Note that there are only finitely many basic feasible solutions. It is clear that Algorithm 3.2.1 generates a sequence of basic feasible solutions, while the associated objective value decreases, due to the nonnegative stepsize α. Under the nondegeneracy assumption, the stepsize α is positive, and hence the objective value decreases strictly in each iteration. In the solution process, therefore, any basic solution can appear once at most. So, infiniteness of the solution process would imply that there are infinitely many basic feasible solutions, which is a contradiction. Therefore, Algorithm 3.2.1 terminates.
The meanings of the exits of Algorithm 3.2.1 come from Lemmas 3.2.1 and 3.2.2. □
One should be aware that the nondegeneracy assumption is far from realistic. As practical problems are almost always degenerate, termination of the simplex Algorithm 3.2.1 is actually not guaranteed. In other words, the possibility is not ruled out that indices enter and leave the basis infinitely many times. In fact, a few instances that cannot be solved by the simplex algorithm have been constructed (we will handle this topic in Sects. 3.6 and 3.7). Even so, failure to terminate is very rare, and does not hinder the broad application of the simplex algorithm.
A simplex tableau is nothing but a concise expression of a standard LP problem. As they represent problems equivalent to the original one, all the tableaus created by the simplex algorithm are viewed as equivalent. Recursive formulas between a simplex tableau and its predecessor are listed below:
-
1.
The objective row
$$\displaystyle{ \begin{array}{lll} \beta & =& -\bar{ z}_{q}/\bar{a}_{p\,q}, \\ \hat{f} & =&\bar{f} -\beta \bar{ b}_{p}, \\ \hat{z}_{j} & =&\bar{z}_{j} +\beta \bar{ a}_{p\,j},\qquad j \in N, \\ \hat{z}_{j_{i}} & =&\left \{\begin{array}{ll} \beta &i = p,\\ 0 &i = 1,\ldots,m,\ i\neq p.\\ \end{array} \right.\\ \end{array} }$$(3.13) -
2.
The right-hand side
$$\displaystyle{ \begin{array}{lll} \alpha & =&\bar{b}_{p}/\bar{a}_{p\,q}, \\ \hat{b}_{i}& =&\left \{\begin{array}{ll} \bar{b}_{i} -\alpha \bar{ a}_{i\,q}&i = 1,\ldots,m,\ i\neq p, \\ \alpha &i = p.\end{array} \right.\\ \end{array} }$$(3.14) -
3.
Entries of the constraint matrix
$$\displaystyle{ \begin{array}{lll} \hat{a}_{t,j} & =&\left \{\begin{array}{ll} 0, &t = 1,\ldots,m,\ t\neq p;\ j = q. \\ 1, &t = p;\ j = q, \\ \bar{a}_{t\,j} - (\bar{a}_{p\,j}/\bar{a}_{p\,q})\bar{a}_{t\,q},&t = 1,\ldots,m,\ t\neq p;j \in N,\ j\neq q, \\ \bar{a}_{p\,j}/\bar{a}_{p\,q}, &t = p;\ j \in N,\ j\neq q.\\ \end{array} \right. \\ \hat{a}_{t\,j_{i}} & =&\left \{\begin{array}{ll} 0, &t = 1,\ldots,m;\ i = 1,\ldots,m,\,i\neq p;\ i\neq t, \\ 1, &t = i = 1,\ldots,m;\ i\neq p, \\ -\bar{ a}_{t\,q}/\bar{a}_{p\,q},\qquad &t = 1,\ldots,m,\,t\neq p;\,i = p, \\ 1/\bar{a}_{p\,q}, &t = i = p.\\ \end{array} \right. \\ \end{array} }$$(3.15)
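As a quick sanity check of formula (3.14), the following snippet updates a hypothetical right-hand side both by the formula and by the elementary transformations of Sect. 3.2, and confirms that the results agree (all data here are made up for illustration):

```python
from fractions import Fraction

# Hypothetical pivot column a_q and right-hand side b_bar of a 3-row
# tableau, with pivot row p = 1 (0-based).
b_bar = [Fraction(4), Fraction(6), Fraction(3)]
a_q   = [Fraction(2), Fraction(3), Fraction(-1)]
p = 1

# Formula (3.14): alpha = b_p / a_pq, then
# b_i <- b_i - alpha * a_iq for i != p, and b_p <- alpha.
alpha = b_bar[p] / a_q[p]
b_hat = [alpha if i == p else b_bar[i] - alpha * a_q[i]
         for i in range(len(b_bar))]

# The same result via elementary transformations: scale row p by 1/a_pq,
# then subtract a_iq times row p from each other row i.
rows = [[a_q[i], b_bar[i]] for i in range(len(b_bar))]
piv = rows[p][0]
rows[p] = [x / piv for x in rows[p]]
for i in range(len(rows)):
    if i != p:
        factor = rows[i][0]
        rows[i] = [x - factor * y for x, y in zip(rows[i], rows[p])]
assert b_hat == [r[1] for r in rows]
```

Here α = 6∕3 = 2, and both routes give the updated right-hand side (0, 2, 5).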
Example 3.2.1.
Solve the following problem by Algorithm 3.2.1:
Answer Initial: the following feasible simplex tableau can be directly obtained from the problem:
Iteration 1:
-
1.
\(\min \{-4,-3,-5\} = -5 <0,\ q = 3\).
-
3.
I = { 1, 2} ≠ ∅.
-
4.
\(\min \{15/3,\,12/1\} = 15/3,\ p = 1\).
-
5.
Take 3 in row 1 and column 3 as the pivot (marked by “*”, the same below).
Multiply row 1 by 1∕3, then add − 1, 3, 5 times of row 1 to rows 2,3,5, respectively:
Iteration 2:
-
1.
\(\min \{-2/3,-4/3,5/3\} = -4/3 <0,\ q = 2\).
-
3.
I = { 1, 2, 3, 4} ≠ ∅.
-
4.
\(\min \{5/(1/3),7/(2/3),18/2,9/1\} = 9/1,\ p = 3\).
-
5.
Multiply row 3 by 1∕2, then add \(-1/3,-2/3,-1,4/3\) times of row 3 to rows 1,2,4,5, respectively:
Iteration 3:
-
1.
\(\min \{-2/3,7/3,2/3\} = -2/3 <0,\ q = 1\).
-
3.
I = { 1, 2, 4} ≠ ∅.
-
4.
\(\min \{2/(2/3),1/(1/3),0/2\} = 0,\ p = 4\).
-
5.
Multiply row 4 by 1∕2, then add \(-2/3,-1/3,2/3\) times of row 4 to rows 1,2,5, respectively:
Now all reduced costs in the preceding tableau are nonnegative, and hence the basic optimal solution and associated objective value are, respectively,
It is seen from the preceding example that the tableau of the second iteration had already attained the basic optimal solution, but an additional iteration was performed, due to the existence of a negative reduced cost. This occurred because degeneracy led to a zero stepsize. So, the condition that the reduced costs are all nonnegative is sufficient but not necessary for optimality.
3 Start-Up of the Simplex Method
Algorithm 3.2.1 must start from a feasible simplex tableau. In Example 3.2.1, a feasible simplex tableau is available, which is not the case in general. A so-called Phase-I procedure is usually carried out to provide an initial feasible simplex tableau (if any exists), after which Algorithm 3.2.1 is used to achieve optimality or detect unboundedness of the problem. Thus, the simplex algorithm described in the previous section is actually a “Phase-II” procedure. A standard LP problem is usually solved by the two procedures in succession, referred to as the two-phase simplex method. In this section, a classical Phase-I procedure using artificial variables will be presented first, followed by a closely related start-up method, the so-called “big M” method.
Assume that all components of the right-hand side are nonnegative, i.e.,
If not, multiply each constraint equation with a negative right-hand side by − 1 beforehand. Then construct an auxiliary problem as follows.
For each i = 1, …, m, introduce a nonnegative artificial variable x n+i to the ith equation, and take the sum of all artificial variables as the auxiliary objective function, i.e.,
Using the constraint system, we eliminate all artificial variables from the auxiliary objective, resulting in
Clearly, there is an available feasible simplex tableau to the preceding auxiliary program, corresponding to the basic feasible solution
Thereby, the program can be solved by Algorithm 3.2.1.
Regarding the outcome, we have the following.
Theorem 3.3.1.
The auxiliary program has an optimal solution, associated with a nonnegative optimal value:
-
(i)
If the optimal value is strictly greater than zero, the original problem is infeasible;
-
(ii)
If the optimal value is equal to zero, the first n components of the optimal solution to the auxiliary program form a feasible solution to the original problem.
Proof.
Clearly, there exists a feasible solution to problem (3.16). Since artificial components of all feasible solutions are nonnegative, all feasible values of the auxiliary program are nonnegative too. Therefore, there exists an optimal solution, associated with a nonnegative objective value:
-
(i)
If the optimal value is strictly greater than zero, it can be asserted that the original problem is infeasible, because if it had a feasible solution \(\bar{x}_{j} \geq 0,\,j = 1,\ldots,n\), then
$$\displaystyle{\bar{x}_{1},\ldots,\bar{x}_{n},\bar{x}_{n+1} = \cdots =\bar{ x}_{n+m} = 0}$$
would clearly satisfy the constraints of (3.16), and hence would be a feasible solution to (3.16), corresponding to auxiliary objective value zero, which is a contradiction.
-
(ii)
If the optimal value is zero, then the artificial components of the optimal solution are 0. By substituting it into the constraints of (3.16), it is therefore seen that its first n components satisfy the constraints of the original problem, and hence constitute a feasible solution to the latter. □
Corollary 3.3.1.
The original problem is feasible if and only if the optimal value of the auxiliary program vanishes.
Once a feasible solution to the original problem is obtained by the preceding approach, a feasible simplex tableau can be derived from the optimal auxiliary tableau by the “following-up steps” below. These steps come from the fact that setting all the artificial variables to zero in the system corresponding to the auxiliary optimal tableau leads to a system equivalent to the original one.
Following-up steps:
-
(A)
Delete the columns associated with nonbasic artificial variables (each such column can be deleted as soon as the corresponding artificial variable leaves the basis).
-
(B)
Go to step D if there is no basic artificial variable.
-
(C)
Delete the row associated with a basic artificial variable if all its nonbasic entries are zero (see the Note below); otherwise, take a nonzero entry of it as pivot to let the artificial variable become nonbasic, and then delete the associated column. This is repeated until no artificial variable is basic.
-
(D)
Cover the auxiliary objective row by the original costs, and then eliminate all basic entries of this row, giving a feasible simplex tableau to the original problem.
Note: In step C, the row is deleted because substituting 0 for the associated artificial variable turns the corresponding equation into an identity, which reflects dependence among the original constraint equations. So, the method can get rid of such dependency.
The preceding can be put into the following algorithm.
Algorithm 3.3.1 (Phase-1: artificial variable).
This algorithm finds a feasible tableau.
-
1.
Introduce artificial variables, and construct auxiliary program of form (3.16).
-
2.
Call the simplex Algorithm 3.2.1.
-
3.
If the optimal value of the auxiliary program is zero, create a feasible tableau via “Following-up steps”.
-
4.
The original problem is infeasible if the optimal value of the auxiliary program is strictly greater than zero.
Note that if a constraint matrix includes some columns of the unit matrix, such columns should be employed to reduce the number of artificial variables. The preceding discussions are still valid, though the auxiliary objective function involves artificial variables only.
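The construction of the auxiliary tableau can be sketched as follows. For simplicity this version always introduces a full set of m artificial variables (it does not reuse unit columns of A, as the note above suggests); the function name and layout are ours:

```python
from fractions import Fraction

def phase_one_tableau(A, b):
    """Build the feasible auxiliary tableau of (3.16): flip the sign of any
    row with a negative right-hand side, append one artificial variable per
    row, and eliminate the artificial variables from the auxiliary
    objective row (so their reduced costs become zero)."""
    m, n = len(A), len(A[0])
    rows = []
    for i in range(m):
        sign = -1 if b[i] < 0 else 1
        row = [sign * Fraction(x) for x in A[i]]
        row += [Fraction(int(i == k)) for k in range(m)]  # artificial part
        row.append(sign * Fraction(b[i]))                 # right-hand side
        rows.append(row)
    # Auxiliary objective x_{n+1} + ... + x_{n+m}: cost 0 on the original
    # variables, 1 on the artificials, and objective value 0.
    obj = [Fraction(0)] * n + [Fraction(1)] * m + [Fraction(0)]
    # Eliminate the basic (artificial) entries: subtract every row.
    for row in rows:
        obj = [z - a for z, a in zip(obj, row)]
    rows.append(obj)
    return rows
```

The resulting bottom-row corner entry is minus the sum of the (sign-corrected) right-hand sides, i.e., the negative of the starting auxiliary objective value, as in the worked example below.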
Example 3.3.1.
Find a feasible simplex tableau to the following problem:
Answer Construct auxiliary program: the first constraint equation is multiplied by − 1 to turn its right-hand side to nonnegative; as the coefficients of x 6 give a unit vector (0, 0, 1)T, only two artificial variables x 7, x 8 are introduced.
Put the preceding auxiliary program into the following tableau:
Turn the preceding to a simplex tableau: eliminate nonzeros in x 7 and x 8 columns at the bottom (objective) row by adding − 1 times of row 1 and of row 2 to that row:
which is a feasible simplex tableau to the auxiliary program. Call Algorithm 3.2.1 to solve it:
Iteration 1:
-
1.
\(\min \{0,-2,-6,1,1\} = -6 <0,\ q = 3\).
-
3.
I = { 1, 2, 3} ≠ ∅.
-
4.
\(\min \{4/2,2/4,8/1\} = 1/2,\ p = 2\).
-
5.
Multiply row 2 by 1∕4, and then add \(-2,-1,6\) times of row 2 to rows 1,3,4, respectively
(Erase x 8 column after artificial variable x 8 becomes nonbasic):
Iteration 2:
-
1.
\(\min \{3/2,-7/2,1,-1/2\} = -7/2 <0,\ q = 2\).
-
3.
I = { 1, 3} ≠ ∅.
-
4.
\(\min \{3/(7/2),(15/2)/(5/4)\} = 6/7,\ p = 1\).
-
5.
Multiply row 1 by 2∕7, and then add \(1/4,-5/4,7/2\) times of row 1 to rows 2,3,4, respectively
(Erase x 7 column after artificial variable x 7 becomes nonbasic):
Now all the artificial variables have become nonbasic; hence the optimal objective value 0 of the auxiliary program is reached.
Covering the bottom row by original costs leads to
Adding − 1 times of row 1 and 2 times of row 2 to the bottom row gives a feasible tableau of the original problem, i.e.,
Thus, the preceding can be taken as an initial feasible tableau to get Algorithm 3.2.1 started to solve the original problem. Solving LP problems usually requires two phases, both of which are carried out using the simplex algorithm.
On the other hand, it seems attractive to solve LP problems in a single phase, which leads to the following so-called big-M method.
The corresponding auxiliary program shares the same constraints as (3.16), while its objective function is the sum of the original objective function and M times the sum of all the artificial variables, i.e.,
Artificial variables in the objective function are eliminated using the constraint system. As a result, there will be a feasible simplex tableau to the auxiliary program, which can be taken as an initial one to get the simplex algorithm started.
The reason for using such an auxiliary objective function is as follows. Its artificial-variable part may be regarded as a “penalty function”, where M, serving as a “penalty factor”, is a sufficiently large positive number (far larger than the absolute value of any number involved in the computations). The big M inflicts a penalty on any increase of the values of artificial variables, consequently forcing them to be minimized prior to the original objective.
It is difficult, however, to determine a suitable M in advance. Too large an M could lead to bad numerical stability, while too small an M degrades the method’s effect. A suitable value depends not only on the problem to be solved, but also on the computer used. A practicable way is to take M as a parameter in the solution process.
To demonstrate, we again bring up Example 3.3.1. Its auxiliary program is of the following form:
Add M times of row 1 and of row 2 to the objective row, giving
Thereby, we can get the simplex algorithm started from the preceding tableau. In the selection of a pivot column index q, however, it should be noted that M is so large that the sign of a reduced cost depends upon its coefficient of M only. In the preceding tableau, e.g., the x 3 column is selected as the pivot column, as the coefficients of M are \(0,-2,-6,1,1\) for the costs of nonbasic variables x 1 through x 5 respectively, and
Row 2 is selected as the pivot row by the minimum-ratio test below:
Then elementary transformations are performed to make a corresponding basis change, completing the first iteration.
If the process is continued, it can be found that the sequences of iterates created by the big-M method and the two-phase simplex method are actually the same. This is not surprising because, as mentioned previously, the big “penalty factor” M forces the values of the artificial variables to vanish before optimality of the original problem is pursued; the two methods are essentially the same. Practically, however, the two-phase method is certainly preferable to the big-M method, as it involves no parameter and is easier to realize.
Nevertheless, the auxiliary programs presented previously in this section are usually found in textbooks only. If the number m of rows is large, the scale of the programs would become unacceptably large. A more practicable approach is to use an auxiliary program with a single artificial variable, as follows.
Introducing artificial variable x n+1, we consider the following auxiliary program instead:
to which there is a feasible solution
Results similar to Theorem 3.3.1 and Corollary 3.3.1 hold for the preceding auxiliary program. On the other hand, using the following auxiliary objective function leads to an analogue of the big-M method:
A drawback of such auxiliary programs seems to be the lack of an explicit feasible simplex tableau. This will be seen to be inessential, however. In Sect. 13.2, we will present other Phase-I methods, as well as a more practicable single artificial variable approach.
Now it is known that the answer to an LP problem must be one of the following three cases:
-
(i)
Infeasible problem: there exists no feasible solution;
-
(ii)
Unbounded problem: there exists a feasible solution but the feasible value is lower unbounded over the feasible region;
-
(iii)
There exists an optimal basic solution.
In principle, the two-phase simplex method can be used to solve any LP problem, achieving a basic optimal solution if one exists, or detecting infeasibility or unboundedness otherwise.
4 Revised Simplex Tableau
The simplex tableau is not the only tool for implementing the simplex method. In fact, getting rid of the tableau can lead to a more compact variant of the simplex method. For this purpose, we will employ vectors and matrices more from now on.
The standard LP problem (1.8) may be represented by the following tableau:
Assume that through some elementary transformations, the preceding table becomes the simplex tableau Table 3.1, which may be succinctly put into
Unless specified otherwise, hereafter the associated basic and nonbasic index sets are assumed to be
Columns corresponding to B are said to be basic, and those to N nonbasic. Without confusion, B and N will also be used to respectively denote submatrices consisting of corresponding columns. The two submatrices are respectively called basis matrix and nonbasis matrix, or basis and nonbasis for short. It is clear that B is an invertible square matrix. The simplex tableau corresponds to the basic solution
If \(\bar{b} \geq 0,\ \bar{z}_{N} \geq 0\), then the tableau is an optimal (simplex) tableau, giving a basic optimal solution, and the corresponding B and N are the optimal basis and optimal nonbasis, respectively.
On the other hand, if Ax = b is premultiplied by B −1, and some transposition of terms is made, it follows that
Substituting the preceding to
gives
which can put in
corresponding to the basic solution \(\bar{x}_{B} = {B}^{-1}b,\ \bar{x}_{N} = 0\) (hereafter \(\bar{x}_{B} = {B}^{-1}b\) is often called the basic solution for short). The preceding, representing a problem equivalent to the original, is called the revised simplex tableau, as compared with the simplex tableau (3.18).
For simplicity, the \(x_{B}^{T}\) and f columns in the preceding two tableaus may be omitted, as they remain unchanged as the basis changes.
Proposition 3.4.1.
Any simplex tableau and revised simplex tableau, corresponding to the same basis, are equivalent.
Proof.
Denote by (3.18) and (3.20) the two tableaus, having the same basis B. Since problems represented by them are equivalent, the corresponding entries of the two tableaus are equal. □
Based on the preceding Proposition, Table 3.2 gives equivalence correspondence between quantities, involved in simplex steps, of tableaus (3.18) and (3.20):
In conventional simplex context, each iteration corresponds to a basis B (or its inverse B −1), with which any entry in a simplex tableau can be calculated from the original data (A, b, c). Thereby, Table 3.2 will be used as a tool to derive common simplex variants, such as the (revised) simplex algorithm in the next section and the dual (revised) simplex algorithm in Sect. 4.5.
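The first items of Table 3.2 can be computed directly from the original data: given B −1, one recovers the basic solution, the simplex multipliers, and the reduced costs. A minimal sketch with hypothetical helper names:

```python
def mat_vec(M, v):
    """Matrix-vector product for a list-of-lists matrix."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

def tableau_quantities(A, b, c, basis, Binv):
    """Recover simplex-tableau quantities from the original data (A, b, c)
    and the basis inverse Binv, as in Table 3.2: the basic solution
    x_B = B^{-1} b, the simplex multipliers y = B^{-T} c_B, and the
    reduced costs z_j = c_j - a_j^T y for nonbasic j."""
    m, n = len(A), len(A[0])
    x_B = mat_vec(Binv, b)
    BinvT = [[Binv[i][j] for i in range(m)] for j in range(m)]  # transpose
    y = mat_vec(BinvT, [c[j] for j in basis])
    nonbasis = [j for j in range(n) if j not in basis]
    z = {j: c[j] - sum(A[i][j] * y[i] for i in range(m)) for j in nonbasis}
    return x_B, y, z
```

With B = I (e.g. a slack basis), this reduces to x B = b, y = c B, and z N = c N, as expected.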
Notations in this section will be employed throughout this book.
5 Simplex Method
A simplex tableau has to be calculated in each iteration of the tableau simplex Algorithm 3.2.1. But its \((m + 1) \times (n + 1)\) entries are not all useful in an iteration. In fact, only the objective row is needed for the selection of a pivot column, while only the pivot column and right-hand side are needed for the determination of a pivot row. Using B −1, therefore, a variant without any simplex tableau can be derived by calculating the first three items in Table 3.2.
Let us consider updating B −1. Assume that the pivot column index q and row index p are already determined. Putting the nonbasic column a q in place of B’s pth column \(a_{j_{p}}\) gives the new basis below:
We now need to compute \(\hat{{B}}^{-1}\) to carry out the next iteration.
Note that \(\bar{a}_{q} = {B}^{-1}a_{q}\). Taking \(\bar{a}_{pq}\) as the pivot, the corresponding elementary transformations amount to premultiplying the first m rows of the tableau by the m × m elementary matrix
which may also be obtained by executing the same elementary transformations on the unit matrix. It is seen that such a matrix, which is the same as the unit matrix except for the pth column, is determined only by \(\bar{a}_{q}\). Combining (3.21) and (3.22) gives
from which the update of the basis’ inverse follows, i.e.,
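In code, the product-form update (3.23) amounts to row operations on B −1 that are determined by \(\bar{a}_{q}\) alone: row p is scaled by \(1/\bar{a}_{p\,q}\), and \(\bar{a}_{i\,q}/\bar{a}_{p\,q}\) times the new row p is subtracted from every other row. A sketch with hypothetical naming:

```python
from fractions import Fraction

def update_inverse(Binv, a_q_bar, p):
    """Product-form update of the basis inverse: return E @ Binv, where the
    elementary matrix E differs from I only in column p, with
    E[p][p] = 1/a_pq and E[i][p] = -a_iq/a_pq for i != p."""
    m = len(Binv)
    piv = a_q_bar[p]
    new = []
    for i in range(m):
        if i == p:
            new.append([x / piv for x in Binv[p]])
        else:
            new.append([Binv[i][j] - a_q_bar[i] / piv * Binv[p][j]
                        for j in range(m)])
    return new
```

For example, starting from B = I with \(\bar{a}_{q} = (2,4)^{T}\) and p = 1 (row 1 in the 1-based convention), the updated inverse is the inverse of the new basis whose first column is (2, 4)T.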
Based on the preceding discussions and the equivalence between the simplex tableau and revised simplex tableau, we are able to revise Algorithm 3.2.1 to the following version (Dantzig and Orchard-Hays 1953):
Algorithm 3.5.1 (Simplex algorithm 1).
Initial: \((B,N),{B}^{-1},\bar{x}_{B} = {B}^{-1}b \geq 0\) and \(\bar{f} = c_{B}^{T}\bar{x}_{B}\). This algorithm solves the standard LP problem (1.8).
-
1.
Compute \(\bar{z}_{N} = c_{N} - {N}^{T}\bar{y},\quad \bar{y} = {B}^{-T}c_{B}\).
-
2.
Determine pivot column index \(q \in \arg \min _{j\in N}\ \bar{z}_{j}\).
-
3.
Stop if \(\bar{z}_{q} \geq 0\) (optimality achieved).
-
4.
Compute \(\bar{a}_{q} = {B}^{-1}a_{q}\).
-
5.
Stop if \(\bar{a}_{q} \leq 0\) (unbounded problem).
-
6.
Determine stepsize α and pivot row index p such that
\(\alpha =\bar{ x}_{j_{p}}/\bar{a}_{p\,q} =\min \{\bar{ x}_{j_{i}}/\bar{a}_{i\,q}\ \vert \ \bar{a}_{i\,q}> 0;\ i = 1,\ldots,m\}.\)
-
7.
Set \(\bar{x}_{q} =\alpha\), and update \(\bar{x}_{B} =\bar{ x}_{B} -\alpha \bar{ a}_{q},\ \bar{f} =\bar{ f} +\alpha \bar{ z}_{q}\) if α ≠ 0.
-
8.
Update B −1 by (3.23).
-
9.
Update (B, N) by exchanging j p and q.
-
10.
Go to step 1.
The preceding, usually called the revised simplex algorithm, will be referred to as simplex algorithm 1.
In step 1, the vector \(\bar{y}\) is calculated first and then used to compute the reduced costs \(\bar{z}_{N}\); this computation is referred to as pricing. \(\bar{y}\) is called the simplex multiplier vector, whose further meaning will become clear later.
See Sect. 3.3 for how to provide an initial basic feasible solution (or basis). This topic will be handled further in Chap. 13.
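As an illustration only, the steps of Algorithm 3.5.1 can be sketched in Python with NumPy. This is a minimal dense sketch under our own naming; the explicit inverse stands in for the factorized representations used by production codes, and a feasible starting basis is assumed to be supplied:

```python
import numpy as np

def simplex_algorithm_1(A, b, c, basis, tol=1e-12):
    """Dense sketch of Algorithm 3.5.1 (revised simplex, Dantzig rule).

    'basis' must index a feasible basis, i.e. B^{-1} b >= 0 (0-based).
    Returns (x, f, status) with status 'optimal' or 'unbounded'.
    """
    m, n = A.shape
    basis = list(basis)
    Binv = np.linalg.inv(A[:, basis])
    xB = Binv @ b
    while True:
        nonbasis = [j for j in range(n) if j not in basis]
        y = Binv.T @ c[basis]                        # simplex multipliers
        zN = c[nonbasis] - A[:, nonbasis].T @ y      # pricing (step 1)
        k = int(np.argmin(zN))
        if zN[k] >= -tol:                            # optimality (step 3)
            x = np.zeros(n)
            x[basis] = xB
            return x, c @ x, 'optimal'
        q = nonbasis[k]                              # Dantzig rule (step 2)
        abar_q = Binv @ A[:, q]                      # step 4
        if np.all(abar_q <= tol):                    # unboundedness (step 5)
            return None, -np.inf, 'unbounded'
        ratios = [(xB[i] / abar_q[i], i)             # minimum-ratio test
                  for i in range(m) if abar_q[i] > tol]
        alpha, p = min(ratios)                       # stepsize, pivot row
        xB = xB - alpha * abar_q                     # step 7
        xB[p] = alpha
        E = np.eye(m)                                # update B^{-1} by (3.23)
        E[:, p] = -abar_q / abar_q[p]
        E[p, p] = 1.0 / abar_q[p]
        Binv = E @ Binv
        basis[p] = q                                 # basis change (step 9)
```

On small dense data this reproduces the tableau algorithm's iterates exactly; for large-scale or sparse problems the explicit inverse must be replaced, as discussed in Chap. 5.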
Example 3.5.1.
Solve the following problem by Algorithm 3.5.1:
Answer Initial: \(B =\{ 5,4,7,6\},\,N =\{ 1,2,3\},{B}^{-1} = I,\bar{x}_{B} = {(15,12,3,9)}^{T},\,f = 0\).
Iteration 1:
-
1.
\(\bar{y} = {B}^{-T}c_{B} = {(0,0,0,0)}^{T},\,\bar{z}_{N} = c_{N} - {N}^{T}\bar{y} = {(-4,-3,-5)}^{T}\).
-
2.
\(\min \{-4,-3,-5\} = -5 <0,\ q = 3\).
-
4.
\(\bar{a}_{3} = {B}^{-1}a_{3} = {(3,1,-3,0)}^{T}.\)
-
6.
\(\alpha =\min \{ 15/3,\,12/1\} = 15/3 = 5,\ p = 1.\)
-
7.
\(\bar{x}_{B} = {(15,12,3,9)}^{T} - 5 \times {(3,1,-3,0)}^{T} = {(0,7,18,9)}^{T},\,x_{3} = 5,\)
\(f = 5 \times (-5) = -25.\)
-
8.
\({B}^{-1} = \left (\begin{array}{@{}lccc@{}} 1/3 & & & \\ - 1/3 & 1 & & \\ 1 & & 1 & \\ 0 & & & 1 \\ \end{array} \right ).\)
-
9.
\(B =\{ 3,4,7,6\},\,N =\{ 1,2,5\},\,\bar{x}_{B} = {(5,7,18,9)}^{T}.\)
Iteration 2:
-
1.
\(\bar{y}\;\; = {(-5/3,0,0,0)}^{T},\,\bar{z}_{N} = {(-4,-3,0)}^{T} - {(-10/3,-5/3,-5/3)}^{T}\)
\(\; = {(-2/3,-4/3,5/3)}^{T}\).
-
2.
\(\min \{-2/3,-4/3,5/3\} = -4/3 <0,\ q = 2\).
-
4.
\(\bar{a}_{2} = {(1/3,2/3,2,1)}^{T}.\)
-
6.
\(\alpha =\min \{ 15,\,21/2,\,9,9\} = 9,\ p = 3.\)
-
7.
\(\bar{x}_{B} = {(5,7,18,9)}^{T} - 9 \times {(1/3,2/3,2,1)}^{T} = {(2,1,0,0)}^{T},\,x_{2} = 9,\)
\(f = -25 + 9 \times (-4/3) = -37.\)
-
8.
\({B}^{-1} = \left (\begin{array}{@{}ccrc@{}} 1 & & - 1/6 & \\ 0 & 1 & - 1/3 & \\ 0 & & 1/2 & \\ 0 & & - 1/2 & 1\\ \end{array} \right )\left (\begin{array}{@{}cccc@{}} 1/3 & & & \\ \, - 1/3 & 1 & & \\ 1 & & 1 & \\ 0 & & & 1\\ \end{array} \right )\! =\! \left (\begin{array}{@{}rcrc@{}} 1/6 & & - 1/6 & \\ - 2/3 & 1 & - 1/3 & \\ 1/2 & & 1/2 & \\ - 1/2 & & - 1/2 & 1\\ \end{array} \right ).\)
-
9.
\(B =\{ 3,4,2,6\},\,N =\{ 1,7,5\},\bar{x}_{B} = {(2,1,9,0)}^{T}.\)
Iteration 3:
-
1.
\(\bar{y}\;\; = {(-7/3,0,-2/3,0)}^{T},\,\bar{z}_{N} = {(-4,0,0)}^{T} - {(-10/3,-2/3,-7/3)}^{T}\)
\(\; = {(-2/3,2/3,7/3)}^{T}\).
-
2.
\(\min \{-2/3,2/3,7/3\} = -2/3,\ q = 1\).
-
4.
\(\bar{a}_{1} = {(2/3,1/3,0,2)}^{T}.\)
-
6.
\(\alpha =\min \{ 2/(2/3),1/(1/3),0/2\} = 0,\ p = 4.\)
-
7.
\(\bar{x}_{B} = {(2,1,9,0)}^{T},\,x_{6} = 0,\ f = -37.\)
-
8.
\({B}^{-1} = \left (\begin{array}{@{}cccl@{}} 1 & & & - 1/3 \\ 0 & 1 & & - 1/6 \\ 0 & & 1 & 0 \\ 0 & & & 1/2 \\ \end{array} \right )\left (\begin{array}{@{}lclc@{}} 1/6 & & - 1/6 & \\ - 2/3 & 1 & - 1/3 & \\ 1/2 & & 1/2 & \\ - 1/2 & & - 1/2 & 1\\ \end{array} \right )\! =\! \left (\begin{array}{@{}lcll@{}} 1/3 & & 0 & - 1/3 \\ -7/12 & 1 & - 1/4 & - 1/6 \\ 1/2 & & 1/2 & 0 \\ - 1/4 & & - 1/4 & 1/2 \\ \end{array} \right ). \)
-
9.
\(B =\{ 3,4,2,1\},\,N =\{ 6,7,5\},\bar{x}_{B} = {(2,1,9,0)}^{T}.\)
Iteration 4:
-
1.
\(\bar{y}\ \, = {(-13/6,0,-1/2,-1/3)}^{T},\ \bar{z}_{N} = {(0,0,0)}^{T}-{(-1/3,-1/2,-13/6)}^{T} = {(1/3,1/2,13/6)}^{T} \geq 0\).
-
2.
The optimal basic solution and optimal value:
$$\displaystyle{\bar{x} = {(0,9,2,1,0,0,0)}^{T},\qquad \bar{f} = -37.}$$
If some practicable pivot rule (Chap. 11) or pricing scheme (Sect. 25.3) is used in the simplex method, row p needs to be computed. In order not to increase the number of systems to be solved, modern LP codes are often based on the following variant, in which the objective row is computed by recurrence (see (3.13)).
Algorithm 3.5.2 (Simplex algorithm 2).
Initial: \((B,N),\ {B}^{-1},\ \bar{x}_{B} = {B}^{-1}b \geq 0\), \(\bar{z}_{N} = c_{N} - {N}^{T}{B}^{-T}c_{B}\) and \(\bar{f} = c_{B}^{T}\bar{x}_{B}\). This algorithm solves the standard LP problem (1.8).
-
1.
Determine pivot column index \(q \in \arg \min _{j\in N}\bar{z}_{j}\).
-
2.
Stop if \(\bar{z}_{q} \geq 0\) (optimality achieved).
-
3.
Compute \(\bar{a}_{q} = {B}^{-1}a_{q}\).
-
4.
Stop if \(\bar{a}_{q} \leq 0\) (unbounded).
-
5.
Determine stepsize α and pivot row index p such that
\(\alpha =\bar{ x}_{j_{p}}/\bar{a}_{p\,q} =\min \{\bar{ x}_{j_{i}}/\bar{a}_{i\,q}\ \vert \ \bar{a}_{i\,q}> 0;\ i = 1,\ldots,m\}.\)
-
6.
Set \(\bar{x}_{q} =\alpha\), and update \(\bar{x}_{B} =\bar{ x}_{B} -\alpha \bar{ a}_{q},\ \bar{f} =\bar{ f} +\alpha \bar{ z}_{q}\) if α ≠ 0.
-
7.
Compute \(\sigma _{N} = {N}^{T}v\), where \(v = {B}^{-T}e_{p}\).
-
8.
Update by: \(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N},\,\bar{z}_{j_{p}} =\beta\), where \(\beta = -\bar{z}_{q}/\bar{a}_{p\,q}\).
-
9.
Update B −1 by (3.23).
-
10.
Update (B, N) by exchanging j p and q.
-
11.
Go to step 1.
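Steps 7 and 8 are what distinguish Algorithm 3.5.2 from Algorithm 3.5.1, and they can be sketched as follows (a hypothetical helper of ours, assuming dense data and an explicit B^{-1}). Only one additional system, v = B^{-T}e_p, is solved instead of re-pricing all reduced costs from scratch:

```python
import numpy as np

def update_reduced_costs(z_N, A, N, Binv, p, q, abar_pq):
    """Steps 7-8 of Algorithm 3.5.2: update the reduced costs in recurrence.

    z_N     : reduced costs over the nonbasic index list N (q must be in N)
    Binv    : inverse of the current (pre-exchange) basis matrix
    p       : pivot row index (0-based); q : entering column index
    abar_pq : pivot entry, the p-th component of B^{-1} a_q
    Returns the updated costs over N and beta, which becomes the reduced
    cost of the leaving variable j_p after the basis exchange.
    """
    v = Binv.T[:, p]                    # v = B^{-T} e_p (step 7)
    sigma_N = A[:, N].T @ v             # sigma_N = N^T v
    beta = -z_N[N.index(q)] / abar_pq   # beta = -z_q / abar_pq (step 8)
    return z_N + beta * sigma_N, beta   # z_N <- z_N + beta * sigma_N
```

The recurrence agrees with from-scratch pricing after the basis exchange, which is what makes it safe to use in place of step 1 of Algorithm 3.5.1.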
Although equivalent in theory, the revised algorithms differ from the tableau algorithm numerically. For solving large-scale LP problems, they are certainly superior to the latter (especially when m ≪ n; see Sect. 3.8), and they serve as the basis for designing practicable simplex variants. The tableau algorithm, on the other hand, has a simpler formulation and provides a suitable tool for illustration.
Algorithm 3.5.1 was previously derived based on the equivalence of the simplex tableau and revised simplex tableau. It may be derived alternatively by taking a downhill edge, emanating from a current vertex, as a search direction to form a line search scheme, as follows.
Without loss of generality, let B = { 1, …, m} and \(N =\{ m + 1,\ldots,n\}\) be respectively the basis and nonbasis, associated with basic feasible solution \(\bar{x}\). Assume that a pivot column index q ∈ N has been determined such that
Introduce vector
where e q−m is the (n − m)-dimensional unit vector with the (q − m)th component 1. It is clear that
Therefore, \(-\Delta x\) is a downhill direction with respect to c T x. Taking it as the search direction gives the following line search scheme:
where α ≥ 0 is a stepsize to be determined.
Since \(\bar{x}\) is feasible, it holds for any α ≥ 0 that
Therefore, what should be done is to maximize α subject to \(\hat{x}_{B} \geq 0\). When \({B}^{-1}a_{q}\not \leq 0\), doing so results in α and p such that
It is clear that the resulting new solution \(\hat{x}\) is still feasible. In fact, it can be verified that \(\hat{x}\) is just the basic feasible solution associated with the new basis, obtained from the old one by exchanging j p and q.
The relation between the new and old basis matrices is
In view of the facts that \(a_{j_{p}}\) is the pth column of B and that \({B}^{-1}a_{j_{p}} = e_{p}\) and \({B}^{-1}a_{q} =\bar{ a}_{q}\) hold, it is not difficult to derive the following result from the Sherman-Morrison formula (Golub and Van Loan 1989):
which may serve as an update of B −1. In fact, it is easily verified that the preceding and (3.23) are actually equivalent.
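Both forms of the update are easy to state in code. The following sketch (our transcription of the two formulas, assuming a dense explicit inverse) can be used to verify their equivalence numerically:

```python
import numpy as np

def sm_update(Binv, abar_q, p):
    """Rank-one (Sherman-Morrison) form of the basis-inverse update,
    Bhat^{-1} = B^{-1} - (abar_q - e_p) (e_p^T B^{-1}) / abar_q[p],
    transcribed from the derivation in the text."""
    e_p = np.zeros(Binv.shape[0])
    e_p[p] = 1.0
    return Binv - np.outer(abar_q - e_p, Binv[p]) / abar_q[p]

def elementary_update(Binv, abar_q, p):
    """Equivalent product form (3.23): premultiply B^{-1} by the
    elementary matrix equal to the identity except for column p."""
    m = Binv.shape[0]
    E = np.eye(m)
    E[:, p] = -abar_q / abar_q[p]
    E[p, p] = 1.0 / abar_q[p]
    return E @ Binv
```

Applying either function to a randomly generated basis change and comparing against a direct inversion of the new basis confirms the claimed equivalence.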
The search direction \(-\Delta x\), defined by (3.24), can be further investigated geometrically. Regarding set
we have the following result.
Proposition 3.5.1.
Set E is a downhill edge, emanating from the current vertex \(\bar{x}\) , and \(-\Delta x\) is its direction. If \({B}^{-1}a_{q} \leq 0\) , then \(-\Delta x\) is an (unbounded) extreme direction.
Proof.
It is clear that E is a half-line or edge, emanating from \(\bar{x}\). By (3.29), (3.27) and (3.24), for any x ∈ E ⊂ P it holds that
By (3.25), it is known that the associated objective value satisfies
Note that (3.27) is well-defined when \({B}^{-1}a_{q}\not \leq 0\). If, in addition, \(\bar{x}_{j_{p}} = 0\), then α = 0, and hence E degenerates to the vertex \(\bar{x}\). If \(\bar{x}_{j_{p}}> 0\), hence α > 0, then the associated objective value strictly decreases as x q runs over [0, α]. Therefore, \(-\Delta x\) is a direction of the downhill edge E. When \({B}^{-1}a_{q} \leq 0\), it is clear that \(\alpha = +\infty\) corresponds to the edge E ⊂ P, and hence \(-\Delta x\) is an extreme direction. □
Note that the edge E, defined by (3.29), could degenerate to the current vertex \(\bar{x}\) if some component of B −1 b vanishes, and that the objective value is unbounded below over the feasible region if \(-\Delta x\) is an extreme direction.
6 Degeneracy and Cycling
It was seen that a zero stepsize leads to the same basic feasible solution, and hence an unchanged objective value. Thus, the finiteness of the simplex method is questionable (see, e.g., Ryan and Osborne 1988; Wolfe 1963). Soon after the method emerged, in fact, it was found not to terminate in some cases. E.M.L. Beale (1955) and A.J. Hoffman (1953) offered such instances independently. The following is due to Beale.
Example 3.6.1.
Solve the following problem by Algorithm 3.2.1:
Answer Initial: the following feasible tableau is available from the preceding:
Iteration 1:
-
1.
\(\min \{-3/4,20,1,-1/2,6\} = -3/4 <0,\ q = 4\).
-
3.
I = { 1, 2} ≠ ∅.
-
4.
\(\min \{0/(1/4),0/(1/2)\} = 0,\ p = 1\).
-
5.
Multiply row 1 by 4, and then add \(-1/2,3/4\) times of row 1 to rows 2,4, respectively:
Iteration 2:
-
1.
\(\min \{3,-4,-7/2,33\} = -4 <0,\ q = 5\).
-
3.
I = { 2} ≠ ∅.
-
4.
\(\min \{0/4\} = 0,\ p = 2\).
-
5.
Multiply row 2 by 1∕4, and then add 32, 4 times of row 2 to rows 1,4, respectively:
Iteration 3:
-
1.
\(\min \{1,1,-2,18\} = -2 <0,\ q = 6\).
-
3.
I = { 1, 2, 3} ≠ ∅.
-
4.
\(\min \{0/8,0/(3/8),1/1\} = 0,\ p = 1\).
-
5.
Multiply row 1 by 1∕8, and then add \(-3/8,-1,2\) times of row 1 to rows 2,3,4, respectively:
Iteration 4:
-
1.
\(\min \{-2,3,1/4,-3\} = -3 <0,\ q = 7\).
-
3.
I = { 2, 3} ≠ ∅.
-
4.
\(\min \{0/(3/16),1/(21/2)\} = 0,\ p = 2\).
-
5.
Multiply row 2 by 16∕3, and then add \(21/2,-21/2,3\) times of row 2 to rows 1,3,4, respectively:
Iteration 5:
-
1.
\(\min \{-1,1,-1/2,16\} = -1 <0,\ q = 1\).
-
3.
I = { 1, 2} ≠ ∅.
-
4.
\(\min \{0/2,0/(1/3)\} = 0,\ p = 1\).
-
5.
Multiply row 1 by 1∕2, and then add \(-1/3,2,1\) times of row 1 to rows 2,3,4, respectively:
Iteration 6:
-
1.
\(\min \{-2,-7/4,44,1/2\} = -2 <0,\ q = 2\).
-
3.
I = { 2} ≠ ∅.
-
4.
\(\min \{0/(1/3)\} = 0,\ p = 2\).
-
5.
Multiply row 2 by 3, and then add 3, 2 times of row 2 to rows 1,4, respectively:
It is seen that the stepsizes are zero in all six iterations and, consequently, the last tableau is the same as the first. Therefore, continuing the process must generate the same sequence of tableaus, a phenomenon called cycling. So, the simplex algorithm fails to solve Beale problem. It is clear that such an unwelcome infinite behavior of the simplex method can occur only when degeneracy is present.
In the early days of the simplex method, some scholars thought that degeneracy hardly happens in practice, and nondegeneracy is still frequently assumed in theory even today. However, it turns out that degeneracy is almost always present when the simplex method is applied to real-world LP problems. Even so, fortunately, cycling rarely occurs except in a few artificial instances, and the simplex method has achieved great success in practice.
The real problem caused by degeneracy seems to be stalling, which seriously degrades the method’s performance when a large number of iterations stay at a vertex for too long before leaving it. It is especially a headache for highly degenerate problems, in which vanished basic components occupy a large proportion, leading to a huge number of iterations. This hard problem, however, afflicts only the simplex method with the conventional pivot rule, not the variants using the rules presented in Chap. 11.
7 Finite Pivot Rule
As was shown in the previous section, the finiteness of the simplex method is not guaranteed in general. An approach or pivot rule that makes the simplex method finite is called finite.
Is there any finite approach or pivot rule?
The answer is positive. Charnes (1952) proposed a “perturbation approach” by adding a perturbation term to the right-hand side of the initial feasible simplex tableau, i.e.,
where ε > 0 is a sufficiently small parameter (while still using Dantzig’s original rule for pivot column selection).
Theorem 3.7.1.
The perturbation approach is finite.
Proof.
The perturbation term added to the right-hand side can be put in the form w = Iw. In any iteration, the right-hand side can be written
where U is a permutation, resulting from performing elementary transformations on I. Note that U and I have the same rank m, and every row of U is nonzero. Firstly, it holds that \(\bar{b} \geq 0\), because if, otherwise, \(\bar{b}_{i} <0\) for some i ∈ { 1, …, m}, then it follows that v i < 0, as contradicts to problem’s feasibility. Further, it is clear that v i > 0 holds for all row indices i, satisfying \(\bar{b}_{i}> 0\); on the other hand, v i > 0 also holds for all row index i, satisfying \(\bar{b}_{i} = 0\), because the first nonzero of the ith row of U is positive (otherwise, it contradicts the feasibility). Therefore, v > 0 holds. Since each tableau corresponds to a nondegenerate basic feasible solution, there is no any possibility of cycling, hence the process terminates within finitely many iterations. Consequently, eliminating all parameter terms in the end tableau leads to the final tableau of the original problem. □
The order of two vectors determined by their first differing components is called the lexicographic order. \((\lambda _{1},\ldots,\lambda _{t}) \prec (\mu _{1},\ldots,\mu _{t})\) means that the former is less than the latter in the lexicographic order; that is, for the smallest subscript i satisfying \(\lambda _{i}\neq \mu _{i}\), it holds that \(\lambda _{i} <\mu _{i}\). Equal vectors are regarded as equal in the lexicographic order. Similarly, “ ≻ ” is used to denote “greater than” in the lexicographic order.
Once a pivot column index q is determined, the perturbation approach amounts to determining a pivot row index p by
As ε is sufficiently small, the preceding is equivalent to the following so-called lexicographic rule (Dantzig et al. 1955):
where u i j is the entry at the ith row and the jth column of U, and “min” is minimization in the sense of lexicographic order.
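A small sketch of the lexicographic ratio test (a helper of our own; U is assumed to be the matrix occupying the columns that started out as the identity):

```python
import numpy as np

def lexicographic_row(b_bar, U, abar_q):
    """Lexicographic pivot-row selection: among rows with abar_q[i] > 0,
    choose the row minimizing (b_bar[i], U[i, 0], ..., U[i, m-1]) / abar_q[i]
    in lexicographic order. Ties on the plain ratio b_bar[i] / abar_q[i]
    are broken by the rows of U, which makes the choice unique."""
    eligible = [i for i in range(len(b_bar)) if abar_q[i] > 0]
    return min(eligible,
               key=lambda i: tuple(np.append(b_bar[i], U[i]) / abar_q[i]))
```

With distinct ratios this reduces to the ordinary minimum-ratio test; only under degeneracy do the rows of U come into play.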
Among existing finite rules, the Bland rule (Bland 1977) draws great attention due to its simplicity (see also Avis and Chvatal 1978).
Rule 3.7.1 (Bland rule)
Among nonbasic variables, corresponding to negative reduced costs, select the smallest-indexed one to enter the basis. When there are multiple rows, attaining the same minimum-ratio, select the basic variable with the smallest index to leave the basis.
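Both parts of the Bland rule can be sketched directly (hypothetical helpers of ours, with z indexed by variable and basis[i] the variable occupying row i):

```python
def bland_entering(z, nonbasis):
    """Bland rule, entering part: the smallest-indexed nonbasic variable
    with a negative reduced cost (None signals optimality)."""
    eligible = [j for j in nonbasis if z[j] < 0]
    return min(eligible) if eligible else None

def bland_leaving(xB, abar_q, basis):
    """Bland rule, leaving part: among rows attaining the minimum ratio,
    the one whose basic variable has the smallest index."""
    ratios = [(xB[i] / abar_q[i], i)
              for i in range(len(basis)) if abar_q[i] > 0]
    alpha = min(r for r, _ in ratios)
    ties = [i for r, i in ratios if r == alpha]
    return min(ties, key=lambda i: basis[i])
```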
Theorem 3.7.2.
Bland rule is finite.
Proof.
Assume that cycling occurs in the simplex algorithm using the Bland rule. If some variable leaves the basis during a cycle, it must enter the basis again. Denote by T the index set of such shuttling variables, and define
Note that the stepsize is equal to 0 at every iteration in the cycle, hence leading to the same basic feasible solution; besides, the h-indexed component of the basic feasible solution is 0 for any h ∈ T.
Assume that x t is selected to enter the basis for simplex tableau
thus \(\bar{z}_{t} <0\), and \(\bar{z}_{j} \geq 0\) for any reduced cost index j < t.
Assume that at another simplex tableau
basic variable x t in row p leaves and nonbasic variable x s enters the basis. Let \(x_{j_{1}},\ldots,x_{j_{m}}\) be the basic variables (\(x_{j_{p}} \equiv x_{t}\)). It follows that \(\hat{c}_{s} <0\), and \(\hat{c}_{j} \geq 0\) for any reduced cost index j < s. Note that the pivot is positive, i.e., \(\hat{a}_{ps}> 0\); since s ∈ T, it holds that s < t.
Define \(v_{k},\,k = 1,\ldots,n,n + 1\) as follows:
Note that the basic columns of \(\hat{A}\) constitute a permutation matrix. The nonbasic components of the vector
are all 0, except for v s = 1. For i = 1, …, m, on the other hand, the basic entries in row i of \(\hat{A}\), except for \(\hat{a}_{ij_{i}} = 1\), are all zero; basic entries of \(\hat{c}\) are all zero. Therefore it holds that
Since (3.32) can be obtained from (3.33) by premultiplying a series of elementary matrices, it follows that
where the last equality is
hence
Therefore, there exists some index h < n + 1 such that
giving \(\bar{z}_{h}\neq 0\) and v h ≠ 0.
On the other hand, it is known from v h ≠ 0 and the definition of v that \(h \in \{ j_{1},\ldots,j_{m},s\}\). Thus, only the following three cases arise:
-
(i)
h = s. v h = 1 in this case. Since x t is an entering variable for simplex tableau (3.32) and h = s < t, hence \(\bar{z}_{h}> 0\), it follows that \(\bar{z}_{h}v_{h} =\bar{ z}_{h}> 0\), contradicting (3.37).
-
(ii)
\(h = j_{p} = t\). In this case, from \(\bar{z}_{h} =\bar{ z}_{t} <0\) and \(v_{h} = -\hat{a}_{ps} <0\), it follows that \(\bar{z}_{h}v_{h}> 0\), contradicting (3.37).
-
(iii)
\(h = j_{i}\neq j_{p}\), i.e., h ≠ t. Now x h is a nonbasic variable of simplex tableau (3.32) (otherwise, \(\bar{z}_{h} = 0\)); it is also a basic index of simplex tableau (3.33), hence h ∈ T. It follows that
$$\displaystyle{ \hat{b}_{i} = 0,\qquad h <t, }$$(3.38)
and hence \(\bar{z}_{h}> 0\). Further, it holds that
since, otherwise, v h ≠ 0 gives \(\hat{a}_{i,s}> 0\), from which and (3.38) it follows that x h , rather than x t , would have been selected to leave the basis for simplex tableau (3.33), a contradiction. Therefore, the Bland rule is finite. □
Chang (1979), Terlaky (1985) and Wang (1987) independently proposed the so-called “criss-cross” finite variant of the Bland rule, which is embedded in a somewhat different context compared with the simplex method (see Chap. 18).
Unfortunately, it turns out that these finite rules are very slow in practice, and cannot be mentioned in the same breath as the conventional rule. This is not surprising, however. For example, Rule 3.7.1 gives nonbasic variables with small indices priority to enter the basis, while we all know that the basic variables of an optimal solution are not necessarily small-indexed.
Rule 3.7.1 actually uses a fixed priority order, with smaller indices given higher priority, for the selection of an entering variable. It is clear that the “ideal” order, if any, should bring the basic variables of an optimal solution into the basis. According to the heuristic Proposition 2.5.1, inequality constraints with small pivoting-indices should be satisfied as equations by an optimal solution; therefore, the corresponding variables should preferably be nonbasic (zero-valued). In other words, variables with large pivoting-indices should have priority to enter the basis (stipulation: among variables with equal pivoting-indices, select the one with the largest index). Thus, we have the following variant of the Bland rule (Pan 1990, 1992c).
Rule 3.7.2
Among the nonbasic variables corresponding to negative reduced costs, select the one with the largest pivoting-index to enter the basis. When there are multiple rows attaining the same minimum-ratio, select the basic variable with the largest pivoting-index to leave the basis. When multiple variables share the same largest pivoting-index, take the largest-indexed one.
Theorem 3.7.3.
Rule 3.7.2 is finite.
Proof.
This rule is equivalent to Rule 3.7.1 if the variables are re-indexed in accordance with their pivoting-indices. □
Preliminary computational experiments with small test problems showed that Rule 3.7.2 performs much better than Bland’s Rule 3.7.1; it might be the best among the known finite rules. However, it is still inferior to the conventional rule, generally requiring more iterations (Pan 1990).
The Bland rule can easily be generalized to the following finite rule.
Rule 3.7.3
Given any fixed order of the variables, among the nonbasic variables corresponding to negative reduced costs, select the one smallest in this order to enter the basis. When there are multiple rows attaining the same minimum-ratio, select the basic variable smallest in the order to leave the basis.
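Rule 3.7.3 admits an equally short sketch (a hypothetical helper; rank encodes the chosen fixed order):

```python
def ordered_entering(z, nonbasis, rank):
    """Rule 3.7.3 sketch: the Bland rule relative to an arbitrary fixed
    order. 'rank[j]' is variable j's position in the chosen order;
    rank[j] = j recovers Rule 3.7.1, while ranking by decreasing
    pivoting-index gives (essentially) Rule 3.7.2."""
    eligible = [j for j in nonbasis if z[j] < 0]
    return min(eligible, key=lambda j: rank[j]) if eligible else None
```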
In Example 3.6.1 (Beale problem), we saw that cycling occurred with the simplex algorithm. The situation is different if Rule 3.7.2 is used in place of the conventional rule.
Example 3.7.1.
Solve Beale problem by Algorithm 3.2.1 using Rule 3.7.2:
Answer As the coefficient matrix includes a unit matrix, it is easy to transform the constraints to “ ≥ ” type of inequalities:
Note that the first three constraints correspond to the original variables \(x_{1},\,x_{2},\,x_{3}\), respectively; pivoting-indices of constraints may be regarded as those for the associated variables.
The gradient of the objective function is \(c = {(-3/4,20,-1/2,6)}^{T}\). The gradient of the first constraint is \(a_{1} = {(-1/4,8,1,-9)}^{T}\). The pivoting-index of this constraint (or corresponding variable x 1) is \(\alpha _{1} = -a_{1}^{T}c/\|a_{1}\| = -8.74\). Similarly, calculate all pivoting-indices and put them in the following table in decreasing order:
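The pivoting-index computation can be reproduced directly from the c and a 1 given above:

```python
import numpy as np

# Pivoting-index of the first constraint in Example 3.7.1, per the
# formula in the text: alpha_1 = -a_1^T c / ||a_1||.
c = np.array([-3/4, 20, -1/2, 6])     # objective gradient
a1 = np.array([-1/4, 8, 1, -9])       # gradient of the first constraint
alpha1 = -(a1 @ c) / np.linalg.norm(a1)
print(round(alpha1, 2))               # -8.74, as quoted in the text
```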
Now call Algorithm 3.2.1 with Rule 3.7.2.
Initial: The following feasible simplex tableau is obtained directly from the preceding problem:
Iteration 1:
-
1.
Among nonbasic variables \(x_{4}\ (\alpha _{4} = 0.75)\) and \(x_{6}\ (\alpha _{6} = 0.50)\) with negative reduced costs, select the largest pivoting-indexed x 4 to enter the basis, q = 4.
-
3.
I = { 1, 2} ≠ ∅.
-
4.
\(\min \{0/(1/4),0/(1/2)\} = 0\). Among basic variables in rows 1 and 2, select the largest pivoting-indexed \(x_{1}\ (\alpha _{1} = -8.74> -114.78 =\alpha _{2})\) to leave the basis, p = 1.
-
5.
Multiply row 1 by 4, and then add \(-1/2,3/4\) times of row 1 to rows 2,4, respectively:
Iteration 2:
-
1.
Among nonbasic variables \(x_{5}\ (\alpha _{5} = -20.00)\) and x 6 (α 6 = 0. 50) with negative reduced costs, select the largest-pivoting-indexed x 6 to enter the basis, q = 6.
-
3.
I = { 2, 3} ≠ ∅.
-
4.
\(\min \{0/(3/2),1/1\} = 0\), only x 2 is eligible for leaving the basis, p = 2.
-
5.
Multiply row 2 by 2∕3, and then add \(4,-1,7/2\) times of row 2 to rows 1,3,4, respectively:
Iteration 3:
-
1.
Among nonbasic variables \(x_{1}\ (\alpha _{1} = -8.74)\) and \(x_{7}\ (\alpha _{7} = -6.00)\) with negative reduced costs, select the largest-pivoting-indexed x 7 to enter the basis, q = 7.
-
3.
I = { 3} ≠ ∅.
-
4.
Only x 3 is eligible for leaving the basis, p = 3.
-
5.
Multiply row 3 by 1∕10, and then add 4, 10, 2 times of row 3 to rows 1,2,4, respectively:
Iteration 4:
-
1.
Only nonbasic variable x 1 is eligible for entering the basis, q = 1.
-
3.
I = { 3} ≠ ∅.
-
4.
Only x 7 is eligible for leaving the basis, p = 3.
-
5.
Multiply row 3 by 15∕2, and then add \(4/5,7/5\) times of row 3 to rows 1,4, respectively:
All reduced costs are now nonnegative. The optimal solution and optimal value are
Thus, Beale problem is solved without cycling.
8 Computational Complexity
The evaluation of an algorithm is concerned with the amount of required arithmetic and storage, numerical stability, and the degree of difficulty of programming. In this section, we discuss the simplex method’s computational complexity, including its time complexity (an estimate of the number of required basic arithmetic operations and comparisons) and its storage complexity (an estimate of the number of memory locations).
Either time or storage complexity is closely related to the scale of the problem handled: the larger the problem, the higher the complexity. Therefore, complexity must be analyzed with the problem size fixed. For a standard LP problem, it is convenient to use m and n to measure its size roughly. Further, the problem size also depends on the concrete values of (A, b, c), which can be characterized by the total number L of binary digits of the input data, called the input length. For reaching a certain solution precision, the amount of arithmetic is a function of m, n, L. If the number of arithmetic operations required for solving some type of problem is bounded above by some function τ f(m, n, L), where τ > 0 is a constant, the algorithm is said to have (time) complexity of order O(f(m, n, L)), and f(m, n, L) is called the complexity function. If f(m, n, L) is a polynomial in m, n and L, the algorithm is said to be of polynomial time complexity. Usually, such algorithms are regarded as “good” ones, and the lower the order of the polynomial, the better the algorithm. On the other hand, if f(m, n, L) is exponential in m, n or L, the algorithm is said to be of exponential time complexity. Such algorithms are regarded as “bad”, as they can fail to solve larger problems within an acceptable amount of time. Note, however, that this is a worst-case notion of complexity: it only asserts that the amount of arithmetic never exceeds τ f(m, n, L).
The following table lists the numbers of arithmetic operations per iteration and storage locations required by the tableau simplex Algorithm 3.2.1 versus simplex Algorithm 3.5.1:
In the preceding table, the amount of storage required by Algorithm 3.2.1 excludes that for the original data (A, b, c), though these data should be stored in practice. In fact, both algorithms have to restart from scratch periodically after a certain number of iterations (see Sect. 5.1), and Algorithm 3.5.1 moreover utilizes a part of the original data in each iteration. Therefore, the storage requirement of the tableau simplex algorithm is significantly higher than that of the revised version, especially when n ≫ m.
As iterative algorithms, their time complexity depends on the required number of iterations as well as on the arithmetic per iteration. Since they are theoretically equivalent, the two algorithms require the same number of iterations in solving any standard LP problem, if rounding errors are neglected. Thus, we only compare the amount of arithmetic, mainly multiplications, in a single iteration. It is seen from the table that Algorithm 3.5.1 is much superior to the tableau version if m ≪ n. In fact, the latter is not used in practice, appearing only in textbooks. Note that everything listed in the table is for dense computations; for sparse computations, the revised algorithm and its variants are even more markedly superior (see Chap. 5).
In addition, it is seen that the numbers of arithmetic operations per iteration are polynomial functions of m and n. Therefore, the number of required iterations is the key to their time complexity. Note that each iteration corresponds to a basis, and the number of bases is no more than C n m. If n ≥ 2m, then \(C_{n}^{m} \geq {(n/m)}^{m} \geq {2}^{m}\), which indicates that the required number of iterations could attain exponential order. Indeed, Klee and Minty (1972) offered an example in which the simplex method using the conventional pivot rule passes through all 2m vertices. Thus, the conventional rule is not polynomial, in the sense that it does not turn the simplex method into a polynomial-time one. Moreover, it turns out that Bland’s Rule 3.7.1, the “most improvement rule”, and many subsequent rules, such as the steepest-edge rule, are all non-polynomial (Chap. 11). It remains unclear whether a polynomial pivot rule exists, though the possibility seems very low.
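The counting bound just quoted is easy to check numerically, e.g., for m = 10 and n = 20:

```python
from math import comb

# Number of bases is at most C(n, m); once n >= 2m this is already
# bounded below by (n/m)^m >= 2^m, i.e., exponential in m.
m, n = 10, 20
print(comb(n, m))        # C(20, 10) = 184756
print((n // m) ** m)     # (n/m)^m = 2^10 = 1024, the lower bound
```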
Computational experience indicates that the conventional simplex algorithm is slow at solving certain types of LP problems, such as some hard large-scale problems or problems with combinatorial constraints, e.g., those with zero-one coefficients or those from “Krawchuk polynomials” (Schrijver 1986, p. 141; see also Klee 1965). Nevertheless, its average efficiency is quite high. For solving small or medium LP problems, in particular, it usually requires no more than 4m to 6m iterations (including Phase-1).
The fact that the non-polynomial-time simplex algorithm and its variants perform very well in practice reveals that worst-case complexity is of limited reference value, and could even be misleading. In fact, the worst case hardly ever happens in practice, and complexity in some probabilistic sense would be closer to reality. In this respect, Borgwardt (1982a,b) showed that the average complexity of the simplex algorithm is polynomial. Specifically, for the LP problem
where \(A \in {\mathcal{R}}^{m\times n},\ b> 0\), and components of c are random under certain assumptions, he proved that the mathematical expectation of iterations, required by the simplex algorithm using a special pivot rule, is
Using a different probability model and pivot rule, Smale (1983a,b) proved that average complexity of the simplex algorithm when solving
is bounded above by
which is not polynomial, but still better than Borgwardt’s result when m ≪ n. Combining Borgwardt’s pivot rule with a generalized version of Smale’s probability model, Haimovich (1983) proved that the average number of iterations required is a linear polynomial. These theoretical results agree with what is observed in practice.
Finally, we stress that the evaluation of algorithms is basically a practical issue. In a word, practice is the unique touchstone: the value and vitality of an algorithm lie in its performance alone.
9 On Features of the Simplex Method
In this final section of the chapter, we focus on some features of the simplex method.
It is interesting that the method’s prefix “simplex” came from a chat between G.B. Dantzig and T. Motzkin (Dantzig 1991) in the early days of LP. The latter indicated that the m columns of the basis matrix and the entering column just form a “simplex” in the m-dimensional space, so that each iteration of Dantzig’s method may be viewed as a movement from one simplex to an adjacent simplex. Dantzig accepted the suggestion and adopted the name “simplex”.
Accordingly, the simplex method is pivotal and basis-based, closely tied to the linear structure of the LP model. Each of its iterations is characterized by a basis: once a basis is determined, so is the corresponding basic feasible solution. If optimality cannot be asserted, a pivot is selected to make a basis change and improve the solution, or unboundedness of the problem is detected. Consequently, the computational work per iteration of the simplex method is much less than that required by the interior-point method (Chap. 9).
If an optimal basis is available, an LP problem can be handled by solving just a single system of linear equations. Even if this is not the case, a basis close to an optimal one is useful: fewer iterations are usually required when starting from a basis yielded by a previously interrupted solution process. Such a “warm start” is a main source of the simplex method’s bonus. For instance, it is exploited in sensitivity analysis and parametric programming (Chap. 6), in the restarting tactic used in implementation (Chap. 5), and in the decomposition principle (Sect. 25.6). In addition, the warm start is of great importance to the methodology for solving ILP problems.
It is noted that each iteration of the simplex method consists of a pivot selection and a basis change. In fact, since the method emerged, research on it has not gone beyond these two aspects. On the one hand, the pivot rule used is, no doubt, crucial to the method’s efficiency. As a result, new pivot rules have been suggested from time to time, though Dantzig’s original rule, because of its simplicity, enjoyed broad application for a long time, a situation that changed only about 20 years ago. More efficient rules will be presented in Chap. 11. On the other hand, the computation related to the pivot and basis change has been improved continually; related results will be presented in later chapters, especially in Part II of this book.
As for concern whether an index enters and leaves the basis too many times, the following property seems to be favorable.
Proposition 3.9.1.
A leaving column in a simplex iteration does not enter the basis in the next iteration.
Proof.
Since an entering column corresponds to a negative reduced cost and the pivot determined is positive (see (3.10)), a leaving column corresponds to a positive reduced cost, after the associated elementary transformations carried out, and hence never enters the basis in the next iteration. □
Nevertheless, it is not difficult to construct an instance in which a column that has just entered the basis leaves it immediately.
As was known, nonnegativity of the nonbasic reduced costs is not a necessary condition for optimality. The following indicates that it is necessary if nondegeneracy is ensured.
Proposition 3.9.2.
If a basic optimal solution is nondegenerate, reduced costs in the associated simplex tableau are all nonnegative.
Proof.
Assume that there are negative reduced costs in the simplex Tableau 3.1. Without loss of generality, assume \(\bar{z}_{q} <0\). If (3.7) holds, then unboundedness of the problem follows from Theorem 3.2.2, which contradicts the existence of an optimal solution; if, otherwise, (3.7) does not hold, it follows from the nondegeneracy assumption that α > 0, and hence there is a feasible value strictly less than the optimal value, which is a contradiction. Therefore, the reduced costs are all nonnegative. □
The preceding result and Lemma 3.2.1 together imply that, under the nondegeneracy assumption, nonnegativity of the nonbasic reduced costs is not only sufficient but also necessary for optimality. The following result concerns the presence of multiple optimal solutions.
Proposition 3.9.3.
If the nonbasic reduced costs are all positive, the LP problem has a unique optimal solution. If a basic optimal solution is nondegenerate and some reduced cost is zero, then there are infinitely many optimal solutions; in the case when the feasible region is bounded, there are multiple basic optimal solutions.
Proof.
We prove the first half first. Assume that the reduced costs in an optimal tableau are all positive, corresponding to the basic optimal solution \(\bar{x}\). For any feasible solution \(\hat{x} \geq 0\) different from \(\bar{x}\), there is an index s ∈ N such that \(\hat{x}_{s}> 0\) (otherwise, the two would be the same). Therefore, substituting \(\hat{x}\) into (3.6) leads to
which implies that \(\hat{x}\) is not optimal, a contradiction. Therefore, there is a unique optimal solution.
To prove the second half, assume that a tableau, say Tableau 3.1, gives a nondegenerate basic optimal solution and has zero reduced costs. Without loss of generality, assume \(\bar{z}_{q} = 0\). If (3.7) holds, then the inequalities on the right-hand side of (3.8) hold for any x q = α > 0; that is, there are infinitely many feasible solutions, all corresponding to the same optimal value \(-\bar{f}\) (see (3.9)). If the feasible region is bounded, then (3.7) does not hold, and hence it follows from \(\bar{b}_{p}> 0\) that the stepsize α, defined by (3.10), is positive. Thus, for any value of x q in [0, α], a feasible solution can be determined by the equalities of (3.8), corresponding to the optimal value \(-\bar{f}\). Therefore, there are infinitely many optimal solutions. It is clear that entering x q into and dropping \(x_{j_{p}}\) from the basis gives a different basic optimal solution. □
The last half of the proof actually describes an approach to obtaining multiple basic optimal solutions: enter the nonbasic indices corresponding to zero-valued reduced costs into the basis. In this respect, an approach to intercepting the optimal set will be described in Sect. 25.2.
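This approach can be illustrated on a small made-up optimal tableau (not from the text): pivoting on a nonbasic column whose reduced cost is zero changes the basic solution while leaving the reduced-cost row, and hence the optimal value, intact:

```python
import numpy as np

# Optimal tableau: nonbasic reduced costs are [0, 2] >= 0, with a zero
# reduced cost on column 0; the cost row's right-hand entry carries the
# objective information and is untouched by a zero-reduced-cost pivot.
T = np.array([[1.0, 1.0, 1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0, 1.0, 6.0],
              [0.0, 2.0, 0.0, 0.0, 5.0]])
basis = [2, 3]
f_entry = T[-1, -1]

q = 0                                        # nonbasic column with zero reduced cost
ratios = np.where(T[:-1, q] > 0, T[:-1, -1] / T[:-1, q], np.inf)
p = int(np.argmin(ratios))                   # stepsize α = 2 > 0 (nondegenerate)
T[p] /= T[p, q]
for i in range(T.shape[0]):
    if i != p:
        T[i] -= T[i, q] * T[p]
basis[p] = q                                 # new basis {x1, x4}: a different vertex

assert T[-1, -1] == f_entry                  # objective entry unchanged: still optimal
```

The old basic optimal solution (x3 = 2, x4 = 6) and the new one (x1 = 2, x4 = 2) are distinct vertices attaining the same optimal value, as the second half of Proposition 3.9.3 asserts.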
As was mentioned in Sect. 2.4, the simplex method can be explained in terms of the active set method. In each iteration, in fact, a vertex is determined by Ax = b and x j = 0, j ∈ N, corresponding to n active constraints. Since it has zero basic components, a degenerate vertex is the intersection of more than n hyperplanes. At first glance, this case would seem rare in practice. Surprisingly, the situation is just the opposite: problems stemming from practice are almost all degenerate.
The simplex tableau is essentially the canonical form of Ax = b (together with the reduced costs), which may be initially created by Gauss-Jordan elimination. Such a tableau was used above to develop the simplex method, although the same can be done alternatively via the triangular form, involving an upper triangular submatrix rather than a unit matrix. As the latter is associated with Gaussian elimination, in fact, it is the more relevant to implementation (see also the last paragraph of Sect. 1.6).
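As a toy illustration with made-up data, the canonical form and reduced costs associated with a basis B can be computed directly; a real implementation would factorize the basis matrix rather than invert it, in the spirit of the triangular-form remark above:

```python
import numpy as np

# Made-up standard-form data: min c'x subject to Ax = b, x >= 0.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])
B = [0, 2]                                   # basic column indices (a feasible basis here)

Binv = np.linalg.inv(A[:, B])                # explicit inverse: illustration only
Abar = Binv @ A                              # canonical rows: B-columns form a unit matrix
bbar = Binv @ b                              # values of the basic variables
zbar = c - c[B] @ Abar                       # reduced costs; zero on the basic columns
```

Checking that `Abar[:, B]` is the identity and that `zbar` vanishes on B confirms the computed tableau is in canonical form.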
Finally, there are two issues that are not guaranteed by the simplex method.
As is well known, the method is not a polynomial-time one; even its finiteness is, in the presence of degeneracy, not guaranteed in theory. Practically, however, this may not be a serious problem, as the method performs well overall if implemented properly, although some authors disagree (see, e.g., Kotiah and Steinberg 1978).
More seriously, the method in its original form is numerically unstable, because the selected pivot may be arbitrarily small in modulus (see, e.g., Chan 1985; Maros 2003b; Ogryczak 1988). Refer to Rule 3.2.1 used in step 4 of Algorithm 3.2.1: the pivot \(\bar{a}_{p\,q}\), selected by the minimum-ratio test, could be too small to carry out the subsequent computations. Indeed, the simplex method in its original form can solve only a few (even very small) LP problems.
Instead of Rule 3.2.1, the following rule may serve as a remedy when solving highly degenerate LP problems.
Rule 3.9.1 (Row rule)
Define \(I =\{ i\ \mid \ \bar{a}_{i\,q}> 0,\ i = 1,\cdots \,,m\},\ I_{1} =\{ i\ \mid \ \bar{b}_{i} = 0,\ i \in I\}\). Determine the pivot row index p and the stepsize α by
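The displayed formula is not reproduced here. A sketch consistent with the stability motivation above, under our assumption (not stated in the text) that the rule takes stepsize 0 with the largest available pivot whenever I₁ is nonempty and otherwise falls back to the minimum-ratio test:

```python
import numpy as np

def row_rule(col, b, tol=1e-10):
    """Hypothetical reading of Rule 3.9.1 for pivot column `col` and
    right-hand side `b`: among degenerate rows (b_i = 0) with a positive
    entry, take the largest pivot and stepsize 0; otherwise fall back to
    the usual minimum-ratio test. Returns (p, alpha), or (None, None)
    when the column has no positive entry."""
    I = np.flatnonzero(col > tol)            # I = {i | ā_iq > 0}
    if I.size == 0:
        return None, None                    # condition (3.7): unbounded direction
    I1 = I[np.abs(b[I]) <= tol]              # I1 = {i in I | b̄_i = 0}
    if I1.size > 0:
        p = I1[np.argmax(col[I1])]           # largest pivot among degenerate rows
        return int(p), 0.0
    ratios = b[I] / col[I]                   # standard minimum-ratio test
    k = int(np.argmin(ratios))
    return int(I[k]), float(ratios[k])
```

On a degenerate column this picks a well-sized pivot instead of whichever tiny entry the ratio test happens to tie on, which is the stability point the surrounding text makes.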
A more favorable and applicable remedy is Harris' two-pass Rule 5.6.1, though it is somewhat cumbersome (see also Greenberg 1978). Even so, the stability problem is still not entirely overcome, and it remains the source of many troubles encountered in practice. In this respect, the alternative methods presented in Chaps. 15 and 16 might be "terminators".
There are analogues of the preceding issues and remedies for various simplex variants, including the dual simplex method presented in the next chapter.
Notes
- 1.
It is always possible to arrange for the unit matrix to appear at the north-west corner of the simplex tableau by column exchanges. Practically, however, this is not needed, and the matrix corresponding to the basic variables is usually a permutation matrix.
- 2.
When the most negative reduced cost is attained by more than one column, any of the tied columns may be chosen. Similarly below.
References
Abadie J, Carpentier J (1969) Generalization of the Wolfe reduced gradient method to the case of non-linear constrained optimization. In: Fletcher R (ed) Optimization. Academic, London, pp 37–48
Abel P (1987) On the choice of the pivot columns of the simplex method: gradient criteria. Computing 38:13–21
Adler I, Megiddo N (1985) A simplex algorithm whose average number of steps is bounded between two quadratic functions of the smaller dimension. J ACM 32:871–895
Adler I, Resende MGC, Veige G, Karmarkar N (1989) An implementation of Karmarkar’s algorithm for linear programming. Math Program 44:297–335
Andersen E, Andersen K (1995) Presolving in linear programming. Math Program 71:221–245
Andersen ED, Gondzio J, Mészáros C, Xu X (1996) Implementation of interior-point methods for large scale linear programming. In: Terlaky T (ed) Interior point methods of mathematical programming. Kluwer, Dordrecht
Andrei N, Barbulescu M (1993) Balance constraints reduction of large-scale linear programming problems. Ann Oper Res 43:149–170
Anstreicher KM, Watteyne P (1993) A family of search directions for Karmarkar’s algorithm. Oper Res 41:759–767
Arrow KJ, Hurwicz L (1956) Reduction of constrained maxima to saddle-point problems. In: Neyman J (ed) Proceedings of the third Berkeley symposium on mathematical statistics and probability, vol 5. University of California Press, Berkeley, pp 1–26
Avis D, Chvatal V (1978) Notes on Bland’s pivoting rule. Math Program 8:24–34
Balas E (1965) An additive algorithm for solving linear programs with zero-one variables. Oper Res 13:517–546
Balinski ML, Gomory RE (1963) A mutual primal-dual simplex method. In: Graves RL, Wolfe P (eds) Recent advances in mathematical programming. McGraw-Hill, New York
Balinski ML, Tucker AW (1969) Duality theory of linear problems: a constructive approach with applications. SIAM Rev 11:347–377
Barnes ER (1986) A variation on Karmarkar's algorithm for solving linear programming problems. Math Program 36:174–182
Bartels RH (1971) A stabilization of the simplex method. Numer Math 16:414–434
Bartels RH, Golub GH (1969) The simplex method of linear programming using LU decomposition. Commun ACM 12:266–268
Bartels RH, Stoer J, Zenger Ch (1971) A realization of the simplex method based on triangular decompositions. In: Wilkinson JH, Reinsch C (eds) Contributions I/II in handbook for automatic computation, volume II: linear algebra. Springer, Berlin/London
Bazaraa MS, Jarvis JJ, Sherali HD (1977) Linear programming and network flows, 2nd edn. Wiley, New York
Beale EML (1954) An alternative method for linear programming. Proc Camb Philos Soc 50:513–523
Beale EML (1955) Cycling in the dual simplex algorithm. Nav Res Logist Q 2:269–275
Beale E (1968) Mathematical programming in practice. Topics in operations research. Pitman & Sons, London
Benders JF (1962) Partitioning procedures for solving mixed-variables programming problems. Numer Math 4:238–252
Benichou MJ, Cautier J, Hentges G, Ribiere G (1977) The efficient solution of large scale linear programming problems. Math Program 13:280–322
Bixby RE (1992) Implementing the simplex method: the initial basis. ORSA J Comput 4:287–294
Bixby RE (1994) Progress in linear programming. ORSA J Comput 6:15–22
Bixby RE (2002) Solving real-world linear problems: a decade and more of progress. Oper Res 50:3–15
Bixby RE, Saltzman MJ (1992) Recovering an optimal LP basis from the interior point solution. Technical report 607, Department of Mathematical Sciences, Clemson University, Clemson
Bixby RE, Wagner DK (1987) A note on detecting simple redundancies in linear systems. Oper Res Lett 6:15–17
Bixby RE, Gregory JW, Lustig IJ, Marsten RE, Shanno DF (1992) Very large-scale linear programming: a case study in combining interior point and simplex methods. Oper Res 40:885–897
Björck A, Plemmons RJ, Schneider H (1981) Large-scale matrix problems. North-Holland, Amsterdam
Bland RG (1977) New finite pivoting rules for the simplex method. Math Oper Res 2:103–107
Borgwardt K-H (1982a) Some distribution-dependent results about the asymptotic order of the average number of pivot steps of the simplex method. Math Oper Res 7:441–462
Borgwardt K-H (1982b) The average number of pivot steps required by the simplex method is polynomial. Z Oper Res 26:157–177
Botsaris CA (1974) Differential gradient methods. J Math Anal Appl 63:177–198
Brearley A, Mitra G, Williams HB (1975) Analysis of mathematical problems prior to applying the simplex algorithm. Math Program 8:54–83
Brown AA, Bartholomew-Biggs MC (1987) ODE vs SQP methods for constrained optimization. Technical report, 179, The Numerical Center, Hatfield Polytechnic
Carolan MJ, Hill JE, Kennington JL, Niemi S, Wichmann SJ (1990) An empirical evaluation of the KORBX algorithms for military airlift applications. Oper Res 38:240–248
Cavalier TM, Soyster AL (1985) Some computational experience and a modification of the Karmarkar algorithm. ISME working paper, The Pennsylvania State University, pp 85–105
Chan TF (1985) On the existence and computation of LU factorizations with small pivots. Math Comput 42:535–548
Chang YY (1979) Least index resolution of degeneracy in linear complementarity problems. Technical Report 79–14, Department of OR, Stanford University
Charnes A (1952) Optimality and degeneracy in linear programming. Econometrica 20:160–170
Cheng MC (1987) General criteria for redundant and nonredundant linear inequalities. J Optim Theory Appl 53:37–42
Chvatal V (1983) Linear programming. W.H. Freeman, New York
Cipra BA (2000) The best of the 20th century: editors name top 10 algorithms. SIAM News 33: 1–2
Coleman TF, Pothen A (1987) The null space problem II. Algorithms. SIAM J Algebra Discret Methods 8:544–562
Cook AS (1971) The complexity of theorem-proving procedure. In: Proceedings of third annual ACM symposium on theory of computing, Shaker Heights, 1971. ACM, New York, pp 151–158
Cottle RW, Johnson E, Wets R (2007) George B. Dantzig (1914–2005). Not AMS 54:344–362
CPLEX ILOG (2007) 11.0 User’s manual. ILOG SA, Gentilly, France
Curtis A, Reid J (1972) On the automatic scaling of matrices for Gaussian elimination. J Inst Math Appl 10:118–124
Dantzig GB (1948) Programming in a linear structure, Comptroller, USAF, Washington, DC
Dantzig GB (1951a) Programming of interdependent activities, mathematical model. In: Koopmans TC (ed) Activity analysis of production and allocation. Wiley, New York, pp 19–32; Econometrica 17(3/4):200–211 (1949)
Dantzig GB (1951b) Maximization of a linear function of variables subject to linear inequalities. In: Koopmans TC (ed) Activity analysis of production and allocation. Wiley, New York, pp 339–347
Dantzig GB (1951c) A proof of the equivalence of the programming problem and the game problem. In: Koopmans T (ed) Activity analysis of production and allocation. Wiley, New York, pp 330–335
Dantzig GB (1963) Linear programming and extensions. Princeton University Press, Princeton
Dantzig GB (1991) Linear programming. In: Lenstra JK, Rinnooy Kan AHG, Schrijver A (eds) History of mathematical programming. CWI, Amsterdam, pp 19–31
Dantzig GB, Ford LR, Fulkerson DR (1956) A primal-dual algorithm for linear programs. In: Kuhn HK, Tucker AW (eds) Linear inequalities and related systems. Princeton University Press, Princeton, pp 171–181
Dantzig GB, Orchard-Hays W (1953) Alternate algorithm for the revised simplex method using product form for the inverse. Notes on linear programming: part V, RM-1268. The RAND Corporation, Santa Monica
Dantzig GB, Orchard-Hays W (1954) The product form for the inverse in the simplex method. Math Tables Other Aids Comput 8:64–67
Dantzig GB, Thapa MN (1997) Linear programming 1: introduction. Springer, New York
Dantzig GB, Thapa MN (2003) Linear programming 2: theory and extensions. Springer, New York
Dantzig GB, Wolfe P (1960) Decomposition principle for linear programs. Oper Res 8:101–111
Dantzig GB, Orden A, Wolfe P (1955) The generalized simplex method for minimizing a linear form under linear inequality constraints. Pac J Math 5:183–195
de Ghellinck G, Vial J-Ph (1986) A polynomial Newton method for linear programming. Algorithmica 1:425–453 (Special issue)
Dikin I (1967) Iterative solution of problems of linear and quadratic programming. Sov Math Dokl 8:674–675
Dikin I (1974) On the speed of an iterative process. Upravlyaemye Sistemi 12:54–60
Dorfman R, Samuelson PA, Solow RM (1958) Linear programming and economic analysis. McGraw-Hill, New York
Duff IS, Erisman AM, Reid JK (1986) Direct methods for sparse matrices. Oxford University Press, Oxford
Evtushenko YG (1974) Two numerical methods for solving non-linear programming problems. Sov Math Dokl 15:420–423
Fang S-C (1993) Linear optimization and extensions: theory and algorithms. AT & T, Prentice-Hall, Englewood Cliffs
Farkas J (1902) Uber die Theorie der Einfachen Ungleichungen. Journal für die Reine und Angewandte Mathematik 124:1–27
Fiacco AV, Mccormick GP (1968) Nonlinear programming: sequential unconstrained minimization techniques. Wiley, New York
Fletcher R (1981) Practical methods of optimization. Volume 2: constrained optimization. Wiley, Chichester
Ford Jr LR, Fulkerson DR (1956) Maximal flow through a network. Can J Math 8:399–407
Forrest JJH, Goldfarb D (1992) Steepest edge simplex algorithm for linear programming. Math Program 57:341–374
Forrest J, Tomlin J (1972) Updating triangular factors of the basis to maintain sparsity in the product form simplex method. Math Program 2:263–278
Forsythe GE, Malcolm MA, Moler CB (1977) Computer methods for mathematical computations. Prentice-Hall, Englewood Cliffs
Fourer R (1979) Sparse Gaussian elimination of staircase linear systems. Technical report ADA081856, Calif Systems Optimization LAB, Stanford University
Fourier JBJ (1823) Analyse des travaux de l'Académie Royale des Sciences, pendant l'année 1823, Partie mathématique. Histoire de l'Académie Royale des Sciences de l'Institut de France 6 (1826):xxix–xli (partially reprinted as: Premier extrait. In: Darboux G (ed) Oeuvres de Fourier, Tome II. Gauthier-Villars, Paris, 1890; reprinted: G. Olms, Hildesheim, 1970, pp 321–324)
Frisch KR (1955) The logarithmic potential method of convex programming. Memorandum, University Institute of Economics, Oslo
Fulkerson D, Wolfe P (1962) An algorithm for scaling matrices. SIAM Rev 4:142–146
Gale D, Kuhn HW, Tucker AW (1951) Linear programming and the theory of games. In: Koopmans T (ed) Activity analysis of production and allocation. Wiley, New York, pp 317–329
Gass SI (1985) Linear programming: methods and applications. McGraw-Hill, New York
Gass SI, Saaty T (1955) The computational algorithm for the parametric objective function. Nav Res Logist Q 2:39–45
Gay DM (1978) On combining the schemes of Reid and Saunders for sparse LP bases. In: Duff IS, Stewart GW (eds) Sparse matrix proceedings. SIAM, Philadelphia, pp 313–334
Gay D (1985) Electronic mail distribution of linear programming test problems. COAL Newsl 13:10–12
Gay DM (1987) A variant of Karmarkar’s linear programming algorithm for problems in standard form. Math Program 37:81–90
Geoffrion AM (1972) Generalized Benders decomposition. JOTA 10:137–154
George A, Liu W-H (1981) Computer solution of large sparse positive definite systems. Prentice-Hall, Englewood Cliffs
Gill PE, Murray W (1973) A numerically stable form of the simplex algorithm. Linear Algebra Appl 7:99–138
Gill PE, Murray W, Saunders MA, Tomlin JA, Wright MH (1985) On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projected method. Technical report SOL 85–11, Department of Operations Research, Stanford University
Gill PE, Murray W, Saunders MA, Tomlin JA, Wright MH (1986) On projected Newton methods for linear programming and an equivalence to Karmarkar’s projected method. Math Program 36:183–209
Gill PE, Murray W, Saunders MA, Wright MH (1987) Maintaining LU factors of a general sparse matrix. Linear Algebra Appl 88/89:239–270
Gill PE, Murray W, Saunders MA, Wright MH (1989) A practical anti-cycling procedure for linearly constrained optimization. Math Program 45:437–474
Goldfarb D (1977) On the Bartels-Golub decomposition for linear programming bases. Math Program 13:272–279
Goldfarb D, Reid JK (1977) A practicable steepest-edge simplex algorithm. Math Program 12:361–371
Goldman AJ, Tucker AW (1956a) Polyhedral convex cones. In: Kuhn HW, Tucker AW (eds) Linear inequalities and related systems. Annals of mathematical studies, vol 38. Princeton University Press, Princeton, pp 19–39
Goldman AJ, Tucker AW (1956b) Theory of linear programming. In: Kuhn HW, Tucker AW (eds) Linear inequalities and related systems. Annals of mathematical studies, vol 38. Princeton University Press, Princeton, pp 53–97
Golub GH (1965) Numerical methods for solving linear least squares problems. Numer Math 7:206–216
Golub GH, Van Loan CF (1989) Matrix computations, 2nd edn. The Johns Hopkins University Press, Baltimore
Gomory RE (1958) Outline of an algorithm for integer solutions to linear programs. Bull Am Math Soc 64:275–278
Gonzaga CC (1987) An algorithm for solving linear programming problems in O(n 3 L) operations. Technical Report UCB/ERL M87/10, Electronics Research Laboratory, University of California, Berkeley
Gonzaga CC (1990) Convergence of the large step primal affine-scaling algorithm for primal non-degenerate linear problems. Technical report, Department of Systems Engineering and Computer Sciences, COPPE-Federal University of Rio de Janeiro
Gould N, Reid J (1989) New crash procedure for large systems of linear constraints. Math Program 45:475–501
Greenberg HJ (1978) Pivot selection tactics. In: Greenberg HJ (ed) Design and implementation of optimization software. Sijthoff and Noordhoff, Alphen aan den Rijn, pp 109–143
Greenberg HJ, Kalan J (1975) An exact update for Harris’ tread. Math Program Study 4:26–29
Guerrero-Garcia P, Santos-Palomo A (2005) Phase I cycling under the most-obtuse-angle pivot rule. Eur J Oper Res 167:20–27
Guerrero-Garcia P, Santos-Palomo A (2009) A deficient-basis dual counterpart of Paparrizos, Samaras and Stephanides' primal-dual simplex-type algorithm. Optim Methods Softw 24:187–204
Güler O, Ye Y (1993) Convergence behavior of interior-point algorithms. Math Program 60:215–228
Hadley G (1972) Linear programming. Addison-Wesley, Reading
Hager WW (2002) The dual active set algorithm and its application to linear programming. Comput Optim Appl 21:263–275
Hall LA, Vanderbei RJ (1993) Two-third is sharp for affine scaling. Oper Res Lett 13:197–201
Hamming RW (1971) Introduction to applied numerical analysis. McGraw-Hill, New York
Harris PMJ (1973) Pivot selection methods of the Devex LP code. Math Program 5:1–28
Hattersley B, Wilson J (1988) A dual approach to primal degeneracy. Math Program 42:135–145
He X-C, Sun W-Y (1991) An introduction to generalized inverse matrix (in Chinese). Jiangsu Science and Technology Press, Nanjing
Hellerman E, Rarick DC (1971) Reinversion with the preassigned pivot procedure. Math Program 1:195–216
Hellerman E, Rarick DC (1972) The partitioned preassigned pivot procedure. In: Rose DJ, Willouhby RA (eds) Sparse matrices and their applications. Plenum, New York, pp 68–76
Hertog DD, Roos C (1991) A survey of search directions in interior point methods for linear programming. Math Program 52:481–509
Hoffman AJ (1953) Cycling in the simplex algorithm. Technical report 2974, National Bureau of Standards
Hu J-F (2007) A note on “an improved initial basis for the simplex algorithm”. Comput Oper Res 34:3397–3401
Hu J-F, Pan P-Q (2006) A second note on ‘A method to solve the feasible basis of LP’ (in Chinese). Oper Res Manag Sci 15:13–15
Hu J-F, Pan P-Q (2008a) Fresh views on some recent developments in the simplex algorithm. J Southeast Univ 24:124–126
Hu J-F, Pan P-Q (2008b) An efficient approach to updating simplex multipliers in the simplex algorithm. Math Program Ser A 114:235–248
Jansen B, Terlaky T, Roos C (1994) The theory of linear programming: skew symmetric self-dual problems and the central path. Optimization 29:225–233
Jansen B, Roos C, Terlaky T (1996) Target-following methods for linear programming. In: Terlaky T (ed) Interior point methods of mathematical programming. Kluwer, Dordrecht
Jeroslow R (1973) The simplex algorithm with the pivot rule of maximizing criterion improvement. Discret Appl Math 4:367–377
Kalantari B (1990) Karmarkar’s algorithm with improved steps. Math Program 46:73–78
Kallio M, Porteus EL (1978) A class of methods for linear programming. Math Program 14:161–169
Kantorovich LV (1960) Mathematical methods in the organization and planning of production. Manag Sci 6:550–559. Original Russian version appeared in 1939
Karmarkar N (1984) A new polynomial time algorithm for linear programming. Combinatorica 4:373–395
Karmarkar N, Ramakrishnan K (1985) Further developments in the new polynomial-time algorithm for linear programming. In: Talk given at ORSA/TIMES national meeting, Boston, Apr 1985
Khachiyan L (1979) A polynomial algorithm in linear programming. Doklady Academiia Nauk SSSR 244:1093–1096
Kirillova FM, Gabasov R, Kostyukova OI (1979) A method of solving general linear programming problems. Doklady AN BSSR (in Russian) 23:197–200
Klee V (1965) A class of linear programming problems requiring a large number of iterations. Numer Math 7:313–321
Klee V, Minty GJ (1972) How good is the simplex algorithm? In: Shisha O (ed) Inequalities-III. Academic, New York, pp 159–175
Koberstein A (2008) Progress in the dual simplex algorithm for solving large scale LP problems: techniques for a fast and stable implementation. Comput Optim Appl 41:185–204
Koberstein A, Suhl UH (2007) Progress in the dual simplex method for large scale LP problems: practical dual phase 1 algorithms. Comput Optim Appl 37:49–65
Kojima M, Mizuno S, Yoshise A (1989) A primal-dual interior point algorithm for linear programming. In: Megiddo N (ed) Progress in mathematical programming. Springer, New York, pp 29–47
Kojima M, Megiddo N, Mizuno S (1993) A primal-dual infeasible-interior-point algorithm for linear programming. Math Program 61:263–280
Kortanek KO, Shi M (1987) Convergence results and numerical experiments on a linear programming hybrid algorithm. Eur J Oper Res 32:47–61
Kostina E (2002) The long step rule in the bounded-variable dual simplex method: numerical experiments. Math Methods Oper Res 55:413–429
Kotiah TCT, Steinberg DI (1978) On the possibility of cycling with the simplex method. Oper Res 26:374–376
Kuhn HW, Quandt RE (1953) An experimental study of the simplex method. In: Metropolis NC et al (eds) Experimental arithmetic, high-speed computing and mathematics. Proceedings of symposia in applied mathematics XV. American Mathematical Society, Providence, pp 107–124
Land AH, Doig AG (1960) An automatic method of solving discrete programming problems. Econometrica 28:497–520
Leichner SA, Dantzig GB, Davis JW (1993) A strictly improving linear programming phase I algorithm. Ann Oper Res 47:409–430
Lemke CE (1954) The dual method of solving the linear programming problem. Nav Res Logist Q 1:36–47
Li W (2004) A note on two direct methods in linear programming. Eur J Oper Res 158:262–265
Li C, Pan P-Q, Li W (2002) A revised simplex algorithm based on partial pricing pivot rule (in Chinese). J Wenzhou Univ 15:53–55
Li W, Guerrero-Garcia P, Santos-Palomo A (2006a) A basis-deficiency-allowing primal phase-1 algorithm using the most-obtuse-angle column rule. Comput Math Appl 51:903–914
Li W, Pan P-Q, Chen G (2006b) A combined projected gradient algorithm for linear programming. Optim Methods Softw 21:541–550
Llewellyn RW (1964) Linear programming. Holt, Rinehart and Winston, New York
Luo Z-Q, Wu S (1994) A modified predictor-corrector method for linear programming. Comput Optim Appl 3:83–91
Lustig IJ (1990) Feasibility issues in a primal-dual interior-point method for linear programming. Math Program 49:145–162
Lustig IJ, Marsten RE, Shanno DF (1991) Computational experience with a primal-dual interior point method for linear programming. Linear Algebra Appl 152:191–222
Lustig IJ, Marsten RE, Shanno DF (1992) On implementing Mehrotra's predictor-corrector interior-point method for linear programming. SIAM J Optim 2:435–449
Lustig IJ, Marsten R, Shanno D (1994) Interior point methods for linear programming: computational state of the art. ORSA J Comput 6:1–14
Markowitz HM (1957) The elimination form of the inverse and its application to linear programming. Manag Sci 3:255–269
Maros I (1986) A general phase-1 method in linear programming. Eur J Oper Res 23:64–77
Maros I (2003a) A generalized dual phase-2 simplex algorithm. Eur J Oper Res 149:1–16
Maros I (2003b) Computational techniques of the simplex method. International series in operations research and management, vol 61. Kluwer, Boston
Maros I, Khaliq M (2002) Advances in design and implementation of optimization software. Eur J Oper Res 140:322–337
Marshall KT, Suurballe JW (1969) A note on cycling in the simplex method. Nav Res Logist Q 16:121–137
Martin RK (1999) Large scale linear and integer optimization: a unified approach. Kluwer, Boston
Mascarenhas WF (1997) The affine scaling algorithm fails for λ = 0.999. SIAM J Optim 7:34–46
McShane KA, Monma CL, Shanno DF (1989) An implementation of a primal-dual method for linear programming. ORSA J Comput 1:70–83
Megiddo N (1986a) Introduction: new approaches to linear programming. Algorithmica 1:387–394 (Special issue)
Megiddo N (1986b) A note on degeneracy in linear programming. Math Program 35:365–367
Megiddo N (1989) Pathways to the optimal set in linear programming. In: Megiddo N (ed) Progress in mathematical programming. Springer, New York, pp 131–158
Megiddo N, Shub M (1989) Boundary behavior of interior point algorithm in linear programming. Math Oper Res 14:97–146
Mehrotra S (1991) On finding a vertex solution using interior point methods. Linear Algebra Appl 152:233–253
Mehrotra S (1992) On the implementation of a primal-dual interior point method. SIAM J Optim 2:575–601
Mizuno S, Todd MJ, Ye Y (1993) On adaptive-step primal-dual interior-point algorithms for linear programming. Math Oper Res 18:964–981
Monteiro RDC, Adler I (1989) Interior path following primal-dual algorithms: Part I: linear programming. Math Program 44:27–41
Murtagh BA (1981) Advances in linear programming: computation and practice. McGraw-Hill, New York/London
Murtagh BA, Saunders MA (1978) Large-scale linearly constrained optimization. Math Program 14:41–72
Murtagh BA, Saunders MA (1998) MINOS 5.5 user's guide. Technical report SOL 83-20R, Department of Engineering Economics Systems & Operations Research, Stanford University, Stanford
Murty KG (1983) Linear programming. Wiley, New York
Nazareth JL (1987) Computer solutions of linear programs. Oxford University Press, Oxford
Nazareth JL (1996) The implementation of linear programming algorithms based on homotopies. Algorithmica 15:332–350
Nemhauser GL (1994) The age of optimization: solving large-scale real-world problems. Oper Res 42:5–13
Nemhauser GL, Wolsey LA (1999) Integer and combinatorial optimization. Wiley, New York
Nocedal J, Wright SJ (1999) Numerical optimization. Springer, Berlin
Ogryczak W (1988) The simplex method is not always well behaved. Linear Algebra Appl 109:41–57
Orchard-Hays W (1954) Background development and extensions of the revised simplex method. Report RM 1433, The Rand Corporation, Santa Monica
Orchard-Hays W (1956) Evolution of computer codes for linear programming. Paper P-810, The RAND Corporation, p 2224
Orchard-Hays W (1971) Advanced linear programming computing techniques. McGraw-Hill, New York
Padberg MW (1995) Linear optimization and extensions. Springer, Berlin
Pan P-Q (1982) Differential equation methods for unconstrained optimization (in Chinese). Numer Math J Chin Univ 4:338–349
Pan P-Q (1990) Practical finite pivoting rules for the simplex method. OR Spektrum 12:219–225
Pan P-Q (1991) Simplex-like method with bisection for linear programming. Optimization 22:717–743
Pan P-Q (1992a) New ODE methods for equality constrained optimization (I) – equations. J Comput Math 10:77–92
Pan P-Q (1992b) New ODE methods for equality constrained optimization (II) – algorithms. J Comput Math 10:129–146
Pan P-Q (1992c) Modification of Bland’s pivoting rule (in Chinese). Numer Math 14:379–381
Pan P-Q (1994a) A variant of the dual pivot rule in linear programming. J Inf Optim Sci 15:405–413
Pan P-Q (1994b) Composite phase-1 methods without measuring infeasibility. In: Yue M-Y (ed) Theory of optimization and its applications. Xidian University Press, Xian, pp 359–364.
Pan P-Q (1994c) Ratio-test-free pivoting rules for the bisection simplex method. In: Proceedings of national conference on decision making science, Shangrao, pp 24–29
Pan P-Q (1994d) Ratio-test-free pivoting rules for a dual phase-1 method. In: Xiao S-T, Wu F (eds) Proceeding of the third conference of Chinese SIAM. Tsinghua University press, Beijing, pp 245–249
Pan P-Q (1995) New non-monotone procedures for achieving dual feasibility. J Nanjing Univ Math Biquarterly 12:155–162
Pan P-Q (1996a) A modified bisection simplex method for linear programming. J Comput Math 14:249–255
Pan P-Q (1996b) New pivot rules for achieving dual feasibility. In: Wei Z (ed) Theory and applications of OR. Proceedings of the fifth conference of Chinese OR society, Xian, 10–14 Oct 1996. Xidian University Press, Xian, pp 109–113
Pan P-Q (1996c) Solving linear programming problems via appending an elastic constraint. J Southeast Univ (English edn) 12:253–265
Pan P-Q (1997) The most-obtuse-angle row pivot rule for achieving dual feasibility in linear programming: a computational study. Eur J Oper Res 101:164–176
Pan P-Q (1998a) A dual projective simplex method for linear programming. Comput Math Appl 35:119–135
Pan P-Q (1998b) A basis-deficiency-allowing variation of the simplex method. Comput Math Appl 36:33–53
Pan P-Q (1999a) A new perturbation simplex algorithm for linear programming. J Comput Math 17:233–242
Pan P-Q (1999b) A projective simplex method for linear programming. Linear Algebra Appl 292:99–125
Pan P-Q (2000a) A projective simplex algorithm using LU decomposition. Comput Math Appl 39:187–208
Pan P-Q (2000b) Primal perturbation simplex algorithms for linear programming. J Comput Math 18:587–596
Pan P-Q (2000c) On developments of pivot algorithms for linear programming. In: Proceedings of the sixth national conference of operations research society of China, Changsha, 10–15 Oct 2000. Global-Link Publishing, Hong Kong, pp 120–129
Pan P-Q (2004) A dual projective pivot algorithm for linear programming. Comput Optim Appl 29:333–344
Pan P-Q (2005) A revised dual projective pivot algorithm for linear programming. SIAM J Optim 16:49–68
Pan P-Q (2008a) A largest-distance pivot rule for the simplex algorithm. Eur J Oper Res 187:393–402
Pan P-Q (2008b) A primal deficient-basis algorithm for linear programming. Appl Math Comput 198:898–912
Pan P-Q (2008c) Efficient nested pricing in the simplex algorithm. Oper Res Lett 38:309–313
Pan P-Q (2010) A fast simplex algorithm for linear programming. J Comput Math 28(6):837–847
Pan P-Q (2013) An affine-scaling pivot algorithm for linear programming. Optimization 62:431–445
Pan P-Q, Li W (2003) A non-monotone phase-1 method in linear programming. J Southeast Univ (English edn) 19:293–296
Pan P-Q, Ouyang Z-X (1993) Two variants of the simplex algorithm (in Chinese). J Math Res Expo 13(2):274–275
Pan P-Q, Ouyang Z-X (1994) Moore-Penrose inverse simplex algorithms based on successive linear subprogramming approach. Numer Math 3:180–190
Pan P-Q, Pan Y-P (2001) A phase-1 approach to the generalized simplex algorithm. Comput Math Appl 42:1455–1464
Pan P-Q, Li W, Wang Y (2004) A phase-1 algorithm using the most-obtuse-angle rule for the basis-deficiency-allowing dual simplex method. OR Trans 8:88–96
Pan P-Q, Li W, Cao J (2006a) Partial pricing rule simplex method with deficient basis. Numer Math J Chin Univ (English series) 15:23–30
Pan P-Q, Hu J-F, Li C (2006b) Feasible region contraction interior point algorithm. Appl Math Comput 182:1361–1368
Papadimitriou CH, Steiglitz K (1982) Combinatorial optimization: algorithms and complexity. Prentice-Hall, New Jersey
Perold AF (1980) A degeneracy exploiting LU factorization for the simplex method. Math Program 19:239–254
Powell MJD (1989) A tolerant algorithm for linearly constrained optimization calculations. Math Program 45:547–566
Reid JK (1982) A sparsity-exploiting variant of the Bartels-Golub decomposition for linear programming bases. Math Program 24:55–69
Rockafellar RT (1997) Convex analysis. Princeton University Press, Princeton
Roos C (1990) An exponential example for Terlaky’s pivoting rule for the criss-cross simplex method. Math Program 46:79–84
Roos C, Vial J-Ph (1992) A polynomial method of approximate centers for linear programming. Math Program 54:295–305
Roos C, Terlaky T, Vial J-P (1997) Theory and algorithms for linear programming. Wiley, Chichester
Rothenberg RI (1979) Linear programming. North-Holland, New York
Ryan D, Osborne M (1988) On the solution of highly degenerate linear problems. Math Program 41:385–392
Saigal R (1995) Linear programming. Kluwer, Boston
Santos-Palomo A (2004) The sagitta method for solving linear problems. Eur J Oper Res 157:527–539
Saunders MA (1972) Large scale linear programming using the Cholesky factorization. Technical report STAN-CS-72-152, Stanford University
Saunders MA (1973) The complexity of LU updating in the simplex method. In: Andersen R, Brent R (eds) The complexity of computational problem solving. University Press, St. Lucia, pp 214–230
Saunders MA (1976) A fast and stable implementation of the simplex method using Bartels-Golub updating. In: Bunch J, Rose D (eds) Sparse matrix computation. Academic, New York, pp 213–226
Schrijver A (1986) Theory of linear and integer programming. Wiley, Chichester
Shanno DF, Bagchi A (1990) A unified view of interior point methods for linear programming. Ann Oper Res 22:55–70
Shen Y, Pan P-Q (2006) Dual bisection simplex algorithm (in Chinese). In: Proceedings of the national conference of operations research society of China, Shenzhen. Global-Link Informatics, Hong Kong, pp 168–174
Shi Y, Pan P-Q (2011) Higher order iteration schemes for unconstrained optimization. Am J Oper Res 1:73–83
Smale S (1983a) On the average number of steps of the simplex method of linear programming. Math Program 27:241–262
Smale S (1983b) The problem of the average speed of the simplex method. In: Bachem A, Grotschel M, Korte B, (eds) Mathematical programming, the state of the art. Springer, Berlin, pp 530–539
Srinath LS (1982) Linear programming: principles and applications. Affiliated East-West Press, New Delhi
Suhl UH (1994) Mathematical optimization system. Eur J Oper Res 72:312–322
Suhl LM, Suhl UH (1990) Computing sparse LU factorization for large-scale linear programming bases. ORSA J Comput 2:325–335
Suhl LM, Suhl UH (1993) A fast LU-update for linear programming. Ann Oper Res 43:33–47
Sun W, Yuan Y-X (2006) Optimization theory and methods: nonlinear programming. Springer, New York
Swietanowski A (1998) A new steepest edge approximation for the simplex method for linear programming. Comput Optim Appl 10:271–281
Taha H (1975) Integer programming: theory, applications, and computations. Academic, Orlando
Talacko JV, Rockafellar RT (1960) A compact simplex algorithm and a symmetric algorithm for general linear programs. Unpublished paper, Marquette University
Tanabe K (1977) A geometric method in non-linear programming. Technical Report 23343-AMD780, Brookhaven National Laboratory, New York
Tanabe K (1990) Centered Newton method for linear programming: interior and ‘exterior’ point method (in Japanese). In: Tone K (ed) New methods for linear programming 3. The Institute of Statistical Mathematics, Tokyo, pp 98–100
Tapia RA, Zhang Y (1991) An optimal-basis identification technique for interior-point linear programming algorithms. Linear Algebra Appl 152:343–363
Terlaky T (1985) A convergent criss-cross method. Math Oper Stat Ser Optim 16:683–690
Terlaky T (1993) Pivot rules for linear programming: a survey on recent theoretical developments. Ann Oper Res 46:203–233
Terlaky T (ed) (1996) Interior point methods of mathematical programming. Kluwer, Dordrecht
Todd MJ (1982) An implementation of the simplex method for linear programming problems with variable upper bounds. Math Program 23:23–49
Todd MJ (1983) Large scale linear programming: geometry, working bases and factorizations. Math Program 26:1–23
Tomlin JA (1972) Modifying triangular factors of the basis in the simplex method. In: Rose DJ, Willoughby RA (eds) Sparse matrices and applications. Plenum, New York
Tomlin JA (1974) On pricing and backward transformation in linear programming. Math Program 6:42–47
Tomlin JA (1975) On scaling linear programming problems. Math Program Study 4:146–166
Tomlin JA (1987) An experimental approach to Karmarkar’s projective method for linear programming. Math Program 31:175–191
Tsuchiya T (1992) Global convergence property of the affine scaling method for primal degenerate linear programming problems. Math Oper Res 17:527–557
Tsuchiya T, Muramatsu M (1995) Global convergence of a long-step affine-scaling algorithm for degenerate linear programming problems. SIAM J Optim 5:525–551
Tucker AW (1956) Dual systems of homogeneous linear relations. In: Kuhn HW, Tucker AW, Dantzig GB (eds) Linear inequalities and related systems. Princeton University Press, Princeton, pp 3–18
Turner K (1991) Computing projections for the Karmarkar algorithm. Linear Algebra Appl 152:141–154
Vanderbei RJ, Lagarias JC (1990) I.I. Dikin’s convergence result for the affine-scaling algorithm. Contemp Math 114:109–119
Vanderbei RJ, Meketon M, Freedman B (1986) A modification of Karmarkar’s linear programming algorithm. Algorithmica 1:395–407
Vemuganti RR (2004) On gradient simplex methods for linear programs. J Appl Math Decis Sci 8:107–129
Wang Z (1987) A conformal elimination-free algorithm for oriented matroid programming. Chin Ann Math 8(B1):16–25
Wilkinson JH (1971) Modern error analysis. SIAM Rev 13:548–568
Wolfe P (1963) A technique for resolving degeneracy in linear programming. J Oper Res Soc 11:205–211
Wolfe P (1965) The composite simplex algorithm. SIAM Rev 7:42–54
Wolsey L (1998) Integer programming. Wiley, New York
Wright SJ (1997) Primal-dual interior-point methods. SIAM, Philadelphia
Xu X, Ye Y (1995) A generalized homogeneous and self-dual algorithm for linear programming. Oper Res Lett 17:181–190
Xu X, Hung P-F, Ye Y (1996) A simplified homogeneous and self-dual linear programming algorithm and its implementation. Ann Oper Res 62:151–171
Yan W-L, Pan P-Q (2001) Improvement of the subproblem in the bisection simplex algorithm (in Chinese). J Southeast Univ 31:324–241
Yan A, Pan P-Q (2005) Variation of the conventional pivot rule and the application in the deficient basis algorithm (in Chinese). Oper Res Manag Sci 14:28–33
Yan H-Y, Pan P-Q (2009) Most-obtuse-angle criss-cross algorithm for linear programming (in Chinese). Numer Math J Chin Univ 31:209–215
Yang X-Y, Pan P-Q (2006) Most-obtuse-angle dual relaxation algorithm (in Chinese). In: Proceedings of the national conference of operations research society of China, Shenzhen. Global-Link Informatics, Hong Kong, pp 150–155
Ye Y (1987) Eliminating columns in the simplex method for linear programming. Technical report SOL 87–14, Department of Operations Research, Stanford University, Stanford
Ye Y (1990) A ‘build-down’ scheme for linear programming. Math Program 46:61–72
Ye Y (1997) Interior point algorithms: theory and analysis. Wiley, New York
Ye Y, Todd MJ, Mizuno S (1994) An \(O(\sqrt{n}L)\)-iteration homogeneous and self-dual linear programming algorithm. Math Oper Res 19:53–67
Zhang H, Pan P-Q (2008) An interior-point algorithm for linear programming (in Chinese). In: Proceedings of the national conference of operations research society of China, Nanjing. Global-Link Informatics, Hong Kong, pp 183–187
Zhang J-Z, Xu S-J (1997) Linear programming (in Chinese). Science Press, Beijing
Zhang L-H, Yang W-H, Liao L-Z (2013) On an efficient implementation of the face algorithm for linear programming. J Comp Math 31:335–354
Zhou Z-J, Pan P-Q, Chen S-F (2009) Most-obtuse-angle relaxation algorithm (in Chinese). Oper Res Manag Sci 18:7–10
Zionts S (1969) The criss-cross method for solving linear programming problems. Manag Sci 15:420–445
Zlatev Z (1980) On some pivotal strategies in Gaussian elimination by sparse technique. SIAM J Numer Anal 17:12–30
Zörnig P (2006) Systematic construction of examples for cycling in the simplex method. Comput Oper Res 33:2247–2262
Zoutendijk G (1960) Methods of feasible directions. Elsevier, Amsterdam
© 2014 Springer-Verlag Berlin Heidelberg
Pan P-Q (2014) Simplex method. In: Linear programming computation. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40754-3_3
Print ISBN: 978-3-642-40753-6
Online ISBN: 978-3-642-40754-3