Abstract
Besides the simplex method and dual simplex method, a number of their variants have been proposed in the past. To take advantage of both types, attempts were made to combine them. First, two important variants will be presented in the following two sections, both of which are prefixed by “primal-dual” because they execute primal as well as dual simplex steps, though they are based on different ideas. More recent variants of this type will be presented later in Chap. 18.
In the other sections, the primal and dual simplex methods are generalized to handle bounded-variable LP problems, which are commonly used in practice.
1 Primal-Dual Simplex Method
The primal-dual method (Dantzig et al. 1956) will be presented in this section; it is an extension of the method of the same name (Ford and Fulkerson 1956) for solving transportation problems.
Just like the dual simplex method, this method proceeds toward primal feasibility while maintaining dual feasibility and complementarity. However, they pursue primal feasibility in different ways. The former attempts to fulfil x ≥ 0 while maintaining Ax = b, whereas the latter attempts to get rid of artificial variables in the auxiliary Phase-I program to fulfil Ax = b while keeping x ≥ 0.
We are concerned with the standard LP problem (1.8), whose dual problem is (4.2). Let \((\bar{y},\bar{z})\) be the current dual feasible solution, satisfying \({A}^{T}\bar{y} +\bar{ z} \leq c\).
To obtain a primal solution matching \((\bar{y},\bar{z})\), consider the auxiliary program (3.16), written as
where \(x_{a} = {(x_{n+1},\ldots,x_{n+m})}^{T}\) is an artificial variable vector. Without loss of generality, assume that b ≥ 0. Introducing the index set
define the so-called “restricted program”:
Since b ≥ 0, the feasible region of the preceding program is clearly nonempty, and hence it has an optimal solution. The restricted program may be viewed as one formed by all artificial columns together with the columns indexed by j ∈ Q.
Assume that \((\bar{x},\bar{x}_{a})\) is an optimal solution to (7.3) with optimal value \(\bar{\zeta }\), and that \(\bar{w}\) is the associated optimal simplex multiplier.
Theorem 7.1.1.
If the optimal value \(\bar{\zeta }\) vanishes, \(\bar{x}\) and \((\bar{y},\bar{z})\) are a pair of primal and dual optimal solutions.
Proof.
\(\bar{\zeta }= {e}^{T}\bar{x}_{a} = 0\) and \(\bar{x}_{a} \geq 0\) together imply that \(\bar{x}_{a} = 0\). Thus, \(\bar{x}\) is a feasible solution to the original problem (4.1). By the definition of Q, moreover, it holds that \(\bar{{x}}^{T}\bar{z} = 0\), which exhibits complementarity. Therefore, \(\bar{x}\) and \((\bar{y},\bar{z})\) are a pair of primal and dual optimal solutions. □
Otherwise, when \(\bar{\zeta }> 0\), \(\bar{x}\) may be regarded as the closest to feasibility among all solutions complementary with \((\bar{y},\bar{z})\). Nevertheless, \(\bar{x}\) is not feasible to the original problem, because it satisfies x ≥ 0 but not Ax = b. In other words, it should be possible to improve \((\bar{y},\bar{z})\) by increasing the associated dual objective value. To do so, consider the dual program of (7.3) in the form
Since the simplex multiplier vector \(\bar{w}\) is just an optimal solution to the preceding program, it follows from the duality that
which implies that \(\bar{w}\) is an uphill direction with respect to the objective b T y of the dual problem (4.2). This leads to the following line search scheme for updating \((\bar{y},\bar{z})\):
To be an improved dual feasible solution, it must satisfy the dual constraints for some β > 0, i.e.,
Since \(\bar{z} \geq 0\) and \(\bar{w}\) satisfies the constraints of (7.4), it is known that
Therefore, if index set
is empty, then (7.6) holds for all β ≥ 0, giving a class of dual feasible solutions. Since \(\bar{\zeta }> 0\), the associated dual objective value
tends to + ∞ as β infinitely increases. This implies dual unboundedness or primal infeasibility.
If, otherwise, there is some j ∈ A∖Q such that \(\bar{s}_{j} = -a_{j}^{T}\bar{w} < 0\), then (7.6) holds for the largest possible stepsize β such that
Thus, the resulting dual solution is feasible, corresponding to a strictly larger dual objective value. It is then used for the next iteration.
Let B be the optimal basis of the restricted program. If a column of B is not artificial, it must be indexed by some j ∈ Q such that \(\bar{z}_{j} = 0\). Since the associated reduced cost is zero, i.e., \(\bar{s}_{j} = 0 - a_{j}^{T}\bar{w} = 0\), it holds that
implying that j also belongs to the next Q. Therefore, the optimal basis of the restricted program can be used as a starting basis for the next iteration. In addition, it is seen from (7.8) that at least one index (e.g., q) in A∖Q belongs to the next Q, and the associated reduced cost is negative, i.e., \(\bar{s}_{q} < 0\). In other words, there exist new candidates to enter the basis in the next iteration. Therefore, the restricted program in each iteration can be solved by applying the primal simplex method to the original auxiliary program (7.1) itself, except that the choice of columns entering the basis is restricted to those indexed by j ∈ Q ∩ N. Once an artificial variable leaves the basis, it is dropped from the auxiliary program immediately.
It is clear that optimality of the restricted program is achieved if \(Q \cap N = \varnothing \). When the initial set Q is empty, for instance, the artificial columns alone form an optimal basis and the optimal multiplier is \(\bar{w} = e\); so no simplex step is needed in the first iteration.
The steps can be summarized into the following algorithm, the meanings of whose exits are clear.
Algorithm 7.1.1 (Primal-dual simplex algorithm).
Initial: a dual feasible solution \((\bar{y},\bar{z})\), and associated Q defined by (7.2). \(B =\{ n + 1,\ldots,n + m\},\ N =\{ 1,\ldots,n\}\). This algorithm solves the standard LP problem (1.8).
1. Carry out simplex steps to solve the restricted auxiliary program (7.1).

2. Stop if the optimal value of the restricted program vanishes (optimality achieved).

3. Stop if J defined by (7.7) is empty (infeasible problem).

4. Compute β by (7.8).

5. Update \((\bar{y},\bar{z})\) by (7.5).

6. Update Q by (7.2).

7. Go to step 1.
Although the simplex method was used above to solve the restricted program, any method for solving it will apply. The primal-dual simplex method seems particularly amenable to certain network flow problems, since the labeling method solves the restricted program more efficiently and an initial dual feasible solution is easy to obtain (Papadimitriou and Steiglitz 1982).
It is noted that the objective value corresponding to the dual feasible solution increases monotonically iteration by iteration. Therefore, the primal-dual method terminates if each restricted program encountered is solved in finitely many subiterations. This is, however, not guaranteed when the simplex method is utilized.
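To make the scheme concrete, here is a minimal Python sketch of Algorithm 7.1.1. The function name, the tolerance, and the iteration cap are our own choices, and instead of carrying out restricted simplex steps on (7.1), the sketch hands each restricted program (7.3) to SciPy's HiGHS-based `linprog` and reads the multiplier \(\bar{w}\) off the equality-constraint marginals:

```python
import numpy as np
from scipy.optimize import linprog

def primal_dual_simplex(A, b, c, y, tol=1e-9, max_iter=100):
    """Primal-dual simplex method (Algorithm 7.1.1), sketched.

    Solves min c'x s.t. Ax = b (b >= 0), x >= 0, starting from a
    dual feasible y (i.e., z = c - A'y >= 0).  Each restricted
    program (7.3) is solved by SciPy's HiGHS solver rather than by
    restricted simplex steps.
    """
    m, n = A.shape
    for _ in range(max_iter):
        z = c - A.T @ y                    # dual slacks
        Q = np.flatnonzero(z <= tol)       # index set (7.2)
        # restricted program (7.3): min e'x_a, A_Q x_Q + x_a = b
        cols = np.hstack([A[:, Q], np.eye(m)])
        cost = np.r_[np.zeros(len(Q)), np.ones(m)]
        res = linprog(cost, A_eq=cols, b_eq=b, method="highs")
        w = res.eqlin.marginals            # optimal simplex multiplier
        if res.fun <= tol:                 # zeta = 0: Theorem 7.1.1
            x = np.zeros(n)
            x[Q] = res.x[:len(Q)]
            return x, y
        s = -A.T @ w                       # reduced costs -a_j'w
        J = [j for j in range(n) if j not in Q and s[j] < -tol]
        if not J:                          # set (7.7) empty
            raise ValueError("dual unbounded / primal infeasible")
        beta = min(z[j] / -s[j] for j in J)   # stepsize (7.8)
        y = y + beta * w                      # dual update (7.5)
    raise RuntimeError("iteration limit exceeded")
```

On the data of Example 7.1.1, starting from y = 0 (dual feasible since all costs are positive), the sketch reproduces the optimal value 10.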
Example 7.1.1.
Solve the following problem by Algorithm 7.1.1:
Answer Construct the auxiliary program below:
Initial: \(B =\{ 6,7,8\},\ N =\{ 1,\ldots,5\}\). Since the costs of the original problem are positive, a feasible dual solution \((\bar{y} = {(0,0,0)}^{T},\bar{z} = {(2,5,1,4,8)}^{T})\) is available, with Q = ∅.
Iteration 1:
1. Since Q = ∅, no simplex step is needed.

2. The optimal value of the restricted program is positive, and the optimal simplex multiplier is \(\bar{w} = {(1,1,1)}^{T}\).

3. \(\bar{s}_{J} = {(-1,-4,-4)}^{T},\ J =\{ 1,3,5\}\neq \varnothing.\)

4. \(\bar{z}_{J} = {(2,1,8)}^{T},\ \beta =\min \{ 2/1,1/4,8/4\} = 1/4,\ q = 3.\)

5. \(\bar{y} = {(0,0,0)}^{T} + 1/4{(1,1,1)}^{T} = {(1/4,1/4,1/4)}^{T},\)
\(\bar{z}_{N} = \left (\begin{array}{c} 2\\ 5 \\ 1\\ 4 \\ 8\\ \end{array} \right )-{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 4\;\;\;&2\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & 2\;\;\; &2\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 1\;\;\;& \;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} 1/4 \\ 1/4 \\ 1/4\\ \end{array} \right ) = \left (\begin{array}{l} 7/4 \\ 21/4 \\ 0\\ 4 \\ 7\\ \end{array} \right ).\)

6. Q = { 3}.
Iteration 2:
1. Carry out restricted simplex steps of Algorithm 3.5.1:

Subiteration 1:

(2) Column selection is restricted to \(Q \cap N =\{ 3\}\); x 3 enters the basis.

(4) \(\bar{a}_{3} = a_{3} = {(2,2,0)}^{T}\not\leq 0.\)

(6) \(\bar{x}_{B} = {(1,8,2)}^{T},\ \alpha =\min \{ 1/2,8/2\} = 1/2,\ p = 1\); x 6 leaves the basis and is dropped.

(7) \(\bar{x}_{B} = {(1,8,2)}^{T} - 1/2{(2,2,0)}^{T} = {(0,7,2)}^{T},\ \bar{x}_{3} = 1/2.\)

(8) \({B}^{-1} = \left (\begin{array}{@{}l@{\;\;\;}c@{\;\;\;}c@{}} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right ).\)

(9) \(B =\{ 3,7,8\},\ N =\{ 1,2,4,5\}.\)

Subiteration 2:

(1) \(\bar{w} ={ \left (\begin{array}{@{}l@{\;\;\;}c@{\;\;\;}c@{}} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{c} 0\\ 1 \\ 1\\ \end{array} \right ) = \left (\begin{array}{r} - 1\\ 1 \\ 1\\ \end{array} \right ),\)
\(\bar{s}_{N} = -{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 4\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & 2\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{r} - 1\\ 1 \\ 1\\ \end{array} \right ) = \left (\begin{array}{@{}r@{}} 1\\ - 7 \\ - 4\\ 8\\ \end{array} \right ).\)

(2) \(Q \cap N =\{ 3\} \cap \{ 1,2,4,5\} = \varnothing.\)

2. The optimal value of the restricted program is positive.

3. \(\bar{s}_{J} = {(-7,-4)}^{T},\ J =\{ 2,4\}\neq \varnothing.\)

4. \(\bar{z}_{J} = {(21/4,4)}^{T},\ \beta =\min \{ (21/4)/7,4/4\} = 3/4,\ q = 2.\)

5. \(\bar{y} = {(1/4,1/4,1/4)}^{T} + 3/4{(-1,1,1)}^{T} = {(-1/2,1,1)}^{T}.\)
\(\bar{z}_{N} = \left (\begin{array}{c} 2\\ 5 \\ 4\\ 8\\ \end{array} \right )-{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 4\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & 2\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} - 1/2 \\ 1\\ 1 \\ \end{array} \right ) = \left (\begin{array}{l} 5/2 \\ 0\\ 1 \\ 13\\ \end{array} \right ).\)

6. Q = { 3, 2}.
Iteration 3:
1. Carry out restricted simplex steps of Algorithm 3.5.1:

Subiteration 1:

(2) Column selection is restricted to \(Q \cap N =\{ 2\}\); x 2 enters the basis.

(4) \(\bar{a}_{2} = \left (\begin{array}{l@{\;\;\;}c@{\;\;\;}c} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right )\left (\begin{array}{r} - 4\\ 2 \\ 1\\ \end{array} \right ) = \left (\begin{array}{r} - 2\\ 6 \\ 1\\ \end{array} \right )\not\leq 0.\)

(6) \(\bar{x}_{B} = {(1/2,7,2)}^{T},\ \alpha =\min \{ 7/6,2/1\} = 7/6,\ p = 2\); x 7 leaves the basis and is dropped.

(7) \(\bar{x}_{B} = {(1/2,7,2)}^{T} - 7/6{(-2,6,1)}^{T} = {(17/6,0,5/6)}^{T},\ \bar{x}_{2} = 7/6.\)

(8) \({B}^{-1} = \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& 1/3\;\;\;& \\ \;\;\;& 1/6\;\;\;& \\ \;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{l@{\;\;\;}c@{\;\;\;}c} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right ).\)

(9) \(B =\{ 3,2,8\},\ N =\{ 1,4,5\}.\)

Subiteration 2:

(1) \(\bar{w} ={ \left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{c} 0\\ 0 \\ 1\\ \end{array} \right ) = \left (\begin{array}{l} 1/6 \\ - 1/6 \\ 1\\ \end{array} \right ),\)
\(\bar{s}_{N} = -{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} 1/6 \\ - 1/6 \\ 1\\ \end{array} \right ) = \left (\begin{array}{r} 1 \\ - 5/3 \\ - 11/3\\ \end{array} \right ).\)

(2) \(Q \cap N =\{ 3,2\} \cap \{ 1,4,5\} = \varnothing.\)

2. The optimal value of the restricted program is positive.

3. \(\bar{s}_{J} = {(-5/3,-11/3)}^{T},\ J =\{ 4,5\}\neq \varnothing.\)

4. \(\bar{z}_{J} = {(1,13)}^{T},\ \beta =\min \{ 1/(5/3),13/(11/3)\} = 3/5,\ q = 4.\)

5. \(\bar{y} = {(-1/2,1,1)}^{T} + 3/5{(1/6,-1/6,1)}^{T} = {(-2/5,9/10,8/5)}^{T},\)
\(\bar{z}_{N} = \left (\begin{array}{c} 2\\ 4 \\ 8\\ \end{array} \right )-{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} - 2/5 \\ 9/10 \\ 8/5\\ \end{array} \right ) = \left (\begin{array}{l} 31/10 \\ 0 \\ 54/5\\ \end{array} \right ).\)

6. Q = { 3, 2, 4}.
Iteration 4:
1. Carry out restricted simplex steps of Algorithm 3.5.1:

Subiteration 1:

(2) Column selection is restricted to \(Q \cap N =\{ 3,2,4\} \cap \{ 1,4,5\} =\{ 4\}\); x 4 enters the basis.

(4) \(\bar{a}_{4} = \left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{r} - 2\\ 0 \\ 2\\ \end{array} \right ) = \left (\begin{array}{r} - 1/3 \\ 1/3 \\ 5/3\\ \end{array} \right )\not\leq 0\).

(6) \(\bar{x}_{B} = {(17/6,7/6,5/6)}^{T},\ \alpha =\min \{ (7/6)/(1/3),(5/6)/(5/3)\} = 1/2,\ p = 3\); x 8 leaves the basis and is dropped.

(7) \(\bar{x}_{B} = {(17/6,7/6,5/6)}^{T} - 1/2{(-1/3,1/3,5/3)}^{T} = {(3,1,0)}^{T}\), \(\bar{x}_{4} = 1/2\).

(8) \({B}^{-1} =\! \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}r} 1\;\;\;& \;\;\;& 1/5 \\ \;\;\;& \;\;\;& - 1/5 \\ \;\;\;&1\;\;\;& 3/5\\ \;\;\;\end{array} \right )\!\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )\! =\! \left (\begin{array}{l@{\;\;\;}l@{\;\;\;}r} 1/5 \;\;\;& 3/10 \;\;\;& 1/5 \\ - 1/5\;\;\;& 1/5 \;\;\;& - 1/5 \\ 1/10\;\;\;& - 1/10\;\;\;& 3/5\\ \;\;\;\end{array} \right )\).

(9) \(B =\{ 3,2,4\},\ N =\{ 1,5\}\).

2. The optimal value of the restricted program is zero; optimality achieved.
The optimal solution and objective value are
$$\displaystyle{\bar{x} = {(0,1,3,1/2,0)}^{T},\qquad \bar{f} = 10.}$$
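As a check, the optimality conditions of Theorem 7.1.1 (primal feasibility, dual feasibility, and complementarity) can be verified numerically for this final pair of solutions. The script below uses the data of Example 7.1.1, with the final dual solution \(\bar{y} = {(-2/5,9/10,8/5)}^{T}\) read from iteration 3:

```python
import numpy as np

# Data of Example 7.1.1: costs and the constraint matrix/right-hand
# side of the auxiliary program (the artificials carry b = (1, 8, 2)').
A = np.array([[ 1., -4, 2, -2,  6],
              [ 1,  2, 2,  0, -4],
              [-1,  1, 0,  2,  2]])
b = np.array([1., 8, 2])
c = np.array([2., 5, 1, 4, 8])

x = np.array([0., 1, 3, 0.5, 0])   # final primal solution
y = np.array([-2/5, 9/10, 8/5])    # final dual solution (iteration 3)
z = c - A.T @ y                    # dual slacks

assert np.allclose(A @ x, b) and (x >= 0).all()  # primal feasible
assert (z >= -1e-12).all()                       # dual feasible
assert abs(x @ z) < 1e-12                        # complementary
print(c @ x, b @ y)                # equal objective values (both 10)
```

Equality of the primal and dual objective values confirms, by duality, that both solutions are optimal.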
2 Self-Dual Parametric Simplex Method
Based on discussions made in Sect. 6.4, it is not difficult to go over to a method for solving problems with the costs and the right-hand side both parameterized, i.e.,
In this section, we will solve the standard LP program via handling the preceding parametric program. This method is closely related to Orchard-Hays’ work (1956), and has been used by Smale (1983b) for investigating the worst-case complexity of the simplex method.
The method belongs to a more general approach, so-called “homotopy”, which generates a continuous deformation, converting a given problem to a related but trivially solved one, and then proceeds backwards from the latter to the original by solving all the problems in between. It is seen that the standard problem (1.8) is just the parametric program (7.9) with θ = 0.
Assume availability of a simplex tableau for the standard LP problem that is neither primal nor dual feasible. It is a simple matter to determine a value θ = θ 2 > 0 such that the objective row and the right-hand side both become nonnegative after the relevant terms are added; doing so amounts to adding terms θ c′ and θ b′ respectively to the costs and the right-hand side of the original problem, which corresponds to \(\theta =\theta _{1} = 0\). Then, θ is decreased from θ 2 down to 0 while optimality is maintained. If primal feasibility is violated first in this process, a row index p and a new θ 2 are determined, and then a column index q is determined by the dual simplex ratio test. If, otherwise, dual feasibility is violated first, a column index q and a new θ 2 are determined, and a row index p is determined by the primal simplex ratio test. The remaining operations in the iteration are those of a normal basis change.
Assume that the current simplex tableau is optimal to θ = θ 2, i.e.,
The procedure is put into the following algorithm, where the parametric program with θ = 0 corresponds to the original problem.
Algorithm 7.2.1 (Self-dual parametric algorithm: tableau form).
Given θ 2 > 0. Initial: a simplex tableau of the form (7.10), which is optimal for θ = θ 2. This algorithm solves the standard LP problem.
1. If \(\bar{z}'_{N} \leq 0\), set α = 0; else, determine q and α such that
$$\displaystyle{\alpha = -\bar{z}_{q}/\bar{z}'_{q} =\max \{ -\bar{z}_{j}/\bar{z}'_{j}\ \vert \ \bar{z}'_{j} > 0,\ j \in N\}.}$$

2. If \(\bar{b}' \leq 0\), set β = 0; else, determine p and β such that
$$\displaystyle{\beta = -\bar{b}_{p}/\bar{b}'_{p} =\max \{ -\bar{b}_{i}/\bar{b}'_{i}\ \vert \ \bar{b}'_{i} > 0,\ i = 1,\ldots,m\}.}$$

3. If α ≥ β, do the following:

(1) If α ≤ 0, set θ 2 = 0 and stop (optimality achieved);

(2) Stop if \(\bar{a}_{q} \leq 0\) (unbounded);

(3) Determine row index p such that
$$\displaystyle{(\bar{b}_{p} +\bar{ b}'_{p}\theta )/\bar{a}_{pq} =\min \{ (\bar{b}_{i} +\bar{ b}'_{i}\theta )/\bar{a}_{iq}\ \mid \ \bar{a}_{iq} > 0,\ i = 1,\ldots,m\},}$$where θ is close to θ 2;

else:

(4) If β ≤ 0, set θ 2 = 0 and stop (optimality achieved);

(5) Stop if \(J =\{ j \in N\ \mid \ \bar{a}_{pj} < 0\} = \varnothing\) (infeasible);

(6) Determine column index q such that
$$\displaystyle{-(\bar{z}_{q} +\bar{ z}'_{q}\theta )/\bar{a}_{pq} =\min _{j\in J} - (\bar{z}_{j} +\bar{ z}'_{j}\theta )/\bar{a}_{pj},}$$where θ is close to θ 2.

4. If α ≥ β, set θ 2 = α; else set θ 2 = β.

5. Convert \(\bar{a}_{pq}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.

6. Go to step 1.
An advantage of the preceding algorithm is that it can solve problems in a single phase, starting from any basis. It is sometimes described as a “criss-cross” method because it shuttles between the primal and dual sides, depending on which of α and β is larger (see step 3). It therefore seems critical to scale the costs and the right-hand side beforehand so that their magnitudes are in equilibrium. On the other hand, the algorithm requires more computational effort per iteration than the simplex algorithm. As a homotopy algorithm, it seems more suitable for solving hard problems. At the least, it stands as a tool for handling the parametric program (7.9) itself.
Discussions concerning the preceding algorithm can be made similarly to Algorithms 6.4.1 and 6.4.2. Its revised version is omitted.
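For concreteness, the tableau form of Algorithm 7.2.1 can be sketched in Python as follows. The function name, the small tolerance `eps` standing in for “θ close to θ 2” in the ratio tests, and the iteration cap are implementation choices, not from the text:

```python
import numpy as np

def self_dual_parametric(T, b, bp, z, zp, basis, eps=1e-7, max_iter=100):
    """Self-dual parametric algorithm (Algorithm 7.2.1), sketched.

    T is the m x n tableau body; the right-hand side is b + bp*theta
    and the cost row is z + zp*theta; `basis` holds the basic index of
    each row.  The tableau is assumed optimal for some theta_2 > 0.
    Data are modified in place; returns basis and RHS at theta = 0.
    """
    m, n = T.shape
    for _ in range(max_iter):
        nonbasic = [j for j in range(n) if j not in basis]
        # step 1: alpha, q from the parameterized cost row
        cand = [(-z[j] / zp[j], j) for j in nonbasic if zp[j] > eps]
        alpha, q = max(cand) if cand else (0.0, None)
        # step 2: beta, p from the parameterized right-hand side
        cand = [(-b[i] / bp[i], i) for i in range(m) if bp[i] > eps]
        beta, p = max(cand) if cand else (0.0, None)
        if alpha >= beta:                  # primal side
            if alpha <= eps:
                return basis, b            # optimality for theta = 0
            th = alpha - eps               # theta close to theta_2
            rows = [i for i in range(m) if T[i, q] > eps]
            if not rows:
                raise ValueError("unbounded")
            p = min(rows, key=lambda i: (b[i] + bp[i] * th) / T[i, q])
        else:                              # dual side
            if beta <= eps:
                return basis, b
            th = beta - eps
            J = [j for j in nonbasic if T[p, j] < -eps]
            if not J:
                raise ValueError("infeasible")
            q = min(J, key=lambda j: -(z[j] + zp[j] * th) / T[p, j])
        # step 5: basis change, pivoting on (p, q)
        piv = T[p, q]
        T[p] /= piv; b[p] /= piv; bp[p] /= piv
        for i in range(m):
            if i != p:
                r = T[i, q]
                T[i] -= r * T[p]; b[i] -= r * b[p]; bp[i] -= r * bp[p]
        rz, rzp = z[q], zp[q]
        z -= rz * T[p]; zp -= rzp * T[p]
        basis[p] = q
    raise RuntimeError("iteration limit exceeded")
```

Run on the parameterized tableau of Example 7.2.1 below, the sketch follows the same pivot sequence and ends with the basic optimal solution for θ = 0.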
Example 7.2.1.
Solve the following problem by Algorithm 7.2.1:
Answer Put the program into the following tableau, with the costs and the right-hand side both parameterized:

| x 1 | x 2 | x 3 | x 4 | x 5 | RHS |
|---|---|---|---|---|---|
| 1 | 2 | 1 | | | 2 |
| − 2 | − 1 | | 1 | | \(-1+\theta\) |
| − 3 | 4* | | | 1 | \(-3+\theta\) |
| \(-2+\theta\) | \(-3+\theta\) | | | | |
Given θ 2 = 4 > 0.
Iteration 1:
1. \(\alpha =\max \{ -(-2)/1,-(-3)/1\} = 3,\ q = 2\).

2. \(\beta =\max \{ -(-1)/1,-(-3)/1\} = 3,\ p = 3\).

3. α ≥ β.

(3) \(\min \{(-3+\theta )/4\} = (-3+\theta )/4,\ p = 3\), where θ is close to 4.

4. Set θ 2 = 3.

5. Taking \(q = 2,\ p = 3\), the according basis change leads to
| x 1 | x 2 | x 3 | x 4 | x 5 | RHS |
|---|---|---|---|---|---|
| 5∕2 | | 1 | | \(-1/2\) | \(7/2 - 1/2\theta\) |
| \(-11/4\) | | | 1 | 1∕4 | \(-7/4 + 5/4\theta\) |
| \(-3/4\)* | 1 | | | 1∕4 | \(-3/4 + 1/4\theta\) |
| \(-17/4 + 7/4\theta\) | | | | \(3/4 - 1/4\theta\) | \(-9/4 + 3/2\theta - 1/4{\theta }^{2}\) |
Iteration 2:
1. \(\alpha =\max \{ -(-17/4)/(7/4)\} = 17/7,\ q = 1\).

2. \(\beta =\max \{ -(-7/4)/(5/4),-(-3/4)/(1/4)\} = 3,\ p = 3\).

3. α ≱ β.

(6) \(\min \{-(-17/4 + 7/4\theta )/(-3/4)\} = -17/3 + 7/3\theta,\ q = 1\), where θ is close to 3.

4. Set θ 2 = 3 (a degenerate step).

5. Taking \(p = 3,\ q = 1\), the according basis change leads to
| x 1 | x 2 | x 3 | x 4 | x 5 | RHS |
|---|---|---|---|---|---|
| | 10∕3* | 1 | | 1∕3 | \(1 + 1/3\theta\) |
| | \(-11/3\) | | 1 | \(-2/3\) | \(1 + 1/3\theta\) |
| 1 | \(-4/3\) | | | \(-1/3\) | \(1 - 1/3\theta\) |
| | \(-17/3 + 7/3\theta\) | | | \(-2/3 + 1/3\theta\) | \(2 - 5/3\theta + 1/3{\theta }^{2}\) |
Iteration 3:
1. \(\alpha =\max \{ -(-17/3)/(7/3),-(-2/3)/(1/3)\} = 17/7,\ q = 2\).

2. \(\beta =\max \{ -1/(1/3),-1/(1/3)\} = -3,\ p = 1\).

3. α > β.

(3) \(\min \{(1 + 1/3\theta )/(10/3)\},\ p = 1\), where θ is close to 3.

4. Set \(\theta _{2} = 17/7\).

5. Taking \(q = 2,\ p = 1\), the according basis change leads to
| x 1 | x 2 | x 3 | x 4 | x 5 | RHS |
|---|---|---|---|---|---|
| | 1 | 3∕10 | | 1∕10* | \(3/10 + 1/10\theta\) |
| | | 11∕10 | 1 | \(-3/10\) | \(21/10 + 7/10\theta\) |
| 1 | | 2∕5 | | \(-1/5\) | \(7/5 - 1/5\theta\) |
| | | \(17/10 - 7/10\theta\) | | \(-1/10 + 1/10\theta\) | \(37/10 - 9/5\theta + 1/10{\theta }^{2}\) |
Iteration 4:
1. \(\alpha =\max \{ -(-1/10)/(1/10)\} = 1,\ q = 5\).

2. \(\beta =\max \{ -(3/10)/(1/10),-(21/10)/(7/10)\} = -3,\ p = 1\).

3. α > β.

(3) \(\min \{(3/10 + 1/10\theta )/(1/10)\},\ p = 1\), where θ is close to 17∕7.

4. Set θ 2 = 1.

5. Taking \(q = 5,\ p = 1\) as pivot, the according basis change leads to
| x 1 | x 2 | x 3 | x 4 | x 5 | RHS |
|---|---|---|---|---|---|
| | 10 | 3 | | 1 | 3 +θ |
| | 3 | 2 | 1 | | 3 +θ |
| 1 | 2 | 1 | | | 2 |
| | 1 −θ | 2 −θ | | | 4 − 2θ |
Iteration 5:
1. α = 0.

2. \(\beta =\max \{ -3/1,-3/1\} = -3,\ p = 1\).

3. α > β.

(1) Set θ 2 = 0. The basic optimal solution (basis {5, 4, 1}) and the associated objective value are
$$\displaystyle{\bar{x} = {(2,0,0,3,3)}^{T},\qquad \bar{f} = -4.}$$
3 General LP Problems
So far we have presented methods for solving standard LP problems. Nevertheless, models arising from practice take various forms, which can be put in the more general form below:
where \(A \in {\mathcal{R}}^{m\times n},\,c,l,u \in {\mathcal{R}}^{n},\,a,b \in {\mathcal{R}}^{m},\,m < n,\,\mathrm{rank}\ A = m\), and a, b, l, u are all given vectors. Such problems have not only upper and lower bounds on the variables but also ranges, i.e., a variation range for Ax. Problems of this type are usually referred to as problems with ranges and bounds.
Ranges involved in the problems can be eliminated by introducing new variables. In fact, setting w = Ax converts the preceding problem to

Components of x are said to be structural variables, whereas those of w are said to be logical variables.
We will focus on the following bounded-variable problem:
where \(A \in {\mathcal{R}}^{m\times n},\,c,l,u \in {\mathcal{R}}^{n},\,b \in {\mathcal{R}}^{m},\,\mathrm{rank}\ A = m,\,m < n\). Unless indicated otherwise, it is assumed that l, u are finite and that l j < u j . Infinite upper or lower bounds can be represented by sufficiently large or small reals. Thereby, the standard LP problem (1.8) can be regarded as a special case of the preceding problem.

Clearly, such a problem can be converted to the standard form by variable transformations, though doing so increases the problem's scale. In the following sections, we will generalize the simplex method and dual simplex method to solve the bounded-variable problem directly.
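The conversion just mentioned can be made explicit: under the shift x = l + x′ (so that 0 ≤ x′ ≤ u − l) and with slacks s = (u − l) − x′, the problem becomes a standard-form LP in the 2n variables (x′, s) with m + n equality constraints, which is exactly the growth in scale alluded to above. A small numpy sketch, assuming finite bounds (the helper's name is ours):

```python
import numpy as np

def to_standard_form(A, b, c, l, u):
    """Convert min c'x s.t. Ax = b, l <= x <= u (l, u finite) into
    standard form min c_hat'v s.t. A_hat v = b_hat, v >= 0, where
    v = (x', s) with x = l + x' and s = (u - l) - x'."""
    m, n = A.shape
    A_hat = np.block([[A, np.zeros((m, n))],     # A x' = b - A l
                      [np.eye(n), np.eye(n)]])   # x' + s = u - l
    b_hat = np.concatenate([b - A @ l, u - l])
    c_hat = np.concatenate([c, np.zeros(n)])     # constant c'l dropped
    return A_hat, b_hat, c_hat
```

Any feasible x of the original problem maps to the nonnegative feasible point v = (x − l, u − x) of the converted problem, with objective value c'x − c'l.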
In the sequel, the following sign function will be useful:
Assume that the current basis and nonbasis are
4 Generalized Simplex Method
Almost all terms for the standard LP problem are applicable to the bounded-variable problem. A solution to Ax = b is said to be basic if each nonbasic component of it attains one of its associated upper and lower bounds. It is clear that the basic solution associated with a basis is not necessarily unique, in contrast to the basic solution in the standard LP context.

The following results are similar to those for the standard problem, and are stated without proofs.
Lemma 7.4.1.
If there exists a feasible solution to the bounded-variable problem, so does a basic feasible solution; if there exists an optimal solution to it, so does a basic optimal solution.
Therefore, it is possible to find a basic optimal solution among the basic feasible solutions, which is the basis of the generalized simplex algorithm.
Let \(\bar{x}\) be a basic feasible solution, associated with B:
The associated reduced costs and objective value are
Define index set
So it holds that
Without confusion, \(\Gamma \) and \(\Pi \) will hereafter also be used to denote the submatrices consisting of the columns indexed by their elements.
Lemma 7.4.2.
A feasible solution \(\bar{x}\) is optimal if the following set is empty:
Proof.
Let x′ be any feasible solution. Thus it holds that
It is known by the assumption that
Hence, for any j ∈ N, there are two cases:
(i) \(j \in \Gamma \). It follows from \(x'_{j} \geq l_{j} =\bar{ x}_{j}\) that
$$\displaystyle{ \bar{z}_{j}x'_{j} \geq \bar{ z}_{j}\bar{x}_{j}; }$$(7.21)

(ii) \(j \in \Pi \). From \(x'_{j} \leq u_{j} =\bar{ x}_{j}\), (7.21) again follows. Therefore
$$\displaystyle{\sum _{j\in N}\bar{z}_{j}x'_{j} \geq \sum _{j\in N}\bar{z}_{j}\bar{x}_{j},}$$which implies that
$$\displaystyle{c_{B}^{\mathrm{T}}{B}^{-1}b +\bar{ z}_{ N}^{\mathrm{T}}x'_{ N} \geq c_{B}^{\mathrm{T}}{B}^{-1}b +\bar{ z}_{ N}^{\mathrm{T}}\bar{x}_{ N}.}$$The preceding indicates that the objective value at x′ is no less than that at \(\bar{x}\), therefore \(\bar{x}\) is optimal. □
Assume now that J is nonempty. Thus, a column index q can be determined by
Assuming q = j t , define vector
where e q−m is the (n − m)-dimensional unit vector with the (q − m)th component 1.
Proposition 7.4.1.
\(\Delta x\) satisfies
Proof.
It is known by (7.22) that
From the first formula of (7.18) together with (7.4) and (7.22), it follows that
□
The preceding proposition says that \(-\Delta x\) is a descent direction with respect to the objective c T x.
Let α ≥ 0 be a stepsize from \(\bar{x}\) along the direction. The new iterate is then
Thus, since \(\bar{x}\) is feasible, it holds for any α ≥ 0 that
The value of stepsize α should be such that \(\hat{x}\) satisfies \(l \leq \hat{ x} \leq u\). Thereby the largest possible stepsize is
where
There are the following two cases:

(i) \(\alpha = u_{q} - l_{q}\). In this case, if \(\bar{x}_{q} = l_{q}\), then \(\hat{x}_{q} = u_{q}\); and if \(\bar{x}_{q} = u_{q}\), then \(\hat{x}_{q} = l_{q}\). The new solution \(\hat{x}\) is basic feasible, corresponding to the same basis. Therefore, there is no need for any basis change.

(ii) \(\alpha < u_{q} - l_{q}\). Determine row index p ∈ { 1, ⋯ , m} such that
$$\displaystyle{ \alpha =\alpha _{p}. }$$(7.27)
Then \(\hat{x}_{j_{p}}\) attains its lower bound \(l_{j_{p}}\) or upper bound \(u_{j_{p}}\). In this case, the new basis and nonbasis follow from B and N by exchanging j p and q. In addition, it is verified that the new solution \(\hat{x}\) is a basic solution corresponding to the new basis.
It is known from (7.23) and (7.24) that the new objective value is
which strictly decreases if α > 0. The preceding expression leads to the recurrence formula of the objective value, i.e.,
The preceding formula will not be used in each iteration in the following algorithm, however; instead, the objective value will be computed at the end from the final basis and original data.
Definition 7.4.1.
A feasible solution is degenerate (with respect to a basis) if a basic component of it is on one of its bounds.
Concerning stepsize α, the following two points should be noted.
(i) When a basic solution is degenerate, the value of α may vanish, in which case the basic solution remains unchanged even though the basis changes.

(ii) In practice, the problem should be deemed unbounded if the value of α exceeds some sufficiently large number.
From the discussions made above, the following conclusions are attained.
Lemma 7.4.3.
Let \(\bar{x}\) be a basic feasible solution. Then the new solution, determined by (7.22), (7.24), (7.25) and (7.26), is a basic feasible solution. The corresponding objective value does not increase, and strictly decreases if nondegeneracy is assumed.
The overall steps are put in the following algorithm.
Algorithm 7.4.1 (Generalized simplex algorithm).
Initial: (B, N), B −1 and associated basic feasible solution \(\bar{x}\). This algorithm solves the bounded-variable problem (7.13).
1. Compute \(\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}\bar{y}\), where \(\bar{y} = {B}^{-T}c_{B}\).

2. Compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop if the set J defined by (7.20) is empty (optimality achieved).

3. Select column index q such that \(q \in \arg \max _{j\in J}\vert \bar{z}_{j}\vert \).

4. Compute \(\Delta x_{B} = -\mathrm{sign}(\bar{z}_{q}){B}^{-1}a_{q}\).

5. Determine stepsize α by (7.25) and (7.26).

6. Update \(\bar{x}\) by (7.24).

7. Go to step 1 if \(\alpha = u_{q} - l_{q}\); else determine row index p ∈ { 1, ⋯ , m} such that α = α p .

8. Update B −1 by (3.23).

9. Update (B, N) by exchanging j p and q.

10. Go to step 1.
Theorem 7.4.1.
Algorithm 7.4.1 generates a sequence of basic feasible solutions. Assuming nondegeneracy throughout the solution process, it terminates at step 2, giving a basic optimal solution.
Proof.
Its validity comes from Lemmas 7.4.2 and 7.4.3.
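To make the steps concrete, here is a compact Python sketch of Algorithm 7.4.1. For simplicity it re-solves the basic systems with dense linear algebra instead of updating B −1 by (3.23), and it omits an explicit unboundedness check; the function name, tolerance, iteration cap, and the test data are implementation choices, not from the text:

```python
import numpy as np

def generalized_simplex(A, b, c, l, u, basis, x, tol=1e-9, max_iter=100):
    """Generalized simplex method (Algorithm 7.4.1), sketched.

    Solves min c'x s.t. Ax = b, l <= x <= u, starting from a basic
    feasible solution x whose basic index list is `basis`.  Dense
    solves replace the B^{-1} updates of the text.
    """
    m, n = A.shape
    for _ in range(max_iter):
        B = A[:, basis]
        y = np.linalg.solve(B.T, c[basis])
        z = c - A.T @ y                        # reduced costs (z_B = 0)
        nonbasic = [j for j in range(n) if j not in basis]
        # set (7.20): nonbasic variables able to improve the objective
        J = [j for j in nonbasic
             if (abs(x[j] - l[j]) <= tol and z[j] < -tol)
             or (abs(x[j] - u[j]) <= tol and z[j] > tol)]
        if not J:
            return x, c @ x                    # optimality achieved
        q = max(J, key=lambda j: abs(z[j]))
        d = 1.0 if z[q] < 0 else -1.0          # move x_q off its bound
        dB = -d * np.linalg.solve(B, A[:, q])  # induced basic change
        # largest stepsize keeping l <= x <= u (cf. (7.25)-(7.26))
        alpha, p = u[q] - l[q], None
        for i, ji in enumerate(basis):
            step = ((u[ji] - x[ji]) / dB[i] if dB[i] > tol else
                    (x[ji] - l[ji]) / -dB[i] if dB[i] < -tol else np.inf)
            if step < alpha:
                alpha, p = step, i
        x[q] += alpha * d
        x[basis] += alpha * dB
        if p is not None:                      # case (ii): basis change
            basis[p] = q
    raise RuntimeError("iteration limit exceeded")
```

On a small assumed instance (min −x 1 − 2x 2 s.t. x 1 + x 2 + x 3 = 4, x 1 − x 2 + x 4 = 2, 0 ≤ x 1, x 2 ≤ 3, 0 ≤ x 3, x 4 ≤ 6, starting basis {3, 4}), the sketch performs one bound flip and one basis change before stopping at the optimum x = (1, 3, 0, 4) with value −7.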
Example 7.4.1.
Solve the following problem by Algorithm 7.4.1:
Answer Introduce x 3, x 4, x 5 to convert the preceding to
In the following, unbounded variables are handled as −∞ or ∞, which only the determination of the stepsize involves.
Initial: \(B =\{ 3,4,5\},\ N =\{ 1,2\},\ {B}^{-1} = I,\ \bar{x}_{N} = {(0_{(-)},-2_{(-)})}^{\mathrm{T}},\ \bar{x}_{B} = {(6,2,-4)}^{\mathrm{T}}\), \(\bar{f} = 6\). The initial solution is basic feasible (the subscript “(−)” denotes a variable on its lower bound, and the superscript “(+)” one on its upper bound; the same below).
Iteration 1:
1. \(y = {B}^{-\mathrm{T}}c_{B} = {(0,0,0)}^{\mathrm{T}},\,\bar{z}_{N} = {(1,-3)}^{\mathrm{T}}\).

2. J = { 2}.

3. \(\max _{j\in J}\vert \bar{z}_{j}\vert = 3,\ q = 2\); x 2 enters the basis.

4. \(\bar{a}_{2} = {B}^{-1}a_{2} = {(3,1,-2)}^{\mathrm{T}}.\)

5. \(\alpha _{1} = (6 - 2)/3 = 4/3,\ \alpha _{2} = (2 - 1)/1 = 1,\ \alpha _{3} = (-4 - 0)/(-2) = 2\); \(\alpha =\min \{ \infty,4/3,1,2\} = 1\).

6. \(\bar{x}_{B} = {(6,2,-4)}^{\mathrm{T}} - 1 \times {(3,1,-2)}^{\mathrm{T}} = {(3,1,-2)}^{\mathrm{T}},\)
\(\bar{x}_{N} = {(0_{(-)},-2)}^{\mathrm{T}} - 1 \times {(0,-1)}^{\mathrm{T}} = {(0_{(-)},-1)}^{\mathrm{T}}\).

7. \(p = 2\); x 4 leaves the basis.

8. \({B}^{-1} = \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ \;\;\; & 1\;\;\; & \\ \;\;\;& 2\;\;\;&1\\ \;\;\;\end{array} \right )\).

9. \(B =\{ 3,2,5\},\,N =\{ 1,4\};\ \bar{x}_{B} = {(3,-1,-2)}^{\mathrm{T}},\,\bar{x}_{N} = {(0_{(-)},1_{(-)})}^{\mathrm{T}}\).
Iteration 2:
-
1.
\(y\;\;\hspace{0.2pt} = {B}^{-\mathrm{T}}c_{B} = {(0,-3,0)}^{\mathrm{T}},\,\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(1,0)}^{\mathrm{T}} - {(3,-3)}^{\mathrm{T}} = {(-2,3)}^{\mathrm{T}}\).
-
2.
\(J\;\;\hspace{-0.7pt} =\{ 1\}\).
-
3.
\(\max _{J}\vert \bar{z}_{j}\vert = 2,q = 1\), x 1 enters the basis.
-
4.
\(\bar{a}_{1}\;\hspace{-0.2pt} = {B}^{-1}a_{1} = {(1,-1,-1)}^{\mathrm{T}}.\)
-
5.
\(\alpha _{1}\;\hspace{-0.5pt} = (3 - 2)/1 = 1,\ \alpha _{2} = (-1 -\infty )/ - 1 = \infty,\)
\(\alpha _{3}\;\hspace{-0.5pt} = (-2 - 0)/ - 1 = 2\), \(\alpha =\min \{ 6 - 0,1,\infty,2\} = 1\).
-
6.
\(\bar{x}_{B}\,\hspace{-1.2pt}= {(3,-1,-2)}^{\mathrm{T}} - 1 \times {(1,-1,-1)}^{\mathrm{T}} = {(2,0,-1)}^{\mathrm{T}}\),
\(\bar{x}_{N}\hspace{-0.4pt} = {(0,1)}^{\mathrm{T}} - 1 \times (-1,0) = {(1,1)}^{\mathrm{T}}\).
-
7.
\(p\;\;\hspace{-0.4pt} = 1\), x 3 leaves the basis.
-
8.
\({B}^{-1}\hspace{-0.1pt} = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& \;\;\;& \\ 1\;\;\; &1\;\;\; & \\ 1\;\;\;& \;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ \;\;\; & 1\;\;\; & \\ \;\;\;& 2\;\;\;&1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ 1\;\;\; & - 2\;\;\; & \\ 1\;\;\;& - 1\;\;\;&1\\ \;\;\;\end{array} \right )\).
-
9.
\(B =\{ 1,2,5\},\ N =\{ 3,4\};\ \bar{x}_{B} = {(1,0,-1)}^{\mathrm{T}},\ \bar{x}_{N} = {(2_{(-)},1_{(-)})}^{\mathrm{T}}\).
Iteration 3:
-
1.
\(y\;\;\hspace{0.2pt} = {B}^{-\mathrm{T}}c_{B} = {(-2,3,0)}^{\mathrm{T}},\ \bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(0,0)}^{\mathrm{T}} - {(-2,3)}^{\mathrm{T}} = {(2,-3)}^{\mathrm{T}}\).
-
2.
\(J\;\;\hspace{-0.7pt} =\{ 4\}\).
-
3.
\(\max _{J}\vert \bar{z}_{j}\vert = 3,q = 4\), x 4 enters the basis.
-
4.
\(\bar{a}_{4}\;\hspace{-0.2pt} = {B}^{-1}a_{4} = {(-3,-2,-1)}^{\mathrm{T}}\).
-
5.
\(\alpha _{1}\;\hspace{-0.5pt} = (1 - 6)/ - 3 = 5/3,\ \alpha _{2} = (0 -\infty )/ - 2 = \infty \),
\(\alpha _{3}\;\hspace{-0.5pt} = (-1 - 0)/ - 1 = 1;\ \alpha =\min \{ 5 - 1,5/3,\infty,1\} = 1\).
-
6.
\(\bar{x}_{B}\,\hspace{-1.2pt}= {(1,0,-1)}^{\mathrm{T}} - 1 \times {(-3,-2,-1)}^{\mathrm{T}} = {(4,2,0)}^{\mathrm{T}}\),
\(\bar{x}_{N}\hspace{-0.4pt} = {(2,1)}^{\mathrm{T}} - 1 \times (0,-1) = {(2,2)}^{\mathrm{T}}\).
-
7.
\(p\;\;\hspace{-0.4pt} = 3\), x 5 leaves the basis.
-
8.
\({B}^{-1}\hspace{-0.1pt} = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& \;\;\;& - 3\\ \;\;\; &1\;\;\; & - 2 \\ \;\;\;& \;\;\;& - 1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ 1\;\;\; & - 2\;\;\; & \\ 1\;\;\;& - 1\;\;\;&1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} - 2\;\;\;& \;\;\;& - 3\\ - 1\;\;\; & \;\;\; & - 2 \\ - 1\;\;\;&1\;\;\;& - 1\\ \;\;\;\end{array} \right )\).
-
9.
\(B\;\;\hspace{-1.7pt}=\{ 1,2,4\},\,N =\{ 3,5\};\ \bar{x}_{B} = {(4,2,2)}^{\mathrm{T}},\ \bar{x}_{N} = {(2_{(-)},{0}^{(+)})}^{\mathrm{T}}\).
Iteration 4:
-
1.
\(y\;\;\hspace{0.1pt} = {B}^{\mathrm{-T}}c_{B} = {(1,0,3)}^{\mathrm{T}},\,\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(0,0)}^{\mathrm{T}} - {(1,3)}^{\mathrm{T}} = {(-1,-3)}^{\mathrm{T}}\).
-
2.
\(J\;\;\hspace{-0.8pt} =\{ 3\}\).
-
3.
\(\max _{J}\vert \bar{z}_{j}\vert = 1,q = 3\), x 3 enters the basis.
-
4.
\(\bar{a}_{3}\;\hspace{-0.2pt} = {B}^{-1}a_{3} = {(-2,-1,-1)}^{\mathrm{T}}.\)
-
5.
\(\alpha _{1}\;\hspace{-0.5pt} = (4 - 6)/ - 2 = 1,\alpha _{2} = (2 -\infty )/ - 1 = \infty \),
\(\alpha _{3}\;\hspace{-0.5pt} = (2 - 5)/ - 1 = 3;\alpha =\min \{ 10 - 2,1,\infty,3\} = 1\).
-
6.
\(\bar{x}_{B}\,\hspace{-1.2pt}= {(4,2,2)}^{\mathrm{T}} - 1 \times {(-2,-1,-1)}^{\mathrm{T}} = {(6,3,3)}^{\mathrm{T}}\),
\(\bar{x}_{N}\hspace{-0.4pt} = {(2,0)}^{\mathrm{T}} - 1 \times (-1,0) = {(3,0)}^{\mathrm{T}}\).
-
7.
\(p\;\;\hspace{-0.2pt} = 1\), x 1 leaves the basis.
-
8.
\({B}^{-1} = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} - 1/2\;\;\;& \;\;\;& \\ - 1/2\;\;\;&1\;\;\;& \\ - 1/2\;\;\;& \;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} - 2\;\;\;& \;\;\;& - 3\\ - 1\;\;\; & \;\;\; & - 2 \\ - 1\;\;\;&1\;\;\;& - 1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}r} 1\;\;\;& \;\;\;& 3/2 \\ \;\;\;& \;\;\;& - 1/2 \\ \;\;\;&1\;\;\;& 1/2\\ \;\;\;\end{array} \right )\).
-
9.
\(\bar{x}_{B}\,\hspace{-1.2pt}= {(3,3,3)}^{\mathrm{T}},\,B =\{ 3,2,4\};\ \bar{x}_{N} = {({6}^{(+)},{0}^{(+)})}^{\mathrm{T}},\ N =\{ 1,5\}.\)
Iteration 5:
-
1.
\(y\;\,\hspace{0.2pt} = {B}^{-T}c_{B} = {(0,0,3/2)}^{\mathrm{T}},\)
\(\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(1,0)}^{\mathrm{T}} - {(3/2,3/2)}^{\mathrm{T}} = {(-1/2,-3/2)}^{\mathrm{T}}\).
-
2.
J = ∅. The basic optimal solution and associated objective value:
$$\displaystyle{\bar{x} = {(6,3,3,3,0)}^{\mathrm{T}},\quad \bar{f} = 6 - 3 \times 3 = -3.}$$
As for the tableau version of Algorithm 7.4.1, the associated simplex tableau is the same as the conventional one, except that there is no need for an RHS column to display the corresponding basic solution. Instead, we add three additional rows \((u,\bar{x},l)\), listing the upper bounds, the variable values and the lower bounds, respectively. The simplex tableau is of the form below:
| x B T | x N T |
---|---|---|
| I | \(\bar{N}\) |
| | \(\bar{z}_{N}\) |
u | u B T | u N T |
\(\bar{x}\) | \(\bar{x}_{B}^{\mathrm{T}}\) | \(\bar{x}_{N}^{\mathrm{T}}\) |
l | l B T | l N T |
Based on Table 3.1, Algorithm 7.4.1 can be revised to a tableau form. As \(\bar{a}_{q} = {B}^{-1}a_{q}\), (7.26) should be replaced by
Algorithm 7.4.2 (Generalized simplex algorithm: tableau form).
Initial: feasible tableau of form (7.4), associated with \(\bar{x}\). This algorithm solves the bounded-variable problem (7.13).
-
1.
Compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop (optimality achieved) if J defined by (7.20) is empty.
-
2.
Select a column index q such that \(q \in \arg \max _{j\in J}\vert \bar{z}_{j}\vert \).
-
3.
Determine the stepsize α by (7.25), where α i is defined by (7.28).
-
4.
Set \(\bar{x}_{q} = -\mathrm{sign}(\bar{z}_{q})\alpha\), and update \(\bar{x}_{B} =\bar{ x}_{B} +\alpha \mathrm{sign}(\bar{z}_{q})\bar{a}_{q}\) if α ≠ 0.
-
5.
If \(\alpha = u_{q} - l_{q}\), go to step 1; else, determine row index p ∈ { 1, ⋯ , m} such that α = α p .
-
6.
Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.
-
7.
Go to step 1.
Note The last three rows in the tableau should be updated in each iteration.
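The two-sided stepsize determination in step 3 can be sketched in Python. This is a minimal sketch under our own naming: bounds are passed in explicitly rather than read off the tableau's u and l rows, and the bounds used in the demo call are assumptions chosen only to reproduce the ratios 4∕3, 1, 2 of iteration 1 in the example solved by Algorithm 7.4.1 above.

```python
import math

def bounded_ratio_test(x_B, l_B, u_B, abar_q, z_q, l_q, u_q):
    """Two-sided ratio test for the bounded-variable simplex method.

    The entering variable x_q moves off its bound; each basic component
    changes at rate sign(z_q) * abar_q[i] per unit of stepsize alpha.
    alpha is limited either by the first basic variable hitting one of
    its bounds, or by x_q itself reaching its opposite bound.
    Returns (alpha, p): p is the blocking row, or None on a bound flip.
    """
    s = 1.0 if z_q > 0 else -1.0
    alpha, p = u_q - l_q, None             # distance to the opposite bound
    for i, (x, lo, hi, a) in enumerate(zip(x_B, l_B, u_B, abar_q)):
        rate = s * a                       # d x_B[i] / d alpha
        if rate > 1e-12:
            a_i = (hi - x) / rate          # heading toward its upper bound
        elif rate < -1e-12:
            a_i = (lo - x) / rate          # heading toward its lower bound
        else:
            continue
        if a_i < alpha:
            alpha, p = a_i, i
    return alpha, p

# Iteration 1 above: x_B = (6, 2, -4), abar_2 = (3, 1, -2), z_2 = -3;
# the bound values are hypothetical, matching the ratios shown in step 5.
print(bounded_ratio_test([6, 2, -4], [2, 1, -10], [10, 10, 0],
                         [3, 1, -2], -3, -2, math.inf))
# -> (1.0, 1), i.e. alpha = 1 with blocking row p = 2 in 1-based indexing
```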
4.1 Generalized Phase-I
The following is devoted to generating an initial feasible tableau for Algorithm 7.4.2.
Assume that B and N are respectively basis and nonbasis at the current iteration, associated with basic solution \(\bar{x}\). Introduce index set
If I 1 ∪ I 2 = ∅, then \(\bar{x}\) is feasible. In the other case, construct the following auxiliary program:
where the objective function is termed “infeasible-sum”.
The corresponding tableau of the auxiliary program is manipulated by one iteration of Algorithm 7.4.1 (in which the row pivot rule should be modified slightly; see below). Then a new auxiliary program is formed, and so on, until I 1 ∪ I 2 becomes empty or infeasibility is detected.
Related discussions are similar to those with the infeasible-sum Phase-I method for the standard LP problem (for details, see Sect. 13.1).
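The infeasible-sum quantity itself is easy to compute. The sketch below uses our own naming and assumes, consistently with the text, that I 1 and I 2 collect the basic components below their lower and above their upper bounds; the demo call uses the initial basic values and bounds of Example 7.5.1 further below.

```python
def infeasibility_sum(x_B, l_B, u_B):
    """Total amount by which basic components violate their bounds.

    Phase-I drives this sum to zero; I1 and I2 collect the indices of
    components below their lower and above their upper bounds.
    """
    I1 = [i for i, (x, lo) in enumerate(zip(x_B, l_B)) if x < lo]
    I2 = [i for i, (x, hi) in enumerate(zip(x_B, u_B)) if x > hi]
    total = (sum(l_B[i] - x_B[i] for i in I1)
             + sum(x_B[i] - u_B[i] for i in I2))
    return total, I1, I2

# Initial basic values of x5, x6, x7 in Example 7.5.1 below.
print(infeasibility_sum([-174, -284, 179], [-10, -13, 0], [26, 34, 19]))
# -> (595, [0, 1], [2]): violations 164 + 271 + 160
```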
5 Generalized Dual Simplex Method: Tableau Form
Let B and N be given by (7.15) and let (7.4) be the corresponding simplex tableau. Assume that the associated basic solution \(\bar{x}\) is valued by
and
Index sets \(\Gamma \) and \(\Pi \) are defined by (7.19). If the following conditions hold:
the simplex tableau is said to be dual feasible. If, further, l B ≤ x B ≤ u B holds, \(\bar{x}\) is clearly a basic optimal solution.
Whether a simplex tableau of a bounded-variable problem is dual feasible depends on the values taken by the nonbasic components of the solution. In principle, when the components of l and u are finite, it is always possible to value the nonbasic components such that the resulting solution is dual feasible, though l B ≤ x B ≤ u B does not hold in general.
Introduce “bound-violation” quantities
and determine row index p by the following rule:
If ρ p = 0, optimality is achieved. Now assume that ρ p ≠ 0: ρ p > 0 indicates that \(\bar{x}_{p}\) violates the lower bound while ρ p < 0 indicates that it violates the upper bound. Introduce index set
It is not difficult to show that the original problem is infeasible if J = ∅; else, a column index q and a stepsize β are determined such that
Taking \(\bar{a}_{pq}\) as the pivot, convert the simplex tableau by the relevant elementary transformations. The resulting simplex tableau then corresponds to the new basis and nonbasis below:
It might be well to still use (7.4) to denote the new simplex tableau, and \(\hat{x}\) to denote the associated basic solution. As the new tableau is equivalent to the old, \(\hat{x}\) and \(\bar{x}\) satisfy
Now set the new nonbasic component \(\hat{x}_{p}\) to the violated bound, i.e.,
and maintain other nonbasic components unchanged, i.e.,
Then from subtraction of the two equalities of (7.34), the updating formula of \(\bar{x}_{B}\) follows:
It is not difficult to show that the new simplex tableau with such an \(\hat{x}\) is still dual feasible. The stepsize β is actually the largest possible one maintaining dual feasibility.
Noting that \(\bar{z}_{j_{p}} = \mathrm{sign}(\rho _{p})\beta\) holds for the new tableau, the following recurrence formula of the objective value can be derived from \(\hat{x}\) and \(\bar{x}\) satisfying (7.35) and the equality associated with the bottom row of the tableau:
which indicates that the objective value increases. If all components of \(\bar{z}_{N}\) are nonzero, the simplex tableau is said to be dual nondegenerate, and hence β > 0, so that the objective value strictly increases.
The overall steps are put into the following algorithm, in which the objective value is calculated at the end.
Algorithm 7.5.1 (Generalized dual simplex algorithm: tableau form).
Initial: a dual feasible tableau of form (7.4), corresponding to \(\bar{x}\). This algorithm solves the bounded-variable problem (7.13).
- 1.
-
2.
If ρ p = 0, compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop (optimality achieved).
-
3.
Stop if J defined by (7.32) is empty (infeasible problem).
-
4.
Determine a column index q by (7.33).
-
5.
Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.
- 6.
-
7.
Go to step 1.
Note The last three rows in the simplex tableau should be updated in each iteration.
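The two selection rules of the algorithm can be made concrete with a short sketch (our own naming; the eligible set J and the pivot-row entries of its columns are taken as given, in the spirit of (7.31)–(7.33)). The demo calls replay iteration 1 of Example 7.5.1 below: the row is chosen by the largest bound violation, and the column by the smallest ratio, which preserves dual feasibility after the pivot.

```python
def select_row(x_B, l_B, u_B):
    """rho[i] = l_i - x_i if the lower bound is violated,
    u_i - x_i if the upper bound is violated, 0 otherwise;
    the row with the largest |rho| is chosen."""
    rho = [lo - x if x < lo else (hi - x if x > hi else 0.0)
           for x, lo, hi in zip(x_B, l_B, u_B)]
    p = max(range(len(rho)), key=lambda i: abs(rho[i]))
    return p, rho[p]

def dual_ratio_test(z_J, sigma_J):
    """Smallest ratio -z_j / sigma_j over the eligible columns."""
    k = min(range(len(z_J)), key=lambda j: -z_J[j] / sigma_J[j])
    return k, -z_J[k] / sigma_J[k]

# Iteration 1 of Example 7.5.1: basic values of x5, x6, x7 and bounds.
print(select_row([-174, -284, 179], [-10, -13, 0], [26, 34, 19]))
# -> (1, 271): row p = 2 (1-based), rho_p = 271
# Bottom-row and pivot-row entries of the four eligible columns.
print(dual_ratio_test([2, -1, 3, -6], [-3, 4, -5, 6]))
# -> (1, 0.25): column q = 2 (1-based), ratio 1/4
```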
The proof regarding the meaning of the algorithm’s exits is deferred to the derivation of its revised version.
Example 7.5.1.
Solve the following problem by Algorithm 7.5.1:
Answer Initial tableau:
x 1 | x 2 | x 3 | x 4 | x 5 | x 6 | x 7 | |
---|---|---|---|---|---|---|---|
− 2 | 3 | − 4 | 2 | 1 | |||
− 3 | 4* | − 5 | 6 | 1 | |||
1 | − 2 | 2 | − 7 | 1 | |||
2 | − 1 | 3 | − 6 | ||||
u | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
\(\bar{x}\) | − 15 | 20 | − 17 | 15 | − 174 | − 284 | 179 |
l | − 15 | − 12 | − 17 | − 8 | − 10 | − 13 | 0 |
Take
Iteration 1:
-
1.
\(\rho _{1} = -10 - (-174) = 164,\rho _{2} = -13 - (-284) = 271,\)
\(\rho _{3} = 19 - 179 = -160\). \(\max \{\vert 164\vert,\vert 271\vert,\vert - 160\vert \} = 271\neq 0,p = 2,j_{2} = 6\).
-
3.
\(J\;\hspace{-0.7pt} =\{ 1,2,3,4\}\neq \varnothing \).
-
4.
\(\min \{-2/(-3),-(-1)/4,-3/(-5),-(-6)/6\} = 1/4,q = 2\).
-
5.
Multiply row 2 by 1∕4, and then add − 3, 2, 1 times row 2 to rows 1, 3, 4, respectively.
-
6.
\(\bar{x}_{6}\;\hspace{-0.6pt} = -284 + 271 = -13\).
\(\bar{x}_{B} = {(-174,20,179)}^{\mathrm{T}} - 271{(-3/4,1/4,1/2)}^{\mathrm{T}}\).
\(\hspace{-0.6pt}= {(117/4,-191/4,87/2)}^{\mathrm{T}},\ B =\{ 5,2,7\}\).
x 1 | x 2 | x 3 | x 4 | x 5 | x 6 | x 7 | |
---|---|---|---|---|---|---|---|
1∕4 | \(-1/4\) | \(-5/2\) | 1 | \(-3/4\)* | |||
\(-3/4\) | 1 | \(-5/4\)* | 3∕2 | 1∕4 | |||
\(-1/2\) | \(-1/2\) | − 4 | 1∕2 | 1 | |||
5∕4 | 7∕4 | \(-9/2\) | 1∕4 | ||||
u | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
\(\bar{x}\) | − 15 | \(-191/4\) | − 17 | 15 | 117∕4 | − 13 | 87∕2 |
l | − 15 | − 12 | − 17 | − 8 | − 10 | − 13 | 0 |
Iteration 2:
-
1.
\(\;\,\rho _{1} = 26 - 117/4 = -13/4,\rho _{2} = -12 - (-191/4) = 143/4,\)
\(\;\,\rho _{3} = 19 - 87/2 = -49/2.\max \{\vert - 13/4\vert,\vert 143/4\vert,\vert - 49/2\vert \} = 143/4\neq 0,\quad\)
\(\;\,p = 2,j_{2} = 2.\)
-
3.
\(\;\,J =\{ 1,3,4\}\neq \varnothing.\)
-
4.
\(\min \{-(5/4)/(-3/4),-(7/4)/(-5/4),-(-9/2)/(3/2)\} = 7/5,\ q = 3\).
-
5.
Multiply row 2 by \(-4/5\), and then add \(1/4,1/2,-7/4\) times row 2 to rows 1, 3, 4, respectively.
-
6.
\(\;\,\bar{x}_{2} = -191/4 + 143/4 = -12,\)
\(\;\,\bar{x}_{B} = {(117/4,-17,87/2)}^{\mathrm{T}} - (143/4){(-1/5,-4/5,-2/5)}^{\mathrm{T}}.\)
\(\;\,= {(182/5,58/5,289/5)}^{\mathrm{T}},\ B =\{ 5,3,7\}.\)
x 1 | x 2 | x 3 | x 4 | x 5 | x 6 | x 7 | |
---|---|---|---|---|---|---|---|
2∕5 | \(-1/5\) | \(-14/5\) | 1 | \(-4/5\) | |||
3∕5 | \(-4/5\) | 1 | \(-6/5\) | \(-1/5\) | |||
\(-1/5\) | \(-2/5\) | \(-23/5\)* | 2∕5 | 1 | |||
1∕5 | 7∕5 | \(-12/5\) | 3∕5 | ||||
u | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
\(\bar{x}\) | − 15 | − 12 | 58∕5 | 15 | 182∕5 | − 13 | 289∕5 |
l | − 15 | − 12 | − 17 | − 8 | − 10 | − 13 | 0 |
Iteration 3:
-
1.
\(\rho _{1} = 26 - 182/5 = -52/5,\rho _{2} = 10 - 58/5 = -8/5,\)
\(\rho _{3} = 19 - 289/5 = -194/5\). \(\max \{\vert - 52/5\vert,\vert - 8/5\vert,\vert - 194/5\vert \}\)
\(= 194/5\neq 0,p = 3,j_{3} = 7\).
-
3.
\(J\ \hspace{-0.4pt} =\{ 4,6\}\neq \varnothing \).
-
4.
\(\min \{-(-12/5)/(-23/5),-(3/5)/(2/5)\} = 12/23,\ q = 4\).
-
5.
Multiply row 3 by \(-5/23\), and then add \(14/5,6/5,12/5\) times row 3 to rows 1, 2, 4, respectively.
-
6.
\(\bar{x}_{7}\;\hspace{-0.7pt} = 289/5 - 194/5 = 19\).
\(\bar{x}_{B} = {(182/5,58/5,15)}^{\mathrm{T}} - (-194/5){(-14/23,-6/23,-5/23)}^{\mathrm{T}}\).
\(= {(294/23,34/23,151/23)}^{\mathrm{T}},\ B =\{ 5,3,4\}\).
x 1 | x 2 | x 3 | x 4 | x 5 | x 6 | x 7 | |
---|---|---|---|---|---|---|---|
12∕23 | 1∕23 | 1 | \(-24/23\) | \(-14/23\) | |||
15∕23 | \(-16/23\) | 1 | \(-7/23\) | \(-6/23\) | |||
1∕23 | 2∕23 | 1 | \(-2/23\) | \(-5/23\) | |||
7∕23 | 37∕23 | 9∕23 | \(-12/23\) | ||||
u | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
\(\bar{x}\) | − 15 | − 12 | 34∕23 | 151∕23 | 294∕23 | − 13 | 19 |
l | − 15 | − 12 | − 17 | − 8 | − 10 | − 13 | 0 |
Iteration 4:
-
1.
\(\rho _{1} =\rho _{2} =\rho _{3} = 0\). The basic optimal solution and optimal value are
$$\displaystyle\begin{array}{rcl} \bar{x}& =& {(-15,-12,34/23,151/23,294/23,-13,19)}^{\mathrm{T}}, {}\\ \bar{f}& =& (2,-1,3,-6){(-15,-12,34/23,151/23)}^{\mathrm{T}} = -1218/23. {}\\ \end{array}$$
5.1 Generalized Dual Phase-I
It is not difficult to generalize dual Phase-I methods (Chap. 14) for standard problems to initiate the generalized dual simplex algorithm.
Using a generalized version of the most-obtuse-angle row rule (14.3), Koberstein and Suhl (2007) designed a dual Phase-I procedure, named PAN, for solving general problems. Taking MOPSFootnote 1 as a platform, they tested several main dual Phase-I methods on 46 typical large-scale sparse problems, the largest of which involves more than 500,000 constraints and 1,000,000 variables. The numerical results show that for most of the tested problems, PAN required a small number of iterations; only for a few of the most difficult problems did the required iterations exceed an acceptable amount. In the latter cases, they turned to a version of the dual infeasibility-sum Phase-I, named SDI. It turned out that such a combination, PAN + SDI, is the best among the four commonly used Phase-I methods tested. Therefore, PAN + SDI was taken as the default option for the MOPS dual simplex algorithm.
In view of the preceding facts, the author suggests generalizing Rule 14.3.2 by replacing (7.25) with
where 0 < τ ≤ 1 and α i , i = 1, ⋯ , m, are defined by (7.28). The basic idea is to restrict the stepsize to some extent.
This consideration leads to the following algorithm, which results from modifying Algorithm 7.4.2.
Algorithm 7.5.2 (Tableau generalized dual Phase-I: the most-obtuse-angle rule).
Given 0 < τ ≤ 1. Initial: a dual feasible simplex tableau of form (7.4), associated with \(\bar{x}\). This algorithm solves the bounded-variable problem (7.13).
-
1.
If J defined by (7.20) is empty, compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop (optimality achieved).
-
2.
Select a column index q such that \(q \in \arg \max _{j\in J}\vert \bar{z}_{j}\vert \).
-
3.
Determine stepsize α by (7.37).
-
4.
Set \(\bar{x}_{q} = -\mathrm{sign}(\bar{z}_{q})\alpha\), and update \(\bar{x}_{B} =\bar{ x}_{B} +\alpha \mathrm{sign}(\bar{z}_{q})\bar{a}_{q}\) if α ≠ 0.
-
5.
If \(\alpha = u_{q} - l_{q}\), go to step 1; else, determine row index p ∈ { 1, ⋯ , m} such that α = α p .
-
6.
Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.
-
7.
Go to step 1.
6 Generalized Dual Simplex Method
According to Table 3.1, which gives the correspondence between entries of the simplex tableau and the revised simplex tableau, it is easy to formulate the revised version of Algorithm 7.5.1. However, we will not do so, but derive it based on local duality (Sect. 25.5), revealing that such an algorithm actually solves the dual bounded-variable problem.
Let B = { j 1, ⋯ , j m } and N = A∖B be the current basis and nonbasis, respectively, associated with primal basic solution \(\bar{x}\), i.e.,
The notations \(\Gamma \) and \(\Pi \) are again defined by (7.19), and ρ i is defined by (7.30). Assume that the row index p has already been determined by (7.31).
Consider the following local problem at \(\bar{x}\) (25.5):
Using notation
the local dual problem can be written
Based on the equality constraints, eliminate variable \(v_{\Pi },w_{\Gamma }\), and combine (7.19) and (7.20) to reduce the objective function to
Then setting \(z_{\Gamma } = w_{\Gamma },z_{\Pi } = -v_{\Pi }\), transform the local dual problem to the following equivalent form:
Now, define
and assume that the following conditions hold:
under which it is not difficult to verify that the primal objective value at \(\bar{x}\) and the dual objective value at \((\bar{y},\bar{z})\) are equal, i.e., \(\bar{f} =\bar{ g}\). Using the preceding notation, moreover, the following is valid.
Lemma 7.6.1.
\((\bar{y},\bar{z})\) is a basic feasible solution to the local dual problem, which exhibits complementarity with \(\bar{x}\) .
Proof.
It is clear that \((\bar{y},\bar{z})\) is the basic solution to (7.42), satisfying the sign constraints at the bottom. So, it is only needed to show
From (7.43) and the second expression of (7.44), the first expression of (7.45) follows. By the first expression of (7.44), it holds that
Similarly, it holds that
Therefore, (7.46) is valid.
By (7.19), on the other hand, it is clear that (7.47) holds; and it is known from the second expression of (7.44) that (7.48) or (7.49) holds. □
Setting
it is not difficult to verify that \((\bar{y},\bar{v},\bar{w})\) is a basic feasible solution to dual problem of (7.13), i.e.,
(see the last paragraph of Sect. 25.5). It and \(\bar{x}\) satisfy the complementarity condition. In this sense, \((\bar{y},\bar{z})\) is called a dual feasible solution.
Lemma 7.6.2.
If \(l_{B} \leq \bar{ x}_{B} \leq u_{B}\) holds, then \(\bar{x}\) is a basic optimal solution.
Proof.
When \(l_{B} \leq \bar{ x}_{B} \leq u_{B}\) holds, \(\bar{x}\) is clearly a basic feasible solution to the (full) problem (7.13), and hence also to the local problem (7.39). By Lemma 7.6.1, it is known that \((\bar{y},\bar{z})\) is local dual feasible, exhibiting complementarity with \(\bar{x}\). Therefore, the two are local primal and dual basic optimal solutions, respectively. By Proposition 25.4.2, it is known that \(\bar{x}\) is a basic optimal solution to (7.13). □
Now we will find a new dual solution to improve the objective value. To this end, define search direction
Lemma 7.6.3.
Under the preceding definition, the search direction satisfies the following conditions:
Proof.
The first half is easily verified; it is only needed to show (7.58).
From (7.55) and the second expression of (7.38), it follows that
Then from (7.14) and (7.40) it follows that the right-hand side of the preceding equals
when ρ p > 0, while it equals
when ρ p < 0. □
Consider the following line search scheme:
Introduce index set
Assume that J ≠ ∅. Then from (7.60) and sign conditions \(\hat{z}_{\Pi } \leq 0\) and \(\hat{z}_{\Gamma } \geq 0\), it is known that the largest possible stepsize β and pivot column index q satisfy the minimum-ratio test
If all components of \(\bar{z}_{N}\) are nonzero, then the solution is dual nondegenerate, hence the determined stepsize is positive.
Lemma 7.6.4.
If J ≠ ∅, the new solution, determined by (7.59) and (7.60) together with (7.62), is a basic feasible solution to the local dual problem. The corresponding objective value increases, and strictly increases if dual nondegeneracy is assumed.
Proof.
From (7.59), the first expression of (7.57) and (7.43), it is known for any β ≥ 0 that
From the first expression of (7.59), (7.60), the second expression of (7.57) and (7.44), it follows that
In addition, by (7.59), (7.60) and (7.57) it is known that \(\hat{z}_{j_{p}},\hat{z}\) satisfy the sign condition at the bottom of problem (7.42); hence the new solution is a basic feasible solution, associated with the objective value increasing to
where the inequality comes from (7.58) and β ≥ 0. In the dual nondegeneracy case, β > 0, hence the strict inequality holds, which implies a strict increase of the objective value. □
When J is empty, (7.57) is not well-defined, but the following is valid.
Lemma 7.6.5.
If J = ∅, then the original problem (7.13) is infeasible.
Proof.
J = ∅ implies that
Combining the preceding two expressions together with \(\bar{z}_{\Pi } \leq 0\) and \(\bar{z}_{\Gamma } \geq 0\) leads to
Similarly to the proof of Lemma 7.6.4, it can be shown that \((\hat{y},\hat{z})\) satisfies the other constraints, with the objective value denoted again by (7.65). Thus, noting (7.58), it is known that
Therefore, the local dual problem is unbounded. By Proposition 25.4.2, the original problem is infeasible. □
Now we need to determine a primal solution that is complementary with the dual solution, based on the local problem (7.39). For the value of the basic variable \(x_{j_{p}}\) to change from \(\bar{x}_{j_{p}}\) to the violated bound, it is necessary to let the value of the nonbasic variable x q change from \(\bar{x}_{q}\) accordingly by some amount, i.e.,
Therefore, the new values are
where \(\bar{a}_{q} = {B}^{-1}a_{q}\), associated with the new objective value
Note that all components of \(\hat{x}_{N}\) are the same as those of \(\bar{x}_{N}\), except for \(\hat{x}_{q}\). From the first expression of (7.68) and the second expression of (7.59), it is known that if ρ p > 0, then \(\hat{x}_{j_{p}} = l_{j_{p}}\) and \(\hat{z}_{j_{p}} \geq 0\) hold, while if ρ p < 0, then \(\hat{x}_{j_{p}} = u_{j_{p}}\) and \(\hat{z}_{j_{p}} \leq 0\) hold. Therefore, after updating the basis and nonbasis by exchanging j p and q, \(\hat{x}\) and \((\hat{y},\hat{z})\) exhibit complementarity, and the latter satisfies the corresponding dual feasibility conditions, so that we are ready to carry out the next iteration.
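The primal-side update just described can be sketched as follows. This is a sketch under our own naming, consistent with (7.67)–(7.68) as used in the examples: Δx q is chosen so that the p-th basic component changes by exactly − ρ p , landing on its violated bound.

```python
def primal_update(x_B, x_q, abar_q, p, rho_p):
    """One primal-side update of the generalized dual simplex step.

    abar_q = B^{-1} a_q; the entering variable moves by dx so that the
    leaving basic component changes by exactly -rho_p, reaching its
    violated bound.  Returns the new basic values and new x_q.
    """
    dx = -rho_p / abar_q[p]
    x_B_new = [x - dx * a for x, a in zip(x_B, abar_q)]
    return x_B_new, x_q + dx

# Iteration 1 of Example 7.6.1 below: x_B = (4, -1, -3), entering x_1 = 1,
# abar_1 = (-2, -1, 1), p = 2 (0-based index 1), rho_p = 1.
print(primal_update([4, -1, -3], 1, [-2, -1, 1], 1, 1))
# -> ([6.0, 0.0, -4.0], 2.0): x5 reaches its violated lower bound 0
```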
The overall steps are summarized into the following algorithm, a revision of Algorithm 7.5.1.
Algorithm 7.6.1 (Generalized dual simplex algorithm).
Initial: (B, N), B −1; \(\bar{y},\bar{z},\bar{x}\) satisfying (7.38), (7.43) and (7.44). This algorithm solves the bounded-variable problem (7.13).
-
1.
Select a pivot row index \(p \in \arg \max \{\vert \rho _{i}\vert \ \vert \ i = 1,\cdots \,,m\}\), where ρ i is defined by (7.30).
-
2.
If ρ p = 0, compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop.
-
3.
Compute \(\sigma _{N} = -{N}^{\mathrm{T}}h\), where \(h = -\mathrm{sign}(\rho _{p}){B}^{-\mathrm{T}}e_{p}\).
-
4.
Stop if J defined by (7.61) is empty.
-
5.
Determine β and pivot column index q by (7.62).
-
6.
Compute \(\Delta x_{q}\) by (7.67).
-
7.
Compute \(\bar{a}_{q} = {B}^{-1}a_{q}\).
-
8.
Update \(\bar{x}\) by (7.68).
-
9.
Update \(\bar{y},\,\bar{z}_{N},\,\bar{z}_{j_{p}}\) by (7.59) and (7.60).
-
10.
Update B −1 by (3.23).
-
11.
Update (B, N) by exchanging j p and q.
-
12.
Go to step 1.
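Step 3 of the algorithm follows directly from the formulas: since \({B}^{-\mathrm{T}}e_{p}\) is just row p of B −1, no extra solve is needed once that row is at hand. Below is a minimal sketch with our own naming, representing B −1 as a list of rows and the nonbasic columns of A as lists; the demo call replays iteration 1 of Example 7.6.1, where B −1 = I so that a 1 coincides with \(\bar{a}_{1} = {(-2,-1,1)}^{\mathrm{T}}\).

```python
def dual_direction(Binv_rows, N_cols, p, rho_p):
    """Compute h = -sign(rho_p) * B^{-T} e_p and sigma_N = -N^T h.

    B^{-T} e_p equals row p of B^{-1}, extracted directly below.
    """
    s = 1.0 if rho_p > 0 else -1.0
    h = [-s * v for v in Binv_rows[p]]
    sigma_N = [-sum(a * hi for a, hi in zip(col, h)) for col in N_cols]
    return h, sigma_N

# Iteration 1 of Example 7.6.1: B^{-1} = I, p = 2 (0-based index 1),
# rho_p = 1 > 0; only the column a_1 = (-2, -1, 1) is passed here.
print(dual_direction([[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                     [[-2, -1, 1]], 1, 1))
# -> h = (0, -1, 0) and sigma_1 = -1, as in step 3 of the example
```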
Theorem 7.6.1.
Algorithm 7.6.1 generates a sequence of primal and dual basic solutions. Assuming nondegeneracy, it terminates either at
-
(i)
Step 2, giving a pair of primal and dual basic optimal solutions; or at
-
(ii)
Step 4, detecting infeasibility of the problem.
Proof.
The validity comes from Lemmas 7.6.2, 7.6.4 and 7.6.5, and related discussions, made preceding Algorithm 7.6.1. □
Example 7.6.1.
Solve the following problem by Algorithm 7.6.1:
Answer Initial:\(B =\{ 4,5,6\},\ N =\{ 1,2,3\},\ {B}^{-1} = I,\ \bar{x}_{N} = {(1_{(-)},-2_{(-)},{0}^{(+)})}^{\mathrm{T}}\), \(\bar{x}_{B} = {(4,-1,-3)}^{\mathrm{T}},\ \bar{y} = (0,0,0),\ \bar{z}_{N} = {(1,2,-2)}^{\mathrm{T}},\ \bar{f} = -3\).
Iteration 1:
-
1.
\(\max \{0,\vert 0 - (-1)\vert,0\} = 1,p = 2\), x 5 leaves the basis.
-
3.
\(\;\,h = -\mathrm{sign}(\rho _{2}){B}^{-\mathrm{T}}e_{2} = {(0,-1,0)}^{\mathrm{T}},\quad \sigma _{N} = -{N}^{\mathrm{T}}h = {(-1,-1,1)}^{\mathrm{T}}.\qquad \quad \;\,\)
-
4.
J = { 1, 2, 3} ≠ ∅.
-
5.
\(\;\,\beta =\min \{ -1/(-1),-2/(-1),-(-2)/1\} = 1,\ q = 1.\)
-
6.
\(\;\,\!\!\!\!\!\!\!\!\Delta x_{1} =\rho _{2}/\vert \sigma _{1}\vert = 1.\)
-
7.
\(\;\,\bar{a}_{1} = {B}^{-1}a_{1} = {(-2,-1,1)}^{\mathrm{T}}.\)
-
8.
\(\;\,\bar{x}_{B} =\bar{ x}_{B} - \Delta x_{1}\bar{a}_{1} = {(4,-1,-3)}^{\mathrm{T}} - {(-2,-1,1)}^{\mathrm{T}} = {(6,0,-4)}^{\mathrm{T}}.\)
\(\bar{x}_{N} =\bar{ x}_{N} + \Delta x_{1}e_{1} = {(1,-2,0)}^{\mathrm{T}} + {(1,0,0)}^{\mathrm{T}} = {(2,-2,0)}^{\mathrm{T}}.\)
-
9.
\(\;\,\bar{y} =\bar{ y} +\beta h = {(0,-1,0)}^{\mathrm{T}} + 1 \times {(0,-1,0)}^{\mathrm{T}} = {(0,-2,0)}^{\mathrm{T}},\)
\(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(1,2,-2)}^{\mathrm{T}} + 1{(-1,-1,1)}^{\mathrm{T}} = {(0,1,-1)}^{\mathrm{T}},\)
\(\bar{z}_{5} = \mathrm{sign}(\rho _{2})\beta = 1.\)
-
10.
Update \({B}^{-1} = \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 2\;\;\;& \\ \;\;\; & - 1\;\;\; & \\ \;\;\;& 1\;\;\;&1\\ \;\;\;\end{array} \right )\).
-
11.
\(\bar{x}_{B} = {(6,2,-4)}^{\mathrm{T}},\ B =\{ 4,1,6\}\); \(\bar{x}_{N} = {(0_{(-)},-2_{(-)},{0}^{(+)})}^{\mathrm{T}},\bar{z}_{N} = {(1,1,-1)}^{\mathrm{T}},\)
\(N\;\hspace{-0.4pt} =\{ 5,2,3\}\).
Iteration 2:
-
1.
\(\max \{\vert 5 - 6\vert,0,\vert (-3) - (-4)\vert \} = 1,p = 1\), x 4 leaves the basis.
-
3.
\(\;\,h = -\mathrm{sign}(\rho _{1}){B}^{-\mathrm{T}}e_{1} = {(1,-2,0)}^{\mathrm{T}},\sigma _{N} = -{N}^{\mathrm{T}}h = {(2,-3,1)}^{\mathrm{T}}.\)
-
4.
\(\;\,J =\{ 2,3\}\neq \varnothing.\)
-
5.
\(\;\,\beta =\min \{ -1/(-3),-(-1)/1\} = 1/3,\ q = 2.\)
-
6.
\(\;\,\!\!\!\!\!\!\!\!\Delta x_{2} = -\rho _{1}/\vert \sigma _{2}\vert = 1/3.\)
-
7.
\(\;\,\bar{a}_{2} = {B}^{-1}a_{2} = {(3,1,-2)}^{\mathrm{T}}.\)
-
8.
\(\;\,\bar{x}_{B} = {(6,2,-4)}^{\mathrm{T}} - (1/3){(3,1,-2)}^{\mathrm{T}} = {(5,5/3,-10/3)}^{\mathrm{T}},\)
\(\bar{x}_{N} =\bar{ x}_{N} + \Delta x_{2}e_{2} = {(0,-2,0)}^{\mathrm{T}} + {(0,1/3,0)}^{\mathrm{T}} = {(0,-5/3,0)}^{\mathrm{T}}.\)
-
9.
\(\;\,\bar{y} =\bar{ y} +\beta h = {(0,-2,0)}^{\mathrm{T}},\)
\(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(1,1,-1)}^{\mathrm{T}} + (1/3){(2,-3,1)}^{\mathrm{T}} = {(5/3,0,-2/3)}^{\mathrm{T}},\qquad \;\;\)
\(\bar{z}_{4} = \mathrm{sign}(\rho _{1})\beta = -1/3.\)
-
10.
Update \({B}^{-1} = \left (\begin{array}{r@{\;\;\;}c@{\;\;\;}c} 1/3\;\;\;& \;\;\;& \\ - 1/3\;\;\;&1\;\;\;& \\ 2/3\;\;\;& \;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 2\;\;\;& \\ \;\;\; & - 1\;\;\; & \\ \;\;\;& 1\;\;\;&1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{r@{\;\;\;}c@{\;\;\;}c} 1/3\;\;\;& - 2/3\;\;\;& \\ - 1/3\;\;\;& - 1/3\;\;\;& \\ 2/3\;\;\;& - 1/3\;\;\;&1\\ \;\;\;\end{array} \right )\).
-
11.
B = { 2, 1, 6}, N = { 5, 4, 3}, \(\bar{x}_{B} = {(-5/3,5/3,-10/3)}^{\mathrm{T}}\),
\(\bar{x}_{N}\,\hspace{-2.6pt}= {(0_{(-)},{5}^{(+)},{0}^{(+)})}^{\mathrm{T}},\)
\(\bar{z}_{N}\,\hspace{-1.1pt} = {(5/3,-1/3,-2/3)}^{\mathrm{T}}\).
Iteration 3:
-
1.
\(\max \{0,0,\vert (-3) - (-10/3)\vert \} = 1/3,p = 3\), x 6 leaves the basis.
-
3.
\(\;\,h = -\mathrm{sign}(\rho _{3}){B}^{-\mathrm{T}}e_{3} = {(-2/3,1/3,-1)}^{\mathrm{T}},\sigma _{N} = -{N}^{\mathrm{T}}h\)
\(= {(-1/3,2/3,-5/3)}^{\mathrm{T}}.\)
-
4.
\(\;\,J =\{ 1,2\}\neq \varnothing.\)
-
5.
\(\;\,\beta =\min \{-(5/3)/(-1/3),-(-1/3)/(2/3)\} = 1/2,\ q = 2,\)
\(\quad \;x_{4}\ \mathrm{enters\ the\ basis.}\)
-
6.
\(\;\,\!\!\!\!\!\!\!\!\Delta x_{2} =\rho _{3}/\vert \sigma _{2}\vert = -(1/3)/(2/3) = -1/2.\)
-
7.
\(\;\,\bar{a}_{2} = {B}^{-1}a_{2} = {(1/3,-1/3,2/3)}^{\mathrm{T}}.\)
-
8.
\(\;\,\bar{x}_{B} = {(-5/3,5/3,-10/3)}^{\mathrm{T}} - (-1/2){(1/3,-1/3,2/3)}^{\mathrm{T}}\! =\! {(-3/2,3/2,-3)}^{\mathrm{T}},\)
\(\bar{x}_{N} =\bar{ x}_{N} + \Delta x_{2}e_{2} = {(0,5,0)}^{\mathrm{T}} + {(0,-1/2,0)}^{\mathrm{T}} = {(0,9/2,0)}^{\mathrm{T}}.\)
-
9.
\(\;\,\bar{y} = {(0,-2,0)}^{\mathrm{T}} + (1/2){(-2/3,1/3,-1)}^{\mathrm{T}} = {(-1/3,-11/6,-1/2)}^{\mathrm{T}},\)
\(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(5/3,-1/3,-2/3)}^{\mathrm{T}} + (1/2){(-1/3,2/3,-5/3)}^{\mathrm{T}}\)
\(= {(3/2,0,-3/2)}^{\mathrm{T}},\)
\(\bar{z}_{6} = \mathrm{sign}(\rho _{3})\beta = 1/2.\)
-
10.
Update \({B}^{-1}\! =\! \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}r} 1\;\;\;& \;\;\;& - 1/2 \\ \;\;\;&1\;\;\;& 1/2 \\ \;\;\;& \;\;\;& - 3/2\\ \;\;\;\end{array} \right )\!\left (\begin{array}{r@{\;\;\;}c@{\;\;\;}c} 1/3\;\;\;& - 2/3\;\;\;& \\ - 1/3\;\;\;& - 1/3\;\;\;& \\ 2/3\;\;\;& - 1/3\;\;\;&1\\ \;\;\;\end{array} \right )\! =\! \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}r} \;\;\;& - 1/2\;\;\;& - 1/2 \\ \;\;\;& - 1/2\;\;\;& 1/2 \\ - 1\;\;\;& 1/2\;\;\;& - 3/2\\ \;\;\;\end{array} \right )\).
-
11.
It is satisfied that \(l_{B} \leq \bar{ x}_{B} \leq u_{B}\). The optimal solution and value are
$$\displaystyle{\bar{x} = {(3/2,-3/2,0,9/2,0,-3)}^{\mathrm{T}},\quad \bar{f} = 3/2 + 2(-3/2) = -3/2.}$$
7 Bound Flipping
The so-called “bound-flipping” technique can improve the effect of the generalized dual simplex method significantly. In fact, it might be the main cause for the dual simplex method to outperform its primal counterpart at present (Kirillova et al. 1979; Koberstein and Suhl 2007; Kostina 2002; Maros 2003a).
Let \((\bar{y},\bar{z})\) be the current dual basic feasible solution and let \(\bar{x}\) be the associated primal solution. Assume that a row index p has been determined by (7.31) and that a column index q has been determined by the minimum-ratio test (7.62). Let the nonbasic variable x q change from its current value \(\bar{x}_{q}\) (going up or down) toward the other bound, while keeping the other nonbasic variables unchanged. For the basic variable \(x_{j_{p}}\) to attain the violated bound, the required value of x q could fall either within the range between its lower and upper bounds, or beyond the other bound. In the latter case, it is favorable to adopt “bound-flipping”: fix the value of x q on the other bound and update the values of the basic variables accordingly; then find a new column index q attaining the second minimum-ratio, and do the same thing again, until the required value of the current x q falls within the range between its lower and upper bounds, so that \(x_{j_{p}}\) attains the violated bound. Then a normal dual step is taken by dropping \(x_{j_{p}}\) from the basis, entering x q , and updating the primal and dual solutions. It is seen that dual feasibility is still maintained.
The bound-flipping technique is embedded in the following subalgorithm, which is called in step 10 of Algorithm 7.7.2.
Algorithm 7.7.1 (Bound-flipping subalgorithm).
This algorithm provides the pivot column index q and the dual stepsize β, and carries out related computations.
-
1.
Set \(j = 0,v = 0\), and compute \(r_{j} = -\bar{z}_{j}/\sigma _{j},\quad \forall j \in J\).
-
2.
Set \(j = j + 1\).
-
3.
Set \(v = v +\delta a_{q}\).
-
4.
Set \(\bar{x}_{q} = \left \{\begin{array}{l@{\quad }l} u_{q},\quad &\mathrm{if}\ \ \bar{x}_{q} = l_{q}, \\ l_{q},\qquad \quad &\mathrm{if}\ \ \bar{x}_{q} = u_{q}.\\ \quad \end{array} \right.\)
-
5.
Update: ρ p = ρ p − | δ σ q | .
-
6.
Determine q and r q such that r q = min j ∈ J r j .
-
7.
Compute \(\Delta x_{q}\) by (7.67).
-
8.
Update: J = J∖{q}.
-
9.
Go to step 13 if J = ∅.
-
10.
Compute \(\delta = \left \{\begin{array}{ll} u_{q} - l_{q},&\mathrm{if}\ \ \bar{x}_{q} = l_{q}, \\ l_{q} - u_{q},&\mathrm{if}\ \ \bar{x}_{q} = u_{q}.\\ \end{array} \right.\)
-
11.
Go to step 2 if \(\vert \Delta x_{q}\vert \geq \vert \delta \vert \).
-
12.
Set β = r q .
-
13.
Compute \(u = {B}^{-1}v\), and update: \(\bar{x}_{B} =\bar{ x}_{B} - u\).
-
14.
Return.
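To make the control flow above concrete, the subalgorithm can be sketched in a few lines of Python. This is an illustrative sketch only: the data layout (nonbasic positions kept in the set J, ratios in a dict, the original columns \(a_{j}\) in a map `cols`, a dense basis inverse `Binv`) is an assumption for the example, and the step comments refer to the numbering of Algorithm 7.7.1.

```python
import numpy as np

def bound_flip(J, q, delta, rho_p, zbar, sigma, xbar, l, u, cols, Binv, xB):
    """Sketch of Algorithm 7.7.1: flip nonbasic variables between their
    bounds while the entering value would overshoot, then return the
    pivot column q, dual stepsize beta, and updated basic values."""
    r = {j: -zbar[j] / sigma[j] for j in J}          # step 1: ratios over J
    v = np.zeros_like(xB, dtype=float)
    while True:
        v = v + delta * cols[q]                      # step 3: accumulate flip effect
        xbar[q] = u[q] if xbar[q] == l[q] else l[q]  # step 4: flip x_q to the other bound
        rho_p -= abs(delta * sigma[q])               # step 5: reduce the bound violation
        q = min(J, key=r.get)                        # step 6: next minimum ratio
        dxq = -rho_p / abs(sigma[q])                 # step 7: displacement of x_q, as (7.67)
        J = J - {q}                                  # step 8
        if not J:                                    # step 9: no candidates left
            break
        delta = u[q] - l[q] if xbar[q] == l[q] else l[q] - u[q]  # step 10
        if abs(dxq) < abs(delta):                    # step 11: x_q now stays within bounds
            break
    beta = r[q]                                      # step 12: dual stepsize
    xB = xB - Binv @ v                               # step 13: u = B^{-1} v; update x_B
    return q, beta, rho_p, xbar, xB
```

On the iteration-1 data of Example 7.7.1 (J = {2}, q = 1, δ = −2, ρ 2 = 4), the sketch returns q = 2, β = 2, ρ 2 = 2 and \(\bar{x}_{B} = {(-3,0,3)}^{\mathrm{T}}\), matching the worked steps (1)–(14).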
The following master algorithm is a slight modification of Algorithm 7.6.1.
Algorithm 7.7.2 (Generalized dual simplex algorithm: bound-flipping).
Initial: (B, N), B −1, \(\bar{y},\bar{z},\bar{x}\) satisfying (7.38), (7.43), and (7.44). This algorithm solves the bounded-variable problem (7.13).
-
1.
Determine a row index p by (7.31).
-
2.
Stop if ρ p = 0 (optimality achieved).
-
3.
Compute \(h = -\mathrm{sign}(\rho _{p}){B}^{-\mathrm{T}}e_{p}\) and \(\sigma _{N} = -{N}^{\mathrm{T}}h\).
-
4.
Stop if J defined by (7.61) is empty (dual unbounded or primal infeasible).
-
5.
Determine column index q by (7.62).
-
6.
Compute \(\Delta x_{q}\) by (7.67).
-
7.
Set J = J∖{q}.
-
8.
If J = ∅, go to step 11.
-
9.
Compute \(\delta = \left \{\begin{array}{ll} u_{q} - l_{q}, &\mathrm{if}\ \ \bar{x}_{q} = l_{q}, \\ l_{q} - u_{q},\qquad &\mathrm{if}\ \ \bar{x}_{q} = u_{q}.\\ \end{array} \right.\)
-
10.
If \(\vert \Delta x_{q}\vert \geq \vert \delta \vert \), call Algorithm 7.7.1.
-
11.
Compute \(\bar{a}_{q} = {B}^{-1}a_{q}\).
-
12.
Update \(\bar{x}\) by (7.68).
-
13.
Update \(\bar{y},\,\bar{z}_{N},\,\bar{z}_{j_{p}}\) by (7.59) together with (7.60).
-
14.
Update B −1 by (3.23).
-
15.
Update (B, N) by exchanging j p and q.
-
16.
Go to step 1.
Bound-flipping increases the computational work per iteration, in particular by an additional linear system (in step 13 of Algorithm 7.7.1). This cost is negligible, however, compared with the return. Since the associated dual stepsize is usually much larger than that without bound-flipping, so is the increment in the objective value, especially when ρ p is large. As a result, the number of iterations is usually decreased significantly. In fact, bound-flipping has become indispensable in current dual simplex codes.
Example 7.7.1.
Solve the following problem by Algorithm 7.7.2:
Answer Initial: \(B =\{ 2,5,6\},\ N =\{ 1,3,4\},\,{B}^{-1} = I,\ \bar{x}_{N} = {({2}^{(+)},0_{(-)},1_{(-)})}^{\mathrm{T}}\), \(\bar{x}_{B} = {(1,-2,1)}^{\mathrm{T}},\ \bar{y} = (0,0,0),\ \bar{z}_{N} = {(-1,2,3)}^{\mathrm{T}},\ \bar{f} = 1\).
Iteration 1:
-
1.
\(\max \{0,2 - (-2),0\} = 4,p = 2,\ x_{5}\) leaves the basis.
-
3.
\(\;\,h = -\mathrm{sign}(\rho _{2}){B}^{-\mathrm{T}}e_{2} = {(0,-1,0)}^{\mathrm{T}},\ \sigma _{N} = -{N}^{\mathrm{T}}h = {(1,-1,1)}^{\mathrm{T}}.\qquad \qquad \;\;\)
-
4.
J = { 1, 2}.
-
5.
\(\;\,\beta =\min \{ -(-1)/1,-2/ - 1\} = 1,\ q = 1.\)
-
6.
\(\;\,\Delta x_{1} = -\rho _{2}/\vert \sigma _{1}\vert = -4/1 = -4.\)
-
7.
\(\;\,J = J\setminus \{1\} =\{ 2\}\neq \varnothing.\)
-
9.
\(\;\,\delta = l_{1} - u_{1} = 0 - 2 = -2.\)
-
10.
\(\vert \Delta x_{1}\vert > \vert \delta \vert \), so call Algorithm 7.7.1.
-
(1)
\(\;\,j = 0,v = 0,\,r_{2} = -2/ - 1 = 2;\)
-
(2)
\(\;\,j = j + 1 = 1;\)
-
(3)
\(\;\,v = v +\delta a_{1} = (-2){(-2,1,1)}^{\mathrm{T}} = {(4,-2,-2)}^{\mathrm{T}};\)
-
(4)
\(\;\,\bar{x}_{1} = l_{1} = 0;\)
-
(5)
\(\;\,\rho _{2} =\rho _{2} -\vert \delta \sigma _{1}\vert = 4 - 2 \times 1 = 2;\)
-
(6)
q = 2;
-
(7)
\(\;\,\Delta x_{2} = -\rho _{2}/\vert \sigma _{2}\vert = -2/(-1) = 2;\)
-
(8)
\(\;\,J = J\setminus \{2\} = \varnothing;\)
-
(12)
\(\;\,\beta = r_{2} = 2;\)
-
(13)
\(\;\,u = {B}^{-1}v = {(4,-2,-2)}^{\mathrm{T}};\)
\(\bar{x}_{B} =\bar{ x}_{B} - u = {(1,-2,1)}^{\mathrm{T}} - {(4,-2,-2)}^{\mathrm{T}} = {(-3,0,3)}^{\mathrm{T}};\)
-
(14)
Return.
-
11.
\(\;\,\bar{a}_{2} = {B}^{-1}a_{2} = {(1,-1,-2)}^{\mathrm{T}}.\)
-
12.
\(\;\,\bar{x}_{B} = {(-3,0,3)}^{\mathrm{T}} - 2{(1,-1,-2)}^{\mathrm{T}} = {(-5,2,7)}^{\mathrm{T}},\)
\(\bar{x}_{N} = {(0,0,1)}^{\mathrm{T}} + {(0,2,0)}^{\mathrm{T}} = {(0,2,1)}^{\mathrm{T}}.\)
-
13.
\(\;\,\bar{y} = {(0,0,0)}^{\mathrm{T}} + 2{(0,-1,0)}^{\mathrm{T}} = {(0,-2,0)}^{\mathrm{T}},\)
\(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(-1,2,3)}^{\mathrm{T}} + 2{(1,-1,1)}^{\mathrm{T}} = {(1,0,5)}^{\mathrm{T}},\)
\(\bar{z}_{j_{2}} = \mathrm{sign}(\rho _{2})\beta = 2.\)
-
14.
\(\;\,{B}^{-1} = \left (\begin{array}{rrr} 1 & 1 & \\ & -1 & \\ & -2 & 1\\ \end{array} \right ).\)
-
16.
\(\;\,\bar{x}_{B} = {(-5,2,7)}^{\mathrm{T}},\,\bar{x}_{N} = {(0_{(-)},2_{(-)},1_{(-)})}^{\mathrm{T}},\bar{z}_{N} = {(1,2,5)}^{\mathrm{T}},\)
\(B =\{ 2,3,6\},\ N =\{ 1,5,4\}.\)
Iteration 2:
-
1.
\(\max \{0,0,7 - 6\} = 1,p = 3\), x 6 leaves the basis.
-
3.
\(h\, = -\mathrm{sign}(\rho _{3}){B}^{-\mathrm{T}}e_{3} = {(0,-2,1)}^{\mathrm{T}}\), \(\sigma _{N} = -{N}^{\mathrm{T}}h = {(1,2,5)}^{\mathrm{T}}\).
-
4.
J = ∅, hence dual unbounded or primal infeasible.
If bound-flipping had not been used, solving the preceding problem would have required many more iterations.
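As a check on the example's arithmetic, the updates of steps 11–13 in iteration 1 can be reproduced with elementary NumPy operations; the vectors below are copied from the worked example:

```python
import numpy as np

beta = 2.0                                    # dual stepsize returned by the subalgorithm
a_bar = np.array([1., -1., -2.])              # step 11: B^{-1} a_q
xB = np.array([-3., 0., 3.]) - beta * a_bar   # step 12: new basic values (-5, 2, 7)
y = np.array([0., 0., 0.]) + beta * np.array([0., -1., 0.])    # step 13: dual solution (0, -2, 0)
zN = np.array([-1., 2., 3.]) + beta * np.array([1., -1., 1.])  # step 13: reduced costs (1, 0, 5)
```

Each result agrees with the values displayed under steps 12 and 13 of iteration 1.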
Notes
- 1.
MOPS is a package developed by Suhl et al. at the College of Production, Information Economy and Operations Research of the Free University of Berlin; see Suhl (1994).
References
Abadie J, Carpentier J (1969) Generalization of the Wolfe reduced gradient method to the case of non-linear constrained optimization. In: Fletcher R (ed) Optimization. Academic, London, pp 37–48
Abel P (1987) On the choice of the pivot columns of the simplex method: gradient criteria. Computing 38:13–21
Adler I, Megiddo N (1985) A simplex algorithm whose average number of steps is bounded between two quadratic functions of the smaller dimension. J ACM 32:871–895
Adler I, Resende MGC, Veiga G, Karmarkar N (1989) An implementation of Karmarkar’s algorithm for linear programming. Math Program 44:297–335
Andersen E, Andersen K (1995) Presolving in linear programming. Math Program 71:221–245
Andersen ED, Gondzio J, Mészáros C, Xu X (1996) Implementation of interior-point methods for large scale linear programming. In: Terlaky T (ed) Interior point methods of mathematical programming. Kluwer, Dordrecht
Andrei N, Barbulescu M (1993) Balance constraints reduction of large-scale linear programming problems. Ann Oper Res 43:149–170
Anstreicher KM, Watteyne P (1993) A family of search directions for Karmarkar’s algorithm. Oper Res 41:759–767
Arrow KJ, Hurwicz L (1956) Reduction of constrained maxima to saddle-point problems. In: Neyman J (ed) Proceedings of the third Berkeley symposium on mathematical statistics and probability, vol 5. University of California Press, Berkeley, pp 1–26
Avis D, Chvatal V (1978) Notes on Bland’s pivoting rule. Math Program 8:24–34
Balas E (1965) An additive algorithm for solving linear programs with zero-one variables. Oper Res 13:517–546
Balinski ML, Gomory RE (1963) A mutual primal-dual simplex method. In: Graves RL, Wolfe P (eds) Recent advances in mathematical programming. McGraw-Hill, New York
Balinski ML, Tucker AW (1969) Duality theory of linear problems: a constructive approach with applications. SIAM Rev 11:347–377
Barnes ER (1986) A variation on Karmarkars algorithm for solving linear programming problems. Math Program 36:174–182
Bartels RH (1971) A stabilization of the simplex method. Numer Math 16:414–434
Bartels RH, Golub GH (1969) The simplex method of linear programming using LU decomposition. Commun ACM 12:266–268
Bartels RH, Stoer J, Zenger Ch (1971) A realization of the simplex method based on triangular decompositions. In: Wilkinson JH, Reinsch C (eds) Contributions I/II in handbook for automatic computation, volume II: linear algebra. Springer, Berlin/London
Bazaraa MS, Jarvis JJ, Sherali HD (1977) Linear programming and network flows, 2nd edn. Wiley, New York
Beale EML (1954) An alternative method for linear programming. Proc Camb Philos Soc 50:513–523
Beale EML (1955) Cycling in the dual simplex algorithm. Nav Res Logist Q 2:269–275
Beale E (1968) Mathematical programming in practice. Topics in operations research. Pitman & Sons, London
Benders JF (1962) Partitioning procedures for solving mixed-variables programming problems. Numer Math 4:238–252
Benichou MJ, Gautier J, Hentges G, Ribiere G (1977) The efficient solution of large scale linear programming problems. Math Program 13:280–322
Bixby RE (1992) Implementing the simplex method: the initial basis. ORSA J Comput 4:287–294
Bixby RE (1994) Progress in linear programming. ORSA J Comput 6:15–22
Bixby RE (2002) Solving real-world linear problems: a decade and more of progress. Oper Res 50:3–15
Bixby RE, Saltzman MJ (1992) Recovering an optimal LP basis from the interior point solution. Technical report 607, Department of Mathematical Sciences, Clemson University, Clemson
Bixby RE, Wagner DK (1987) A note on detecting simple redundancies in linear systems. Oper Res Lett 6:15–17
Bixby RE, Gregory JW, Lustig IJ, Marsten RE, Shanno DF (1992) Very large-scale linear programming: a case study in combining interior point and simplex methods. Oper Res 40:885–897
Björck A, Plemmons RJ, Schneider H (1981) Large-scale matrix problems. North-Holland, Amsterdam
Bland RG (1977) New finite pivoting rules for the simplex method. Math Oper Res 2:103–107
Borgwardt K-H (1982a) Some distribution-dependent results about the asymptotic order of the average number of pivot steps of the simplex method. Math Oper Res 7:441–462
Borgwardt K-H (1982b) The average number of pivot steps required by the simplex method is polynomial. Z Oper Res 26:157–177
Botsaris CA (1974) Differential gradient methods. J Math Anal Appl 63:177–198
Brearley A, Mitra G, Williams HP (1975) Analysis of mathematical programming problems prior to applying the simplex algorithm. Math Program 8:54–83
Brown AA, Bartholomew-Biggs MC (1987) ODE vs SQP methods for constrained optimization. Technical report, 179, The Numerical Center, Hatfield Polytechnic
Carolan MJ, Hill JE, Kennington JL, Niemi S, Wichmann SJ (1990) An empirical evaluation of the KORBX algorithms for military airlift applications. Oper Res 38:240–248
Cavalier TM, Soyster AL (1985) Some computational experience and a modification of the Karmarkar algorithm. ISME working paper, The Pennsylvania State University, pp 85–105
Chan TF (1985) On the existence and computation of LU factorizations with small pivots. Math Comput 42:535–548
Chang YY (1979) Least index resolution of degeneracy in linear complementarity problems. Technical Report 79–14, Department of OR, Stanford University
Charnes A (1952) Optimality and degeneracy in linear programming. Econometrica 20:160–170
Cheng MC (1987) General criteria for redundant and nonredundant linear inequalities. J Optim Theory Appl 53:37–42
Chvatal V (1983) Linear programming. W.H. Freeman, New York
Cipra BA (2000) The best of the 20th century: editors name top 10 algorithms. SIAM News 33: 1–2
Coleman TF, Pothen A (1987) The null space problem II. Algorithms. SIAM J Algebra Discret Methods 8:544–562
Cook SA (1971) The complexity of theorem-proving procedures. In: Proceedings of third annual ACM symposium on theory of computing, Shaker Heights, 1971. ACM, New York, pp 151–158
Cottle RW, Johnson E, Wets R (2007) George B. Dantzig (1914–2005). Not AMS 54:344–362
CPLEX ILOG (2007) 11.0 User’s manual. ILOG SA, Gentilly, France
Curtis A, Reid J (1972) On the automatic scaling of matrices for Gaussian elimination. J Inst Math Appl 10:118–124
Dantzig GB (1948) Programming in a linear structure, Comptroller, USAF, Washington, DC
Dantzig GB (1951a) Programming of interdependent activities, mathematical model. In: Koopmans TC (ed) Activity analysis of production and allocation. Wiley, New York, pp 19–32; Econometrica 17(3/4):200–211 (1949)
Dantzig GB (1951b) Maximization of a linear function of variables subject to linear inequalities. In: Koopmans TC (ed) Activity analysis of production and allocation. Wiley, New York, pp 339–347
Dantzig GB (1951c) A proof of the equivalence of the programming problem and the game problem. In: Koopmans T (ed) Activity analysis of production and allocation. Wiley, New York, pp 330–335
Dantzig GB (1963) Linear programming and extensions. Princeton University Press, Princeton
Dantzig GB (1991) Linear programming. In: Lenstra JK, Rinnooy Kan AHG, Schrijver A (eds) History of mathematical programming. CWI, Amsterdam, pp 19–31
Dantzig GB, Ford LR, Fulkerson DR (1956) A primal-dual algorithm for linear programs. In: Kuhn HK, Tucker AW (eds) Linear inequalities and related systems. Princeton University Press, Princeton, pp 171–181
Dantzig GB, Orchard-Hays W (1953) Alternate algorithm for the revised simplex method using product form for the inverse. Notes on linear programming: part V, RM-1268. The RAND Corporation, Santa Monica
Dantzig GB, Orchard-Hays W (1954) The product form for the inverse in the simplex method. Math Tables Other Aids Comput 8:64–67
Dantzig GB, Thapa MN (1997) Linear programming 1: introduction. Springer, New York
Dantzig GB, Thapa MN (2003) Linear programming 2: theory and extensions. Springer, New York
Dantzig GB, Wolfe P (1960) Decomposition principle for linear programs. Oper Res 8:101–111
Dantzig GB, Orden A, Wolfe P (1955) The generalized simplex method for minimizing a linear form under linear inequality constraints. Pac J Math 5:183–195
de Ghellinck G, Vial J-Ph (1986) A polynomial Newton method for linear programming. Algorithmica 1:425–453 (Special issue)
Dikin I (1967) Iterative solution of problems of linear and quadratic programming. Sov Math Dokl 8:674–675
Dikin I (1974) On the speed of an iterative process. Upravlyaemye Sistemi 12:54–60
Dorfman R, Samuelson PA, Solow RM (1958) Linear programming and economic analysis. McGraw-Hill, New York
Duff IS, Erisman AM, Reid JK (1986) Direct methods for sparse matrices. Oxford University Press, Oxford
Evtushenko YG (1974) Two numerical methods for solving non-linear programming problems. Sov Math Dokl 15:420–423
Fang S-C (1993) Linear optimization and extensions: theory and algorithms. AT & T, Prentice-Hall, Englewood Cliffs
Farkas J (1902) Uber die Theorie der Einfachen Ungleichungen. Journal für die Reine und Angewandte Mathematik 124:1–27
Fiacco AV, Mccormick GP (1968) Nonlinear programming: sequential unconstrained minimization techniques. Wiley, New York
Fletcher R (1981) Practical methods of optimization. Volume 2: constrained optimization. Wiley, Chichester
Ford Jr LR, Fulkerson DR (1956) Maximal flow through a network. Can J Math 8:399–407
Forrest JJH, Goldfarb D (1992) Steepest edge simplex algorithm for linear programming. Math Program 57:341–374
Forrest J, Tomlin J (1972) Updating triangular factors of the basis to maintain sparsity in the product form simplex method. Math Program 2:263–278
Forsythe GE, Malcolm MA, Moler CB (1977) Computer methods for mathematical computations. Prentice-Hall, Englewood Cliffs
Fourer R (1979) Sparse Gaussian elimination of staircase linear systems. Technical report ADA081856, Systems Optimization Lab, Stanford University
Fourier JBJ (1823) Analyse des travaux de l’Académie Royale des Sciences, pendant l’année 1823, Partie mathématique. Histoire de l’Académie Royale des Sciences de l’Institut de France 6 [1823] (1826), xxix–xli (partially reprinted as: Premier extrait. In: Oeuvres de Fourier, Tome II (Darboux G, ed). Gauthier-Villars, Paris, 1890 (reprinted: G. Olms, Hildesheim, 1970), pp 321–324)
Frisch KR (1955) The logarithmic potential method of convex programming. Memorandum, University Institute of Economics, Oslo
Fulkerson D, Wolfe P (1962) An algorithm for scaling matrices. SIAM Rev 4:142–146
Gale D, Kuhn HW, Tucker AW (1951) Linear programming and the theory of games. In: Koopmans T (ed) Activity analysis of production and allocation. Wiley, New York, pp 317–329
Gass SI (1985) Linear programming: methods and applications. McGraw-Hill, New York
Gass SI, Saaty T (1955) The computational algorithm for the parametric objective function. Nav Res Logist Q 2:39–45
Gay DM (1978) On combining the schemes of Reid and Saunders for sparse LP bases. In: Duff IS, Stewart GW (eds) Sparse matrix proceedings. SIAM, Philadelphia, pp 313–334
Gay D (1985) Electronic mail distribution of linear programming test problems. COAL Newsl 13:10–12
Gay DM (1987) A variant of Karmarkar’s linear programming algorithm for problems in standard form. Math Program 37:81–90
Geoffrion AM (1972) Generalized Benders decomposition. JOTA 10:137–154
George A, Liu W-H (1981) Computer solution of large sparse positive definite systems. Prentice-Hall, Englewood Cliffs
Gill PE, Murray W (1973) A numerically stable form of the simplex algorithm. Linear Algebra Appl 7:99–138
Gill PE, Murray W, Saunders MA, Tomlin JA, Wright MH (1985) On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projected method. Technical report SOL 85–11, Department of Operations Research, Stanford University
Gill PE, Murray W, Saunders MA, Tomlin JA, Wright MH (1986) On projected Newton methods for linear programming and an equivalence to Karmarkar’s projected method. Math Program 36:183–209
Gill PE, Murray W, Saunders MA, Wright MH (1987) Maintaining LU factors of a general sparse matrix. Linear Algebra Appl 88/89:239–270
Gill PE, Murray W, Saunders MA, Wright MH (1989) A practical anti-cycling procedure for linearly constrained optimization. Math Program 45:437–474
Goldfarb D (1977) On the Bartels-Golub decomposition for linear programming bases. Math Program 13:272–279
Goldfarb D, Reid JK (1977) A practicable steepest-edge simplex algorithm. Math Program 12:361–371
Goldman AJ, Tucker AW (1956a) Polyhedral convex cones. In: Kuhn HW, Tucker AW (eds) Linear inequalities and related systems. Annals of mathematical studies, vol 38. Princeton University Press, Princeton, pp 19–39
Goldman AJ, Tucker AW (1956b) Theory of linear programming. In: Kuhn HW, Tucker AW (eds) Linear inequalities and related systems. Annals of mathematical studies, vol 38. Princeton University Press, Princeton, pp 53–97
Golub GH (1965) Numerical methods for solving linear least squares problems. Numer Math 7:206–216
Golub GH, Van Loan CF (1989) Matrix computations, 2nd edn. The Johns Hopkins University Press, Baltimore
Gomory RE (1958) Outline of an algorithm for integer solutions to linear programs. Bull Am Math Soc 64:275–278
Gonzaga CC (1987) An algorithm for solving linear programming problems in O(n 3 L) operations. Technical Report UCB/ERL M87/10, Electronics Research Laboratory, University of California, Berkeley
Gonzaga CC (1990) Convergence of the large step primal affine-scaling algorithm for primal non-degenerate linear problems. Technical report, Department of Systems Engineering and Computer Sciences, COPPE-Federal University of Rio de Janeiro
Gould N, Reid J (1989) New crash procedure for large systems of linear constraints. Math Program 45:475–501
Greenberg HJ (1978) Pivot selection tactics. In: Greenberg HJ (ed) Design and implementation of optimization software. Sijthoff and Noordhoff, Alphen aan den Rijn, pp 109–143
Greenberg HJ, Kalan J (1975) An exact update for Harris’ tread. Math Program Study 4:26–29
Guerrero-Garcia P, Santos-Palomo A (2005) Phase I cycling under the most-obtuse-angle pivot rule. Eur J Oper Res 167:20–27
Guerrero-Garcia P, Santos-Palomo A (2009) A deficient-basis dual counterpart of Paparrizos, Samaras and Stephanides’ primal-dual simplex-type algorithm. Optim Methods Softw 24:187–204
Güler O, Ye Y (1993) Convergence behavior of interior-point algorithms. Math Program 60:215–228
Hadley G (1972) Linear programming. Addison-Wesley, Reading
Hager WW (2002) The dual active set algorithm and its application to linear programming. Comput Optim Appl 21:263–275
Hall LA, Vanderbei RJ (1993) Two-third is sharp for affine scaling. Oper Res Lett 13:197–201
Hamming RW (1971) Introduction to applied numerical analysis. McGraw-Hill, New York
Harris PMJ (1973) Pivot selection methods of the Devex LP code. Math Program 5:1–28
Hattersley B, Wilson J (1988) A dual approach to primal degeneracy. Math Program 42:135–145
He X-C, Sun W-Y (1991) An introduction to generalized inverse matrix (in Chinese). Jiangsu Science and Technology Press, Nanjing
Hellerman E, Rarick DC (1971) Reinversion with the preassigned pivot procedure. Math Program 1:195–216
Hellerman E, Rarick DC (1972) The partitioned preassigned pivot procedure. In: Rose DJ, Willouhby RA (eds) Sparse matrices and their applications. Plenum, New York, pp 68–76
Hertog DD, Roos C (1991) A survey of search directions in interior point methods for linear programming. Math Program 52:481–509
Hoffman AJ (1953) Cycling in the simplex algorithm. Technical report 2974, National Bureau of Standards
Hu J-F (2007) A note on “an improved initial basis for the simplex algorithm”. Comput Oper Res 34:3397–3401
Hu J-F, Pan P-Q (2006) A second note on ‘A method to solve the feasible basis of LP’ (in Chinese). Oper Res Manag Sci 15:13–15
Hu J-F, Pan P-Q (2008a) Fresh views on some recent developments in the simplex algorithm. J Southeast Univ 24:124–126
Hu J-F, Pan P-Q (2008b) An efficient approach to updating simplex multipliers in the simplex algorithm. Math Program Ser A 114:235–248
Jansen B, Terlaky T, Roos C (1994) The theory of linear programming: skew symmetric self-dual problems and the central path. Optimization 29:225–233
Jansen B, Roos C, Terlaky T (1996) Target-following methods for linear programming. In: Terlaky T (ed) Interior point methods of mathematical programming. Kluwer, Dordrecht
Jeroslow R (1973) The simplex algorithm with the pivot rule of maximizing criterion improvement. Discret Appl Math 4:367–377
Kalantari B (1990) Karmarkar’s algorithm with improved steps. Math Program 46:73–78
Kallio M, Porteus EL (1978) A class of methods for linear programming. Math Program 14:161–169
Kantorovich LV (1960) Mathematical methods in the organization and planning of production. Manag Sci 6:550–559. Original Russian version appeared in 1939
Karmarkar N (1984) A new polynomial time algorithm for linear programming. Combinatorica 4:373–395
Karmarkar N, Ramakrishnan K (1985) Further developments in the new polynomial-time algorithm for linear programming. In: Talk given at ORSA/TIMES national meeting, Boston, Apr 1985
Khachiyan L (1979) A polynomial algorithm in linear programming. Doklady Academiia Nauk SSSR 244:1093–1096
Kirillova FM, Gabasov R, Kostyukova OI (1979) A method of solving general linear programming problems. Doklady AN BSSR (in Russian) 23:197–200
Klee V (1965) A class of linear programming problems requiring a large number of iterations. Numer Math 7:313–321
Klee V, Minty GJ (1972) How good is the simplex algorithm? In: Shisha O (ed) Inequalities-III. Academic, New York, pp 159–175
Koberstein A (2008) Progress in the dual simplex algorithm for solving large scale LP problems: techniques for a fast and stable implementation. Comput Optim Appl 41:185–204
Koberstein A, Suhl UH (2007) Progress in the dual simplex method for large scale LP problems: practical dual phase 1 algorithms. Comput Optim Appl 37:49–65
Kojima M, Mizuno S, Yoshise A (1989) A primal-dual interior point algorithm for linear programming. In: Megiddo N (ed) Progress in mathematical programming. Springer, New York, pp 29–47
Kojima M, Megiddo N, Mizuno S (1993) A primal-dual infeasible-interior-point algorithm for linear programming. Math Program 61:263–280
Kortanek KO, Shi M (1987) Convergence results and numerical experiments on a linear programming hybrid algorithm. Eur J Oper Res 32:47–61
Kostina E (2002) The long step rule in the bounded-variable dual simplex method: numerical experiments. Math Methods Oper Res 55:413–429
Kotiah TCT, Steinberg DI (1978) On the possibility of cycling with the simplex method. Oper Res 26:374–376
Kuhn HW, Quandt RE (1953) An experimental study of the simplex method. In: Metropolis NC et al (eds) Experimental arithmetic, high-speed computing and mathematics. Proceedings of symposia in applied mathematics XV. American Mathematical Society, Providence, pp 107–124
Land AH, Doig AG (1960) An automatic method of solving discrete programming problems. Econometrica 28:497–520
Leichner SA, Dantzig GB, Davis JW (1993) A strictly improving linear programming phase I algorithm. Ann Oper Res 47:409–430
Lemke CE (1954) The dual method of solving the linear programming problem. Nav Res Logist Q 1:36–47
Li W (2004) A note on two direct methods in linear programming. Eur J Oper Res 158:262–265
Li C, Pan P-Q, Li W (2002) A revised simplex algorithm based on partial pricing pivot rule (in Chinese). J Wenzhou Univ 15:53–55
Li W, Guerrero-Garcia P, Santos-Palomo A (2006a) A basis-deficiency-allowing primal phase-1 algorithm using the most-obtuse-angle column rule. Comput Math Appl 51:903–914
Li W, Pan P-Q, Chen G (2006b) A combined projected gradient algorithm for linear programming. Optim Methods Softw 21:541–550
Llewellyn RW (1964) Linear programming. Holt, Rinehart and Winston, New York
Luo Z-Q, Wu S (1994) A modified predictor-corrector method for linear programming. Comput Optim Appl 3:83–91
Lustig IJ (1990) Feasibility issues in a primal-dual interior-point method for linear programming. Math Program 49:145–162
Lustig IJ, Marsten RE, Shanno DF (1991) Computational experience with a primal-dual interior point method for linear programming. Linear Algebra Appl 152:191–222
Lustig IJ, Marsten RE, Shanno DF (1992) On implementing Mehrotra’s predictor-corrector interior-point method for linear programming. SIAM J Optim 2:435–449
Lustig IJ, Marsten R, Shanno D (1994) Interior point methods for linear programming: computational state of the art. ORSA J Comput 6:1–14
Markowitz HM (1957) The elimination form of the inverse and its application to linear programming. Manag Sci 3:255–269
Maros I (1986) A general phase-1 method in linear programming. Eur J Oper Res 23:64–77
Maros I (2003a) A generalized dual phase-2 simplex algorithm. Eur J Oper Res 149:1–16
Maros I (2003b) Computational techniques of the simplex method. International series in operations research and management, vol 61. Kluwer, Boston
Maros I, Khaliq M (2002) Advances in design and implementation of optimization software. Eur J Oper Res 140:322–337
Marshall KT, Suurballe JW (1969) A note on cycling in the simplex method. Nav Res Logist Q 16:121–137
Martin RK (1999) Large scale linear and integer optimization: a unified approach. Kluwer, Boston
Mascarenhas WF (1997) The affine scaling algorithm fails for λ = 0. 999. SIAM J Optim 7:34–46
McShane KA, Monma CL, Shanno DF (1989) An implementation of a primal-dual method for linear programming. ORSA J Comput 1:70–83
Megiddo N (1986a) Introduction: new approaches to linear programming. Algorithmica 1:387–394 (Special issue)
Megiddo N (1986b) A note on degeneracy in linear programming. Math Program 35:365–367
Megiddo N (1989) Pathways to the optimal set in linear programming. In: Megiddo N (ed) Progress in mathematical programming. Springer, New York, pp 131–158
Megiddo N, Shub M (1989) Boundary behavior of interior point algorithm in linear programming. Math Oper Res 14:97–146
Mehrotra S (1991) On finding a vertex solution using interior point methods. Linear Algebra Appl 152:233–253
Mehrotra S (1992) On the implementation of a primal-dual interior point method. SIAM J Optim 2:575–601
Mizuno S, Todd MJ, Ye Y (1993) On adaptive-step primal-dual interior-point algorithms for linear programming. Math Oper Res 18:964–981
Monteiro RDC, Adler I (1989) Interior path following primal-dual algorithms: Part I: linear programming. Math Program 44:27–41
Murtagh BA (1981) Advances in linear programming: computation and practice. McGraw-Hill, New York/London
Murtagh BA, Saunders MA (1978) Large-scale linearly constrained optimization. Math Program 14:41–72
Murtagh BA, Saunders MA (1998) MINOS 5.5 user’s guide. Technical report SOL 83-20R, Department of Engineering Economics Systems & Operations Research, Stanford University, Stanford
Murty KG (1983) Linear programming. Wiley, New York
Nazareth JL (1987) Computer solutions of linear programs. Oxford University Press, Oxford
Nazareth JL (1996) The implementation of linear programming algorithms based on homotopies. Algorithmica 15:332–350
Nemhauser GL (1994) The age of optimization: solving large-scale real-world problems. Oper Res 42:5–13
Nemhauser GL, Wolsey LA (1999) Integer and combinatorial optimization. Wiley, New York
Nocedal J, Wright SJ (1999) Numerical optimization. Springer, Berlin
Ogryczak W (1988) The simplex method is not always well behaved. Linear Algebra Appl 109:41–57
Orchard-Hays W (1954) Background development and extensions of the revised simplex method. Report RM 1433, The Rand Corporation, Santa Monica
Orchard-Hays W (1956) Evolution of computer codes for linear programming. Paper P-810, The RAND Corporation, p 2224
Orchard-Hays W (1971) Advanced linear programming computing techniques. McGraw-Hill, New York
Padberg MW (1995) Linear optimization and extensions. Springer, Berlin
Pan P-Q (1982) Differential equation methods for unconstrained optimization (in Chinese). Numer Math J Chin Univ 4:338–349
Pan P-Q (1990) Practical finite pivoting rules for the simplex method. OR Spektrum 12:219–225
Pan P-Q (1991) Simplex-like method with bisection for linear programming. Optimization 22:717–743
Pan P-Q (1992a) New ODE methods for equality constrained optimization (I) – equations. J Comput Math 10:77–92
Pan P-Q (1992b) New ODE methods for equality constrained optimization (II) – algorithms. J Comput Math 10:129–146
Pan P-Q (1992c) Modification of Bland’s pivoting rule (in Chinese). Numer Math 14:379–381
Pan P-Q (1994a) A variant of the dual pivot rule in linear programming. J Inf Optim Sci 15:405–413
Pan P-Q (1994b) Composite phase-1 methods without measuring infeasibility. In: Yue M-Y (ed) Theory of optimization and its applications. Xidian University Press, Xian, pp 359–364.
Pan P-Q (1994c) Ratio-test-free pivoting rules for the bisection simplex method. In: Proceedings of national conference on decision making science, Shangrao, pp 24–29
Pan P-Q (1994d) Ratio-test-free pivoting rules for a dual phase-1 method. In: Xiao S-T, Wu F (eds) Proceeding of the third conference of Chinese SIAM. Tsinghua University press, Beijing, pp 245–249
Pan P-Q (1995) New non-monotone procedures for achieving dual feasibility. J Nanjing Univ Math Biquarterly 12:155–162
Pan P-Q (1996a) A modified bisection simplex method for linear programming. J Comput Math 14:249–255
Pan P-Q (1996b) New pivot rules for achieving dual feasibility. In: Wei Z (ed) Theory and applications of OR. Proceedings of the fifth conference of Chinese OR society, Xian, 10–14 Oct 1996. Xidian University Press, Xian, pp 109–113
Pan P-Q (1996c) Solving linear programming problems via appending an elastic constraint. J Southeast Univ (English edn) 12:253–265
Pan P-Q (1997) The most-obtuse-angle row pivot rule for achieving dual feasibility in linear programming: a computational study. Eur J Oper Res 101:164–176
Pan P-Q (1998a) A dual projective simplex method for linear programming. Comput Math Appl 35:119–135
Pan P-Q (1998b) A basis-deficiency-allowing variation of the simplex method. Comput Math Appl 36:33–53
Pan P-Q (1999a) A new perturbation simplex algorithm for linear programming. J Comput Math 17:233–242
Pan P-Q (1999b) A projective simplex method for linear programming. Linear Algebra Appl 292:99–125
Pan P-Q (2000a) A projective simplex algorithm using LU decomposition. Comput Math Appl 39:187–208
Pan P-Q (2000b) Primal perturbation simplex algorithms for linear programming. J Comput Math 18:587–596
Pan P-Q (2000c) On developments of pivot algorithms for linear programming. In: Proceedings of the sixth national conference of operations research society of China, Changsha, 10–15 Oct 2000. Global-Link Publishing, Hong Kong, pp 120–129
Pan P-Q (2004) A dual projective pivot algorithm for linear programming. Comput Optim Appl 29:333–344
Pan P-Q (2005) A revised dual projective pivot algorithm for linear programming. SIAM J Optim 16:49–68
Pan P-Q (2008a) A largest-distance pivot rule for the simplex algorithm. Eur J Oper Res 187:393–402
Pan P-Q (2008b) A primal deficient-basis algorithm for linear programming. Appl Math Comput 198:898–912
Pan P-Q (2008c) Efficient nested pricing in the simplex algorithm. Oper Res Lett 38:309–313
Pan P-Q (2010) A fast simplex algorithm for linear programming. J Comput Math 28(6):837–847
Pan P-Q (2013) An affine-scaling pivot algorithm for linear programming. Optimization 62: 431–445
Pan P-Q, Li W (2003) A non-monotone phase-1 method in linear programming. J Southeast Univ (English edn) 19:293–296
Pan P-Q, Ouyang Z-X (1993) Two variants of the simplex algorithm (in Chinese). J Math Res Expo 13(2):274–275
Pan P-Q, Ouyang Z-X (1994) Moore-Penrose inverse simplex algorithms based on successive linear subprogramming approach. Numer Math 3:180–190
Pan P-Q, Pan Y-P (2001) A phase-1 approach to the generalized simplex algorithm. Comput Math Appl 42:1455–1464
Pan P-Q, Li W, Wang Y (2004) A phase-1 algorithm using the most-obtuse-angle rule for the basis-deficiency-allowing dual simplex method. OR Trans 8:88–96
Pan P-Q, Li W, Cao J (2006a) Partial pricing rule simplex method with deficient basis. Numer Math J Chin Univ (English series) 15:23–30
Pan P-Q, Hu J-F, Li C (2006b) Feasible region contraction interior point algorithm. Appl Math Comput 182:1361–1368
Papadimitriou CH, Steiglitz K (1982) Combinatorial optimization: algorithms and complexity. Prentice-Hall, New Jersey
Perold AF (1980) A degeneracy exploiting LU factorization for the simplex method. Math Program 19:239–254
Powell MJD (1989) A tolerant algorithm for linearly constrained optimization calculations. Math Program 45:547–566
Reid JK (1982) A sparsity-exploiting variant of the Bartels-Golub decomposition for linear programming bases. Math Program 24:55–69
Rockafellar RT (1997) Convex analysis. Princeton University Press, Princeton
Roos C (1990) An exponential example for Terlaky’s pivoting rule for the criss-cross simplex method. Math Program 46:79–84
Roos C, Vial J-Ph (1992) A polynomial method of approximate centers for linear programming. Math Program 54:295–305
Roos C, Terlaky T, Vial J-P (1997) Theory and algorithms for linear programming. Wiley, Chichester
Rothenberg RI (1979) Linear programming. North-Holland, New York
Ryan D, Osborne M (1988) On the solution of highly degenerate linear problems. Math Program 41:385–392
Saigal R (1995) Linear programming. Kluwer, Boston
Santos-Palomo A (2004) The sagitta method for solving linear problems. Eur J Oper Res 157:527–539
Saunders MA (1972) Large scale linear programming using the Cholesky factorization. Technical report STAN-CS-72-152, Stanford University
Saunders MA (1973) The complexity of LU updating in the simplex method. In: Andersen R, Brent R (eds) The complexity of computational problem solving. University Press, St. Lucia, pp 214–230
Saunders MA (1976) A fast and stable implementation of the simplex method using Bartels-Golub updating. In: Bunch J, Rose D (eds) Sparse matrix computation. Academic, New York, pp 213–226
Schrijver A (1986) Theory of linear and integer programming. Wiley, Chichester
Shanno DF, Bagchi A (1990) A unified view of interior point methods for linear programming. Ann Oper Res 22:55–70
Shen Y, Pan P-Q (2006) Dual bisection simplex algorithm (in Chinese). In: Proceedings of the national conference of operations research society of China, Shenzhen. Global-Link Informatics, Hong Kong, pp 168–174
Shi Y, Pan P-Q (2011) Higher order iteration schemes for unconstrained optimization. Am J Oper Res 1:73–83
Smale S (1983a) On the average number of steps of the simplex method of linear programming. Math Program 27:241–262
Smale S (1983b) The problem of the average speed of the simplex method. In: Bachem A, Grotschel M, Korte B, (eds) Mathematical programming, the state of the art. Springer, Berlin, pp 530–539
Srinath LS (1982) Linear programming: principles and applications. Affiliated East-West Press, New Delhi
Suhl UH (1994) Mathematical optimization system. Eur J Oper Res 72:312–322
Suhl LM, Suhl UH (1990) Computing sparse LU factorization for large-scale linear programming bases. ORSA J Comput 2:325–335
Suhl LM, Suhl UH (1993) A fast LU-update for linear programming. Ann Oper Res 43:33–47
Sun W, Yuan Y-X (2006) Optimization theory and methods: nonlinear programming. Springer, New York
Swietanowski A (1998) A new steepest edge approximation for the simplex method for linear programming. Comput Optim Appl 10:271–281
Taha H (1975) Integer programming: theory, applications, and computations. Academic, Orlando
Talacko JV, Rockafellar RT (1960) A compact simplex algorithm and a symmetric algorithm for general linear programs. Unpublished paper, Marquette University
Tanabe K (1977) A geometric method in non-linear programming. Technical Report 23343-AMD780, Brookhaven National Laboratory, New York
Tanabe K (1990) Centered Newton method for linear programming: interior and ‘exterior’ point method (in Japanese). In: Tone K (ed) New methods for linear programming 3. The Institute of Statistical Mathematics, Tokyo, pp 98–100
Tapia RA, Zhang Y (1991) An optimal-basis identification technique for interior-point linear programming algorithms. Linear Algebra Appl 152:343–363
Terlaky T (1985) A convergent criss-cross method. Math Oper Stat Ser Optim 16:683–690
Terlaky T (1993) Pivot rules for linear programming: a survey on recent theoretical developments. Ann Oper Res 46:203–233
Terlaky T (ed) (1996) Interior point methods of mathematical programming. Kluwer, Dordrecht
Todd MJ (1982) An implementation of the simplex method for linear programming problems with variable upper bounds. Math Program 23:23–49
Todd MJ (1983) Large scale linear programming: geometry, working bases and factorizations. Math Program 26:1–23
Tomlin JA (1972) Modifying triangular factors of the basis in the simplex method. In: Rose DJ, Willoughby RA (eds) Sparse matrices and applications. Plenum, New York
Tomlin JA (1974) On pricing and backward transformation in linear programming. Math Program 6:42–47
Tomlin JA (1975) On scaling linear programming problems. Math Program Study 4:146–166
Tomlin JA (1987) An experimental approach to Karmarkar’s projective method, for linear programming. Math Program 31:175–191
Tsuchiya T (1992) Global convergence property of the affine scaling method for primal degenerate linear programming problems. Math Oper Res 17:527–557
Tsuchiya T, Muramatsu M (1995) Global convergence of a long-step affine-scaling algorithm for degenerate linear programming problems. SIAM J Optim 5:525–551
Tucker AW (1956) Dual systems of homogeneous linear relations. In: Kuhn HW, Tucker AW, Dantzig GB (eds) Linear inequalities and related systems. Princeton University Press, Princeton, pp 3–18
Turner K (1991) Computing projections for the Karmarkar algorithm. Linear Algebra Appl 152:141–154
Vanderbei RJ, Lagarias JC (1990) I.I. Dikin’s convergence result for the affine-scaling algorithm. Contemp Math 114:109–119
Vanderbei RJ, Meketon M, Freedman B (1986) A modification of Karmarkar’s linear programming algorithm. Algorithmica 1:395–407
Vemuganti RR (2004) On gradient simplex methods for linear programs. J Appl Math Decis Sci 8:107–129
Wang Z (1987) A conformal elimination-free algorithm for oriented matroid programming. Chin Ann Math 8(B1):16–25
Wilkinson JH (1971) Modern error analysis. SIAM Rev 13:548–568
Wolfe P (1963) A technique for resolving degeneracy in linear programming. J Oper Res Soc 11:205–211
Wolfe P (1965) The composite simplex algorithm. SIAM Rev 7:42–54
Wolsey L (1998) Integer programming. Wiley, New York
Wright SJ (1997) Primal-dual interior-point methods. SIAM, Philadelphia
Xu X, Ye Y (1995) A generalized homogeneous and self-dual algorithm for linear programming. Oper Res Lett 17:181–190
Xu X, Hung P-F, Ye Y (1996) A simplified homogeneous and self-dual linear programming algorithm and its implementation. Ann Oper Res 62:151–171
Yan W-L, Pan P-Q (2001) Improvement of the subproblem in the bisection simplex algorithm (in Chinese). J Southeast Univ 31:324–241
Yan A, Pan P-Q (2005) Variation of the conventional pivot rule and the application in the deficient basis algorithm (in Chinese). Oper Res Manag Sci 14:28–33
Yan H-Y, Pan P-Q (2009) Most-obtuse-angle criss-cross algorithm for linear programming (in Chinese). Numer Math J Chin Univ 31:209–215
Yang X-Y, Pan P-Q (2006) Most-obtuse-angle dual relaxation algorithm (in Chinese). In: Proceedings of the national conference of operations research society of China, Shenzhen. Global-Link Informatics, Hong Kong, pp 150–155
Ye Y (1987) Eliminating columns in the simplex method for linear programming. Technical report SOL 87-14, Department of Operations Research, Stanford University, Stanford
Ye Y (1990) A ‘build-down’ scheme for linear programming. Math Program 46:61–72
Ye Y (1997) Interior point algorithms: theory and analysis. Wiley, New York
Ye Y, Todd MJ, Mizuno S (1994) An \(O(\sqrt{n}L)\)-iteration homogeneous and self-dual linear programming algorithm. Math Oper Res 19:53–67
Zhang H, Pan P-Q (2008) An interior-point algorithm for linear programming (in Chinese). In: Proceedings of the national conference of operations research society of China, Nanjing. Global-Link Informatics, Hong Kong, pp 183–187
Zhang J-Z, Xu S-J (1997) Linear programming (in Chinese). Science Press, Beijing
Zhang L-H, Yang W-H, Liao L-Z (2013) On an efficient implementation of the face algorithm for linear programming. J Comp Math 31:335–354
Zhou Z-J, Pan P-Q, Chen S-F (2009) Most-obtuse-angle relaxation algorithm (in Chinese). Oper Res Manag Sci 18:7–10
Zionts S (1969) The criss-cross method for solving linear programming problems. Manag Sci 15:420–445
Zlatev Z (1980) On some pivotal strategies in Gaussian elimination by sparse technique. SIAM J Numer Anal 17:12–30
Zörnig P (2006) Systematic construction of examples for cycling in the simplex method. Comput Oper Res 33:2247–2262
Zoutendijk G (1960) Methods of feasible directions. Elsevier, Amsterdam
© 2014 Springer-Verlag Berlin Heidelberg
PAN, PQ. (2014). Variants of the Simplex Method. In: Linear Programming Computation. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40754-3_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-40753-6
Online ISBN: 978-3-642-40754-3
eBook Packages: Mathematics and Statistics (R0)