
Besides the simplex method and the dual simplex method, a number of their variants have been proposed in the past. To take advantage of both types, attempts have been made to combine them. First, two important variants will be presented in the following two sections, both prefixed with “primal-dual” because they execute primal as well as dual simplex steps, though they are based on different ideas. More recent variants of this type will be presented later in Chap. 18.

In the remaining sections, the primal and dual simplex methods are generalized to handle bounded-variable LP problems, which are common in practice.

1 Primal-Dual Simplex Method

The primal-dual method (Dantzig et al. 1956) presented in this section is an extension of the method of the same name (Ford and Fulkerson 1956) for solving transportation problems.

Just like the dual simplex method, this method proceeds toward primal feasibility while maintaining dual feasibility and complementarity. However, the two pursue primal feasibility in different ways. The former attempts to fulfil x ≥ 0 while maintaining Ax = b, whereas the latter attempts to get rid of the artificial variables in the auxiliary Phase-I program so as to fulfil Ax = b while keeping x ≥ 0.

We are concerned with the standard LP problem (1.8), whose dual problem is (4.2). Let \((\bar{y},\bar{z})\) be the current dual feasible solution, satisfying \({A}^{T}\bar{y} +\bar{ z} \leq c\).

To obtain a primal solution matching \((\bar{y},\bar{z})\), consider the auxiliary program (3.16), written as

$$\displaystyle{ \begin{array}{l@{\quad }l} \min \quad &\zeta = {e}^{T}x_{a}, \\ \mathrm{s.t.}\quad &Ax + x_{a} = b,\qquad x,x_{a} \geq 0,\\ \quad \end{array} }$$
(7.1)

where \(x_{a} = {(x_{n+1},\ldots,x_{n+m})}^{T}\) is an artificial variable vector. Without loss of generality, assume that b ≥ 0. Introducing the index set

$$\displaystyle{ Q =\{ j \in A\ \vert \ \bar{z}_{j} = 0\}, }$$
(7.2)

define the so-called “restricted program”:

$$\displaystyle{ \begin{array}{l@{\quad }l} \min \quad &\zeta = {e}^{T}x_{a}, \\ \mathrm{s.t.}\quad &Ax + x_{a} = b,\qquad x_{a} \geq 0, \\ \quad &x_{j} \geq 0,\qquad j \in Q, \\ \quad &x_{j} = 0,\qquad j\notin Q.\\ \quad \end{array} }$$
(7.3)

Since b ≥ 0, it is clear that the feasible region of the preceding program is nonempty, and hence there is an optimal solution to it. The restricted program may be viewed as one formed by all artificial columns and columns indexed by j belonging to Q.

Assume that \((\bar{x},\bar{x}_{a})\) is an optimal solution to (7.3) with optimal value \(\bar{\zeta }\), and that \(\bar{w}\) is the associated optimal simplex multiplier.

Theorem 7.1.1.

If the optimal value \(\bar{\zeta }\) vanishes, \(\bar{x}\) and \((\bar{y},\bar{z})\) are a pair of primal and dual optimal solutions.

Proof.

\(\bar{\zeta }= {e}^{T}\bar{x}_{a} = 0\) and \(\bar{x}_{a} \geq 0\) together imply that \(\bar{x}_{a} = 0\). Thus, \(\bar{x}\) is a feasible solution to the original problem (4.1). By the definition of Q, moreover, it holds that \(\bar{{x}}^{T}\bar{z} = 0\), which exhibits complementarity. Therefore, \(\bar{x}\) and \((\bar{y},\bar{z})\) are a pair of primal and dual optimal solutions. □ 

Otherwise, when \(\bar{\zeta }> 0\), \(\bar{x}\) may be regarded as the closest to feasibility among all solutions complementary with \((\bar{y},\bar{z})\). Nevertheless, \(\bar{x}\) is not feasible to the original problem, because it satisfies x ≥ 0 but not Ax = b. It should then be possible to improve \((\bar{y},\bar{z})\) by increasing the associated dual objective value. To do so, consider the dual program of (7.3) in the form

$$\displaystyle{ \begin{array}{l@{\quad } l} \max \quad &{b}^{T}w, \\ \mathrm{s.t.}\quad &a_{j}^{T}w + s_{j} = 0,\quad s_{j} \geq 0,\quad j \in Q, \\ \quad &w \leq e.\\ \quad \end{array} }$$
(7.4)

Since the simplex multiplier vector \(\bar{w}\) is just an optimal solution to the preceding program, it follows from the duality that

$$\displaystyle{{b}^{T}\bar{w} =\bar{\zeta }> 0,}$$

which implies that \(\bar{w}\) is an uphill direction with respect to the objective b T y of the dual problem (4.2). This leads to the following line search scheme for updating \((\bar{y},\bar{z})\):

$$\displaystyle{ \hat{y} =\bar{ y} +\beta \bar{ w},\qquad \hat{z} = c - {A}^{T}\hat{y}. }$$
(7.5)

For \(\hat{y}\) to be an improved dual feasible solution, it must satisfy the dual constraints for some β > 0, i.e.,

$$\displaystyle{ \hat{z} = c - {A}^{T}(\bar{y} +\beta \bar{ w}) =\bar{ z} +\beta \bar{ s} \geq 0,\qquad \bar{s} = -{A}^{T}\bar{w}. }$$
(7.6)

Since \(\bar{z} \geq 0\), and \(\bar{w}\) satisfies the constraints of (7.4), it is known that

$$\displaystyle{\bar{s}_{j} = -a_{j}^{T}\bar{w} \geq 0,\qquad \forall j \in Q.}$$

Therefore, if index set

$$\displaystyle{ J =\{ j \in A\setminus Q\ \vert \ \bar{s}_{j} = -a_{j}^{T}\bar{w} < 0\} }$$
(7.7)

is empty, then (7.6) holds for all β ≥ 0, giving a class of dual feasible solutions. Since \(\bar{\zeta }> 0\), the associated dual objective value

$$\displaystyle{{b}^{T}\hat{y} = {b}^{T}\bar{y} +\beta {b}^{T}\bar{w} = {b}^{T}\bar{y}+\beta \bar{\zeta }}$$

tends to +∞ as β increases without bound. This implies dual unboundedness, hence primal infeasibility.

If, otherwise, there is some \(j \in A\setminus Q\) such that \(\bar{s}_{j} = -a_{j}^{T}\bar{w} < 0\), then (7.6) holds for the largest possible stepsize β such that

$$\displaystyle{ \beta = -\frac{\bar{z}_{q}} {\bar{s}_{q}} =\min \left \{-\frac{\bar{z}_{j}} {\bar{s}_{j}}\ \vert \ \bar{s}_{j} < 0,\ j \in A\setminus Q\right \} > 0. }$$
(7.8)

Thus, the resulting dual solution is feasible, corresponding to a strictly larger dual objective value. It is then used for the next iteration.

Let B be the optimal basis of the restricted program. If a column of B is not artificial, it must be indexed by some j ∈ Q such that \(\bar{z}_{j} = 0\). Since the associated reduced cost is zero, i.e., \(\bar{s}_{j} = 0 - a_{j}^{T}\bar{w} = 0\), it holds that

$$\displaystyle{\hat{z}_{j} =\bar{ z}_{j} +\beta \bar{ s}_{j} = 0,}$$

implying that j also belongs to the next Q. Therefore, the optimal basis of the restricted program can be used as a starting basis for the next iteration. In addition, it is seen from (7.8) that at least one index (e.g., q) in \(A\setminus Q\) belongs to the next Q, and the associated reduced cost is negative, i.e., \(\bar{s}_{q} < 0\). In other words, there exist new candidates to enter the basis in the next iteration. Therefore, the restricted program in each iteration can be solved by applying the primal simplex method to the original auxiliary program (7.1) itself, except that the choice of columns entering the basis is restricted to those indexed by \(j \in Q \cap N\). Once an artificial variable leaves the basis, it is dropped from the auxiliary program immediately.

It is clear that optimality of the restricted program is achieved if \(Q \cap N = \varnothing \). In the case when the initial set Q is empty, for instance, the artificial columns just form an optimal basis and the optimal multiplier is \(\bar{w} = e\); so no simplex step is needed in the first iteration.

The steps can be summarized into the following algorithm, the meanings of whose exits are clear.

Algorithm 7.1.1 (Primal-dual simplex algorithm).

Initial: a dual feasible solution \((\bar{y},\bar{z})\), and associated Q defined by (7.2). \(B =\{ n + 1,\ldots,n + m\},\ N =\{ 1,\ldots,n\}\). This algorithm solves the standard LP problem (1.8).

  1. Carry out simplex steps to solve the restricted auxiliary program (7.1).

  2. Stop if the optimal value of the restricted program vanishes (optimality achieved).

  3. Stop if J defined by (7.7) is empty (infeasible problem).

  4. Compute β by (7.8).

  5. Update \((\bar{y},\bar{z})\) by (7.5).

  6. Update Q by (7.2).

  7. Go to step 1.

Although the simplex method was used to solve the restricted program, any method for solving it applies. The primal-dual simplex method seems particularly amenable to certain network flow problems, since the labeling method solves the restricted program more efficiently and an initial dual feasible solution is easy to obtain (Papadimitriou and Steiglitz 1982).

It is noted that the objective value corresponding to the dual feasible solution increases monotonically iteration by iteration. Therefore, the primal-dual method terminates if each restricted program encountered is solved in finitely many subiterations; this is, however, not guaranteed when the simplex method is utilized.

Example 7.1.1.

Solve the following problem by Algorithm 7.1.1:

$$\displaystyle{\begin{array}{l@{\;\;}rrrrrrrrrrrr} \min \;\;&\multicolumn{11}{l}{2x_{1} + 5x_{2} + x_{3} + 4x_{4} + 8x_{5},} \\ \mathrm{s.t.}\;\;&& - x_{1} & +&4x_{2} & -&2x_{3} & +&2x_{4} & -&6x_{5} & =& - 1, \\ \;\;&& x_{1} & +&2x_{2} & +&2x_{3} & & & -&4x_{5} & =& 8, \\ \;\;&& - x_{1} & +& x_{2} & & & +&2x_{4} & +&2x_{5} & =& 2, \\ \;\;&\multicolumn{10}{r} {x_{j} \geq 0,\ j = 1,\ldots,5.}\end{array} }$$

Answer Construct the auxiliary program below:

$$\displaystyle{\begin{array}{l@{\;\;}rrrrrrrrrrrrrrrrrr} \min \;\;&\multicolumn{18}{l}{\zeta = x_{6} + x_{7} + x_{8},} \\ \mathrm{s.t.}\;\;&& x_{1} & -&4x_{2} & +&2x_{3} & -&2x_{4} & +&6x_{5} & +&x_{6} & & & & & =&1, \\ \;\;&& x_{1} & +&2x_{2} & +&2x_{3} & & & -&4x_{5} & & & +&x_{7} & & & =&8, \\ \;\;&& - x_{1} & +& x_{2} & +& & &2x_{4} & +&2x_{5} & & & & & +&x_{8} & =&2, \\ \;\;&\multicolumn{16}{r} {x_{j} \geq 0,\ j = 1,\ldots,8.}\end{array} }$$

Initial: \(B =\{ 6,7,8\},\ N =\{ 1,\ldots,5\}\). Since the costs of the original problem are positive, a dual feasible solution \((\bar{y} = {(0,0,0)}^{T},\bar{z} = {(2,5,1,4,8)}^{T})\) is available, with \(Q = \varnothing \).

Iteration 1:

  1. Since \(Q = \varnothing \), no simplex step is needed.

  2. The optimal value of the restricted program is positive, and the optimal simplex multiplier is \(\bar{w} = {(1,1,1)}^{T}\).

  3. \(\;\,\bar{s}_{J} = {(-1,-4,-4)}^{T},\ J =\{ 1,3,5\}\neq \varnothing.\)

  4. \(\;\,\bar{z}_{J} = {(2,1,8)}^{T},\ \beta =\min \{ 2/1,1/4,8/4\} = 1/4,\ q = 3.\)

  5. \(\;\,\bar{y} = {(0,0,0)}^{T} + 1/4{(1,1,1)}^{T} = {(1/4,1/4,1/4)}^{T},\)

    \(\;\bar{z}_{N} = \left (\begin{array}{c} 2\\ 5 \\ 1\\ 4 \\ 8\\ \end{array} \right )-{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 4\;\;\;&2\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & 2\;\;\; &2\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 1\;\;\;& \;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} 1/4 \\ 1/4 \\ 1/4\\ \end{array} \right ) = \left (\begin{array}{l} 7/4 \\ 21/4 \\ 0\\ 4 \\ 7\\ \end{array} \right ).\)

  6. Q = { 3}.

Iteration 2:

  1. Carry out restricted simplex steps of Algorithm 3.5.1:

    Subiteration 1:

    (2) Column selection is restricted to \(Q \cap N =\{ 3\}\); \(x_{3}\) enters the basis.

    (4) \(\;\,\bar{a}_{3} = a_{3} = {(2,2,0)}^{T}\not\leq 0.\)

    (6) \(\;\,\bar{x}_{B} = {(1,8,2)}^{T},\ \alpha =\min \{ 1/2,8/2\} = 1/2,\ p = 1,\ x_{6}\ \mathrm{leaves\ the\ basis,\ and\ is\ dropped.}\)

    (7) \(\;\,\bar{x}_{B} = {(1,8,2)}^{T} - 1/2{(2,2,0)}^{T} = {(0,7,2)}^{T},\ \bar{x}_{3} = 1/2.\)

    (8) \(\;\,{B}^{-1} = \left (\begin{array}{@{}l@{\;\;\;}c@{\;\;\;}c@{}} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right ).\)

    (9) \(\;\,B =\{ 3,7,8\},\ N =\{ 1,2,4,5\}.\)

    Subiteration 2:

    (1) \(\;\,\bar{w}\;\quad ={ \left (\begin{array}{@{}l@{\;\;\;}c@{\;\;\;}c@{}} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{c} 0\\ 1 \\ 1\\ \end{array} \right ) = \left (\begin{array}{r} - 1\\ 1 \\ 1\\ \end{array} \right ),\)

    \(\;\,\bar{s}_{N}\; = -{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 4\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & 2\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{r} - 1\\ 1 \\ 1\\ \end{array} \right ) = \left (\begin{array}{@{}r@{}} 1\\ - 7 \\ - 4\\ 8\\ \end{array} \right ).\)

    (2) \(\;\,Q \cap N =\{ 3\} \cap \{ 1,2,4,5\} = \varnothing.\)

  2. The optimal value of the restricted program is positive.

  3. \(\;\,\bar{s}_{J} = {(-7,-4)}^{T},\ J =\{ 2,4\}\neq \varnothing.\)

  4. \(\;\,\bar{z}_{J} = {(21/4,4)}^{T},\ \beta =\min \{ (21/4)/7,4/4\} = 3/4,\ q = 2.\)

  5. \(\;\,\bar{y} = {(1/4,1/4,1/4)}^{T} + 3/4{(-1,1,1)}^{T} = {(-1/2,1,1)}^{T}.\)

    \(\;\,\bar{z}_{N} = \left (\begin{array}{c} 2\\ 5 \\ 4\\ 8\\ \end{array} \right )-{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 4\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & 2\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} - 1/2 \\ 1\\ 1 \\ \end{array} \right ) = \left (\begin{array}{l} 5/2 \\ 0\\ 1 \\ 13\\ \end{array} \right ).\)

  6. Q = { 3, 2}.

Iteration 3:

  1. Carry out restricted simplex steps of Algorithm 3.5.1:

    Subiteration 1:

    (2) Column selection is restricted to \(Q \cap N =\{ 2\}\); \(x_{2}\) enters the basis.

    (4) \(\;\,\bar{a}_{2} = \left (\begin{array}{l@{\;\;\;}c@{\;\;\;}c} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right )\left (\begin{array}{r} - 4\\ 2 \\ 1\\ \end{array} \right ) = \left (\begin{array}{r} - 2\\ 6 \\ 1\\ \end{array} \right )\not\leq 0.\)

    (6) \(\;\,\bar{x}_{B} = {(1/2,7,2)}^{T},\ \alpha =\min \{ 7/6,2/1\} = 7/6,\ p = 2,\ x_{7}\ \mathrm{leaves\ the\ basis,\ and\ is\ dropped.}\)

    (7) \(\;\,\bar{x}_{B} = {(1/2,7,2)}^{T} - 7/6{(-2,6,1)}^{T} = {(17/6,0,5/6)}^{T},\ \bar{x}_{2} = 7/6.\)

    (8) \(\;\,{B}^{-1} = \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& 1/3\;\;\;& \\ \;\;\;& 1/6\;\;\;& \\ \;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{l@{\;\;\;}c@{\;\;\;}c} 1/2\;\;\;& \;\;\;& \\ - 1 \;\;\;&1\;\;\;& \\ \;\;\; & \;\;\; &1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right ).\)

    (9) \(\;\,B =\{ 3,2,8\},\ N =\{ 1,4,5\}.\)

    Subiteration 2:

    (1) \(\;\,\bar{w}\;\quad ={ \left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{c} 0\\ 0 \\ 1\\ \end{array} \right ) = \left (\begin{array}{l} 1/6 \\ - 1/6 \\ 1\\ \end{array} \right ),\)

    \(\;\,\bar{s}_{N} = -{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} 1/6 \\ - 1/6 \\ 1\\ \end{array} \right ) = \left (\begin{array}{r} 1 \\ - 5/3 \\ - 11/3\\ \end{array} \right ).\)

    (2) \(\;\,Q \cap N =\{ 3,2\} \cap \{ 1,4,5\} = \varnothing.\)

  2. The optimal value of the restricted program is positive.

  3. \(\;\,\bar{s}_{J} = {(-5/3,-11/3)}^{T},\ J =\{ 4,5\}\neq \varnothing.\)

  4. \(\;\,\bar{z}_{J} = {(1,13)}^{T},\ \beta =\min \{ 1/(5/3),13/(11/3)\} = 3/5,\ q = 4.\)

  5. \(\;\,\bar{y} = {(-1/2,1,1)}^{T} + 3/5{(1/6,-1/6,1)}^{T} = {(-2/5,9/10,8/5)}^{T},\)

    \(\;\,\bar{z}_{N} = \left (\begin{array}{c} 2\\ 4 \\ 8\\ \end{array} \right )-{\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}r} 1\;\;\;& - 2\;\;\;& 6\\ 1\;\;\; & \;\;\; & - 4 \\ - 1\;\;\;& 2\;\;\;& 2\\ \;\;\;\end{array} \right )}^{T}\left (\begin{array}{l} - 2/5 \\ 9/10 \\ 8/5\\ \end{array} \right ) = \left (\begin{array}{l} 31/10 \\ 0 \\ 54/5\\ \end{array} \right ).\)

  6. Q = { 3, 2, 4}.

Iteration 4:

  1. Carry out restricted simplex steps of Algorithm 3.5.1:

    Subiteration 1:

    (2) Column selection is restricted to \(Q \cap N =\{ 3,2,4\} \cap \{ 1,4,5\} = \{4\}\); \(x_{4}\) enters the basis.

    (4) \(\bar{a}_{4} = \left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{r} - 2\\ 0 \\ 2\\ \end{array} \right ) = \left (\begin{array}{r} - 1/3 \\ 1/3 \\ 5/3\\ \end{array} \right )\not\leq 0\).

    (6) \(\bar{x}_{B} = {(17/6,7/6,5/6)}^{T},\ \alpha =\min \{ (7/6)/(1/3),(5/6)/(5/3)\} = 1/2,\ p = 3\); \(x_{8}\) leaves the basis, and is dropped.

    (7) \(\bar{x}_{B} = {(17/6,7/6,5/6)}^{T} - 1/2{(-1/3,1/3,5/3)}^{T} = {(3,1,0)}^{T}\), \(\bar{x}_{4} = 1/2\).

    (8) \({B}^{-1} =\! \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}r} 1\;\;\;& \;\;\;& 1/5 \\ \;\;\;& \;\;\;& - 1/5 \\ \;\;\;&1\;\;\;& 3/5\\ \;\;\;\end{array} \right )\!\left (\begin{array}{r@{\;\;\;}r@{\;\;\;}c} 1/6\;\;\;& 1/3\;\;\;& \\ - 1/6\;\;\;& 1/6\;\;\;& \\ 1/6\;\;\;& - 1/6\;\;\;&1\\ \;\;\;\end{array} \right )\! =\! \left (\begin{array}{l@{\;\;\;}l@{\;\;\;}r} 1/5 \;\;\;& 3/10 \;\;\;& 1/5 \\ - 1/5\;\;\;& 1/5 \;\;\;& - 1/5 \\ 1/10\;\;\;& - 1/10\;\;\;& 3/5\\ \;\;\;\end{array} \right )\).

    (9) \(B =\{ 3,2,4\},\ N =\{ 1,5\}\).

  2. The optimal value of the restricted program is zero; optimality is achieved. The optimal solution and objective value are

    $$\displaystyle{\bar{x} = {(0,1,3,1/2,0)}^{T},\qquad \bar{f} = 10.}$$
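The iterations above can also be retraced mechanically. The following is only a rough Python sketch of Algorithm 7.1.1 (the function name, tolerances, and structure are my own, not the book's): each restricted program (7.3) is solved as a bound-restricted version of the auxiliary program (7.1) by SciPy's HiGHS-based `linprog`, and the multiplier \(\bar{w}\) of (7.4) is read from the solver's equality-constraint marginals (documented as the sensitivity of the objective to the right-hand side).

```python
# A sketch of Algorithm 7.1.1 (hypothetical helper, not the book's code).
# Each restricted program (7.3) is solved by scipy's linprog; the optimal
# multiplier w of (7.4) is read from the equality-constraint marginals.
import numpy as np
from scipy.optimize import linprog

def primal_dual_simplex(A, b, c, tol=1e-9, max_iter=100):
    A, b, c = (np.asarray(v, float) for v in (A, b, c))
    m, n = A.shape
    flip = np.where(b < 0, -1.0, 1.0)      # make b >= 0, as assumed for (7.1)
    A, b = A * flip[:, None], b * flip
    y = np.zeros(m)                        # needs c >= 0 so that (y, z) below
    z = c - A.T @ y                        # is dual feasible at the start
    caux = np.r_[np.zeros(n), np.ones(m)]  # zeta = e^T x_a
    Aaux = np.hstack([A, np.eye(m)])
    for _ in range(max_iter):
        Q = z <= tol                       # index set (7.2)
        bounds = [(0, None) if Q[j] else (0, 0) for j in range(n)]
        res = linprog(caux, A_eq=Aaux, b_eq=b,
                      bounds=bounds + [(0, None)] * m, method="highs")
        if res.fun <= tol:                 # zeta = 0: Theorem 7.1.1 applies
            return res.x[:n], y, z
        w = res.eqlin.marginals            # dual solution of (7.4)
        if b @ w < 0:                      # duality gives b^T w = zeta > 0;
            w = -w                         # guard against sign conventions
        s = -A.T @ w
        J = (~Q) & (s < -tol)              # index set (7.7)
        if not J.any():
            raise ValueError("infeasible problem (dual objective unbounded)")
        beta = np.min(-z[J] / s[J])        # stepsize (7.8)
        y, z = y + beta * w, z + beta * s  # update (7.5)
    raise RuntimeError("iteration limit reached")

A = [[-1, 4, -2, 2, -6], [1, 2, 2, 0, -4], [-1, 1, 0, 2, 2]]
b, c = [-1, 8, 2], [2, 5, 1, 4, 8]         # data of Example 7.1.1
x, y, z = primal_dual_simplex(A, b, c)
```

On the data of Example 7.1.1 the sketch should retrace the four iterations above and return \(\bar{x} = {(0,1,3,1/2,0)}^{T}\) with objective value 10.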

2 Self-Dual Parametric Simplex Method

Based on the discussions made in Sect. 6.4, it is not difficult to arrive at a method for solving problems with the costs and the right-hand side both parameterized, i.e.,

$$\displaystyle{ \begin{array}{ll} \min &f = {(c +\theta c')}^{T}x, \\ \mathrm{s.t.}&Ax = b +\theta b',\qquad x \geq 0.\\ \end{array} }$$
(7.9)

In this section, we will solve the standard LP problem via handling the preceding parametric program. This method is closely related to Orchard-Hays’ work (1956), and has been used by Smale (1983b) for investigating the worst-case complexity of the simplex method.

The method belongs to a more general approach, the so-called “homotopy” approach, which generates a continuous deformation converting a given problem into a related but trivially solved one, and then proceeds backwards from the latter to the original by solving all the problems in between. It is seen that the standard problem (1.8) is just the parametric program (7.9) with θ = 0.

Assume that a simplex tableau to the standard LP problem is available, which is neither primal nor dual feasible. It is a simple matter to determine a value θ = θ 2 > 0 such that the objective row and the right-hand side both become nonnegative after θ is added to them appropriately. Doing so amounts to adding terms θc′ and θb′ to the costs and the right-hand side of the original problem, which corresponds to \(\theta =\theta _{1} = 0\). Then, θ is decreased from θ 2 toward 0 while optimality is maintained. If primal feasibility is violated first in this process, a row index p and a new θ 2 are determined, and then a column index q is determined by the dual simplex ratio test. If, otherwise, dual feasibility is violated first, a column index q and a new θ 2 are determined, and a row index p is determined by the primal simplex ratio test. The subsequent operations in the iteration are just a normal basis change.

Assume that the current simplex tableau is optimal for θ = θ 2, i.e.,

$$\displaystyle{ \begin{array}{cc|cc} \quad x_{B}^{T}\quad & x_{N}^{T} \quad &\mathrm{RHS} \\ \hline I & \quad \bar{N} \quad &\bar{b} +\bar{ b}'\theta \\ \hline &\bar{z}_{N}^{T} + (\bar{z}')_{N}^{T}\theta \quad & -\bar{ f} \\ \hline \end{array} }$$
(7.10)

The procedure is put into the following algorithm, where the parametric program with θ = 0 corresponds to the original problem.

Algorithm 7.2.1 (Self-dual parametric algorithm: tableau form).

Given θ 2 > 0. Initial: a simplex tableau of the form (7.10), which is optimal for θ = θ 2. This algorithm solves the standard LP problem.

  1. If \(\bar{z}'_{N} \leq 0\), set α = 0; else, determine column index q and α such that

    $$\displaystyle{\alpha = -\bar{z}_{q}/\bar{z}'_{q} =\max \{ -\bar{z}_{j}/\bar{z}'_{j}\ \vert \ \bar{z}'_{j} > 0,\ j \in N\}.}$$

  2. If \(\bar{b}' \leq 0\), set β = 0; else, determine row index p and β such that

    $$\displaystyle{\beta = -\bar{b}_{p}/\bar{b}'_{p} =\max \{ -\bar{b}_{i}/\bar{b}'_{i}\ \vert \ \bar{b}'_{i} > 0,\ i = 1,\ldots,m\}.}$$

  3. If α ≥ β, do the following:

    (1) If α ≤ 0, set θ 2 = 0 and stop (optimality achieved);

    (2) Stop if \(\bar{a}_{q} \leq 0\) (unbounded);

    (3) Determine row index p such that

    $$\displaystyle{(\bar{b}_{p} +\bar{ b}'_{p}\theta )/\bar{a}_{pq} =\min \{ (\bar{b}_{i} +\bar{ b}'_{i}\theta )/\bar{a}_{iq}\ \mid \ \bar{a}_{iq} > 0,\ i = 1,\ldots,m\},}$$

    where θ is close to θ 2;

    else do the following:

    (4) If β ≤ 0, set θ 2 = 0 and stop (optimality achieved);

    (5) Stop if \(J =\{ j \in N\ \mid \ \bar{a}_{pj} < 0\} = \varnothing \) (infeasible);

    (6) Determine column index q such that

    $$\displaystyle{-(\bar{z}_{q} +\bar{ z}'_{q}\theta )/\bar{a}_{pq} =\min _{j\in J} - (\bar{z}_{j} +\bar{ z}'_{j}\theta )/\bar{a}_{pj},}$$

    where θ is close to θ 2.

  4. If α ≥ β, set θ 2 = α; else set θ 2 = β.

  5. Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.

  6. Go to step 1.

An advantage of the preceding algorithm is that it can solve problems in a single phase, starting from any basis. It is sometimes described as “criss-cross” because of its shuttling between the primal and dual sides, depending on which of α and β is larger (see step 3). Therefore, it seems critical to scale the costs and the right-hand side beforehand, for equilibrium of their magnitudes. On the other hand, the algorithm requires more computational effort per iteration, compared with the simplex algorithm. As a homotopy algorithm, it seems to be more suitable for solving hard problems. At least, it stands good as a tool for handling the parametric program (7.9) itself.

Discussions concerning the preceding algorithm can be made similarly to those for Algorithms 6.4.1 and 6.4.2. The revised version of it is omitted.

Example 7.2.1.

Solve the following problem by Algorithm 7.2.1:

$$\displaystyle{\begin{array}{l@{\;\;}rrrrrrrrrrrr} \min \;\;&\multicolumn{12}{l}{ - 2x_{1} - 3x_{2},} \\ \mathrm{s.t.}\;\;&& x_{1} & +&2x_{2} & & & +&x_{4} & & & =& 2, \\ \;\;&& - 2x_{1} & -& x_{2} & +&x_{3} & & & & & =& - 1, \\ \;\;&& - 3x_{1} & +&4x_{2} & & & & & +&x_{5} & =& - 3, \\ \;\;&\multicolumn{10}{r} {x_{j} \geq 0,\ j = 1,\ldots,5.}\\ \;\;\end{array} }$$

Answer Put the program into the following tableau, with the costs and the right-hand side both parameterized:

$$\displaystyle{\begin{array}{rrrrr|c} x_{1} & x_{2} & x_{3} & x_{4} & x_{5} &\mathrm{RHS} \\\hline 1 & 2 & & 1 & & 2 \\ - 2 & - 1 & 1 & & & - 1+\theta \\ - 3 & {4}^{*} & & & 1 & - 3+\theta \\\hline - 2+\theta & - 3+\theta & & & & \\\end{array} }$$

Given θ 2 = 4 > 0.

Iteration 1:

  1. \(\alpha =\max \{ -(-2)/1,-(-3)/1\} = 3,\ q = 2\).

  2. \(\beta =\max \{ -(-1)/1,-(-3)/1\} = 3,\ p = 3\).

  3. α ≥ β:

    (3) \(\min \{(-3+\theta )/4\} = (-3+\theta )/4,\ p = 3\), where θ is close to 4.

  4. Set θ 2 = 3.

  5. Taking \(q = 2,\ p = 3\), the corresponding basis change leads to

$$\displaystyle{\begin{array}{rrrrr|c} x_{1} & x_{2} & x_{3} & x_{4} & x_{5} &\mathrm{RHS} \\\hline 5/2 & & & 1 & - 1/2 & 7/2 - 1/2\theta \\ - 11/4 & & 1 & & 1/4 & - 7/4 + 5/4\theta \\ {(- 3/4)}^{*} & 1 & & & 1/4 & - 3/4 + 1/4\theta \\\hline - 17/4 + 7/4\theta & & & & 3/4 - 1/4\theta & - 9/4 + 3/2\theta - 1/4{\theta }^{2}\\\end{array} }$$

Iteration 2:

  1. \(\alpha =\max \{ -(-17/4)/(7/4)\} = 17/7,\ q = 1\).

  2. \(\beta =\max \{ -(-7/4)/(5/4),-(-3/4)/(1/4)\} = 3,\ p = 3\).

  3. α < β:

    (6) \(\min \{-(-17/4 + 7/4\theta )/(-3/4)\} = -17/3 + 7/3\theta,\ q = 1\), where θ is close to 3.

  4. Set θ 2 = 3 (a degenerate step).

  5. Taking \(p = 3,\ q = 1\), the corresponding basis change leads to

    Taking \(p = 3,\ q = 1\), according basis change leads to

$$\displaystyle{\begin{array}{rrrrr|c} x_{1} & x_{2} & x_{3} & x_{4} & x_{5} &\mathrm{RHS} \\\hline & {(10/3)}^{*} & & 1 & 1/3 & 1 + 1/3\theta \\ & - 11/3 & 1 & & - 2/3 & 1 + 1/3\theta \\ 1 & - 4/3 & & & - 1/3 & 1 - 1/3\theta \\\hline & - 17/3 + 7/3\theta & & & - 2/3 + 1/3\theta & 2 - 5/3\theta + 1/3{\theta }^{2}\\\end{array} }$$

Iteration 3:

  1. \(\alpha =\max \{ -(-17/3)/(7/3),-(-2/3)/(1/3)\} = 17/7,\ q = 2\).

  2. \(\beta =\max \{ -1/(1/3),-1/(1/3)\} = -3,\ p = 1\).

  3. α > β:

    (3) \(\min \{(1 + 1/3\theta )/(10/3)\},\ p = 1\), where θ is close to 3.

  4. Set \(\theta _{2} = 17/7\).

  5. Taking \(q = 2,\ p = 1\), the corresponding basis change leads to

$$\displaystyle{\begin{array}{rrrrr|c} x_{1} & x_{2} & x_{3} & x_{4} & x_{5} &\mathrm{RHS} \\\hline & 1 & & 3/10 & {(1/10)}^{*} & 3/10 + 1/10\theta \\ & & 1 & 11/10 & - 3/10 & 21/10 + 7/10\theta \\ 1 & & & 2/5 & - 1/5 & 7/5 - 1/5\theta \\\hline & & & 17/10 - 7/10\theta & - 1/10 + 1/10\theta & 37/10 - 9/5\theta + 1/10{\theta }^{2}\\\end{array} }$$

Iteration 4:

  1. \(\alpha =\max \{ -(-1/10)/(1/10)\} = 1,\ q = 5\).

  2. \(\beta =\max \{ -(3/10)/(1/10),-(21/10)/(7/10)\} = -3,\ p = 1\).

  3. α > β:

    (3) \(\min \{(3/10 + 1/10\theta )/(1/10)\},\ p = 1\), where θ is close to 17∕7.

  4. Set θ 2 = 1.

  5. Taking \(q = 5,\ p = 1\) as pivot, the corresponding basis change leads to

$$\displaystyle{\begin{array}{rrrrr|c} x_{1} & x_{2} & x_{3} & x_{4} & x_{5} &\mathrm{RHS} \\\hline & 10 & & 3 & 1 & 3 +\theta \\ & 3 & 1 & 2 & & 3 +\theta \\ 1 & 2 & & 1 & & 2 \\\hline & 1 -\theta & & 2 -\theta & & 4 - 2\theta \\\end{array} }$$

Iteration 5:

  1. α = 0.

  2. \(\beta =\max \{ -3/1,-3/1\} = -3,\ p = 1\).

  3. α > β:

    (1) θ 2 = 0. The basic optimal solution and associated objective value are

    $$\displaystyle{\bar{x} = {(2,0,3,0,3)}^{T},\qquad \bar{f} = -4.}$$
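For reference, the tableau iterations of this example can be retraced with a small dense implementation of Algorithm 7.2.1. This is only a sketch under my own naming (not the book's code); "θ close to θ 2" in the two ratio tests is realized by evaluating them at θ 2 − 10⁻⁷.

```python
# A dense tableau sketch of Algorithm 7.2.1 (hypothetical naming, not the
# book's code).  T holds [B^{-1}A | b-bar | b'-bar]; z and z1 hold the
# reduced costs and their parametric parts.
import numpy as np

def self_dual_parametric(A, b, c, b1, c1, basis, theta2, tol=1e-9):
    A = np.asarray(A, float)
    m, n = A.shape
    T = np.hstack([A, np.column_stack([b, b1]).astype(float)])
    z, z1 = np.asarray(c, float), np.asarray(c1, float)
    for _ in range(100):                   # iteration guard for this sketch
        bb, bb1 = T[:, n], T[:, n + 1]
        alpha, q = 0.0, None               # step 1: alpha and q (cost row)
        if np.any(z1 > tol):
            cand = np.where(z1 > tol, -z / np.where(z1 > tol, z1, 1.0), -np.inf)
            q, alpha = int(np.argmax(cand)), np.max(cand)
        beta, p = 0.0, None                # step 2: beta and p (right-hand side)
        if np.any(bb1 > tol):
            cand = np.where(bb1 > tol, -bb / np.where(bb1 > tol, bb1, 1.0), -np.inf)
            p, beta = int(np.argmax(cand)), np.max(cand)
        th = theta2 - 1e-7                 # "theta close to theta_2"
        if alpha >= beta:                  # primal simplex ratio test for the row
            if alpha <= tol:
                break                      # theta_2 reaches 0: optimality
            rows = np.where(T[:, q] > tol)[0]
            if rows.size == 0:
                raise ValueError("unbounded problem")
            p = rows[np.argmin((bb[rows] + th * bb1[rows]) / T[rows, q])]
            theta2 = alpha
        else:                              # dual simplex ratio test for the column
            if beta <= tol:
                break
            J = np.where(T[p, :n] < -tol)[0]
            if J.size == 0:
                raise ValueError("infeasible problem")
            q = J[np.argmin(-(z[J] + th * z1[J]) / T[p, J])]
            theta2 = beta
        T[p] /= T[p, q]                    # basis change (step 5)
        for i in range(m):
            if i != p:
                T[i] -= T[i, q] * T[p]
        z, z1 = z - z[q] * T[p, :n], z1 - z1[q] * T[p, :n]
        basis[p] = q
    else:
        raise RuntimeError("iteration limit reached")
    x = np.zeros(n)
    x[basis] = T[:, n]                     # basic solution at theta = 0
    return x

# data of Example 7.2.1: costs -2+theta, -3+theta; RHS 2, -1+theta, -3+theta
A = [[1, 2, 0, 1, 0], [-2, -1, 1, 0, 0], [-3, 4, 0, 0, 1]]
x = self_dual_parametric(A, b=[2, -1, -3], c=[-2, -3, 0, 0, 0],
                         b1=[0, 1, 1], c1=[1, 1, 0, 0, 0],
                         basis=[3, 2, 4], theta2=4.0)
```

On this data the sketch should perform the same four pivots as above and return \(\bar{x} = {(2,0,3,0,3)}^{T}\) with objective value −4.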

3 General LP Problems

So far, we have presented methods for solving standard LP problems. Nevertheless, models arising from practice are various, and can be put in the more general form below:

$$\displaystyle{ \begin{array}{l@{\quad }l} \min \quad &f = {c}^{\mathrm{T}}x, \\ \mathrm{s.t.}\quad &a \leq Ax \leq b, \\ \quad &l \leq x \leq u,\end{array} }$$
(7.11)

where \(A \in {\mathcal{R}}^{m\times n},\,c,l,u \in {\mathcal{R}}^{n},\,a,b \in {\mathcal{R}}^{m},\,m < n,\,\mathrm{rank}\ A = m\), and a, b, l, u are all given vectors. Problems of this type have not only upper and lower bounds on the variables, but also ranges, i.e., bounds on the variation of Ax. They are usually referred to as problems with ranges and bounds.

Ranges can be eliminated by introducing new variables. In fact, setting w = Ax converts the preceding problem to

$$\displaystyle{ \begin{array}{l@{\quad }l} \min \quad &f = {c}^{\mathrm{T}}x, \\ \mathrm{s.t.}\quad &Ax - w = 0, \\ \quad &l \leq x \leq u, \\ \quad &a \leq w \leq b. \end{array} }$$
(7.12)

Components of x are said to be structural variables, whereas those of w are said to be logical variables.
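The range-elimination step is a purely mechanical matrix operation; a minimal sketch (function name and data are hypothetical) builds the equality form (7.12) from the data of (7.11):

```python
# A sketch of the conversion (7.11) -> (7.12): append logical variables
# w = Ax, and move the ranges a <= Ax <= b onto w as simple bounds.
import numpy as np

def eliminate_ranges(A, a, b, l, u):
    A = np.asarray(A, float)
    m, n = A.shape
    A_eq = np.hstack([A, -np.eye(m)])     # [A  -I](x; w) = 0
    lower = np.concatenate([l, a])        # structural bounds, then ranges
    upper = np.concatenate([u, b])
    return A_eq, np.zeros(m), lower, upper

# one row with range 0 <= x1 + 2 x2 <= 4 and bounds 0 <= x <= 3
A_eq, rhs, lower, upper = eliminate_ranges(
    [[1.0, 2.0]], a=[0.0], b=[4.0], l=[0.0, 0.0], u=[3.0, 3.0])
```

The pair (A_eq, rhs), together with the bounds (lower, upper), is exactly a bounded-variable problem of the form (7.13) in the variables (x; w).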

We will focus on the following bounded-variable problem:

$$\displaystyle{ \begin{array}{l@{\quad }l} \min \quad &f = {c}^{\mathrm{T}}x, \\ \mathrm{s.t.}\quad &Ax = b,\quad l \leq x \leq u,\\ \quad \end{array} }$$
(7.13)

where \(A \in {\mathcal{R}}^{m\times n},\,c,l,u \in {\mathcal{R}}^{n},\,b \in {\mathcal{R}}^{m},\,\mathrm{rank}\ A = m,\,m < n\). Unless indicated otherwise, it is assumed that l and u are finite, and that \(l_{j} < u_{j}\). Infinite upper or lower bounds can be represented by sufficiently large or small reals. Thereby, the standard LP problem (1.8) can be regarded as a special case of the preceding problem.

Clearly, such a problem can be converted to the standard form by variable transformations, though doing so increases the problem's scale. In the following sections, we will generalize the simplex method and the dual simplex method to solve the bounded-variable problem directly.

In the sequel, the following sign function will be useful:

$$\displaystyle{ \mathrm{sign}(t) = \left \{\begin{array}{l@{\quad }l} 1, \quad &\text{if}\ t > 0, \\ - 1,\quad &\text{if}\ t < 0, \\ 0, \quad &\text{if}\ t = 0.\\ \quad \end{array} \right. }$$
(7.14)

Assume that the current basis and nonbasis are

$$\displaystyle{ B =\{ j_{1},\cdots \,,j_{m}\},\quad N = A\setminus B. }$$
(7.15)

4 Generalized Simplex Method

Almost all terms for the standard LP problem are applicable to the bounded-variable problem. A solution to Ax = b is said to be basic if each of its nonbasic components attains one of the associated upper and lower bounds. It is clear that the basic solution associated with a basis is not necessarily unique, in contrast to the basic solution in the standard LP context.

The following results are similar to those for the standard problem, and are stated without proofs.

Lemma 7.4.1.

If there exists a feasible solution to the bounded-variable problem, so does a basic feasible solution; if there exists an optimal solution to it, so does a basic optimal solution.

Therefore, it is possible to find a basic optimal solution from among the basic feasible solutions, which forms the basis of the generalized simplex algorithm.

Let \(\bar{x}\) be a basic feasible solution, associated with B:

$$\displaystyle\begin{array}{rcl} & & \bar{x}_{j} = l_{j}\ \mbox{ or }\ u_{j},\quad j \in N,{}\end{array}$$
(7.16)
$$\displaystyle\begin{array}{rcl} & & l_{B} \leq \bar{ x}_{B} = {B}^{-1}b - {B}^{-1}N\bar{x}_{ N} \leq u_{B}.{}\end{array}$$
(7.17)

The corresponding reduced costs and objective value are

$$\displaystyle{ \bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}{B}^{-T}c_{ B},\quad \bar{f} = c_{B}^{\mathrm{T}}{B}^{-1}b +\bar{ z}_{ N}^{\mathrm{T}}\bar{x}_{ N}. }$$
(7.18)

Define the index sets

$$\displaystyle{ \Gamma =\{\ j \in N\ \vert \ \bar{x}_{j} = l_{j}\},\quad \Pi =\{\ j \in N\ \vert \ \bar{x}_{j} = u_{j}\}. }$$
(7.19)

So it holds that

$$\displaystyle{\Gamma \cup \Pi = N,\quad \Gamma \cap \Pi = \varnothing.}$$

Without confusion, \(\Gamma \) and \(\Pi \) will hereafter also be used to denote the submatrices consisting of the columns indexed by their elements.

Lemma 7.4.2.

A feasible solution \(\bar{x}\) is optimal if the following set is empty:

$$\displaystyle{ J =\{ j \in \Gamma \ \vert \ \bar{z}_{j} < 0\} \cup \{ j \in \Pi \ \vert \ \bar{z}_{j} > 0\}. }$$
(7.20)

Proof.

Let x′ be any feasible solution. Thus it holds that

$$\displaystyle{l_{j} \leq x'_{j} \leq u_{j},\quad j \in N.}$$

It is known by the assumption that

$$\displaystyle{\bar{z}_{j} \geq 0,\ j \in \Gamma;\quad \bar{z}_{j} \leq 0,\ j \in \Pi.}$$

Hence for any j ∈ N, there are two cases arising:

  (i) \(j \in \Gamma \). It follows from \(x'_{j} \geq l_{j} =\bar{ x}_{j}\) that

    $$\displaystyle{ \bar{z}_{j}x'_{j} \geq \bar{ z}_{j}\bar{x}_{j}; }$$
    (7.21)

  (ii) \(j \in \Pi \). From \(x'_{j} \leq u_{j} =\bar{ x}_{j}\), (7.21) follows again. Therefore

    $$\displaystyle{\sum _{j\in N}\bar{z}_{j}x'_{j} \geq \sum _{j\in N}\bar{z}_{j}\bar{x}_{j},}$$

    which implies that

    $$\displaystyle{c_{B}^{\mathrm{T}}{B}^{-1}b +\bar{ z}_{ N}^{\mathrm{T}}x'_{ N} \geq c_{B}^{\mathrm{T}}{B}^{-1}b +\bar{ z}_{ N}^{\mathrm{T}}\bar{x}_{ N}.}$$

The preceding indicates that the objective value at x′ is no less than that at \(\bar{x}\); therefore \(\bar{x}\) is optimal. □ 

Assume now that J is nonempty. Thus, a column index q can be determined by

$$\displaystyle{q \in \arg \max _{j\in J}\ \vert \bar{z}_{j}\vert.}$$

Assuming that q is the tth index, i.e., \(q = j_{t}\) with t > m, define the vector

$$\displaystyle{ \Delta x\stackrel{\bigtriangleup }{=}\left (\begin{array}{@{}c@{}} \Delta x_{B} \\ \Delta x_{N}\\ \end{array} \right ) = \mathrm{sign}(\bar{z}_{q})\left (\begin{array}{@{}c@{}} - {B}^{-1}a_{ q} \\ e_{t-m} \\ \end{array} \right ), }$$
(7.22)

where \(e_{t-m}\) is the (n − m)-dimensional unit vector with the (t − m)th component equal to 1.

Proposition 7.4.1.

\(\Delta x\) satisfies

$$\displaystyle{A\Delta x = 0,\quad {c}^{\mathrm{T}}\Delta x > 0.}$$

Proof.

It is known by (7.22) that

$$\displaystyle{A\Delta x = B\Delta x_{B} + N\Delta x_{N} = \mathrm{sign}(\bar{z}_{q})(-a_{q} + a_{q}) = 0.}$$

From the first formula of (7.18) together with (7.4) and (7.22), it follows that

$$\displaystyle{ -{c}^{\mathrm{T}}\Delta x = \mathrm{sign}(\bar{z}_{q})(c_{B}^{\mathrm{T}}{B}^{-1}a_{q} - c_{q}) = -\mathrm{sign}(\bar{z}_{q})\bar{z}_{q} = -\vert \bar{z}_{q}\vert < 0. }$$
(7.23)

 □ 

The preceding proposition says that \(-\Delta x\) is a descent direction with respect to the objective c T x.

Let α ≥ 0 be a stepsize from \(\bar{x}\) along the direction. The new iterate is then

$$\displaystyle{ \hat{x} =\bar{ x} -\alpha \Delta x. }$$
(7.24)

Thus, since \(\bar{x}\) is feasible, it holds for any α ≥ 0 that

$$\displaystyle{A\hat{x} = A\bar{x} -\alpha (B,N)\Delta x = A\bar{x} = b.}$$

The value of stepsize α should be such that \(\hat{x}\) satisfies \(l \leq \hat{ x} \leq u\). Thereby the largest possible stepsize is

$$\displaystyle{ \alpha =\min \{ u_{q} - l_{q},\min \{\alpha _{i}\ \vert \ i = 1,\cdots \,,m\}\}, }$$
(7.25)

where

$$\displaystyle{ \alpha _{i} = \left \{\begin{array}{lc} (\bar{x}_{j_{i}} - u_{j_{i}})/\Delta x_{j_{i}},&\quad \text{if}\ \Delta x_{j_{i}} < 0, \\ (\bar{x}_{j_{i}} - l_{j_{i}})/\Delta x_{j_{i}}, &\quad \text{if}\ \Delta x_{j_{i}} > 0, \\ \infty, &\quad \text{if}\ \Delta x_{j_{i}} = 0,\\ \end{array} \right.\qquad i = 1,\cdots \,,m. }$$
(7.26)
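In code, the ratio test (7.25)-(7.26) might look as follows (a sketch under the stated conventions; the function name and argument layout are illustrative):

```python
# Largest stepsize alpha of (7.25)-(7.26): each basic x_{j_i} moves by
# -alpha*dx_i, so the blocking bound depends on the sign of dx_i; alpha
# is also capped by the entering variable's own range u_q - l_q.
def max_stepsize(x_B, l_B, u_B, dx_B, range_q):
    alphas = []
    for xi, li, ui, di in zip(x_B, l_B, u_B, dx_B):
        if di < 0:
            alphas.append((xi - ui) / di)   # x_{j_i} increases toward u_{j_i}
        elif di > 0:
            alphas.append((xi - li) / di)   # x_{j_i} decreases toward l_{j_i}
        else:
            alphas.append(float('inf'))
    return min([range_q] + alphas)
```

With the iteration-1 data of Example 7.4.1 below, this returns the stepsize 1.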

Two cases arise:

  1. (i)

    \(\alpha = u_{q} - l_{q}\). In this case, if \(\bar{x}_{q} = l_{q}\), then \(\hat{x}_{q} = u_{q}\); and if \(\bar{x}_{q} = u_{q}\), then \(\hat{x}_{q} = l_{q}\). The new solution \(\hat{x}\) is basic feasible, corresponding to the same basis. Therefore, there is no need for any basis change.

  2. (ii)

    \(\alpha < u_{q} - l_{q}\). Determine row index p ∈ { 1, ⋯ , m} such that

    $$\displaystyle{ \alpha =\alpha _{p}. }$$
    (7.27)

    Then \(\hat{x}_{j_{p}}\) attains its lower bound \(l_{j_{p}}\) or upper bound \(u_{j_{p}}\). In this case, the new basis and nonbasis follow from B and N by exchanging j p and q. In addition, it can be verified that the new solution \(\hat{x}\) is a basic solution corresponding to the new basis.

It is known from (7.23) and (7.24) that the new objective value is

$$\displaystyle{\hat{f} = {c}^{T}\hat{x} = {c}^{\mathrm{T}}\bar{x} -\alpha {c}^{\mathrm{T}}\Delta x = {c}^{\mathrm{T}}\bar{x} -\alpha \vert \bar{z}_{ q}\vert \leq {c}^{\mathrm{T}}\bar{x},}$$

which strictly decreases if α > 0. The preceding expression leads to the recurrence formula of the objective value, i.e.,

$$\displaystyle{\hat{f} =\bar{ f} -\alpha \vert \bar{z}_{q}\vert.}$$

This recurrence formula will not be used at each iteration of the following algorithm, however; instead, the objective value will be computed at the end from the final basis and the original data.

Definition 7.4.1.

A feasible solution is degenerate (with respect to a basis) if one of its basic components is on one of its bounds.

Concerning stepsize α, the following two points should be noted.

  1. (i)

    When the basic solution is degenerate, the value of α can vanish, in which case the basic solution remains unchanged even though the basis changes.

  2. (ii)

    In practice, the problem should be deemed unbounded if the value of α exceeds some sufficiently large number.

From the discussions made above, the following conclusions are attained.

Lemma 7.4.3.

Let \(\bar{x}\) be a basic feasible solution. Then the new solution, determined by (7.22), (7.24), (7.25) and  (7.26) , is a basic feasible solution. The corresponding objective value does not increase, and strictly decreases under the nondegeneracy assumption.

The overall steps are put in the following algorithm.

Algorithm 7.4.1 (Generalized simplex algorithm).

Initial: (B, N), B −1 and associated basic feasible solution \(\bar{x}\). This algorithm solves bounded-variable problem (7.13).

  1. 1.

    Compute \(\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}\bar{y}\), where \(\bar{y} = {B}^{-T}c_{B}\).

  2. 2.

    Compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop if set J defined by (7.20) is empty.

  3. 3.

    Select column index q such that \(q \in \arg \max _{j\in J}\vert \bar{z}_{j}\vert \).

  4. 4.

    Compute \(\Delta x_{B} = -\mathrm{sign}(\bar{z}_{q}){B}^{-1}a_{q}\).

  5. 5.

    Determine stepsize α by (7.25) and (7.26).

  6. 6.

    Update \(\bar{x}\) by (7.24) and (7.22).

  7. 7.

    Go to step 1 if \(\alpha = u_{q} - l_{q}\); else determine row index p ∈ { 1, ⋯ , m} such that α = α p .

  8. 8.

    Update B −1 by (3.23).

  9. 9.

    Update (B, N) by exchanging j p and q.

  10. 10.

    Go to step 1.
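For readers who prefer running code, the whole of Algorithm 7.4.1 can be sketched with numpy as below. This is not the book's implementation: it recomputes the relevant \(B^{-1}\)-solves with np.linalg.solve instead of updating \(B^{-1}\) by (3.23), uses a small tolerance on the reduced-cost signs, and snaps variables onto their bounds so that the at-bound tests stay exact; all names are illustrative.

```python
import numpy as np

def generalized_simplex(A, c, l, u, B, x, tol=1e-9):
    """Bounded-variable simplex sketch, started from a basic feasible x
    whose nonbasic components sit exactly on their bounds (so Ax = b)."""
    A, c, l, u = (np.asarray(t, float) for t in (A, c, l, u))
    x, B = np.asarray(x, float).copy(), list(B)
    while True:
        N = [j for j in range(A.shape[1]) if j not in B]
        y = np.linalg.solve(A[:, B].T, c[B])               # step 1
        z = c - A.T @ y                                    # reduced costs
        J = [j for j in N if (x[j] == l[j] and z[j] < -tol)
                          or (x[j] == u[j] and z[j] > tol)]
        if not J:                                          # step 2: optimal
            return x, float(c @ x)
        q = max(J, key=lambda j: abs(z[j]))                # step 3
        sgn = 1.0 if z[q] > 0 else -1.0
        dxB = -sgn * np.linalg.solve(A[:, B], A[:, q])     # step 4: Delta x_B
        alphas = [(x[ji] - u[ji]) / d if d < -tol else     # step 5: (7.26)
                  (x[ji] - l[ji]) / d if d > tol else np.inf
                  for ji, d in zip(B, dxB)]
        alpha = min([u[q] - l[q]] + alphas)                # (7.25)
        if alpha == np.inf:
            return None                                    # unbounded problem
        x[B] -= alpha * dxB                                # step 6: (7.24)
        x[q] -= alpha * sgn
        if alpha == u[q] - l[q]:                           # case (i): bound flip
            x[q] = l[q] if sgn > 0 else u[q]               # snap exactly
            continue
        p = alphas.index(alpha)                            # steps 7-9
        jp = B[p]
        x[jp] = u[jp] if dxB[p] < 0 else l[jp]             # leaving var on bound
        B[p] = q
```

On the converted data of Example 7.4.1 below, this sketch reproduces the text's iterates and ends with x = (6, 3, 3, 3, 0) and objective value −3.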

Theorem 7.4.1.

Algorithm  7.4.1 generates a sequence of basic feasible solutions. Assuming nondegeneracy throughout the solution process, it terminates at step 2, giving a basic optimal solution.

Proof.

Its validity comes from Lemmas 7.4.2 and 7.4.3.

Example 7.4.1.

Solve the following problem by Algorithm 7.4.1:

$$\displaystyle{\begin{array}{l@{\;\;}rrrrrrrr} \min \;\;&\multicolumn{7}{l}{f = x_{1} - 3x_{2},} \\ \mathrm{s.t.}\;\;&2& \leq &2x_{1} & -&3x_{2} & \leq &10, \\ \;\;&1& \leq & x_{1} & -& x_{2} & \leq & 5, \\ \;\;& & -& x_{1} & +&2x_{2} & \leq & 0, \\ \;\;&\multicolumn{8}{l}{0 \leq x_{1} \leq 6,-2 \leq x_{2}.}\\ \;\;\end{array} }$$

Answer Introduce x 3, x 4, x 5 to convert the preceding to

$$\displaystyle{\begin{array}{l@{\quad }rrrrrrl} \min \quad &\multicolumn{6}{l}{f = x_{1} - 3x_{2},} \\ \mathrm{s.t.}\quad & - 2x_{1} & + 3x_{2} & + x_{3} & & = 0, \\ \quad & - x_{1} & +\;\; x_{2} & & + x_{4} & = 0, \\ \quad & x_{1} & - 2x_{2} & & & + x_{5} = 0, \\ \quad &\multicolumn{6}{l}{0 \leq x_{1} \leq 6,\ \ -2 \leq x_{2} \leq \infty,\ \ 2 \leq x_{3} \leq 10,\ \ 1 \leq x_{4} \leq 5,\ \ -\infty \leq x_{5} \leq 0.} \end{array} }$$

In the following, unbounded variables are handled via the bounds \(-\infty \) or \(\infty \), which affect only the determination of the stepsize.

Initial: \(B =\{ 3,4,5\},\ N =\{ 1,2\},\ {B}^{-1} = I,\ \bar{x}_{N} = {(0_{(-)},-2_{(-)})}^{\mathrm{T}},\ \bar{x}_{B} = {(6,2,-4)}^{\mathrm{T}}\), \(\bar{f} = 6\). The initial solution is basic feasible. (Subscript "(−)" denotes a component on its lower bound, and superscript "(+)" one on its upper bound; the same below.)

Iteration 1:

  1. 1.

    \(y\;\; = {B}^{-\mathrm{T}}c_{B} = {(0,0,0)}^{\mathrm{T}},\,\bar{z}_{N} = {(1,-3)}^{\mathrm{T}}\).

  2. 2.

    J   = { 2}.

  3. 3.

    \(\max _{J}\vert \bar{z}_{j}\vert = 3,q = 2\), x 2 enters the basis.

  4. 4.

    \(\bar{a}_{2}\; = {B}^{-1}a_{2} = {(3,1,-2)}^{\mathrm{T}}.\)

  5. 5.

    \(\alpha _{1}\;\hspace{-0.6pt} = (6 - 2)/3 = 4/3,\alpha _{2} = (2 - 1)/1 = 1,\)

    \(\alpha _{3}\;\hspace{-0.6pt} = (-4 - 0)/ - 2 = 2\); \(\alpha =\min \{ \infty,4/3,1,2\} = 1\).

  6. 6.

    \(\bar{x}_{B}\hspace{0.3pt} = {(6,2,-4)}^{\mathrm{T}} - 1 \times {(3,1,-2)}^{\mathrm{T}} = {(3,1,-2)}^{\mathrm{T}},\)

    \(\bar{x}_{N}\hspace{-0.5pt} = {(0_{(-)},-2)}^{\mathrm{T}} - 1 \times (0,-1) = {(0_{(-)},-1)}^{\mathrm{T}}\).

  7. 7.

    \(p\;\;\hspace{-0.3pt} = 2\), x 4 leaves the basis.

  8. 8.

    \({B}^{-1}\hspace{-0.1pt} = \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ \;\;\; & 1\;\;\; & \\ \;\;\;& 2\;\;\;&1\\ \;\;\;\end{array} \right )\).

  9. 9.

    \(B =\{ 3,2,5\},\ N =\{ 1,4\};\ \bar{x}_{B} = {(3,-1,-2)}^{\mathrm{T}},\ \bar{x}_{N} = {(0_{(-)},1_{(-)})}^{\mathrm{T}}\).

Iteration 2:

  1. 1.

    \(y\;\;\hspace{0.2pt} = {B}^{-\mathrm{T}}c_{B} = {(0,-3,0)}^{\mathrm{T}},\,\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(1,0)}^{\mathrm{T}} - {(3,-3)}^{\mathrm{T}} = {(-2,3)}^{\mathrm{T}}\).

  2. 2.

    \(J\;\;\hspace{-0.7pt} =\{ 1\}\).

  3. 3.

    \(\max _{J}\vert \bar{z}_{j}\vert = 2,q = 1\), x 1 enters the basis.

  4. 4.

    \(\bar{a}_{1}\;\hspace{-0.2pt} = {B}^{-1}a_{1} = {(1,-1,-1)}^{\mathrm{T}}.\)

  5. 5.

    \(\alpha _{1}\;\hspace{-0.5pt} = (3 - 2)/1 = 1,\ \alpha _{2} = (-1 -\infty )/ - 1 = \infty,\)

    \(\alpha _{3}\;\hspace{-0.5pt} = (-2 - 0)/ - 1 = 2\), \(\alpha =\min \{ 6 - 0,1,\infty,2\} = 1\).

  6. 6.

    \(\bar{x}_{B}\,\hspace{-1.2pt}= {(3,-1,-2)}^{\mathrm{T}} - 1 \times {(1,-1,-1)}^{\mathrm{T}} = {(2,0,-1)}^{\mathrm{T}}\),

    \(\bar{x}_{N}\hspace{-0.4pt} = {(0,1)}^{\mathrm{T}} - 1 \times (-1,0) = {(1,1)}^{\mathrm{T}}\).

  7. 7.

    \(p\;\;\hspace{-0.4pt} = 1\), x 3 leaves the basis.

  8. 8.

    \({B}^{-1}\hspace{-0.1pt} = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& \;\;\;& \\ 1\;\;\; &1\;\;\; & \\ 1\;\;\;& \;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ \;\;\; & 1\;\;\; & \\ \;\;\;& 2\;\;\;&1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ 1\;\;\; & - 2\;\;\; & \\ 1\;\;\;& - 1\;\;\;&1\\ \;\;\;\end{array} \right )\).

  9. 9.

    \(B =\{ 1,2,5\},\ N =\{ 3,4\};\ \bar{x}_{B} = {(1,0,-1)}^{\mathrm{T}},\ \bar{x}_{N} = {(2_{(-)},1_{(-)})}^{\mathrm{T}}\).

Iteration 3:

  1. 1.

    \(y\;\;\hspace{0.2pt} = {B}^{-\mathrm{T}}c_{B} = {(-2,3,0)}^{\mathrm{T}},\ \bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(0,0)}^{\mathrm{T}} - {(-2,3)}^{\mathrm{T}} = {(2,-3)}^{\mathrm{T}}\).

  2. 2.

    \(J\;\;\hspace{-0.7pt} =\{ 4\}\).

  3. 3.

    \(\max _{J}\vert \bar{z}_{j}\vert = 3,q = 4\), x 4 enters the basis.

  4. 4.

    \(\bar{a}_{4}\;\hspace{-0.2pt} = {B}^{-1}a_{4} = {(-3,-2,-1)}^{\mathrm{T}}\).

  5. 5.

    \(\alpha _{1}\;\hspace{-0.5pt} = (1 - 6)/ - 3 = 5/3,\ \alpha _{2} = (0 -\infty )/ - 2 = \infty \),

    \(\alpha _{3}\;\hspace{-0.5pt} = (-1 - 0)/ - 1 = 1;\ \alpha =\min \{ 5 - 1,5/3,\infty,1\} = 1\).

  6. 6.

    \(\bar{x}_{B}\,\hspace{-1.2pt}= {(1,0,-1)}^{\mathrm{T}} - 1 \times {(-3,-2,-1)}^{\mathrm{T}} = {(4,2,0)}^{\mathrm{T}}\),

    \(\bar{x}_{N}\hspace{-0.4pt} = {(2,1)}^{\mathrm{T}} - 1 \times (0,-1) = {(2,2)}^{\mathrm{T}}\).

  7. 7.

    \(p\;\;\hspace{-0.4pt} = 3\), x 5 leaves the basis.

  8. 8.

    \({B}^{-1}\hspace{-0.1pt} = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& \;\;\;& - 3\\ \;\;\; &1\;\;\; & - 2 \\ \;\;\;& \;\;\;& - 1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} 1\;\;\;& - 3\;\;\;& \\ 1\;\;\; & - 2\;\;\; & \\ 1\;\;\;& - 1\;\;\;&1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} - 2\;\;\;& \;\;\;& - 3\\ - 1\;\;\; & \;\;\; & - 2 \\ - 1\;\;\;&1\;\;\;& - 1\\ \;\;\;\end{array} \right )\).

  9. 9.

    \(B\;\;\hspace{-1.7pt}=\{ 1,2,4\},\,N =\{ 3,5\};\ \bar{x}_{B} = {(4,2,2)}^{\mathrm{T}},\ \bar{x}_{N} = {(2_{(-)},{0}^{(+)})}^{\mathrm{T}}\).

Iteration 4:

  1. 1.

    \(y\;\;\hspace{0.1pt} = {B}^{\mathrm{-T}}c_{B} = {(1,0,3)}^{\mathrm{T}},\,\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(0,0)}^{\mathrm{T}} - {(1,3)}^{\mathrm{T}} = {(-1,-3)}^{\mathrm{T}}\).

  2. 2.

    \(J\;\;\hspace{-0.8pt} =\{ 3\}\).

  3. 3.

    \(\max _{J}\vert \bar{z}_{j}\vert = 1,q = 3\), x 3 enters the basis.

  4. 4.

    \(\bar{a}_{3}\;\hspace{-0.2pt} = {B}^{-1}a_{3} = {(-2,-1,-1)}^{\mathrm{T}}.\)

  5. 5.

    \(\alpha _{1}\;\hspace{-0.5pt} = (4 - 6)/ - 2 = 1,\alpha _{2} = (2 -\infty )/ - 1 = \infty \),

    \(\alpha _{3}\;\hspace{-0.5pt} = (2 - 5)/ - 1 = 3;\alpha =\min \{ 10 - 2,1,\infty,3\} = 1\).

  6. 6.

    \(\bar{x}_{B}\,\hspace{-1.2pt}= {(4,2,2)}^{\mathrm{T}} - 1 \times {(-2,-1,-1)}^{\mathrm{T}} = {(6,3,3)}^{\mathrm{T}}\),

    \(\bar{x}_{N}\hspace{-0.4pt} = {(2,0)}^{\mathrm{T}} - 1 \times (-1,0) = {(3,0)}^{\mathrm{T}}\).

  7. 7.

    \(p\;\;\hspace{-0.2pt} = 1\), x 1 leaves the basis.

  8. 8.

    \({B}^{-1} = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} - 1/2\;\;\;& \;\;\;& \\ - 1/2\;\;\;&1\;\;\;& \\ - 1/2\;\;\;& \;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}c@{\;\;\;}c} - 2\;\;\;& \;\;\;& - 3\\ - 1\;\;\; & \;\;\; & - 2 \\ - 1\;\;\;&1\;\;\;& - 1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}r} 1\;\;\;& \;\;\;& 3/2 \\ \;\;\;& \;\;\;& - 1/2 \\ \;\;\;&1\;\;\;& 1/2\\ \;\;\;\end{array} \right )\).

  9. 9.

    \(\bar{x}_{B}\,\hspace{-1.2pt}= {(3,3,3)}^{\mathrm{T}},\,B =\{ 3,2,4\};\ \bar{x}_{N} = {({6}^{(+)},{0}^{(+)})}^{\mathrm{T}},\ N =\{ 1,5\}.\)

Iteration 5:

  1. 1.

    \(y\;\,\hspace{0.2pt} = {B}^{-T}c_{B} = {(0,0,3/2)}^{\mathrm{T}},\)

    \(\bar{z}_{N} = c_{N} - {N}^{\mathrm{T}}y = {(1,0)}^{\mathrm{T}} - {(3/2,3/2)}^{\mathrm{T}} = {(-1/2,-3/2)}^{\mathrm{T}}\).

  2. 2.

    \(J = \varnothing \). The basic optimal solution and associated objective value:

    $$\displaystyle{\bar{x} = {(6,3,3,3,0)}^{\mathrm{T}},\quad \bar{f} = 6 - 3 \times 3 = -3.}$$
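As a quick independent check (plain Python, not part of the text), one can substitute the reported solution back into the converted problem and confirm the constraints, bounds and objective value:

```python
# Check the claimed optimum of Example 7.4.1 against the converted problem:
# equality constraints, bounds, and the objective value.
x1, x2, x3, x4, x5 = 6, 3, 3, 3, 0
assert -2*x1 + 3*x2 + x3 == 0
assert -x1 + x2 + x4 == 0
assert x1 - 2*x2 + x5 == 0
assert 0 <= x1 <= 6 and -2 <= x2
assert 2 <= x3 <= 10 and 1 <= x4 <= 5 and x5 <= 0
f = x1 - 3*x2
print(f)  # -3
```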

As for the tableau version of Algorithm 7.4.1, the associated simplex tableau is the same as the conventional one, except that no RHS column is needed to display the corresponding basic solution. Instead, three additional rows \((u,\bar{x},l)\) are appended, listing the upper bounds, the variable values and the lower bounds, respectively. The simplex tableau is of the form below:

 

|  | \(x_{B}^{\mathrm{T}}\) | \(x_{N}^{\mathrm{T}}\) |
|---|---|---|
|  | \(I\) | \(\bar{N}\) |
|  |  | \(\bar{z}_{N}\) |
| \(u\) | \(u_{B}^{\mathrm{T}}\) | \(u_{N}^{\mathrm{T}}\) |
| \(\bar{x}\) | \(\bar{x}_{B}^{\mathrm{T}}\) | \(\bar{x}_{N}^{\mathrm{T}}\) |
| \(l\) | \(l_{B}^{\mathrm{T}}\) | \(l_{N}^{\mathrm{T}}\) |

Based on Table 3.1, Algorithm 7.4.1 can be revised to a tableau form. As \(\bar{a}_{q} = {B}^{-1}a_{q}\), (7.26) should be replaced by

$$\displaystyle{ \alpha _{i} = \left \{\begin{array}{l@{\quad }l} (u_{j_{i}} -\bar{ x}_{j_{i}})/\mathrm{sign}(\bar{z}_{q})\bar{a}_{i\,q},\quad &\mathrm{if}\ \mathrm{sign}(\bar{z}_{q})\bar{a}_{i\,q} > 0, \\ (l_{j_{i}} -\bar{ x}_{j_{i}})/\mathrm{sign}(\bar{z}_{q})\bar{a}_{i\,q}, \quad &\mathrm{if}\ \mathrm{sign}(\bar{z}_{q})\bar{a}_{i\,q} < 0, \\ \infty, \quad &\mathrm{if}\ \bar{a}_{i\,q} = 0,\\ \quad \end{array} \right.\qquad i = 1,\cdots \,,m. }$$
(7.28)

Algorithm 7.4.2 (Generalized simplex algorithm: tableau form).

Initial: feasible tableau of form (7.4), associated with \(\bar{x}\). This algorithm solves the bounded-variable problem (7.13).

  1. 1.

    Compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop (optimality achieved) if J defined by (7.20) is empty.

  2. 2.

    Select column index q such that \(q \in \arg \max _{j\in J}\vert \bar{z}_{j}\vert \).

  3. 3.

    Determine stepsize α by (7.25), where α i defined by (7.28).

  4. 4.

    Set \(\bar{x}_{q} =\bar{ x}_{q} -\mathrm{sign}(\bar{z}_{q})\alpha \), and update \(\bar{x}_{B} =\bar{ x}_{B} +\alpha \mathrm{sign}(\bar{z}_{q})\bar{a}_{q}\) if α ≠ 0.

  5. 5.

    If \(\alpha = u_{q} - l_{q}\), go to step 1; else, determine row index p ∈ { 1, ⋯ , m} such that α = α p .

  6. 6.

    Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.

  7. 7.

    Go to step 1.

Note The last three rows in the tableau should be updated in each iteration.

4.1 Generalized Phase-I

The following is devoted to generating an initial feasible tableau for Algorithm 7.4.2.

Assume that B and N are respectively basis and nonbasis at the current iteration, associated with basic solution \(\bar{x}\). Introduce index set

$$\displaystyle\begin{array}{rcl} I_{1}& =& \{i = 1,\cdots \,,m\ \vert \ \bar{x}_{j_{i}} < l_{j_{i}}\}, {}\\ I_{2}& =& \{i = 1,\cdots \,,m\ \vert \ \bar{x}_{j_{i}} > u_{j_{i}}\}, {}\\ I& =& \{1,\cdots \,,m\}\setminus (I_{1} \cup I_{2}). {}\\ \end{array}$$

If \(I_{1} \cup I_{2} = \varnothing \), then \(\bar{x}\) is feasible. Otherwise, construct the following auxiliary program:

$$\displaystyle{\begin{array}{l@{\quad }l} \min \quad &w = -\sum _{i\in I_{1}}x_{j_{i}} +\sum _{i\in I_{2}}x_{j_{i}}, \\ \mathrm{s.t.}\quad &Bx_{B} = b - Nx_{N}, \\ \quad &l_{I} \leq x_{I} \leq u_{I},\quad l_{N} \leq x_{N} \leq u_{N},\end{array} }$$

where the objective function is termed “infeasible-sum”.

The corresponding tableau of the auxiliary program is manipulated by one iteration of Algorithm 7.4.1 (in which the row pivot rule should be modified slightly; see below). Then a new auxiliary program is formed, and so on, until \(I_{1} \cup I_{2}\) becomes empty or infeasibility is detected.

Related discussions are similar to those with the infeasible-sum Phase-I method for the standard LP problem (for details, see Sect. 13.1).
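Forming the index sets and the infeasible-sum value can be sketched as follows (an illustrative helper, not code from the text; the auxiliary program itself would then be processed as described above):

```python
# Index sets I1 (basic components below their lower bound) and I2 (above
# their upper bound), plus the infeasible-sum w = -sum_{I1} x + sum_{I2} x.
def infeasible_sum(B, x, l, u):
    I1 = [i for i, j in enumerate(B) if x[j] < l[j]]
    I2 = [i for i, j in enumerate(B) if x[j] > u[j]]
    w = -sum(x[B[i]] for i in I1) + sum(x[B[i]] for i in I2)
    return I1, I2, w
```

Minimizing w drives the violating components toward their violated bounds; when both sets are empty, the current basic solution is feasible.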

5 Generalized Dual Simplex Method: Tableau Form

Let B and N be given by (7.15) and let (7.4) be the corresponding simplex tableau. Assume that the associated basic solution \(\bar{x}\) is valued by

$$\displaystyle{\bar{x}_{j} = l_{j}\ \mbox{ or }\ u_{j},\quad j \in N,}$$

and

$$\displaystyle{\bar{x}_{B} =\bar{ b} -\bar{ N}\bar{x}_{N},\quad \bar{f} = {c}^{\mathrm{T}}\bar{x}.}$$

Index sets \(\Gamma \) and \(\Pi \) are defined by (7.19). If the following conditions hold:

$$\displaystyle{ \bar{z}_{\Gamma } \geq 0,\quad \bar{z}_{\Pi } \leq 0, }$$
(7.29)

the simplex tableau is said to be dual feasible. If, further, \(l_{B} \leq \bar{x}_{B} \leq u_{B}\) holds, \(\bar{x}\) is clearly a basic optimal solution.

Whether a simplex tableau of a bounded-variable problem is dual feasible depends on the values taken by the nonbasic components of the solution. In principle, when the components of l and u are finite, it is always possible to value the nonbasic components so that the resulting solution is dual feasible, though \(l_{B} \leq x_{B} \leq u_{B}\) does not hold in general.

Introduce “bound-violation” quantities

$$\displaystyle{ \rho _{i} = \left \{\begin{array}{l@{\quad }l} l_{j_{i}} -\bar{ x}_{j_{i}}, \quad &\mathrm{if}\ \bar{x}_{j_{i}} < l_{j_{i}}, \\ u_{j_{i}} -\bar{ x}_{j_{i}},\quad &\mathrm{if}\ \bar{x}_{j_{i}} > u_{j_{i}}, \\ 0, \quad &\mathrm{if}\ l_{j_{i}} \leq \bar{ x}_{j_{i}} \leq u_{j_{i}}, \end{array} \right.\quad i = 1,\cdots \,,m, }$$
(7.30)

and determine row index p by the following rule:

$$\displaystyle{ p \in \arg \max \{\vert \rho _{i}\vert \ \vert \ i = 1,\cdots \,,m\}. }$$
(7.31)

If ρ p  = 0, optimality is achieved. Now assume that ρ p ≠ 0: ρ p  > 0 indicates that \(\bar{x}_{j_{p}}\) violates its lower bound, while ρ p  < 0 indicates that it violates its upper bound. Introduce the index set

$$\displaystyle{ J =\{ j \in \Gamma \ \vert \ \mathrm{sign}(\rho _{p})\bar{a}_{pj} < 0\} \cup \{ j \in \Pi \ \vert \ \mathrm{sign}(\rho _{p})\bar{a}_{pj} > 0\}. }$$
(7.32)

It is not difficult to show that the original problem is infeasible if \(J = \varnothing \); otherwise, a column index q and a stepsize β are determined such that

$$\displaystyle{ \beta = -\bar{z}_{q}/(\mathrm{sign}(\rho _{p})\bar{a}_{pq}) =\min _{j\in J} -\bar{ z}_{j}/(\mathrm{sign}(\rho _{p})\bar{a}_{pj}) \geq 0. }$$
(7.33)
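The row and column choices (7.30)-(7.33) can be sketched together (illustrative names; `Abar_rows` stands for the rows of \(B^{-1}A\), not code from the text):

```python
import math

# Dual pivot choice: bound violations rho (7.30), row p (7.31), eligible
# set J (7.32) and the dual ratio test (7.33).
def dual_pivot(B, x, l, u, z, Abar_rows):
    rho = [l[j] - x[j] if x[j] < l[j] else
           u[j] - x[j] if x[j] > u[j] else 0 for j in B]
    p = max(range(len(B)), key=lambda i: abs(rho[i]))
    if rho[p] == 0:
        return None                                   # primal feasible: optimal
    s = math.copysign(1.0, rho[p])
    a = Abar_rows[p]
    J = [j for j in range(len(x)) if j not in B
         and ((x[j] == l[j] and s * a[j] < 0) or (x[j] == u[j] and s * a[j] > 0))]
    if not J:
        return "infeasible"                           # J of (7.32) empty
    q = min(J, key=lambda j: -z[j] / (s * a[j]))      # ratio test (7.33)
    return p, q, -z[q] / (s * a[q])
```

On the first iteration of Example 7.5.1 below (0-based indices), this selects row p = 2 and column q = 2 in the text's 1-based numbering, with β = 1∕4.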

Taking \(\bar{a}_{pq}\) as the pivot, convert the simplex tableau by the relevant elementary transformations. The resulting simplex tableau corresponds to the new basis and nonbasis below:

$$\displaystyle{B =\{ j_{1},\cdots \,,j_{p-1},q,j_{p+1},\cdots \,,j_{m}\},\quad N = (N\setminus \{q\}) \cup \{ j_{p}\}.}$$

It might be well to still use (7.4) to denote the new simplex tableau, and let \(\hat{x}\) denote the associated basic solution. As the new tableau is equivalent to the old, \(\hat{x}\) and \(\bar{x}\) satisfy

$$\displaystyle{ \bar{x}_{B} =\bar{ b} -\bar{N}\bar{x}_{N},\quad \hat{x}_{B} =\bar{ b} -\bar{N}\hat{x}_{N}. }$$
(7.34)

Now set the new nonbasic component \(\hat{x}_{p}\) to the violated bound, i.e.,

$$\displaystyle{ \hat{x}_{j_{p}} =\bar{ x}_{j_{p}} +\rho _{p}, }$$
(7.35)

and maintain other nonbasic components unchanged, i.e.,

$$\displaystyle{\hat{x}_{j} =\bar{ x}_{j},\quad j \in N,\ j\neq j_{p}.}$$

Then from subtraction of the two equalities of (7.34), the updating formula of \(\bar{x}_{B}\) follows:

$$\displaystyle{ \hat{x}_{B} =\bar{ x}_{B} -\rho _{p}\bar{a}_{j_{p}}. }$$
(7.36)

It is not difficult to show that the new simplex tableau with such an \(\hat{x}\) is still dual feasible. The stepsize β is actually the largest possible one maintaining dual feasibility.

Noting that \(\bar{z}_{j_{p}} = \mathrm{sign}(\rho _{p})\beta\) holds for the new tableau, the following recurrence formula of the objective value can be derived from \(\hat{x}\) and \(\bar{x}\) satisfying (7.35) and the equality associated with the bottom row of the tableau:

$$\displaystyle{\hat{f} =\bar{ f} +\bar{ z}_{j_{p}}(\hat{x}_{j_{p}} -\bar{ x}_{j_{p}}) =\bar{ f} +\rho _{p}\bar{z}_{j_{p}} =\bar{ f} + \vert \rho _{p}\vert \beta \geq \bar{ f},}$$

which indicates that the objective value does not decrease. If all components of \(\bar{z}_{N}\) are nonzero, the simplex tableau is said to be dual nondegenerate; in that case β > 0, so that the objective value strictly increases.

The overall steps are put into the following algorithm, in which the objective value is calculated at the end.

Algorithm 7.5.1 (Generalized dual simplex algorithm: tableau form).

Initial: a dual feasible tableau of form (7.4), corresponding to \(\bar{x}\). This algorithm solves the bounded-variable problem (7.13).

  1. 1.

    Select a row index p by (7.31) together with (7.30).

  2. 2.

    If ρ p  = 0, compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop (optimality achieved).

  3. 3.

    Stop if J defined by (7.32) is empty (infeasible problem).

  4. 4.

    Determine a column index q by (7.33).

  5. 5.

    Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.

  6. 6.

    Update \(\bar{x}\) by (7.35) and (7.36).

  7. 7.

    Go to step 1.

Note The last three rows in the simplex tableau should be updated in each iteration.

The proof regarding the meanings of the algorithm's exits is deferred to the derivation of its revised version.
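The whole of Algorithm 7.5.1 can likewise be sketched with numpy (again not the book's code: the tableau rows are recomputed with np.linalg.solve rather than updated by elementary transformations, a tolerance replaces exact comparisons, and names are illustrative):

```python
import numpy as np

def generalized_dual_simplex(A, b, c, l, u, B, x, tol=1e-7):
    """Bounded-variable dual simplex sketch, started from a dual feasible
    basis B; only the nonbasic components of x are used on entry."""
    A, b, c, l, u = (np.asarray(t, float) for t in (A, b, c, l, u))
    x, B = np.asarray(x, float).copy(), list(B)
    n = A.shape[1]
    N = [j for j in range(n) if j not in B]
    x[B] = np.linalg.solve(A[:, B], b - A[:, N] @ x[N])    # (7.38)
    while True:
        rho = np.array([l[j] - x[j] if x[j] < l[j] - tol else
                        u[j] - x[j] if x[j] > u[j] + tol else 0.0
                        for j in B])                       # (7.30)
        p = int(np.argmax(np.abs(rho)))                    # rule (7.31)
        if rho[p] == 0.0:
            return x, float(c @ x)                         # primal feasible
        y = np.linalg.solve(A[:, B].T, c[B])
        z = c - A.T @ y                                    # reduced costs
        s = 1.0 if rho[p] > 0 else -1.0
        arow = np.linalg.solve(A[:, B], A)[p]              # row p of B^{-1}A
        N = [j for j in range(n) if j not in B]
        J = [j for j in N if (abs(x[j] - l[j]) <= tol and s * arow[j] < -tol)
                          or (abs(x[j] - u[j]) <= tol and s * arow[j] > tol)]
        if not J:
            return None                                    # problem infeasible
        q = min(J, key=lambda j: -z[j] / (s * arow[j]))    # (7.33)
        jp = B[p]
        B[p] = q                                           # exchange j_p and q
        col = np.linalg.solve(A[:, B], A[:, jp])           # new column of j_p
        x[jp] = l[jp] if rho[p] > 0 else u[jp]             # (7.35)
        x[B] -= rho[p] * col                               # (7.36)
```

On the data of Example 7.5.1 below, this sketch reproduces the text's iterates and returns the objective value −1218∕23.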

Example 7.5.1.

Solve the following problem by Algorithm 7.5.1:

$$\displaystyle{\begin{array}{l@{\;\;}rrrrrrrrrrrrrrrrr} \min \;\;&\multicolumn{16}{l}{f = 2x_{1} - x_{2} + 3x_{3} - 6x_{4},} \\ \mathrm{s.t.}\;\;&& - 2x_{1} & +&3x_{2} & -&4x_{3} & +&2x_{4} & +&x_{5} & & & & & =& 14, \\ \;\;&& - 3x_{1} & +&4x_{2} & -&5x_{3} & +&6x_{4} & & & +&x_{6} & & & =& 16, \\ \;\;&& x_{1} & -&2x_{2} & +&2x_{3} & -&7x_{4} & & & & & +&x_{7} & =& - 15, \\ \;\;&\multicolumn{16}{r} { - 15 \leq x_{1} \leq 30,\quad - 12 \leq x_{2} \leq 20,\quad - 17 \leq x_{3} \leq 10,} \\ \;\;&\multicolumn{16}{r} { - 8 \leq x_{4} \leq 15,\quad - 10 \leq x_{5} \leq 26,\quad - 13 \leq x_{6} \leq 34,} \\ \;\;&\multicolumn{16}{l}{\quad \;0 \leq x_{7} \leq 19.}\\ \;\; \end{array} }$$

Answer Initial tableau:

 

|  | \(x_1\) | \(x_2\) | \(x_3\) | \(x_4\) | \(x_5\) | \(x_6\) | \(x_7\) |
|---|---|---|---|---|---|---|---|
|  | -2 | 3 | -4 | 2 | 1 |  |  |
|  | -3 | 4* | -5 | 6 |  | 1 |  |
|  | 1 | -2 | 2 | -7 |  |  | 1 |
|  | 2 | -1 | 3 | -6 |  |  |  |
| \(u\) | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
| \(\bar{x}\) | -15 | 20 | -17 | 15 | -174 | -284 | 179 |
| \(l\) | -15 | -12 | -17 | -8 | -10 | -13 | 0 |

Take

$$\displaystyle{\begin{array}{l} \bar{x}_{N} = {(-15_{(-)},2{0}^{(+)},-17_{(-)},1{5}^{(+)})}^{\mathrm{T}}(N =\{ 1,2,3,4\}), \\ \bar{x}_{B} =\bar{ b} -\bar{ N}\bar{x}_{N} = {(-174,-284,179)}^{\mathrm{T}}(B =\{ 5,6,7\}),\ \bar{f} = -191. \end{array} }$$

Iteration 1:

  1. 1.

    \(\rho _{1} = -10 - (-174) = 164,\rho _{2} = -13 - (-284) = 271,\)

    \(\rho _{3} = 19 - 179 = -160\). \(\max \{\vert 164\vert,\vert 271\vert,\vert - 160\vert \} = 271\neq 0,p = 2,j_{2} = 6\).

  2. 3.

    \(J\;\hspace{-0.7pt} =\{ 1,2,3,4\}\neq \varnothing \).

  3. 4.

    \(\min \{-2/(-3),-(-1)/4,-3/(-5),-(-6)/6\} = 1/4,q = 2\).

  4. 5.

    Multiply row 2 by 1∕4, and then add − 3, 2, 1 times of row 2 to rows 1,3,4, respectively.

  5. 6.

    \(\bar{x}_{6} = -284 + 271 = -13\).

    \(\bar{x}_{B} = {(-174,20,179)}^{\mathrm{T}} - 271{(-3/4,1/4,1/2)}^{\mathrm{T}} = {(117/4,-191/4,87/2)}^{\mathrm{T}},\ B =\{ 5,2,7\}\).

 

|  | \(x_1\) | \(x_2\) | \(x_3\) | \(x_4\) | \(x_5\) | \(x_6\) | \(x_7\) |
|---|---|---|---|---|---|---|---|
|  | 1/4 |  | -1/4 | -5/2 | 1 | -3/4 |  |
|  | -3/4 | 1 | -5/4* | 3/2 |  | 1/4 |  |
|  | -1/2 |  | -1/2 | -4 |  | 1/2 | 1 |
|  | 5/4 |  | 7/4 | -9/2 |  | 1/4 |  |
| \(u\) | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
| \(\bar{x}\) | -15 | -191/4 | -17 | 15 | 117/4 | -13 | 87/2 |
| \(l\) | -15 | -12 | -17 | -8 | -10 | -13 | 0 |

Iteration 2:

  1. 1.

    \(\;\,\rho _{1} = 26 - 117/4 = -13/4,\rho _{2} = -12 - (-191/4) = 143/4,\)

    \(\;\,\rho _{3} = 19 - 87/2 = -49/2.\max \{\vert - 13/4\vert,\vert 143/4\vert,\vert - 49/2\vert \} = 143/4\neq 0,\quad\)

    \(\;\,p = 2,j_{2} = 2.\)

  2. 3.

    \(\;\,J =\{ 1,3,4\}\neq \varnothing.\)

  3. 4.

    \(\min \{-(5/4)/(-3/4),-(7/4)/(-5/4),-(-9/2)/(3/2)\} = 7/5,\ q = 3\).

  4. 5.

    Multiply row 2 by \(-4/5\), and then add \(1/4,1/2,-7/4\) times of row 2 to rows 1,3,4, respectively.

  5. 6.

    \(\bar{x}_{2} = -191/4 + 143/4 = -12\),

    \(\bar{x}_{B} = {(117/4,-17,87/2)}^{\mathrm{T}} - (143/4){(-1/5,-4/5,-2/5)}^{\mathrm{T}} = {(182/5,58/5,289/5)}^{\mathrm{T}},\ B =\{ 5,3,7\}\).

 

|  | \(x_1\) | \(x_2\) | \(x_3\) | \(x_4\) | \(x_5\) | \(x_6\) | \(x_7\) |
|---|---|---|---|---|---|---|---|
|  | 2/5 | -1/5 |  | -14/5 | 1 | -4/5 |  |
|  | 3/5 | -4/5 | 1 | -6/5 |  | -1/5 |  |
|  | -1/5 | -2/5 |  | -23/5* |  | 2/5 | 1 |
|  | 1/5 | 7/5 |  | -12/5 |  | 3/5 |  |
| \(u\) | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
| \(\bar{x}\) | -15 | -12 | 58/5 | 15 | 182/5 | -13 | 289/5 |
| \(l\) | -15 | -12 | -17 | -8 | -10 | -13 | 0 |

Iteration 3:

  1. 1.

    \(\rho _{1} = 26 - 182/5 = -52/5,\rho _{2} = 10 - 58/5 = -8/5,\)

    \(\rho _{3} = 19 - 289/5 = -194/5\). \(\max \{\vert - 52/5\vert,\vert - 8/5\vert,\vert - 194/5\vert \}\)

    \(= 194/5\neq 0,p = 3,j_{3} = 7\).

  2. 3.

    \(J\ \hspace{-0.4pt} =\{ 4,6\}\neq \varnothing \).

  3. 4.

    \(\min \{-(-12/5)/(23/5),-(3/5)/(-2/5)\} = 12/23,\ q = 4\).

  4. 5.

    Multiply row 3 by \(-5/23\), and then add \(14/5,6/5,12/5\) times of row 3 to rows 1,2,4, respectively.

  5. 6.

    \(\bar{x}_{7}\;\hspace{-0.7pt} = 289/5 - 194/5 = 19\).

    \(\bar{x}_{B} = {(182/5,58/5,15)}^{\mathrm{T}} - (-194/5){(-14/23,-6/23,-5/23)}^{\mathrm{T}}\).

    \(= {(294/23,34/23,151/23)}^{\mathrm{T}},\ B =\{ 5,3,4\}\).

 

|  | \(x_1\) | \(x_2\) | \(x_3\) | \(x_4\) | \(x_5\) | \(x_6\) | \(x_7\) |
|---|---|---|---|---|---|---|---|
|  | 12/23 | 1/23 |  |  | 1 | -24/23 | -14/23 |
|  | 15/23 | -16/23 | 1 |  |  | -7/23 | -6/23 |
|  | 1/23 | 2/23 |  | 1 |  | -2/23 | -5/23 |
|  | 7/23 | 37/23 |  |  |  | 9/23 | -12/23 |
| \(u\) | 30 | 20 | 10 | 15 | 26 | 34 | 19 |
| \(\bar{x}\) | -15 | -12 | 34/23 | 151/23 | 294/23 | -13 | 19 |
| \(l\) | -15 | -12 | -17 | -8 | -10 | -13 | 0 |

Iteration 4:

  1. 1.

    \(\rho _{1} =\rho _{2} =\rho _{3} = 0\). The basic optimal solution and optimal value are

    $$\displaystyle\begin{array}{rcl} \bar{x}& =& {(-15,-12,34/23,151/23,294/23,-13,19)}^{\mathrm{T}}, {}\\ \bar{f}& =& (2,-1,3,-6){(-15,-12,34/23,151/23)}^{\mathrm{T}} = -1,218/23. {}\\ \end{array}$$
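As an exact-arithmetic cross-check (plain Python `fractions`, not part of the text), the reported solution can be substituted back into the original data of Example 7.5.1:

```python
from fractions import Fraction as F

# Verify the claimed optimum of Example 7.5.1 exactly.
x = [F(-15), F(-12), F(34, 23), F(151, 23), F(294, 23), F(-13), F(19)]
assert -2*x[0] + 3*x[1] - 4*x[2] + 2*x[3] + x[4] == 14
assert -3*x[0] + 4*x[1] - 5*x[2] + 6*x[3] + x[5] == 16
assert x[0] - 2*x[1] + 2*x[2] - 7*x[3] + x[6] == -15
lo = [-15, -12, -17, -8, -10, -13, 0]
up = [30, 20, 10, 15, 26, 34, 19]
assert all(a <= v <= b for a, v, b in zip(lo, x, up))
f = 2*x[0] - x[1] + 3*x[2] - 6*x[3]
print(f)  # -1218/23
```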

5.1 Generalized Dual Phase-I

It is not difficult to generalize dual Phase-I methods (Chap. 14) for standard problems to initiate the generalized dual simplex algorithm.

Using a generalized version of the most-obtuse-angle row rule (14.3), Koberstein and Suhl (2007) designed a dual Phase-I procedure, named PAN, for solving general problems. Taking MOPSFootnote 1 as a platform, they tested several main dual Phase-I methods on 46 typical large-scale sparse problems, the largest of which involves more than 500,000 constraints and 1,000,000 variables. The numerical results show that for most of the tested problems, PAN required a small number of iterations; only for a few of the most difficult problems did the required iterations exceed an acceptable amount. In the latter cases, they turned to a version of the dual infeasibility-sum Phase-I, named SDI. It turned out that such a combination, PAN+SDI, is the best among the four commonly used Phase-I methods tested. Therefore, PAN+SDI was taken as the default option for the MOPS dual simplex algorithm.

In view of the preceding facts, the author suggests generalizing Rule 14.3.2 by replacing (7.25) with

$$\displaystyle\begin{array}{rcl} \alpha =\min \{ u_{q} - l_{q},& & \min \{\alpha _{i}\ \vert \ \vert \bar{a}_{i\,q}\vert \ \geq \ \tau \theta,\ i = 1,\cdots \,,m\}\}, \\ & & \theta =\max \{ \vert \bar{a}_{i\,q}\vert \ \vert \ i = 1,\cdots \,,m\},{}\end{array}$$
(7.37)

where 0 < τ ≤ 1 and α i ,  i = 1, ⋯ , m, are defined by (7.28). The basic idea is to restrict the stepsize to some extent.
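The modification amounts to a one-line filter in the ratio test; a sketch (illustrative names, not code from the text):

```python
# Restricted ratio test (7.37): only rows whose pivot-column magnitude
# reaches the threshold tau*theta may determine (block) the stepsize.
def restricted_stepsize(alphas, abs_col, range_q, tau):
    theta = max(abs_col)                  # largest |a_iq| in the column
    eligible = [a for a, m in zip(alphas, abs_col) if m >= tau * theta]
    return min([range_q] + eligible)
```

With τ close to 1 only rows nearly attaining θ may block; as τ shrinks, the rule approaches the ordinary test (7.25).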

This consideration leads to the following algorithm, obtained by modifying Algorithm 7.4.2.

Algorithm 7.5.2 (Tableau generalized dual Phase-I: the most-obtuse-angle rule).

Given 0 < τ ≤ 1. Initial: a simplex tableau of form (7.4), associated with \(\bar{x}\). This algorithm finds a dual feasible tableau of the bounded-variable problem (7.13).

  1. 1.

    If J defined by (7.20) is empty, stop (dual feasibility achieved).

  2. 2.

    Select column index q such that \(q \in \arg \max _{j\in J}\vert \bar{z}_{j}\vert \).

  3. 3.

    Determine stepsize α by (7.37).

  4. 4.

    Set \(\bar{x}_{q} =\bar{ x}_{q} -\mathrm{sign}(\bar{z}_{q})\alpha \), and update \(\bar{x}_{B} =\bar{ x}_{B} +\alpha \mathrm{sign}(\bar{z}_{q})\bar{a}_{q}\) if α ≠ 0.

  5. 5.

    If \(\alpha = u_{q} - l_{q}\), go to step 1; else, determine row index p ∈ { 1, ⋯ , m} such that α = α p .

  6. 6.

    Convert \(\bar{a}_{p\,q}\) to 1, and eliminate the other nonzeros in the column by elementary transformations.

  7. 7.

    Go to step 1.

6 Generalized Dual Simplex Method

According to Table 3.1, which gives the correspondence between entries of the simplex tableau and the revised simplex tableau, it is easy to formulate the revised version of Algorithm 7.5.1. However, we will not do so, but derive it based on local duality (Sect. 25.5), revealing that such an algorithm actually solves the dual bounded-variable problem.

Let B = { j 1 , ⋯ , j m } and \(N = A\setminus B\) be the current basis and nonbasis, respectively, associated with primal basic solution \(\bar{x}\), i.e.,

$$\displaystyle{ \begin{array}{ll} \bar{x}_{s} & = l_{s}\ \mathrm{or}\ \ u_{s},\quad s \in N, \\ \bar{x}_{B}& = {B}^{-1}b - {B}^{-1}N\bar{x}_{N}.\end{array} }$$
(7.38)

The notations \(\Gamma \) and \(\Pi \) are again defined by (7.19), and ρ i by (7.30). Assume that the row index p has already been determined by (7.31).

Consider the following local problem at \(\bar{x}\) (25.5):

$$\displaystyle{ \begin{array}{l@{\quad }lllll} \min \quad &f & =&{c}^{\mathrm{T}}x,& \\ \mathrm{s.t.}\quad &Ax & =&b, & \\ \quad &l_{\Gamma } & \leq &x_{\Gamma }, & & \\ \quad &x_{\Pi } & \leq &u_{\Pi }, & \\ \quad &l_{j_{p}} & \leq &x_{j_{p}}, &\mathrm{if}\ \rho _{p} > 0, \\ \quad &x_{j_{p}} & \leq &u_{j_{p}}, &\mathrm{if}\ \rho _{p} < 0.\\ \quad \end{array} }$$
(7.39)

Using notation

$$\displaystyle{ h_{p} = \left \{\begin{array}{rrl} l_{j_{p}},&\quad \mathrm{if}\ &\rho _{p} > 0, \\ u_{j_{p}},&\quad \mathrm{if}\ &\rho _{p} < 0,\\ \end{array} \right. }$$
(7.40)

the local dual problem can be written

$$\displaystyle\begin{array}{rcl} \begin{array}{l@{\quad }l} \max \quad &{b}^{\mathrm{T}}y - u_{\Pi }^{\mathrm{T}}v_{\Pi } + l_{\Gamma }^{\mathrm{T}}w_{\Gamma } + h_{p}z_{j_{p}}, \\ \mathrm{s.t.}\quad &{B}^{\mathrm{T}}y + z_{j_{p}}e_{p}\,\ = c_{B}, \\ \quad &{\Gamma }^{\mathrm{T}}y + w_{\Gamma }\ \ \ \ \ \ \hspace{-0.5pt} = c_{\Gamma }, \\ \quad &{\Pi }^{\mathrm{T}}y - v_{\Pi }\ \ \ \ \ \, = c_{\Pi }, \\ \quad &\rho _{p}z_{j_{p}},v_{\Pi },\ w_{\Gamma } \geq 0.\end{array} & & {}\\ \end{array}$$

Based on the equality constraints, eliminate the variables \(v_{\Pi },w_{\Gamma }\), and combine with (7.19) and (7.38) to reduce the objective function to

$$\displaystyle\begin{array}{rcl}{ (b - \Pi u_{\Pi } - \Gamma l_{\Gamma })}^{\mathrm{T}}y + h_{p}z_{j_{p}}& =& c_{\Pi }^{\mathrm{T}}\bar{x}_{ \Pi } + c_{\Gamma }^{\mathrm{T}}\bar{x}_{ \Gamma } + {(b - \Pi \bar{x}_{\Pi } - \Gamma \bar{x}_{\Gamma })}^{\mathrm{T}}y + h_{ p}z_{j_{p}} \\ & =& c_{N}^{\mathrm{T}}\bar{x}_{ N} + {(b - N\bar{x}_{N})}^{\mathrm{T}}y + h_{ p}z_{j_{p}}. {}\end{array}$$
(7.41)

Then setting \(z_{\Gamma } = w_{\Gamma },z_{\Pi } = -v_{\Pi }\), transform the local dual problem to the following equivalent form:

$$\displaystyle\begin{array}{rcl} \begin{array}{l@{\quad }ll} \max \quad &g(y,z) = c_{N}^{\mathrm{T}}\bar{x}_{N} + {(b - N\bar{x}_{N})}^{\mathrm{T}}y + h_{p}z_{j_{p}},\quad \\ \mathrm{s.t.}\quad &{B}^{\mathrm{T}}y + z_{j_{p}}e_{p} = c_{B}, \\ \quad &{N}^{\mathrm{T}}y + z_{N}\ \ \ \ \hspace{-0.4pt} = c_{N}, \\ \quad &\rho _{p}z_{j_{p}} \geq 0,\quad z_{\Pi } \leq 0,\quad z_{\Gamma } \geq 0.\end{array} & &{}\end{array}$$
(7.42)

Now, define

$$\displaystyle\begin{array}{rcl} \bar{y}& =& {B}^{-\mathrm{T}}c_{ B},{}\end{array}$$
(7.43)
$$\displaystyle\begin{array}{rcl} \bar{z}_{N}& =& c_{N} - {N}^{\mathrm{T}}\bar{y},\;\bar{z}_{ B} = 0.{}\end{array}$$
(7.44)

Assume that the following conditions hold:

$$\displaystyle{ \bar{z}_{\Pi } \leq 0,\quad \bar{z}_{\Gamma } \geq 0, }$$
(7.45)

under which it is not difficult to verify that the primal objective value at \(\bar{x}\) and the dual objective value at \((\bar{y},\bar{z})\) are equal, i.e., \(\bar{f} =\bar{ g}\). Moreover, using the preceding notation, the following is valid.

Lemma 7.6.1.

\((\bar{y},\bar{z})\) is a basic feasible solution to the local dual problem, which exhibits complementarity with \(\bar{x}\) .

Proof.

It is clear that \((\bar{y},\bar{z})\) is a basic solution to (7.42), satisfying the sign constraints at the bottom. So, it only remains to show

$$\displaystyle\begin{array}{rcl}{ B}^{\mathrm{T}}\bar{y} +\bar{ z}_{ j_{p}}e_{p}& \ \ = c_{B},& {N}^{\mathrm{T}}\bar{y} +\bar{ z}_{ N}\ \ \ \ \ \ = c_{N},{}\end{array}$$
(7.46)
$$\displaystyle\begin{array}{rcl}{ (\bar{x}_{\Gamma } - l_{\Gamma })}^{\mathrm{T}}\bar{z}_{ \Gamma }& = 0,& {(u_{\Pi } -\bar{ x}_{\Pi })}^{\mathrm{T}}\bar{z}_{ \Pi } = 0,{}\end{array}$$
(7.47)
$$\displaystyle\begin{array}{rcl} (\bar{x}_{j_{p}} - l_{j_{p}})\bar{z}_{j_{p}}& = 0,& \mbox{ if}\ \rho _{p} > 0,{}\end{array}$$
(7.48)
$$\displaystyle\begin{array}{rcl} (\bar{x}_{j_{p}} - u_{j_{p}})\bar{z}_{j_{p}}& = 0,& \mbox{ if}\ \rho _{p} < 0.{}\end{array}$$
(7.49)

From (7.43) and the second expression of (7.44), the first expression of (7.46) follows. By the first expression of (7.44), it holds that

$$\displaystyle{ {\Pi }^{\mathrm{T}}\bar{y} +\bar{ z}_{ \Pi } = {\Pi }^{\mathrm{T}}\bar{y} + c_{ \Pi } - {\Pi }^{\mathrm{T}}\bar{y} = c_{ \Pi }. }$$
(7.50)

Similarly,

$$\displaystyle{ {\Gamma }^{\mathrm{T}}\bar{y} +\bar{ z}_{ \Gamma } = c_{\Gamma }. }$$
(7.51)

Therefore, (7.46) is valid.

By (7.19), on the other hand, it is clear that (7.47) holds; and it is known from the second expression of (7.44) that (7.48) or (7.49) holds. □ 
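As a side check of the preceding notation, the equality \(\bar{f} =\bar{ g}\) mentioned above can be verified in one line: since \(j_{p}\) is basic, (7.44) gives \(\bar{z}_{j_{p}} = 0\), and \(A\bar{x} = b\) gives \(b - N\bar{x}_{N} = B\bar{x}_{B}\), so that

```latex
\bar{g} = c_N^{\mathrm{T}}\bar{x}_N + (b - N\bar{x}_N)^{\mathrm{T}}\bar{y} + h_p\bar{z}_{j_p}
        = c_N^{\mathrm{T}}\bar{x}_N + (B\bar{x}_B)^{\mathrm{T}}B^{-\mathrm{T}}c_B
        = c_N^{\mathrm{T}}\bar{x}_N + c_B^{\mathrm{T}}\bar{x}_B = \bar{f}.
```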

Setting

$$\displaystyle\begin{array}{rcl} & \bar{v}_{B} = 0,& \bar{w}_{B} = 0,{}\end{array}$$
(7.52)
$$\displaystyle\begin{array}{rcl} & \bar{v}_{\Pi } = -\bar{z}_{\Pi },& \bar{w}_{\Pi } = 0,{}\end{array}$$
(7.53)
$$\displaystyle\begin{array}{rcl} & \bar{v}_{\Gamma } = 0,& \bar{w}_{\Gamma } =\bar{ z}_{\Gamma },{}\end{array}$$
(7.54)

it is not difficult to verify that \((\bar{y},\bar{v},\bar{w})\) is a basic feasible solution to the dual problem of (7.13), i.e.,

$$\displaystyle{\begin{array}{l@{\quad }l} \max \quad &{b}^{\mathrm{T}}y - {u}^{\mathrm{T}}v + {l}^{\mathrm{T}}w, \\ \mathrm{s.t.}\quad &{A}^{\mathrm{T}}y - v + w = c,\quad v,w \geq 0,\\ \quad \end{array} }$$

(see the last paragraph of Sect. 25.5). This solution and \(\bar{x}\) satisfy the complementarity condition. In this sense, \((\bar{y},\bar{z})\) is called a dual feasible solution.

Lemma 7.6.2.

If \(l_{B} \leq \bar{ x}_{B} \leq u_{B}\) holds, then \(\bar{x}\) is a basic optimal solution.

Proof.

When \(l_{B} \leq \bar{ x}_{B} \leq u_{B}\) holds, \(\bar{x}\) is clearly a basic feasible solution to the (full) problem (7.13), hence also to the local problem (7.39). By Lemma 7.6.1, it is known that \((\bar{y},\bar{z})\) is local dual feasible, exhibiting complementarity with \(\bar{x}\). Therefore, the two are local primal and dual basic optimal solutions, respectively. By Proposition 25.4.2, it is known that \(\bar{x}\) is a basic optimal solution to (7.13). □ 

Now we will find a new dual solution to improve the objective value. To this end, define the search direction

$$\displaystyle\begin{array}{rcl} h = -\mathrm{sign}(\rho _{p}){B}^{-\mathrm{T}}e_{ p},\quad \sigma _{j_{p}}& =& \mathrm{sign}(\rho _{p}),{}\end{array}$$
(7.55)
$$\displaystyle\begin{array}{rcl} \sigma _{N}& =& -{N}^{\mathrm{T}}h.{}\end{array}$$
(7.56)

Lemma 7.6.3.

Under the preceding definition, the search direction satisfies the following conditions:

$$\displaystyle\begin{array}{rcl} & & {B}^{\mathrm{T}}h +\sigma _{ j_{p}}e_{p} = 0,\quad {N}^{\mathrm{T}}h +\sigma _{ N} = 0,\ {}\end{array}$$
(7.57)
$$\displaystyle\begin{array}{rcl} & & {(b - N\bar{x}_{N})}^{\mathrm{T}}h + h_{ p}\sigma _{j_{p}} > 0.{}\end{array}$$
(7.58)

Proof.

The first half is easily verified; it only remains to show (7.58).

From (7.55) and the second expression of (7.38), it follows that

$$\displaystyle\begin{array}{rcl}{ h}^{\mathrm{T}}(b - N\bar{x}_{ N}) + h_{p}\sigma _{j_{p}}& =& -\mathrm{sign}(\rho _{p})e_{p}^{\mathrm{T}}({B}^{-1}b - {B}^{-1}N\bar{x}_{ N}) + \mathrm{sign}(\rho _{p})h_{p} {}\\ & =& -\mathrm{sign}(\rho _{p})(e_{p}^{\mathrm{T}}\bar{x}_{ B} - h_{p}). {}\\ \end{array}$$

Then from (7.14) and (7.40) it follows that the right-hand side of the preceding equals

$$\displaystyle{l_{j_{p}} -\bar{ x}_{j_{p}} > 0,}$$

when ρ p  > 0, and equals

$$\displaystyle{\bar{x}_{j_{p}} - u_{j_{p}} > 0.}$$

when ρ p  < 0. □ 

Consider the following line search scheme:

$$\displaystyle\begin{array}{rcl} \hat{y} =\bar{ y} +\beta h,\quad & & \hat{z}_{j_{p}} =\bar{ z}_{j_{p}} +\beta \sigma _{j_{p}} = \mathrm{sign}(\rho _{p})\beta,{}\end{array}$$
(7.59)
$$\displaystyle\begin{array}{rcl} & & \hat{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N}.{}\end{array}$$
(7.60)

Introduce index set

$$\displaystyle{ J =\{ j \in \Gamma \ \vert \ \sigma _{j} < 0\} \cup \{ j \in \Pi \ \vert \ \sigma _{j} > 0\}. }$$
(7.61)

Assume that J ≠ ∅. Then from (7.60) and the sign conditions \(\hat{z}_{\Pi } \leq 0\) and \(\hat{z}_{\Gamma } \geq 0\), it is known that the largest possible stepsize β and pivot column index q satisfy the minimum-ratio test

$$\displaystyle{ \beta = -\bar{z}_{q}/\sigma _{q} =\min _{j\in J} -\bar{ z}_{j}/\sigma _{j} \geq 0. }$$
(7.62)

If all components of \(\bar{z}_{N}\) are nonzero, then the solution is dual nondegenerate, hence the determined stepsize is positive.
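In a sketch, the index set (7.61) and the minimum-ratio test (7.62) may be coded as follows (a minimal Python illustration with hypothetical names: `z_bar` and `sigma` hold the nonbasic components of \(\bar{z}\) and σ, and `at_lower[j]` distinguishes j ∈ Γ from j ∈ Π):

```python
# A sketch of the index set J of (7.61) and the minimum-ratio test (7.62).
# Names are illustrative: z_bar and sigma hold the nonbasic components of
# the dual solution and the search direction; at_lower[j] is True for
# j in Gamma (nonbasic at lower bound) and False for j in Pi (upper bound).

def ratio_test(z_bar, sigma, at_lower):
    """Return (beta, q): the largest dual stepsize and the pivot column
    (a position within the nonbasic set), or (None, None) if J is empty."""
    J = [j for j in range(len(sigma))
         if (at_lower[j] and sigma[j] < 0) or (not at_lower[j] and sigma[j] > 0)]
    if not J:                       # J empty: dual unbounded, primal infeasible
        return None, None
    q = min(J, key=lambda j: -z_bar[j] / sigma[j])
    return -z_bar[q] / sigma[q], q

# Iteration 1 of Example 7.6.1: z_bar = (1, 2, -2), sigma = (-1, -1, 1),
# x_1, x_2 nonbasic at lower bounds and x_3 at its upper bound.
beta, q = ratio_test([1.0, 2.0, -2.0], [-1.0, -1.0, 1.0], [True, True, False])
# beta = 1, q = 0 (i.e., the first nonbasic variable), as in the example.
```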

Lemma 7.6.4.

If J ≠ ∅, the new solution, determined by (7.59) and (7.60) together with (7.62), is a basic feasible solution to the local dual problem. The corresponding objective value does not decrease, and strictly increases if dual nondegeneracy is assumed.

Proof.

From (7.59), the first expression of (7.57) and (7.43), it is known for any β ≥ 0 that

$$\displaystyle{ {B}^{\mathrm{T}}\hat{y} +\hat{ z}_{ j_{p}}e_{p} = {B}^{\mathrm{T}}\bar{y} +\beta ({B}^{\mathrm{T}}h +\sigma _{ j_{p}}e_{p}) = {B}^{\mathrm{T}}\bar{y} = c_{ B}. }$$
(7.63)

From the first expression of (7.59), (7.60), the second expression of (7.57) and (7.44), it follows that

$$\displaystyle{ {N}^{\mathrm{T}}\hat{y} +\hat{ z}_{ N} = {N}^{\mathrm{T}}\bar{y} +\beta {N}^{\mathrm{T}}h +\bar{ z}_{ N} +\beta \sigma _{N} = ({N}^{\mathrm{T}}\bar{y} +\bar{ z}_{ N}) +\beta ({N}^{\mathrm{T}}h +\sigma _{ N}) = c_{N}. }$$
(7.64)

In addition, by (7.59), (7.60) and (7.62), it is known that \(\hat{z}_{j_{p}}\) and \(\hat{z}_{N}\) satisfy the sign conditions at the bottom of problem (7.42); hence the new solution is a basic feasible solution, associated with the objective value increasing to

$$\displaystyle\begin{array}{rcl} \hat{g}& =& {(b - N\bar{x}_{N})}^{\mathrm{T}}\hat{y} + h_{ p}\hat{z}_{j_{p}} \\ & =& {(b - N\bar{x}_{N})}^{\mathrm{T}}\bar{y} +\beta ({(b - N\bar{x}_{ N})}^{\mathrm{T}}h + h_{ p}\sigma _{j_{p}}) \\ & \geq & \bar{g}, {}\end{array}$$
(7.65)

where the inequality comes from (7.58) and β ≥ 0. In the dual nondegenerate case, β > 0, and hence the strict inequality holds, which implies a strict increase of the objective value. □ 

When J is empty, the minimum-ratio test (7.62) is not well-defined, but the following is valid.

Lemma 7.6.5.

If J = ∅, then the original problem (7.13) is infeasible.

Proof.

J = ∅ implies that

$$\displaystyle{ \sigma _{\Pi } \leq 0,\quad \sigma _{\Gamma } \geq 0. }$$
(7.66)

Combining the preceding two expressions together with \(\bar{z}_{\Pi } \leq 0\) and \(\bar{z}_{\Gamma } \geq 0\) leads to

$$\displaystyle{\hat{z}_{\Pi } =\bar{ z}_{\Pi } +\beta \sigma _{\Pi } \leq 0,\quad \hat{z}_{\Gamma } =\bar{ z}_{\Gamma } +\beta \sigma _{\Gamma } \geq 0,\quad \forall \ \beta > 0.}$$

Similarly to the proof of Lemma 7.6.4, it can be shown that \((\hat{y},\hat{z})\) satisfies the other constraints, with the objective value given again by (7.65). Thus, noting (7.58), it is known that

$$\displaystyle{\hat{g} \rightarrow \infty \qquad \mathrm{as}\quad \beta \rightarrow \infty.}$$

Therefore, the local dual problem is unbounded. By Proposition 25.4.2, the original problem is infeasible. □ 

Now we need to determine a primal solution that is complementary with the dual solution, based on the local problem (7.39). For the value of the basic variable \(x_{j_{p}}\) to change from \(\bar{x}_{j_{p}}\) to the violated bound, it is necessary to let the value of the nonbasic variable x q change from \(\bar{x}_{q}\) accordingly by the amount

$$\displaystyle{ \Delta x_{q} = \left \{\begin{array}{ll} \vert \rho _{p}\vert /\vert \sigma _{q}\vert,&\mathrm{if}\ \ \bar{x}_{q} = l_{q}, \\ -\vert \rho _{p}\vert /\vert \sigma _{q}\vert,&\mathrm{if}\ \ \bar{x}_{q} = u_{q}.\\ \end{array} \right. }$$
(7.67)

Therefore, the new values are

$$\displaystyle{ \hat{x}_{B} =\bar{ x}_{B} - \Delta x_{q}\bar{a}_{q},\quad \hat{x}_{q} =\bar{ x}_{q} + \Delta x_{q},\quad \hat{x}_{j} =\bar{ x}_{j},\ j \in N,\ j\neq q, }$$
(7.68)

where \(\bar{a}_{q} = {B}^{-1}a_{q}\), associated with the new objective value

$$\displaystyle{ \hat{f} =\bar{ f} + \vert \Delta x_{q}\bar{z}_{q}\vert \geq \bar{ f}. }$$
(7.69)

Note that all components of \(\hat{x}_{N}\) are the same as those of \(\bar{x}_{N}\), except for \(\hat{x}_{q}\). From the first expression of (7.68) and the second expression of (7.59), it is known that if ρ p  > 0, then \(\hat{x}_{j_{p}} = l_{j_{p}}\) and \(\hat{z}_{j_{p}} \geq 0\) hold, while if ρ p  < 0, then \(\hat{x}_{j_{p}} = u_{j_{p}}\) and \(\hat{z}_{j_{p}} \leq 0\) hold. Therefore, after updating the basis and nonbasis by exchanging j p and q, \(\hat{x}\) and \((\hat{y},\hat{z})\) exhibit complementarity, and the latter satisfies the corresponding dual feasibility conditions, so that we are ready to carry out the next iteration.
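The primal update above can be sketched in a few lines of Python (hypothetical names; the step along x q is computed here as \(-\vert \rho _{p}\vert /\sigma _{q}\), which is positive for q ∈ Γ since then σ q < 0, negative for q ∈ Π, and reproduces the values of Example 7.6.1 below):

```python
# Primal update of one generalized dual simplex step: move the entering
# nonbasic variable x_q by delta_x = -|rho_p|/sigma_q so that the leaving
# basic variable reaches its violated bound, then update x_B as in (7.68).
# Plain lists and illustrative names; a_bar_q stands for B^{-1} a_q.

def primal_update(x_B, x_q, rho_p, sigma_q, a_bar_q):
    delta_x = -abs(rho_p) / sigma_q                 # step along x_q
    new_x_B = [xb - delta_x * ab for xb, ab in zip(x_B, a_bar_q)]
    return new_x_B, x_q + delta_x

# Iteration 1 of Example 7.6.1: rho_2 = 1, sigma_1 = -1,
# B^{-1}a_1 = (-2, -1, 1), x_B = (4, -1, -3), x_1 = 1 at its lower bound.
x_B, x1 = primal_update([4.0, -1.0, -3.0], 1.0, 1.0, -1.0, [-2.0, -1.0, 1.0])
# x_B becomes (6, 0, -4) and x_1 becomes 2, matching the worked example.
```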

The overall steps are summarized into the following algorithm, a revision of Algorithm 7.5.1.

Algorithm 7.6.1 (Generalized dual simplex algorithm).

Initial: (B, N), B −1; \(\bar{y},\bar{z},\bar{x}\) satisfying (7.38), (7.43) and (7.44). This algorithm solves the bounded-variable problem (7.13).

  1. 1.

    Select a pivot row index \(p \in \arg \max \{\vert \rho _{i}\vert \ \vert \ i = 1,\cdots \,,m\}\), where ρ i is defined by (7.30).

  2. 2.

    If ρ p  = 0, compute \(\bar{f} = {c}^{\mathrm{T}}\bar{x}\), and stop.

  3. 3.

    Compute \(\sigma _{N} = -{N}^{\mathrm{T}}h\), where \(h = -\mathrm{sign}(\rho _{p}){B}^{-\mathrm{T}}e_{p}\).

  4. 4.

    Stop if J defined by (7.61) is empty.

  5. 5.

    Determine β and pivot column index q by (7.62).

  6. 6.

    Compute \(\Delta x_{q}\) by (7.67).

  7. 7.

    Compute \(\bar{a}_{q} = {B}^{-1}a_{q}\).

  8. 8.

    Update \(\bar{x}\) by (7.68).

  9. 9.

    Update \(\bar{y},\,\bar{z}_{N},\,\bar{z}_{j_{p}}\) by (7.59) and (7.60).

  10. 10.

    Update B −1 by (3.23).

  11. 11.

    Update (B, N) by exchanging j p and q.

  12. 12.

    Go to step 1.

Theorem 7.6.1.

Algorithm  7.6.1 generates sequences of primal and dual basic solutions. Assuming dual nondegeneracy, it terminates either at

  1. (i)

    Step 2, giving a pair of primal and dual basic optimal solutions; or at

  2. (ii)

    Step 4, detecting infeasibility of the problem.

Proof.

The validity comes from Lemmas 7.6.2, 7.6.4 and 7.6.5, and the related discussion preceding Algorithm 7.6.1. □ 

Example 7.6.1.

Solve the following problem by Algorithm 7.6.1:

$$\displaystyle{\begin{array}{l@{\quad }ll} \min \quad &f = x_{1} + 2x_{2} - 2x_{3}, \\ \mathrm{s.t.}\quad & - 2x_{1}\, +\, x_{2}\ +\,\,\,\, x_{3} + x_{4}\qquad \ \qquad = 0, \\ \quad &\,\,\, - x_{1}\, -\, x_{2}\ +\,\,\,\ x_{3}\qquad + x_{5}\ \qquad = 0, \\ \quad &\quad \;x_{1}\, -\, x_{2}\ -\,\, 2x_{3}\qquad \qquad + x_{6}\ = 0, \\ \quad &1 \leq x_{1} \leq 5,\;\;\;-2 \leq x_{2} \leq \infty,\quad - 3 \leq x_{3} \leq 0, \\ \quad &2 \leq x_{4} \leq 5,\;\;\;\;\;\,0 \leq x_{5} \leq 6,\;\;\quad - 3 \leq x_{6} \leq 0.\end{array} }$$

Answer Initial:\(B =\{ 4,5,6\},\ N =\{ 1,2,3\},\ {B}^{-1} = I,\ \bar{x}_{N} = {(1_{(-)},-2_{(-)},{0}^{(+)})}^{\mathrm{T}}\), \(\bar{x}_{B} = {(4,-1,-3)}^{\mathrm{T}},\ \bar{y} = (0,0,0),\ \bar{z}_{N} = {(1,2,-2)}^{\mathrm{T}},\ \bar{f} = -3\).

Iteration 1:

  1. 1.

    \(\max \{0,\vert 0 - (-1)\vert,0\} = 1,p = 2\), x 5 leaves the basis.

  2. 3.

    \(\;\,h = -\mathrm{sign}(\rho _{2}){B}^{-\mathrm{T}}e_{2} = {(0,-1,0)}^{\mathrm{T}},\quad \sigma _{N} = -{N}^{\mathrm{T}}h = {(-1,-1,1)}^{\mathrm{T}}.\qquad \quad \;\,\)

  3. 4.

  J = { 1, 2, 3} ≠ ∅. 

  4. 5.

    \(\;\,\beta =\min \{ -1/(-1),-2/(-1),-(-2)/1\} = 1,\ q = 1.\)

  5. 6.

    \(\;\,\!\!\!\!\!\!\!\!\Delta x_{1} =\rho _{2}/\vert \sigma _{1}\vert = 1.\)

  6. 7.

    \(\;\,\bar{a}_{1} = {B}^{-1}a_{1} = {(-2,-1,1)}^{\mathrm{T}}.\)

  7. 8.

    \(\;\,\bar{x}_{B} =\bar{ x}_{B} - \Delta x_{1}\bar{a}_{1} = {(4,-1,-3)}^{\mathrm{T}} - {(-2,-1,1)}^{\mathrm{T}} = {(6,0,-4)}^{\mathrm{T}}.\)

    \(\bar{x}_{N} =\bar{ x}_{N} + \Delta x_{1}e_{1} = {(1,-2,0)}^{\mathrm{T}} + {(1,0,0)}^{\mathrm{T}} = {(2,-2,0)}^{\mathrm{T}}.\)

  8. 9.

    \(\;\,\bar{y} =\bar{ y} +\beta h = {(0,0,0)}^{\mathrm{T}} + 1 \times {(0,-1,0)}^{\mathrm{T}} = {(0,-1,0)}^{\mathrm{T}},\)

    \(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(1,2,-2)}^{\mathrm{T}} + 1{(-1,-1,1)}^{\mathrm{T}} = {(0,1,-1)}^{\mathrm{T}},\)

    \(\bar{z}_{5} = \mathrm{sign}(\rho _{2})\beta = 1.\)

  9. 10.

    Update \({B}^{-1} = \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 2\;\;\;& \\ \;\;\; & - 1\;\;\; & \\ \;\;\;& 1\;\;\;&1\\ \;\;\;\end{array} \right )\).

  10. 11.

    \(\bar{x}_{B} = {(6,2,-4)}^{\mathrm{T}},\ B =\{ 4,1,6\}\); \(\bar{x}_{N} = {(0_{(-)},-2_{(-)},{0}^{(+)})}^{\mathrm{T}},\bar{z}_{N} = {(1,1,-1)}^{\mathrm{T}},\)

    \(N\;\hspace{-0.4pt} =\{ 5,2,3\}\).

Iteration 2:

  1. 1.

    \(\max \{\vert 5 - 6\vert,0,(-3) - (-4)\} = 1,p = 1\), x 4 leaves the basis.

  2. 3.

    \(\;\,h = -\mathrm{sign}(\rho _{1}){B}^{-\mathrm{T}}e_{1} = {(1,-2,0)}^{\mathrm{T}},\sigma _{N} = -{N}^{\mathrm{T}}h = {(2,-3,1)}^{\mathrm{T}}.\)

  3. 4.

    \(\;\,J =\{ 2,3\}\neq \varnothing.\)

  4. 5.

    \(\;\,\beta =\min \{ -1/(-3),-(-1)/1\} = 1/3,\ q = 2.\)

  5. 6.

    \(\;\,\!\!\!\!\!\!\!\!\Delta x_{2} = -\rho _{1}/\vert \sigma _{2}\vert = 1/3.\)

  6. 7.

    \(\;\,\bar{a}_{2} = {B}^{-1}a_{2} = {(3,1,-2)}^{\mathrm{T}}.\)

  7. 8.

    \(\;\,\bar{x}_{B} = {(6,2,-4)}^{\mathrm{T}} - (1/3){(3,1,-2)}^{\mathrm{T}} = {(5,5/3,-10/3)}^{\mathrm{T}},\)

    \(\bar{x}_{N} =\bar{ x}_{N} + \Delta x_{2}e_{2} = {(0,-2,0)}^{\mathrm{T}} + {(0,1/3,0)}^{\mathrm{T}} = {(0,-5/3,0)}^{\mathrm{T}}.\)

  8. 9.

    \(\;\,\bar{y} =\bar{ y} +\beta h = {(0,-1,0)}^{\mathrm{T}} + (1/3){(1,-2,0)}^{\mathrm{T}} = {(1/3,-5/3,0)}^{\mathrm{T}},\)

    \(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(1,1,-1)}^{\mathrm{T}} + (1/3){(2,-3,1)}^{\mathrm{T}} = {(5/3,0,-2/3)}^{\mathrm{T}},\qquad \;\;\)

    \(\bar{z}_{4} = \mathrm{sign}(\rho _{1})\beta = -1/3.\)

  9. 10.

    Update \({B}^{-1} = \left (\begin{array}{r@{\;\;\;}c@{\;\;\;}c} 1/3\;\;\;& \;\;\;& \\ - 1/3\;\;\;&1\;\;\;& \\ 2/3\;\;\;& \;\;\;&1\\ \;\;\;\end{array} \right )\left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& - 2\;\;\;& \\ \;\;\; & - 1\;\;\; & \\ \;\;\;& 1\;\;\;&1\\ \;\;\;\end{array} \right ) = \left (\begin{array}{r@{\;\;\;}c@{\;\;\;}c} 1/3\;\;\;& - 2/3\;\;\;& \\ - 1/3\;\;\;& - 1/3\;\;\;& \\ 2/3\;\;\;& - 1/3\;\;\;&1\\ \;\;\;\end{array} \right )\).

  10. 11.

    B  = { 2, 1, 6}, N = { 5, 4, 3}, \(\bar{x}_{B} = {(-5/3,5/3,-10/3)}^{\mathrm{T}}\),

    \(\bar{x}_{N}\,\hspace{-2.6pt}= {(0_{(-)},{5}^{(+)},{0}^{(+)})}^{\mathrm{T}},\)

    \(\bar{z}_{N}\,\hspace{-1.1pt} = {(5/3,-1/3,-2/3)}^{\mathrm{T}}\).

Iteration 3:

  1. 1.

    \(\max \{0,0,\vert (-3) - (-10/3)\vert \} = 1/3,p = 3\), x 6 leaves the basis.

  2. 3.

    \(\;\,h = -\mathrm{sign}(\rho _{3}){B}^{-\mathrm{T}}e_{3} = {(-2/3,1/3,-1)}^{\mathrm{T}},\sigma _{N} = -{N}^{\mathrm{T}}h\)

    \(= {(-1/3,2/3,-5/3)}^{\mathrm{T}}.\)

  3. 4.

    \(\;\,J =\{ 1,2\}\neq \varnothing.\)

  4. 5.

    \(\;\,\beta =\min \{-(5/3)/(-1/3),-(-1/3)/(2/3)\} = 1/2,\ q = 2,\)

    \(\quad \;x_{4}\ \mathrm{enters\ the\ basis.}\)

  5. 6.

    \(\;\,\!\!\!\!\!\!\!\!\Delta x_{2} = -\vert \rho _{3}\vert /\vert \sigma _{2}\vert = -(1/3)/(2/3) = -1/2.\)

  6. 7.

    \(\;\,\bar{a}_{2} = {B}^{-1}a_{2} = {(1/3,-1/3,2/3)}^{\mathrm{T}}.\)

  7. 8.

    \(\;\,\bar{x}_{B} = {(-5/3,5/3,-10/3)}^{\mathrm{T}} - (-1/2){(1/3,-1/3,2/3)}^{\mathrm{T}}\! =\! {(-3/2,3/2,-3)}^{\mathrm{T}},\)

    \(\bar{x}_{N} =\bar{ x}_{N} + \Delta x_{2}e_{2} = {(0,5,0)}^{\mathrm{T}} + {(0,-1/2,0)}^{\mathrm{T}} = {(0,9/2,0)}^{\mathrm{T}}.\)

  8. 9.

    \(\;\,\bar{y} = {(1/3,-5/3,0)}^{\mathrm{T}} + (1/2){(-2/3,1/3,-1)}^{\mathrm{T}} = {(0,-3/2,-1/2)}^{\mathrm{T}},\)

    \(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(5/3,-1/3,-2/3)}^{\mathrm{T}} + (1/2){(-1/3,2/3,-5/3)}^{\mathrm{T}}\)

    \(= {(3/2,0,-3/2)}^{\mathrm{T}},\)

    \(\bar{z}_{6} = \mathrm{sign}(\rho _{3})\beta = 1/2.\)

  9. 10.

    Update \({B}^{-1}\! =\! \left (\begin{array}{c@{\;\;\;}c@{\;\;\;}r} 1\;\;\;& \;\;\;& - 1/2 \\ \;\;\;&1\;\;\;& 1/2 \\ \;\;\;& \;\;\;& - 3/2\\ \;\;\;\end{array} \right )\!\left (\begin{array}{r@{\;\;\;}c@{\;\;\;}c} 1/3\;\;\;& - 2/3\;\;\;& \\ - 1/3\;\;\;& - 1/3\;\;\;& \\ 2/3\;\;\;& - 1/3\;\;\;&1\\ \;\;\;\end{array} \right )\! =\! \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}r} \;\;\;& - 1/2\;\;\;& - 1/2 \\ \;\;\;& - 1/2\;\;\;& 1/2 \\ - 1\;\;\;& 1/2\;\;\;& - 3/2\\ \;\;\;\end{array} \right )\).

  10. 11.

    It is satisfied that \(l_{B} \leq \bar{ x}_{B} \leq u_{B}\). The optimal solution and value are

    $$\displaystyle{\bar{x} = {(3/2,-3/2,0,9/2,0,-3)}^{\mathrm{T}},\quad \bar{f} = 3/2 + 2(-3/2) = -3/2.}$$
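The reported optimum can be double-checked directly. The following self-contained sketch (plain Python, data transcribed from the problem statement) verifies that \(A\bar{x} = 0\), that all bounds hold, and that the objective value is − 3∕2:

```python
# Verify the final solution of Example 7.6.1: residuals of Ax = b (b = 0),
# bound feasibility, and the objective value f = c^T x.

A = [[-2,  1,  1, 1, 0, 0],
     [-1, -1,  1, 0, 1, 0],
     [ 1, -1, -2, 0, 0, 1]]
c = [1, 2, -2, 0, 0, 0]
lo = [1, -2, -3, 2, 0, -3]
up = [5, float("inf"), 0, 5, 6, 0]
x = [3/2, -3/2, 0, 9/2, 0, -3]

residuals = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
feasible = all(l <= xj <= u for l, xj, u in zip(lo, x, up))
f = sum(cj * xj for cj, xj in zip(c, x))
# residuals = [0, 0, 0], feasible = True, f = -1.5
```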

7 Bound Flipping

The so-called “bound-flipping” technique can improve the performance of the generalized dual simplex method significantly. In fact, it may be the main reason why the dual simplex method currently outperforms its primal counterpart (Kirillova et al. 1979; Koberstein and Suhl 2007; Kostina 2002; Maros 2003a).

Let \((\bar{y},\bar{z})\) be the current dual basic feasible solution and let \(\bar{x}\) be the associated primal solution. Assume that a row index p has been determined by (7.31) and a column index q by the minimum-ratio test (7.62). Let the nonbasic variable x q change from its current value \(\bar{x}_{q}\) (going up or down) toward its other bound, while the other nonbasic variables remain unchanged. For the basic variable \(x_{j_{p}}\) to attain the violated bound, the required value of x q may fall either within the range between its lower and upper bounds, or beyond the other bound. In the latter case, it is favorable to apply “bound-flipping”: fix x q at its other bound and update the values of the basic variables accordingly; then find a new column index q attaining the second smallest ratio, and repeat, until the value of x q required for \(x_{j_{p}}\) to attain the violated bound falls within the range between its lower and upper bounds. Then a normal dual step is taken: \(x_{j_{p}}\) is dropped from the basis, x q enters it, and the primal and dual solutions are updated. It is seen that dual feasibility is still maintained.

The bound-flipping technique is embedded in the following subalgorithm, which is called in step 10 of Algorithm 7.7.2.

Algorithm 7.7.1 (Bound-flipping subalgorithm).

This algorithm provides the pivot column index q and dual stepsize β, and carries out the related computations.

  1. 1.

    Set \(j = 0,v = 0\), and compute \(r_{j} = -\bar{z}_{j}/\sigma _{j},\quad \forall j \in J\).

  2. 2.

    Set \(j = j + 1\).

  3. 3.

    Set \(v = v +\delta a_{q}\).

  4. 4.

    Set \(\bar{x}_{q} = \left \{\begin{array}{l@{\quad }l} u_{q},\quad &\mathrm{if}\ \ \bar{x}_{q} = l_{q}, \\ l_{q},\qquad \quad &\mathrm{if}\ \ \bar{x}_{q} = u_{q}.\\ \quad \end{array} \right.\)

  5. 5.

    Update: ρ p  = ρ p − | δ σ q  | .

  6. 6.

    Determine q and r q such that r q  = min j ∈ J r j .

  7. 7.

    Compute \(\Delta x_{q}\) by (7.67).

  8. 8.

    Update: J = J∖{q}.

  9. 9.

    Go to step 13 if J = ∅.

  10. 10.

    Compute \(\delta = \left \{\begin{array}{ll} u_{q} - l_{q},&\mathrm{if}\ \ \bar{x}_{q} = l_{q}, \\ l_{q} - u_{q},&\mathrm{if}\ \ \bar{x}_{q} = u_{q}.\\ \end{array} \right.\)

  11. 11.

    Go to step 2 if \(\vert \Delta x_{q}\vert \geq \vert \delta \vert \).

  12. 12.

    Set β = r q .

  13. 13.

    Compute \(u = {B}^{-1}v\), and update: \(\bar{x}_{B} =\bar{ x}_{B} - u\).

  14. 14.

    Return.
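The flipping loop can be sketched compactly as follows (Python; this is a condensed restatement of the subalgorithm, not a literal transcription, and `cand` is a hypothetical list of breakpoints (ratio, σ q, bound width u q − l q) for the candidates in J):

```python
# Bound-flipping pass: scan breakpoints in increasing ratio order; while the
# move needed for x_{j_p} to reach its violated bound overshoots the other
# bound of the candidate x_q, flip x_q there and shrink |rho_p| accordingly;
# stop at the first candidate whose move stays within its own bounds.

def bound_flip_pass(rho_p, cand):
    """cand: list of (ratio, sigma_q, width); returns (stop position in
    ratio order, dual stepsize beta, list of flipped positions)."""
    flips = []
    for k, (ratio, sigma_q, width) in enumerate(sorted(cand)):
        delta_x = abs(rho_p) / abs(sigma_q)     # |move| of x_q needed
        if delta_x < width:                     # normal dual step: stop here
            return k, ratio, flips
        flips.append(k)                         # flip x_q to its other bound
        rho_p -= width * abs(sigma_q) * (1 if rho_p > 0 else -1)
    return None, None, flips                    # every candidate flipped

# Iteration 1 of Example 7.7.1: rho_2 = 4; x_1 has ratio 1, |sigma| = 1,
# width u_1 - l_1 = 2; x_3 has ratio 2, |sigma| = 1, width 7.
stop, beta, flips = bound_flip_pass(4.0, [(1.0, 1.0, 2.0), (2.0, -1.0, 7.0)])
# x_1 is flipped (|move| 4 >= width 2), then x_3 gives beta = 2, as worked out.
```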

The following master algorithm is a slight modification of Algorithm 7.6.1.

Algorithm 7.7.2 (Generalized dual simplex algorithm: bound-flipping).

Initial: (B, N), B −1, \(\bar{y},\bar{z},\bar{x}\) satisfying (7.38), (7.43) and (7.44). This algorithm solves bounded-variable problem (7.13).

  1. 1.

    Select row index p by (7.31) together with (7.30).

  2. 2.

    Stop if ρ p  = 0 (optimality achieved).

  3. 3.

    Compute σ N by (7.56) together with (7.55).

  4. 4.

    Stop if J defined by (7.61) is empty (dual unbounded or primal infeasible).

  5. 5.

    Determine column index q by (7.62).

  6. 6.

    Compute \(\Delta x_{q}\) by (7.67).

  7. 7.

    Set J = J∖{q}.

  8. 8.

    If J = ∅, go to step 12.

  9. 9.

    Compute \(\delta = \left \{\begin{array}{ll} u_{q} - l_{q}, &\mathrm{if}\ \ \bar{x}_{q} = l_{q}, \\ l_{q} - u_{q},\qquad &\mathrm{if}\ \ \bar{x}_{q} = u_{q}.\\ \end{array} \right.\)

  10. 10.

    If \(\vert \Delta x_{q}\vert \geq \vert \delta \vert \), call Algorithm 7.7.1.

  11. 11.

    Compute \(\bar{a}_{q} = {B}^{-1}a_{q}\).

  12. 12.

    Update \(\bar{x}\) by (7.68).

  13. 13.

    Update \(\bar{y},\,\bar{z}_{N},\,\bar{z}_{j_{p}}\) by (7.59) together with (7.60).

  14. 14.

    Update B −1 by (3.23).

  15. 15.

    Update (B, N) by exchanging j p and q.

  16. 16.

    Go to step 1.

Bound-flipping increases the computational work per iteration; in particular, it involves an additional linear system (in step 13 of Algorithm 7.7.1). This cost is negligible, however, compared with the return. Since the associated dual stepsize is usually much larger than without bound-flipping, so is the increment in the objective value, especially when ρ p is large. As a result, the number of iterations is usually decreased significantly. In fact, bound-flipping has become indispensable in current dual simplex codes.

Example 7.7.1.

Solve the following problem by Algorithm 7.7.2:

$$\displaystyle{\begin{array}{l@{\;\;}rrrrrrrrrrrrrrr} \min \;\;&\multicolumn{15}{l}{f = -x_{1} + 2x_{3} + 3x_{4},} \\ \mathrm{s.t.}\;\;&-&2x_{1} & +&x_{2} & +& x_{3} & +& x_{4} & & & & & =& - 2, \\ \;\;& & x_{1} & & & -& x_{3} & +& x_{4} & +&x_{5} & & & =& 1, \\ \;\;& & x_{1} & & & -&2x_{3} & -&3x_{4} & & & & + x_{6} & =& 0, \\ \;\;&\multicolumn{15}{l}{0 \leq x_{1} \leq 2,\;\;\;-6 \leq x_{2} \leq 10,\quad \,0 \leq x_{3} \leq 7,} \\ \;\;&\multicolumn{15}{l}{1 \leq x_{4} \leq 5,\;\;\;\;\;\;2 \leq x_{5} \leq 6,\quad - 1 \leq x_{6} \leq 6.}\end{array} }$$

Answer Initial: \(B =\{ 2,5,6\},\ N =\{ 1,3,4\},\,{B}^{-1} = I,\ \bar{x}_{N} = {({2}^{(+)},0_{(-)},1_{(-)})}^{\mathrm{T}}\), \(\bar{x}_{B} = {(1,-2,1)}^{\mathrm{T}},\ \bar{y} = (0,0,0),\ \bar{z}_{N} = {(-1,2,3)}^{\mathrm{T}},\ \bar{f} = 1\).

Iteration 1:

  1. 1.

    \(\max \{0,2 - (-2),0\} = 4,p = 2,\ x_{5}\) leaves the basis.

  2. 3.

    \(\;\,h = -\mathrm{sign}(\rho _{2}){B}^{-\mathrm{T}}e_{2} = {(0,-1,0)}^{\mathrm{T}},\ \sigma _{N} = -{N}^{\mathrm{T}}h = {(1,-1,1)}^{\mathrm{T}}.\qquad \qquad \;\;\)

  3. 4.

      J = { 1, 2}. 

  4. 5.

    \(\;\,\beta =\min \{ -(-1)/1,-2/ - 1\} = 1,\ q = 1.\)

  5. 6.

    \(\;\,\Delta x_{1} = -\vert \rho _{2}\vert /\vert \sigma _{1}\vert = -4/1 = -4.\)

  6. 7.

    \(\;\,J = J\setminus \{1\} =\{ 2\}\neq \varnothing.\)

  7. 9.

    \(\;\,\delta = l_{1} - u_{1} = 0 - 2 = -2.\)

  8. 10.

    \(\vert \Delta x_{1}\vert > \vert \delta \vert \), so call Algorithm 7.7.1.

    1. (1)

      \(\;\,j = 0,v = 0,\,r_{2} = -2/ - 1 = 2;\)

    2. (2)

      \(\;\,j = j + 1 = 1;\)

    3. (3)

      \(\;\,v = v +\delta a_{1} = (-2){(-2,1,1)}^{\mathrm{T}} = {(4,-2,-2)}^{\mathrm{T}};\)

    4. (4)

      \(\;\,\bar{x}_{1} = l_{1} = 0;\)

    5. (5)

      \(\;\,\rho _{2} =\rho _{2} -\vert \delta \sigma _{1}\vert = 4 - 2 \times 1 = 2;\)

    6. (6)

        q = 2; 

    7. (7)

      \(\;\,\Delta x_{2} = \vert \rho _{2}\vert /\vert \sigma _{2}\vert = 2/1 = 2;\)

    8. (8)

      \(\;\,J = J\setminus \{2\} = \varnothing;\)

    9. (12)

      \(\;\,\beta = r_{2} = 2;\)

    10. (13)

      \(\;\,u = {B}^{-1}v = {(4,-2,-2)}^{\mathrm{T}};\)

      \(\bar{x}_{B} =\bar{ x}_{B} - u = {(1,-2,1)}^{\mathrm{T}} - {(4,-2,-2)}^{\mathrm{T}} = {(-3,0,3)}^{\mathrm{T}};\)

    11. (14)

      Return.

  9. 11.

    \(\;\,\bar{a}_{2} = {B}^{-1}a_{2} = {(1,-1,-2)}^{\mathrm{T}}.\)

  10. 12.

    \(\;\,\bar{x}_{B} = {(-3,0,3)}^{\mathrm{T}} - 2{(1,-1,-2)}^{\mathrm{T}} = {(-5,2,7)}^{\mathrm{T}},\)

    \(\bar{x}_{N} = {(0,0,1)}^{\mathrm{T}} + {(0,2,0)}^{\mathrm{T}} = {(0,2,1)}^{\mathrm{T}}.\)

  11. 13.

    \(\;\,\bar{y} = {(0,0,0)}^{\mathrm{T}} + 2{(0,-1,0)}^{\mathrm{T}} = {(0,-2,0)}^{\mathrm{T}},\)

    \(\bar{z}_{N} =\bar{ z}_{N} +\beta \sigma _{N} = {(-1,2,3)}^{\mathrm{T}} + 2{(1,-1,1)}^{\mathrm{T}} = {(1,0,5)}^{\mathrm{T}},\)

    \(\bar{z}_{j_{2}} = \mathrm{sign}(\rho _{2})\beta = 2.\)

  12. 14.

    \(\;\,{B}^{-1} = \left (\begin{array}{c@{\;\;\;}r@{\;\;\;}c} 1\;\;\;& 1\;\;\;& \\ \;\;\; & - 1\;\;\; & \\ \;\;\;& - 2\;\;\;&1\\ \;\;\;\end{array} \right ).\)

  13. 16.

    \(\;\,\bar{x}_{B} = {(-5,2,7)}^{\mathrm{T}},\,\bar{x}_{N} = {(0_{(-)},2_{(-)},1_{(-)})}^{\mathrm{T}},\bar{z}_{N} = {(1,2,5)}^{\mathrm{T}},\)

    \(B =\{ 2,3,6\},\ N =\{ 1,5,4\}.\)

Iteration 2:

  1. 1.

    \(\max \{0,0,7 - 6\} = 1,p = 3\), x 6 leaves the basis.

  2. 3.

    \(h\, = -\mathrm{sign}(\rho _{3}){B}^{-\mathrm{T}}e_{3} = {(0,-2,1)}^{\mathrm{T}}\), \(\sigma _{N} = -{N}^{\mathrm{T}}h = {(1,2,5)}^{\mathrm{T}}\).

  3. 4.

    J = ∅, hence dual unbounded or primal infeasible.

 If bound-flipping had not been used, solving the preceding problem would have required many more iterations.