84.1 Introduction

In plain English one can say that a linear optimization (LO) problem consists of optimizing, i.e., minimizing or maximizing, a linear function over a certain domain. The domain is given by a set of linear constraints. The constraints can be either equalities or inequalities.

The simplex method for linear programming problems was first proposed by Dantzig in 1947 (Dantzig 1948) and can be described as follows:

Suppose the given standard-form linear programming problem is

$$ \begin{gathered} \min\ s = \boldsymbol{c}\boldsymbol{x} \\ \left\{ \begin{array}{l} \boldsymbol{A}\boldsymbol{x} = \boldsymbol{b} \\ \boldsymbol{x} \ge 0 \end{array} \right. \end{gathered} $$

where \( \boldsymbol{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \), \( \boldsymbol{x} = \begin{pmatrix} x_{1} \\ \vdots \\ x_{n} \end{pmatrix} \), \( \boldsymbol{b} = \begin{pmatrix} b_{1} \\ \vdots \\ b_{m} \end{pmatrix} \),

$$ \boldsymbol{c} = \begin{pmatrix} \lambda_{1} & \cdots & \lambda_{n} \end{pmatrix} $$

The rank of \( A = (a_{ij})_{m \times n} \) is \( m \), with \( n \ge m \ge 1 \). The steps of the simplex method can be summarized as follows:

  • The first step: \( \boldsymbol{B} = (\boldsymbol{p}_{j_{1}}, \boldsymbol{p}_{j_{2}}, \ldots, \boldsymbol{p}_{j_{m}}) \) is a known feasible basis, with the corresponding canonical form and basic feasible solution \( \boldsymbol{x}_{\boldsymbol{B}}^{(0)} = \boldsymbol{B}^{-1}\boldsymbol{b} = (b_{10}, \ldots, b_{m0})^{T} \);

  • The second step: Check the testing numbers. If all testing numbers satisfy \( \lambda_{j} \le 0\ (j = 1, 2, \cdots, n) \), the corresponding basic feasible solution \( x^{(0)} \) is optimal and the process ends; otherwise go to the next step;

  • The third step: If some testing number \( \lambda_{r} > 0 \) and \( \boldsymbol{B}^{-1}\boldsymbol{p}_{r} = (b_{1r}, b_{2r}, \cdots, b_{mr})^{T} \le 0 \), the problem has no optimal solution (it is unbounded) and the process ends; otherwise go to the next step;

  • The fourth step: If some testing number \( \lambda_{r} > 0 \) and \( (b_{1r}, b_{2r}, \cdots, b_{mr})^{T} \) contains a positive component, make \( x_{r} \) the entering-basis variable (if several testing numbers are positive, choose the largest one to improve iterative efficiency; this is called the largest testing number method), and compute the minimum ratio \( \min \left\{ \left. \frac{b_{i0}}{b_{ir}} \right| b_{ir} > 0 \right\} = \frac{b_{s0}}{b_{sr}} \), which determines the leaving-basis variable \( x_{j_{s}} \) (if several ratios attain the minimum, choose the minimum-subscript variable as the leaving-basis variable). Substitute \( \boldsymbol{p}_{r} \) for \( \boldsymbol{p}_{j_{s}} \) to obtain the new basis \( \bar{\boldsymbol{B}} \), and then go to the next step;

  • The fifth step: Obtain the canonical form and the basic feasible solution \( x_{\bar{B}}^{(1)} = \bar{B}^{-1} b \) corresponding to the new basis \( \bar{\boldsymbol{B}} \) (in manual calculation this is done directly by elementary row transformations of the simplex tableau). Then substitute \( \bar{\boldsymbol{B}} \) for \( B \), substitute \( x_{\bar{B}}^{(1)} \) for \( x^{(0)} \), and return to the second step.
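The five steps above can be sketched in code. The following is a minimal Python sketch under our own assumptions (the function name, tolerances, and dense tableau representation are not from the paper), using the largest testing number rule for the entering variable and the minimum-ratio test for the leaving variable:

```python
import numpy as np

def simplex(A, b, c, basis, max_iters=100):
    """Tableau simplex sketch for min c x s.t. A x = b, x >= 0.

    `basis` is a list of column indices of a known feasible basis B.
    Returns (x, status) where status is "optimal", "unbounded",
    or "iteration limit".
    """
    m, n = A.shape
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    basis = list(basis)
    for _ in range(max_iters):
        Binv = np.linalg.inv(A[:, basis])
        T = Binv @ A                      # B^{-1} A, the current tableau body
        xB = Binv @ b                     # basic solution components b_{i0}
        lam = c[basis] @ T - c            # testing numbers c_B B^{-1} A - c
        lam[basis] = 0.0                  # basic columns: testing number is 0
        r = int(np.argmax(lam))           # largest testing number rule
        if lam[r] <= 1e-12:               # second step: all lambda_j <= 0
            x = np.zeros(n)
            x[basis] = xB
            return x, "optimal"
        col = T[:, r]
        if np.all(col <= 1e-12):          # third step: unbounded problem
            return None, "unbounded"
        # fourth step: minimum ratio test picks the leaving variable
        ratios = np.where(col > 1e-12,
                          xB / np.where(col > 1e-12, col, 1.0), np.inf)
        s = int(np.argmin(ratios))
        basis[s] = r                      # fifth step: pivot and repeat
    return None, "iteration limit"
```

For example, on \( \min -x_1 - 2x_2 \) subject to \( x_1 + x_2 + x_3 = 4 \), \( x_2 + x_4 = 2 \) with the slack basis \([2, 3]\), the sketch returns the optimal solution \( x = (2, 2, 0, 0) \).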

For non-degenerate linear programming problems, the largest testing number simplex method either reaches an optimal solution or detects that none exists after finitely many iterations. For degenerate problems, however, the method may fail because the basis can cycle. In 1951, A. J. Hoffman constructed the first example in which the iterations cycle; in 1955, E. M. L. Beale designed a simpler example exhibiting the same cycling problem (Beale 1955; Tang and Qin 2004; Zhang and Xu 1990).
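For reference, Beale's example is commonly quoted in the literature in the following standard form (we state the commonly cited version here; see Beale 1955 for the original), in which the largest testing number rule with the usual tie-breaking returns to the starting basis after six pivots:

$$ \begin{gathered} \min\ -\tfrac{3}{4} x_{4} + 150 x_{5} - \tfrac{1}{50} x_{6} + 6 x_{7} \\ \left\{ \begin{array}{l} x_{1} + \tfrac{1}{4} x_{4} - 60 x_{5} - \tfrac{1}{25} x_{6} + 9 x_{7} = 0 \\ x_{2} + \tfrac{1}{2} x_{4} - 90 x_{5} - \tfrac{1}{50} x_{6} + 3 x_{7} = 0 \\ x_{3} + x_{6} = 1 \\ x_{j} \ge 0 \quad (j = 1, 2, \cdots, 7) \end{array} \right. \end{gathered} $$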

To avoid infinite cycling, R. G. Bland proposed a new method in 1976 (Bland 1977). In the Bland method, cycling is avoided by abiding by the following two rules (Andersen et al. 1996; Nelder and Mead 1965; Lagarias et al. 1998; Bixby 1994; Herrera et al. 1993; Wright 1996; Han et al. 1994; Hapke and lowinski 1996; Zhang 1999; Terlaky 1985; Terlaky 2000; Terlaky and Zhang 1993; Wagner 1958; Ward and Wendell 1990; Wolfe 1963; Wright 1998; Elsner et al. 1991; Han 2000):

  • Rule 1: When there are several positive testing numbers, choose the minimum-subscript variable among them as the entering-basis variable;

  • Rule 2: When several ratios \( \frac{b_{i0}}{b_{ir}} \) in different rows attain the minimum at the same time, choose the corresponding minimum-subscript basic variable as the leaving-basis variable.
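As a sketch, the two rules can be written as pivot-selection helpers (a Python rendering under our own naming, not from Bland's paper: `lam` holds the testing numbers \( \lambda_j \), `xB` the basic solution components \( b_{i0} \), `col` the entering column \( B^{-1}p_r \), and `basis` the subscripts of the current basic variables):

```python
import numpy as np

def bland_entering(lam, tol=1e-12):
    # Rule 1: among columns with a positive testing number lambda_j,
    # choose the one with the smallest subscript j.
    candidates = np.flatnonzero(lam > tol)
    return int(candidates[0]) if candidates.size else None  # None: optimal

def bland_leaving(xB, col, basis, tol=1e-12):
    # Rule 2: among rows attaining the minimum ratio b_{i0} / b_{ir},
    # choose the row whose basic variable has the smallest subscript.
    rows = np.flatnonzero(col > tol)
    ratios = xB[rows] / col[rows]
    ties = rows[np.isclose(ratios, ratios.min())]
    return int(ties[np.argmin(np.asarray(basis)[ties])])
```

These helpers would simply replace the entering/leaving choices in a tableau simplex loop; the anti-cycling guarantee comes from always breaking ties toward the smallest subscript.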

Rule 2 determines the leaving-basis variable and is the same as the fourth step of the simplex method. However, the entering-basis variable is determined by Rule 1 rather than by the largest testing number method. The advantage of the Bland method is its simplicity. However, because it considers only the minimum subscript and not the rate of decrease of the objective function, it often requires many more iterations than the largest testing number method. In this paper, we first prove a theorem and then use it to propose an improved Bland method with much higher computational efficiency.

84.2 The Improvement of Bland’s Method

Theorem 1

Suppose the linear programming problem has an optimal solution, a degenerate basic feasible solution appears at some iterative step of the simplex method but is not optimal, and only one basic variable is zero in this degenerate basic feasible solution. Then this degenerate basic feasible solution will not appear again after this iterative step (even if the entering-basis variable is determined by the largest testing number method).

Proof

First, suppose that at this iterative step the corresponding basis is \( B = (p_{j_{1}}, p_{j_{2}}, \ldots, p_{j_{m}}) \), the corresponding basic feasible solution is \( x^{(0)} \), and the corresponding simplex tableau is \( T(B) = \left[ \begin{array}{cc} c_{B} B^{-1} b & c_{B} B^{-1} A - c \\ B^{-1} b & B^{-1} A \end{array} \right] \). The corresponding canonical form is

$$ \begin{gathered} \min\ s = s^{(0)} - \sum_{j \ne j_{1}, j_{2}, \cdots, j_{m}} \lambda_{j} x_{j} \\ \left\{ \begin{array}{ll} x_{j_{i}} + \sum\limits_{j \ne j_{1}, j_{2}, \cdots, j_{m}} b_{ij} x_{j} = b_{i0} & (i = 1, 2, \cdots, m) \\ x_{j} \ge 0 & (j = 1, 2, \cdots, n) \end{array} \right. \end{gathered} $$

Only one component of \( b_{i0}\ (i = 1, 2, \cdots, m) \) is zero; assume \( b_{s0} = 0 \) and \( b_{i0} > 0 \) for \( i \ne s \). After this iterative step, by hypothesis only one basic variable is zero, so as long as the leaving-basis variable is not located in row \( s \), the value of the objective function strictly decreases and the solution moves away from \( x^{(0)} \). Moreover, because the objective value never increases during iteration, \( x^{(0)} \) cannot appear again. Therefore, if the conclusion were false, only one case remains: in every subsequent iteration the leaving-basis variable is located in row \( s \), so the entering-basis variable of one iteration becomes the leaving-basis variable of a later one. Such variables all belong to the set \( \{ x_{j} \mid j = j_{1}, j_{2}, \cdots, j_{m} \} \cup \{ x_{j_{s}} \} \).

Because this set is finite, if cycling occurs there must be some variable \( x_{q} \) that leaves the basis and later enters it again. Suppose the corresponding simplex tableau is \( T(B_{t}) \) when \( x_{q} \) is the leaving-basis variable and \( x_{r} \) is the entering-basis variable in this tableau; then \( b_{sq}^{(t)} = 1 \), \( \lambda_{q}^{(t)} = 0 \), \( b_{sr}^{(t)} > 0 \), \( \lambda_{r}^{(t)} > 0 \).

Suppose the corresponding simplex tableau is \( T(B_{t+k}) \) when \( x_{q} \) is the entering-basis variable; then \( \lambda_{q}^{(t+k)} > 0 \) (because the tableau is still not optimal). \( T(B_{t}) \) becomes \( T(B_{t+k}) \) after iteration, and then

$$ b_{sq}^{(t + 1)} = \frac{{b_{sq}^{(t)} }}{{b_{sr}^{(t)} }} > 0,\;\lambda_{q}^{(t + 1)} = \lambda_{q}^{(t)} - \lambda_{r}^{(t)} b_{sq}^{(t + 1)} < \lambda_{q}^{(t)} = 0 $$

Continuing by analogy, \( b_{sq}^{(t+k)} > 0 \) and \( \lambda_{q}^{(t+k)} < 0 \), which contradicts \( \lambda_{q}^{(t+k)} > 0 \). So the conclusion is valid. The proof is complete.

When a degenerate case appears, Theorem 1 tells us that if only one basic variable is zero in the degenerate basic feasible solution, we can still use the largest testing number method and no cycling will occur. Therefore, we can modify Rule 1 of the Bland method to improve the efficiency of iteration:

Improved Rule 1: When there are several positive testing numbers, if at most one basic variable is zero in the corresponding basic feasible solution, determine the entering-basis variable by the largest testing number; if more than one basic variable is zero in the corresponding basic feasible solution, determine the entering-basis variable by Rule 1 of the Bland method.
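Improved Rule 1 can be sketched as a single selection function (a hypothetical Python rendering; the function name and the zero-test tolerance are our own assumptions):

```python
import numpy as np

def improved_entering(lam, xB, tol=1e-12):
    # Improved Rule 1: when at most one basic variable is zero, Theorem 1
    # permits the largest testing number rule; otherwise fall back to
    # Bland's minimum-subscript rule to rule out cycling.
    positive = np.flatnonzero(lam > tol)
    if positive.size == 0:
        return None  # all testing numbers <= 0: current solution is optimal
    if np.count_nonzero(np.abs(xB) < tol) <= 1:
        return int(positive[np.argmax(lam[positive])])  # largest testing number
    return int(positive[0])  # Bland's Rule 1: minimum subscript
```

The degeneracy check costs only one pass over the basic solution per iteration, so the hybrid rule keeps the fast pivoting of the largest testing number method except in the rare multiply-degenerate case.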

84.3 Conclusion

In summary, the largest testing number method has high iteration efficiency but suffers from the cycling problem, while the Bland method avoids cycling but has low iteration efficiency. To eliminate both disadvantages, we have proposed an improved method that provably prevents cycling while achieving higher computational efficiency.