# Revised simplex method

In mathematical optimization, the **revised simplex method** is a variant of George Dantzig's simplex method for linear programming.

The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations.^{[1]}


## Problem formulation

For the rest of the discussion, it is assumed that a linear programming problem has been converted into the following standard form:

minimize **c**^{T}**x** subject to **Ax** = **b**, **x** ≥ **0**,

where **A** ∈ **R**^{m×n}. Without loss of generality, it is assumed that the constraint matrix **A** has full row rank and that the problem is feasible, i.e., there is at least one **x** ≥ **0** such that **Ax** = **b**. If **A** is rank-deficient, either there are redundant constraints, or the problem is infeasible. Both situations can be handled by a presolve step.
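The conversion to standard form can be sketched in a few lines of NumPy: starting from a hypothetical inequality-form problem (the data below are purely illustrative), one slack variable is appended per constraint row, which also guarantees full row rank.

```python
import numpy as np

# Hypothetical inequality-form problem: minimize c0^T x0 s.t. A0 x0 <= b, x0 >= 0.
A0 = np.array([[3.0, 2.0, 1.0],
               [2.0, 5.0, 3.0]])
b = np.array([10.0, 15.0])
c0 = np.array([-2.0, -3.0, -4.0])

# Append one slack variable per row: A = [A0 | I], c = [c0 | 0].
m, n0 = A0.shape
A = np.hstack([A0, np.eye(m)])
c = np.concatenate([c0, np.zeros(m)])

# A now has full row rank (the identity block guarantees it), and
# Ax = b, x >= 0 is equivalent to the original inequality system.
assert np.linalg.matrix_rank(A) == m
```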

## Algorithmic description

### Optimality conditions

For linear programming, the Karush–Kuhn–Tucker conditions are both necessary and sufficient for optimality. The KKT conditions of a linear programming problem in the standard form are

**Ax** = **b**,
**A**^{T}**λ** + **s** = **c**,
**x** ≥ **0**,
**s** ≥ **0**,
**x**^{T}**s** = 0,

where **λ** and **s** are the Lagrange multipliers associated with the constraints **Ax** = **b** and **x** ≥ **0**, respectively.^{[2]} The last condition, which is equivalent to *s*_{i}*x*_{i} = 0 for all 1 ≤ *i* ≤ *n*, is called the *complementary slackness condition*.
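These conditions translate directly into a numerical check. The sketch below (the function name, tolerance, and toy data are assumptions for illustration) tests a candidate point against all five conditions:

```python
import numpy as np

def kkt_satisfied(A, b, c, x, lam, s, tol=1e-9):
    """Check the LP KKT conditions: Ax = b, A^T lam + s = c,
    x >= 0, s >= 0, and complementary slackness x_i s_i = 0."""
    return (np.allclose(A @ x, b, atol=tol)
            and np.allclose(A.T @ lam + s, c, atol=tol)
            and bool(np.all(x >= -tol))
            and bool(np.all(s >= -tol))
            and np.allclose(x * s, 0.0, atol=tol))

# Toy problem: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x = np.array([1.0, 0.0])   # candidate vertex
lam = np.array([1.0])      # multiplier for Ax = b
s = c - A.T @ lam          # s = [0, 1]: nonnegative, and x_i s_i = 0
```

Here `kkt_satisfied(A, b, c, x, lam, s)` returns `True`, certifying that the vertex [1 0]^{T} is optimal for this toy problem.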

By what is sometimes known as the *fundamental theorem of linear programming*, a vertex **x** of the feasible polytope can be identified with a basis **B** of **A** chosen from the latter's columns.^{[a]} Since **A** has full rank, **B** is nonsingular. Without loss of generality, assume that **A** = [**B** **N**]. Then **x** is given by

**x** = [**x**_{B}; **x**_{N}] = [**B**^{−1}**b**; **0**],

where **x**_{B} ≥ **0**. Partition **c** and **s** accordingly into

**c** = [**c**_{B}; **c**_{N}], **s** = [**s**_{B}; **s**_{N}].

To satisfy the complementary slackness condition, let **s**_{B} = **0**. It follows that

**B**^{T}**λ** = **c**_{B},

which implies that

**λ** = (**B**^{T})^{−1}**c**_{B}, **s**_{N} = **c**_{N} − **N**^{T}**λ**.

If **s**_{N} ≥ **0** at this point, the KKT conditions are satisfied, and thus **x** is optimal.
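The quantities above can be computed directly from a list of basic column indices. The following sketch (function name and layout are assumptions; it mirrors the formulas **x**_{B} = **B**^{−1}**b**, **B**^{T}**λ** = **c**_{B}, and **s**_{N} = **c**_{N} − **N**^{T}**λ**) does exactly that:

```python
import numpy as np

def basis_solution(A, b, c, basis):
    """Compute x, lambda, and the nonbasic reduced costs s_N for a
    given list of basic column indices, following the formulas above."""
    m, n = A.shape
    nonbasis = [j for j in range(n) if j not in basis]
    B = A[:, basis]
    N = A[:, nonbasis]
    x = np.zeros(n)
    x[basis] = np.linalg.solve(B, b)        # x_B = B^{-1} b, x_N = 0
    lam = np.linalg.solve(B.T, c[basis])    # B^T lambda = c_B  (using s_B = 0)
    s_N = c[nonbasis] - N.T @ lam           # s_N = c_N - N^T lambda
    return x, lam, s_N
```

If every entry of the returned `s_N` is nonnegative, the basis is optimal; otherwise a pivot operation is needed.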

### Pivot operation

If the KKT conditions are violated, a *pivot operation* consisting of introducing a column of **N** into the basis at the expense of an existing column in **B** is performed. In the absence of degeneracy, a pivot operation always results in a strict decrease in **c**^{T}**x**. Therefore, if the problem is bounded, the revised simplex method must terminate at an optimal vertex after repeated pivot operations because there are only a finite number of vertices.^{[4]}

Select an index *m* < *q* ≤ *n* such that *s*_{q} < 0 as the *entering index*. The corresponding column of **A**, **A**_{q}, will be moved into the basis, and *x*_{q} will be allowed to increase from zero. It can be shown that

∂(**c**^{T}**x**)/∂*x*_{q} = *s*_{q},

i.e., every unit increase in *x*_{q} results in a decrease by −*s*_{q} in **c**^{T}**x**.^{[5]}

Meanwhile, **x**_{B} must be correspondingly decreased by Δ**x**_{B} = **B**^{−1}**A**_{q}*x*_{q} subject to **x**_{B} − Δ**x**_{B} ≥ **0**. Let **d** = **B**^{−1}**A**_{q}. If **d** ≤ **0**, no matter how much *x*_{q} is increased, **x**_{B} − Δ**x**_{B} will stay nonnegative. Hence, **c**^{T}**x** can be arbitrarily decreased, and thus the problem is unbounded. Otherwise, select an index *p* = argmin_{1≤*i*≤*m*} {*x*_{i}/*d*_{i} | *d*_{i} > 0} as the *leaving index*. This choice effectively increases *x*_{q} from zero until *x*_{p} is reduced to zero while maintaining feasibility. The pivot operation concludes with replacing **A**_{p} with **A**_{q} in the basis.
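The entering/leaving logic can be sketched as a single dense pivot step (the function name and tolerances are assumptions; a production code would update a factorization rather than re-solve from scratch):

```python
import numpy as np

def pivot_step(A, b, basis, q):
    """Bring column q into the basis using the ratio test described above.
    Returns the updated basis list, or None if the problem is unbounded."""
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)
    d = np.linalg.solve(B, A[:, q])        # d = B^{-1} A_q
    if np.all(d <= 1e-12):
        return None                        # x_q can grow without bound: unbounded
    # Ratio test: argmin of x_i / d_i over rows with d_i > 0.
    ratios = [x_B[i] / d[i] if d[i] > 1e-12 else np.inf for i in range(len(d))]
    new_basis = list(basis)
    new_basis[int(np.argmin(ratios))] = q  # leaving column replaced by A_q
    return new_basis
```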

## Numerical example

Consider a linear program where

**c** = [−2 −3 −4 0 0]^{T}, **A** = [3 2 1 1 0; 2 5 3 0 1], **b** = [10 15]^{T}.

Let

**B** = [**A**_{4} **A**_{5}], **N** = [**A**_{1} **A**_{2} **A**_{3}]

initially, which corresponds to a feasible vertex **x** = [0 0 0 10 15]^{T}. At this moment,

**λ** = [0 0]^{T}, **s**_{N} = [−2 −3 −4]^{T}.

Choose *q* = 3 as the entering index. Then **d** = [1 3]^{T}, which means a unit increase in *x*_{3} results in *x*_{4} and *x*_{5} being decreased by 1 and 3, respectively. Therefore, *x*_{3} is increased to 5, at which point *x*_{5} is reduced to zero, and *p* = 5 becomes the leaving index.

After the pivot operation,

**B** = [**A**_{3} **A**_{4}], **N** = [**A**_{1} **A**_{2} **A**_{5}].

Correspondingly,

**x** = [0 0 5 5 0]^{T}, **λ** = [0 −4/3]^{T}, **s**_{N} = [2/3 11/3 4/3]^{T}.

A positive **s**_{N} indicates that **x** is now optimal.
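Putting the pieces together, the whole iteration can be sketched as a short dense implementation (the function name, tolerances, and the most-negative-reduced-cost entering rule are assumptions here; the data below reproduce a small LP consistent with this section's example):

```python
import numpy as np

def revised_simplex(A, b, c, basis, max_iters=100):
    """Minimal dense revised simplex sketch (no degeneracy handling).
    `basis` must index a feasible starting basis."""
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iters):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)
        lam = np.linalg.solve(B.T, c[basis])        # B^T lambda = c_B
        nonbasis = [j for j in range(n) if j not in basis]
        s_N = c[nonbasis] - A[:, nonbasis].T @ lam  # reduced costs
        if np.all(s_N >= -1e-9):                    # KKT satisfied: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x
        q = nonbasis[int(np.argmin(s_N))]           # entering index
        d = np.linalg.solve(B, A[:, q])             # d = B^{-1} A_q
        if np.all(d <= 1e-12):
            raise ValueError("problem is unbounded")
        ratios = [x_B[i] / d[i] if d[i] > 1e-12 else np.inf for i in range(m)]
        basis[int(np.argmin(ratios))] = q           # ratio test picks the leaving index
    raise RuntimeError("iteration limit reached")

# Problem data consistent with the example in this section (slacks x4, x5 included):
A = np.array([[3.0, 2.0, 1.0, 1.0, 0.0],
              [2.0, 5.0, 3.0, 0.0, 1.0]])
b = np.array([10.0, 15.0])
c = np.array([-2.0, -3.0, -4.0, 0.0, 0.0])
x = revised_simplex(A, b, c, basis=[3, 4])  # start from the slack basis
```

Starting from the slack basis, a single pivot reaches the optimal vertex **x** = [0 0 5 5 0]^{T}.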

## Practical issues

### Degeneracy

Because the revised simplex method is mathematically equivalent to the simplex method, it also suffers from degeneracy, where a pivot operation does not result in a decrease in **c**^{T}**x**, and a chain of pivot operations causes the basis to cycle. A perturbation or lexicographic strategy can be used to prevent cycling and guarantee termination.^{[6]}

### Basis representation

Two types of linear systems involving **B** are present in the revised simplex method:

**Bz** = **y**,
**B**^{T}**z** = **y**.

Instead of refactorizing **B**, usually an LU factorization is directly updated after each pivot operation, for which purpose there exist several strategies such as the Forrest−Tomlin and Bartels−Golub methods. However, the amount of data representing the updates as well as numerical errors builds up over time and makes periodic refactorization necessary.^{[1]}^{[7]}
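The value of keeping a factorization around can be illustrated with SciPy's `lu_factor`/`lu_solve`: one LU factorization of **B** serves both system types, since the transposed system reuses the same factors. (The matrix below is illustrative, and this is plain refactorization-and-reuse, not an update scheme like Forrest−Tomlin.)

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

B = np.array([[1.0, 1.0],
              [3.0, 0.0]])      # illustrative basis matrix
lu, piv = lu_factor(B)          # factor once per refactorization cycle

y = np.array([10.0, 15.0])
z1 = lu_solve((lu, piv), y)             # solves B z = y   (e.g. x_B = B^{-1} b)
z2 = lu_solve((lu, piv), y, trans=1)    # solves B^T z = y (e.g. B^T lambda = c_B)
```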

## Notes and references

### Notes

- ^ The same theorem also states that the feasible polytope has at least one vertex and that there is at least one vertex which is optimal.^{[3]}

### References

1. ^ ^{a} ^{b} Morgan 1997, §2.
2. ^ Nocedal & Wright 2006, p. 358, Eq. 13.4.
3. ^ Nocedal & Wright 2006, p. 363, Theorem 13.2.
4. ^ Nocedal & Wright 2006, p. 370, Theorem 13.4.
5. ^ Nocedal & Wright 2006, p. 369, Eq. 13.24.
6. ^ Nocedal & Wright 2006, p. 381, §13.5.
7. ^ Nocedal & Wright 2006, p. 372, §13.4.

### Bibliography

- Morgan, S. S. (1997). *A Comparison of Simplex Method Algorithms* (MSc thesis). University of Florida. Archived from the original on 7 August 2011.
- Nocedal, J.; Wright, S. J. (2006). Mikosch, T. V.; Resnick, S. I.; Robinson, S. M. (eds.). *Numerical Optimization*. Springer Series in Operations Research and Financial Engineering (2nd ed.). New York, NY, USA: Springer. ISBN 978-0-387-30303-1.