Algorithms

A series of linear constraints on two variables produces a convex region of possible values for those variables. When this feasible region is bounded and nonempty, it is a convex polygon, and an optimal solution of the linear objective is attained at one of its vertices.
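
As a concrete illustration, the following is a minimal sketch of such a two-variable problem, assuming SciPy is available; the particular constraint values are chosen arbitrarily for illustration.

```python
# A minimal sketch, assuming SciPy is installed. The four constraints
# below bound a convex polygon, and the optimum of the linear objective
# is attained at one of its vertices.
#   maximize   x + y
#   subject to x + 2y <= 4,  3x + y <= 6,  x >= 0,  y >= 0
from scipy.optimize import linprog

# linprog minimizes, so the objective is negated to maximize x + y.
result = linprog(c=[-1, -1],
                 A_ub=[[1, 2], [3, 1]],
                 b_ub=[4, 6],
                 bounds=[(0, None), (0, None)])
print(result.x)  # -> [1.6 1.2], a vertex of the feasible polygon
```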

The simplex algorithm of Dantzig

The simplex algorithm, developed by George Dantzig, solves LP problems by constructing a feasible solution at a vertex of the polyhedron and then walking along a path on the edges of the polyhedron to vertices with non-decreasing values of the objective function until an optimum is reached. In many practical problems, “stalling” occurs: many pivots are made with no increase in the objective function.[2] In rare practical problems, the usual versions of the simplex algorithm may actually “cycle”.[3] To avoid cycles, researchers developed new pivoting rules.[4]
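
To make the vertex-to-vertex walk concrete, here is a minimal tableau implementation for problems of the form max c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so the origin is a feasible starting vertex. This is a sketch rather than production code: the function name simplex is illustrative, NumPy is assumed available, and Bland's smallest-index rule stands in for the anti-cycling pivot rules mentioned above.

```python
import numpy as np

def simplex(c, A, b):
    m, n = A.shape
    # Tableau [A | I | b] with the objective row [-c | 0 | 0] below it.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))    # the slack variables start basic
    while True:
        # Bland's rule: enter the lowest-index variable with negative cost.
        enter = next((j for j in range(n + m) if T[-1, j] < -1e-12), None)
        if enter is None:            # no improving direction: optimal vertex
            x = np.zeros(n + m)
            x[basis] = T[:m, -1]
            return x[:n], T[-1, -1]
        # Ratio test; ties broken by lowest basic-variable index (anti-cycling).
        rows = [i for i in range(m) if T[i, enter] > 1e-12]
        if not rows:
            raise ValueError("problem is unbounded")
        leave = min(rows, key=lambda i: (T[i, -1] / T[i, enter], basis[i]))
        # Pivot: this step moves along an edge to an adjacent vertex.
        T[leave] /= T[leave, enter]
        for i in range(m + 1):
            if i != leave:
                T[i] -= T[i, enter] * T[leave]
        basis[leave] = enter

# The small example from above: the optimum sits at the vertex (1.6, 1.2).
x, value = simplex(np.array([1.0, 1.0]),
                   np.array([[1.0, 2.0], [3.0, 1.0]]),
                   np.array([4.0, 6.0]))
print(x, value)   # -> [1.6 1.2] 2.8
```

Each pivot moves to an adjacent vertex with a non-decreasing objective value; Bland's rule guarantees termination, at the cost of sometimes making more pivots than greedier rules.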

In practice, the simplex algorithm is quite efficient and can be guaranteed to find the global optimum if certain precautions against cycling are taken. The simplex algorithm has been proved to solve “random” problems efficiently, i.e., in a cubic number of steps (Borgwardt, Todd), which is similar to its behavior on practical problems.[5]

However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size.[6] In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time (complexity class P).
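
The Klee–Minty construction can be written down directly. The sketch below generates one common formulation of the Klee–Minty cube; the function name klee_minty is illustrative, and the exponential pivot count applies to Dantzig's original most-negative-cost pivot rule.

```python
import numpy as np

def klee_minty(n):
    # One common formulation (1-based indices in these comments):
    #   maximize   sum_j 2**(n-j) * x_j
    #   subject to 2 * sum_{j<i} 2**(i-j) * x_j + x_i <= 5**i,  x >= 0.
    c = np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            A[i, j] = 2.0 ** (i - j + 1)
        A[i, i] = 1.0
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    return c, A, b

# The optimum is x = (0, ..., 0, 5**n) with value 5**n, but under the
# classic most-negative-cost pivot rule the simplex method visits all
# 2**n vertices of this deformed cube before reaching it.
```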

The ellipsoid algorithm, following Khachiyan

This long-standing issue was resolved by Leonid Khachiyan in 1979 with the introduction of the ellipsoid method, the first worst-case polynomial-time algorithm for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm uses O(n^4 L) pseudo-arithmetic operations on numbers with O(L) digits. Khachiyan’s algorithm and his convergence analysis have predecessors, notably iterative methods developed by Naum Z. Shor and by Arkadi Nemirovski and D. Yudin.
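
The core of the ellipsoid method is easy to state for the feasibility problem “find x with Ax ≤ b”. The following is a sketch of the standard central-cut update, assuming the feasible region has positive volume and lies inside a starting ball of radius R; the function name ellipsoid_feasibility, the tolerances, and the iteration budget are illustrative choices, not part of Khachiyan’s analysis.

```python
import numpy as np

def ellipsoid_feasibility(A, b, R=1e3, max_iter=10_000):
    n = A.shape[1]
    z = np.zeros(n)              # center of the current ellipsoid
    P = (R ** 2) * np.eye(n)     # ellipsoid {x : (x-z)' inv(P) (x-z) <= 1}
    for _ in range(max_iter):
        violated = np.flatnonzero(A @ z > b)
        if violated.size == 0:
            return z             # the center satisfies all constraints
        a = A[violated[0]]       # cut along the first violated constraint
        g = P @ a / np.sqrt(a @ P @ a)
        # Central-cut update: the new ellipsoid contains the half of the
        # old one on the feasible side of the cut, with smaller volume.
        z = z - g / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    return None                  # no feasible point found within the budget

# Example: the polygon from earlier plus x + y >= 2.5, so the starting
# center (the origin) is infeasible and cuts actually occur.
A = np.array([[1.0, 2.0], [3.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0, -2.5])
print(ellipsoid_feasibility(A, b))
```

Each cut shrinks the ellipsoid’s volume by a fixed factor depending only on n, which is what yields the polynomial bound on the number of iterations.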

Interior point methods, following Karmarkar

Khachiyan’s algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm had little practical impact, as the simplex method is more efficient for all but specially constructed families of linear programs. However, it inspired new lines of research in linear programming with the development of interior point methods, which can be implemented as practical tools. In contrast to the simplex algorithm, which finds an optimal solution by traversing points on the boundary of the polyhedral feasible set, interior point methods move through the interior of the feasible region.

In 1984, N. Karmarkar proposed a new interior point projective method for linear programming. Karmarkar’s algorithm not only improved on Khachiyan’s theoretical worst-case polynomial bound (giving O(n^3.5 L) instead of O(n^4 L)), but Karmarkar also claimed that his algorithm exhibited dramatic practical performance improvements over the simplex method, which created great interest in interior-point methods. Since then, many interior point methods have been proposed and analyzed. Early successful implementations were based on affine scaling variants of the method. For both theoretical and practical purposes, barrier function or path-following methods have been the most common in recent years.
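
To illustrate the path-following idea, the sketch below implements a basic primal log-barrier method for min c·x subject to Ax ≤ b. It is a simplified sketch rather than any specific published algorithm: the function name barrier_lp, the parameter values, and the assumption that a strictly feasible starting point x0 is known are all illustrative.

```python
import numpy as np

def barrier_lp(c, A, b, x0, t=1.0, mu=10.0, tol=1e-6):
    # Minimize t*c.x - sum_i log(b_i - a_i.x) for growing t; the minimizers
    # trace the "central path" through the interior of the feasible region.
    x, m = x0.astype(float), A.shape[0]
    while m / t > tol:                   # m / t bounds the duality gap
        for _ in range(50):              # centering by damped Newton steps
            s = b - A @ x                # slacks, strictly positive inside
            grad = t * c + A.T @ (1.0 / s)
            H = A.T @ ((1.0 / s ** 2)[:, None] * A)
            dx = np.linalg.solve(H, -grad)
            # Halve the step until the iterate stays strictly feasible.
            alpha = 1.0
            while np.min(b - A @ (x + alpha * dx)) <= 0:
                alpha *= 0.5
            x = x + alpha * dx
            if dx @ H @ dx < 1e-18:      # Newton decrement tiny: centered
                break
        t *= mu                          # tighten the barrier
    return x

# Example: maximize x + y over the polygon above (minimize -x - y),
# starting from the strictly interior point (0.5, 0.5).
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 2.0], [3.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
print(barrier_lp(c, A, b, x0=np.array([0.5, 0.5])))  # -> approx [1.6 1.2]
```

Unlike the simplex iterates, every iterate here is strictly inside the polygon; the boundary vertex is only approached in the limit as t grows.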

Comparison of interior-point methods versus simplex algorithms

The current opinion is that the efficiencies of good implementations of simplex-based methods and interior point methods are similar for routine applications of linear programming.[7]

LP solvers are in widespread use for optimization of various problems in industry, such as optimization of flow in transportation networks.[8]