Complementary slackness

Using the complementary slackness theorem, it is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known. The theorem states:

Suppose that x = (x1, x2, . . ., xn) is primal feasible and that y = (y1, y2, . . ., ym) is dual feasible. Let (w1, w2, . . ., wm) denote the corresponding primal slack variables, and let (z1, z2, . . ., zn) denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if x_j z_j = 0 for j = 1, 2, . . ., n, and w_i y_i = 0 for i = 1, 2, . . ., m.

So if the ith slack variable of the primal is not zero, then the ith variable of the dual is equal to zero. Likewise, if the jth slack variable of the dual is not zero, then the jth variable of the primal is equal to zero.

This necessary condition for optimality conveys a fairly simple economic principle. In standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there are “leftovers”), then additional quantities of that resource must have no value. Likewise, if there is slack in the dual (shadow) price non-negativity constraint, i.e., the price is not zero, then supplies must be scarce (no “leftovers”).
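
To make the theorem concrete, here is a minimal sketch, assuming SciPy's linprog and a small hypothetical primal–dual pair (neither is part of the theorem itself): it solves both problems and checks the complementary slackness products numerically.

```python
# Numerical check of complementary slackness on a hypothetical LP pair.
import numpy as np
from scipy.optimize import linprog

# Primal (standard max form): maximize c.x  subject to  A x <= b, x >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# linprog minimizes, so negate c for the primal; x >= 0 is linprog's default.
primal = linprog(-c, A_ub=A, b_ub=b)
# Dual: minimize b.y  subject to  A^T y >= c, y >= 0 (negated into <= form).
dual = linprog(b, A_ub=-A.T, b_ub=-c)

x, y = primal.x, dual.x
w = b - A @ x          # primal slack variables
z = A.T @ y - c        # dual slack variables

print("x*z =", x * z)  # ~0 componentwise
print("w*y =", w * y)  # ~0 componentwise
```

With this data the second resource ends up with slack (w2 > 0), so its shadow price y2 is zero, in line with the economic reading above.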

Theory

Geometrically, the linear constraints define a convex polytope, which is called the feasible region. It is not hard to see that every local optimum (a point x such that for every unit direction vector d along which the objective improves and every ε > 0, the point x + εd is infeasible) is also a global optimum: if some feasible point y had a strictly better objective value, then the objective would improve along the direction from x toward y, and by convexity of the feasible region every point between x and y would be feasible, contradicting local optimality. This holds more generally for convex programs: see the Karush–Kuhn–Tucker (KKT) conditions.

There are two situations in which no optimal solution can be found. First, if the constraints contradict each other (for instance, x ≥ 2 and x ≤ 1) then the feasible region is empty and there can be no optimal solution, since there are no solutions at all. In this case, the LP is said to be infeasible.

Alternatively, the polyhedron can be unbounded in the direction of the objective function (for example: maximize x1 + 3 x2 subject to x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 10), in which case there is no optimal solution since solutions with arbitrarily high values of the objective function can be constructed.
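
Both situations are reported by LP solvers in practice. The sketch below is illustrative only, assuming SciPy's linprog (whose status codes 2 and 3 denote infeasibility and unboundedness by SciPy's convention); the two examples above are encoded with ≥ constraints negated into the ≤ form linprog expects.

```python
from scipy.optimize import linprog

# Infeasible: x >= 2 and x <= 1, written as -x <= -2 and x <= 1.
r1 = linprog(c=[0.0], A_ub=[[-1.0], [1.0]], b_ub=[-2.0, 1.0])
print(r1.status, r1.message)   # status 2: the problem is infeasible

# Unbounded: maximize x1 + 3*x2 s.t. x1 + x2 >= 10, x >= 0.
# linprog minimizes, so pass -c; the >= constraint becomes -x1 - x2 <= -10.
r2 = linprog(c=[-1.0, -3.0], A_ub=[[-1.0, -1.0]], b_ub=[-10.0])
print(r2.status, r2.message)   # status 3: the objective is unbounded
```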

Barring these two conditions (which can often be ruled out when dealing with specific LPs), the optimum is always attained at a vertex of the polyhedron (unless the polyhedron has no vertices, as in the feasible bounded linear program \max\{ x_1+x_2\,:\ x_1 \leq 1-x_2\}, whose feasible region is a half-plane and hence vertex-free; polyhedra with at least one vertex are called pointed). However, the optimum is not necessarily unique: it is possible to have a set of optimal solutions covering an edge or face of the polyhedron, or even the entire polyhedron (this last situation would occur if the objective function were constant on the polyhedron).
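
As a small hypothetical instance of non-uniqueness: in max{x_1 + x_2 : x_1 + x_2 ≤ 1, x ≥ 0}, the entire edge between the vertices (1, 0) and (0, 1) is optimal. The sketch below (this LP is an assumption for illustration, not from the text) checks this numerically.

```python
# Non-unique optima: every convex combination of the two optimal vertices
# of max x1 + x2 s.t. x1 + x2 <= 1, x >= 0 attains the same value 1.
import numpy as np

c = np.array([1.0, 1.0])
v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for t in np.linspace(0.0, 1.0, 5):
    x = (1 - t) * v1 + t * v2                        # point on the optimal edge
    assert np.isclose(c @ x, 1.0)                    # same objective value
    assert x.sum() <= 1.0 + 1e-12 and np.all(x >= 0) # still feasible
print("all points on the edge attain the optimum 1")
```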

The vertices of the polyhedron are also called basic feasible solutions. The reason for this name is as follows. Let d denote the dimension, i.e., the number of variables. Then the following theorem holds: for every vertex x* of the LP feasible region, there exists a set of d inequality constraints from the LP such that, when we treat those d constraints as equalities, the unique solution is x*. This lets us study the vertices by examining certain subsets of the set of all constraints (a discrete universe), rather than the continuum of LP solutions. This principle underlies the simplex algorithm for solving linear programs.
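
A brute-force rendering of this idea, as a minimal sketch on a small hypothetical LP: enumerate every d-element subset of constraints, treat it as an equality system, and keep the solutions that are feasible; these are exactly the basic feasible solutions. This enumeration is exponential and purely illustrative, whereas the simplex algorithm traverses the same discrete universe far more efficiently.

```python
# Vertex enumeration via d-subsets of tight constraints (illustrative only).
from itertools import combinations
import numpy as np

# maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0,
# with all constraints collected as rows of G x <= h.
c = np.array([3.0, 2.0])
G = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [-1.0, 0.0],   # -x1 <= 0, i.e. x1 >= 0
              [0.0, -1.0]])  # -x2 <= 0, i.e. x2 >= 0
h = np.array([4.0, 6.0, 0.0, 0.0])

d = G.shape[1]                      # dimension = number of variables
best_x, best_val = None, -np.inf
for rows in combinations(range(len(G)), d):
    Gs, hs = G[list(rows)], h[list(rows)]
    if abs(np.linalg.det(Gs)) < 1e-12:
        continue                    # these d constraints are not independent
    x = np.linalg.solve(Gs, hs)     # unique point where all d are tight
    if np.all(G @ x <= h + 1e-9):   # feasible, hence a vertex
        if c @ x > best_val:
            best_x, best_val = x, c @ x

print(best_x, best_val)             # optimal vertex and objective value
```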