28 Duality in Linear Programming

As we have noted, a linear program is an optimization problem with linear constraints and a linear objective function. If there are n variables and m constraints (in addition to the constraints that each of the variables must be non-negative), we can write such an optimization problem as

SUM_k A_jk x_k + s_j - b_j = 0 for each j,

where each x_k and each s_j is non-negative, and we are to maximize SUM_k v_k x_k.

As we have also noted, the form of the equations here is one in which the s_j variables are solved for, in the sense that each appears with coefficient 1 in exactly one equation. This form is convenient for evaluating the s variables when all the x variables are set to 0, so we associate the origin in the x variables with this form of the equations. We can similarly associate every vertex, or intersection of n linearly independent constraints, with a form of the equations that is solved for all the variables which are not 0 at that vertex.

We saw that a vertex is a vertex of the allowed or feasible polytope if all the b parameters are non-negative, and that we can change the form of the equations to move the origin along an edge of that polytope by performing a pivot operation. This involves choosing one of the x variables, solving an appropriate equation for it, and using that equation to eliminate that x variable from all other equations and from the objective function.

If we write -b_j = A_j0 and c_k = A_0k, we find that the step of eliminating x_K from all the equations but the J-th equation takes the form

A_jk becomes A_jk - A_jK A_Jk / A_JK

for all j from 0 to m and all k from 0 to n, except for j = J and k = K. This operation has an obvious symmetry between row and column, which suggests that there is some kind of transposed problem that we can define, which will transform in the same way under this kind of pivot operation.
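The pivot step can be sketched in code. The following is a minimal illustration (in Python with NumPy, not from the text) of an exchange pivot on the full (m+1) x (n+1) tableau, with the objective row as row 0 and the column A_j0 = -b_j as column 0. The text only specifies the update for entries away from the pivot row and column, so the handling of the pivot row, pivot column, and pivot entry below follows one common convention and is an assumption here.

```python
import numpy as np

def pivot(A, J, K):
    """Exchange pivot on entry (J, K) of the tableau A.

    Entries away from the pivot row and column follow the rule from
    the text:  A[j, k] -= A[j, K] * A[J, k] / A[J, K].
    The pivot row, pivot column, and pivot entry are handled by one
    common convention (an assumption, since the text specifies only
    the other entries): divide row J by the pivot, divide column K
    by minus the pivot, and replace the pivot entry by its
    reciprocal.
    """
    A = np.asarray(A, dtype=float).copy()
    p = A[J, K]
    rows = [j for j in range(A.shape[0]) if j != J]
    cols = [k for k in range(A.shape[1]) if k != K]
    # Rule from the text, applied while row J and column K still
    # hold their old values.
    A[np.ix_(rows, cols)] -= np.outer(A[rows, K], A[J, cols]) / p
    A[J, cols] /= p
    A[rows, K] /= -p
    A[J, K] = 1.0 / p
    return A
```

With this sign convention, pivoting twice on the same entry restores the original tableau, which is a quick consistency check.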
The symmetry between row and column manifests itself in a second way as well: the condition for feasibility is that the A_j0 are non-positive, while the condition at a feasible vertex that the objective function is a maximum is that the A_0k are non-positive. Thus the condition for the maximum feasible point is symmetric as well.

There is indeed such a transposed problem which, if defined as follows, has important and very useful properties. It is called the dual to the original LP, which is often referred to as the primal problem. (Of course the dual of the dual is the primal again, so the distinction between them is really about which problem we defined first.) In the dual problem we define a second variable for each of the variables of the primal, including the slack variables: here, a y variable for each s variable and a t variable for each x variable of the primal. We define the dual to obey the following conditions:

1. The dual equations transform under a pivot exactly as do the primal equations, so that the dual is uniquely determined by the primal equations and does not depend on the form used to describe them.

2. The solution conditions for the dual are the same as those for the primal, so that a solution for one determines a solution for the other.

3. Any feasible solution to the dual provides an upper bound to the primal solution, while any feasible solution to the primal provides a lower bound to the dual solution.

All this can be accomplished by writing the dual equations as

SUM_j y_j A_jk - c_k - t_k = 0 for each k,

and requiring that the dual objective function, SUM_j y_j b_j, be minimized at the dual solution, whose feasibility requires that the c_k be non-positive. (We have to make the signs of the t_k variables what they are here so that the solution will occur for c_k negative, as it does in the primal problem. Minimizing here means that a solution corresponds to the b_j non-negative, again what we need here.) Here the t_k are called dual slack variables, and the y_j are called dual variables.
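In inequality form, the recipe amounts to: transpose A, swap the roles of b and c, and flip maximization to minimization. As a sanity check of condition 3 (and of the fact that the two optima agree), here is a small numerical experiment with a toy two-variable LP of my own (the data are not from the text). Since every candidate optimum of a two-variable LP sits at an intersection of two constraint boundaries, brute-force vertex enumeration suffices.

```python
import itertools
import numpy as np

# Primal:  maximize c.x  subject to  A x <= b, x >= 0.
# Dual:    minimize b.y  subject to  A^T y >= c, y >= 0.

def vertices(G, h):
    """All vertices of {z in R^2 : z >= 0, G z <= h}."""
    rows = np.vstack([G, -np.eye(2)])   # include z_i >= 0 as -z_i <= 0
    rhs = np.concatenate([h, np.zeros(2)])
    out = []
    for i, j in itertools.combinations(range(len(rhs)), 2):
        M = rows[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                     # parallel boundary lines
        z = np.linalg.solve(M, rhs[[i, j]])
        if np.all(rows @ z <= rhs + 1e-9):
            out.append(z)                # keep only feasible vertices
    return out

A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

primal_max = max(c @ x for x in vertices(A, b))
# A^T y >= c rewritten in "<=" form as (-A^T) y <= -c.
dual_min = min(b @ y for y in vertices(-A.T, -c))
# The two optimal values coincide (both equal 12 for this data).
```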
Again each of these is non-negative in the standard form for an LP. With this form for the dual, not only do the equations transform in the same way under a pivot operation, but both the primal and the dual objective functions change by the same amount, namely A_J0 A_0K / A_JK, in each pivot. A JK pivot interchanges the roles of x_K and s_J in the primal and of y_J and t_K in the dual. The resulting constant term in the objective function is therefore the same for the primal and dual problems at every vertex.

Notice that with these equations we can evaluate the double sum SUM_j,k y_j A_jk x_k in either order and, after rearranging the four resulting terms, draw the conclusion

SUM_j y_j b_j - SUM_k c_k x_k = SUM_j s_j y_j + SUM_k x_k t_k.

The left-hand side here is the difference between the dual objective function and the primal objective function. The right-hand side is always non-negative if the variables represent feasible solutions to the primal and dual problems. We can deduce from this equation not only that any feasible dual point has dual objective function greater than the primal maximum (and any primal feasible solution has objective function less than it), but also that at the solution point, where the b_j are non-negative and the c_k are non-positive for each problem, and the objective functions are the same for both, at least one of each pair of variables (x_k and t_k, or y_j and s_j) must be 0, in order that the right-hand side here be 0.

At this solution point the origin will not be the original origin, since the x and s variables will be mixed up; but the solved-for primal variables will be given by the b_j at that point and the rest will be 0, while the solved-for dual variables will be given by the -c_k and the rest of those will be 0. The dual problem is often written without the dual slack variables, just as the primal problem is often written without the s_j. The dual inequalities then read: SUM_j y_j A_jk >= c_k.
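The rearrangement identity above is pure algebra and can be checked numerically. Below is a quick sketch (my own, with arbitrary random data) verifying that b.y - c.x = s.y + x.t once the primal slacks s = b - A x and dual slacks t = A^T y - c are substituted in; feasibility is only needed afterwards, to conclude that the right-hand side, and hence the duality gap, is non-negative.

```python
import numpy as np

# Check:  SUM_j y_j b_j - SUM_k c_k x_k = SUM_j s_j y_j + SUM_k x_k t_k
# with s = b - A x (primal slacks) and t = A^T y - c (dual slacks).
# The identity holds for *any* x and y; the data here are arbitrary.

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)
x = rng.random(n)        # any point; non-negative here
y = rng.random(m)        # any point; non-negative here

s = b - A @ x            # primal slack values
t = A.T @ y - c          # dual slack values

lhs = b @ y - c @ x      # dual objective minus primal objective
rhs = s @ y + x @ t
assert np.isclose(lhs, rhs)
```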
Of course at any feasible point for the primal problem other than the solution maximum itself, the dual problem will not be feasible at the corresponding dual point, and vice versa.

Exercise: The act of writing down the explicit dual equations of a given LP is not complicated, but it is quite unnatural to human beings, and you can expect to do it hesitantly and incorrectly the first three times you try. Therefore please attempt it with random initial LPs at least four times.

2. Of What Use is the Dual?

The dual problem can be useful for solving LPs either because your origin is feasible for it rather than for the primal, or because there are fewer variables in it, or it just looks nicer. Also the bound that the dual gives on

