Duke CPS 296.2 - Lecture notes 4: Duality


Lecture notes 4: Duality
Vincent Conitzer

1 Introduction

Let us again consider the linear program for our original painting problem instance:

    maximize 3x1 + 2x2
    subject to
        4x1 + 2x2 ≤ 16
        x1 + 2x2 ≤ 8
        x1 + x2 ≤ 5
        x1 ≥ 0, x2 ≥ 0

We already know that the optimal solution is to set x1 = 3, x2 = 2, for a solution value of 9 + 4 = 13. But how do we prove that this is optimal? Originally, we proved this using a picture: from the picture it was easy to see that the optimal solution is where the lines corresponding to the blue and red (first and third) constraints intersect, and it is easy to show that this intersection is at the point x1 = 3, x2 = 2. However, this approach does not generalize to more variables: perhaps convincing pictures can still be drawn for three variables, but how would you draw one for four? Moreover, it is somewhat informal and relies on our ability to draw straight lines. Once we introduce an algorithm that always finds the optimal solution to a linear program, we can simply give the run of the algorithm as a proof of optimality. Still, such a proof would be quite long and unpleasant to read.

We can use duality to give a much nicer proof of optimality. To introduce duality, let us first pursue a more modest objective: we want to find a simple upper bound on the objective value (revenue) that can be obtained. The lower the upper bound, the better; and if we can give an upper bound of 13, then we will have proven that our solution x1 = 3, x2 = 2 is optimal, since it in fact produces a revenue of 13.

Here is a very simple upper bound: we know that the blue constraint 4x1 + 2x2 ≤ 16 must hold. Because x1 is nonnegative, we must have 3x1 + 2x2 ≤ 4x1 + 2x2 ≤ 16. Hence, we know that 16 is an upper bound on the revenue 3x1 + 2x2 that we can obtain.

We can get a better upper bound by multiplying the red constraint x1 + x2 ≤ 5 by 3. This gives us 3x1 + 3x2 ≤ 15, and because x2 is nonnegative, 3x1 + 2x2 ≤ 3x1 + 3x2.
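This multiplier argument can be checked mechanically. The following sketch (the function name upper_bound and the data layout are my own, not from the notes) verifies that any nonnegative combination of constraints whose coefficients dominate the objective's coefficients certifies a valid upper bound:

```python
# Sketch: certify upper bounds on the painting LP's objective 3*x1 + 2*x2
# by combining its constraints with nonnegative multipliers.
A = [(4, 2),   # blue:  4*x1 + 2*x2 <= 16
     (1, 2),   # green:   x1 + 2*x2 <= 8
     (1, 1)]   # red:     x1 +   x2 <= 5
b = [16, 8, 5]
c = (3, 2)     # objective coefficients

def upper_bound(y):
    """If the weighted sum of constraint coefficients dominates the
    objective coefficients (and all weights are nonnegative), then the
    weighted sum of right-hand sides is a valid upper bound."""
    assert all(yi >= 0 for yi in y)
    combined = [sum(yi * row[j] for yi, row in zip(y, A)) for j in range(2)]
    assert all(combined[j] >= c[j] for j in range(2)), "not a valid bound"
    return sum(yi * bi for yi, bi in zip(y, b))

print(upper_bound((1, 0, 0)))  # blue constraint alone gives the bound 16
print(upper_bound((0, 0, 3)))  # 3 times the red constraint gives 15
```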
So 15 is also an upper bound.

We can also multiply constraints by fractions; moreover, we can add them to each other. If we multiply the blue constraint by 1/2, we get 2x1 + x2 ≤ 8. If we then add this to the red constraint, we get 3x1 + 2x2 ≤ 13. Hence, we get an upper bound of 13, which equals the optimal solution value. We cannot get a lower upper bound than that: a revenue of 13 can actually be obtained, so anything below 13 would not be an upper bound at all. We note that in finding this optimal upper bound, we used only the constraints that are binding in the optimal solution, that is, the constraints that are satisfied without any slack. This is intuitively sensible because these are the constraints that matter. We will see later on in these lecture notes that this is a special case of something known as complementary slackness.

All of the above upper bounds were obtained by multiplying the constraints by various nonnegative numbers, and then adding the results together. (If we multiply by a negative number, the inequality in the constraint flips, so that would not be helpful.) Let y1, y2, y3 be the numbers by which we multiply the blue, green, and red constraints, respectively. Hence, the first upper bound that we obtained consisted of setting y1 = 1, y2 = 0, y3 = 0; the second upper bound consisted of setting y1 = 0, y2 = 0, y3 = 3; and the third, optimal upper bound consisted of setting y1 = 0.5, y2 = 0, y3 = 1.

Now, let us think about how we might find the optimal upper bound. As it turns out, we can model this as a linear program! As we already noted, all the yi should be nonnegative. In addition, we need to make sure that, once we add up all the multiplied constraints, the coefficient on x1 is at least 3, because the coefficient on x1 in the objective is 3. That is, 4y1 + y2 + y3 ≥ 3. Similarly, for x2, we must have 2y1 + 2y2 + y3 ≥ 2. These inequalities guarantee that we will in fact have a correct upper bound.
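The connection between the optimal multipliers and the binding constraints can be verified directly. In this sketch (variable names are my own), the optimal multipliers y = (0.5, 0, 1) place positive weight only on the constraints with zero slack at x = (3, 2):

```python
# Sketch: check which constraints are binding at the optimum x = (3, 2),
# and that the optimal multipliers put weight only on those.
A = [(4, 2), (1, 2), (1, 1)]   # blue, green, red constraint coefficients
b = [16, 8, 5]
x = (3, 2)                     # optimal primal solution
y = (0.5, 0, 1)                # optimal multipliers from the text

# The combination 0.5*(blue) + 1*(red) is exactly 3*x1 + 2*x2 <= 13.
combined_lhs = [0.5 * 4 + 1 * 1, 0.5 * 2 + 1 * 1]   # coefficients [3.0, 2.0]
combined_rhs = 0.5 * 16 + 1 * 5                     # right-hand side 13.0

# Slack of each constraint at x; zero slack means the constraint is binding.
slacks = [bi - (row[0] * x[0] + row[1] * x[1]) for row, bi in zip(A, b)]
print(slacks)  # blue and red are binding; green has slack 1

# Complementary slackness: a positive multiplier implies zero slack.
assert all(yi == 0 or si == 0 for yi, si in zip(y, slacks))
```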
Finally, given that we satisfy all of these inequalities, we want to minimize the upper bound that they give us, since smaller upper bounds are better. The upper bound that we obtain is a linear combination of the right-hand sides of the original inequalities: 16y1 + 8y2 + 5y3. Putting it together, we get:

    minimize 16y1 + 8y2 + 5y3
    subject to
        4y1 + y2 + y3 ≥ 3
        2y1 + 2y2 + y3 ≥ 2
        y1 ≥ 0, y2 ≥ 0, y3 ≥ 0

This linear program is the dual of the original (also called primal) linear program. Any feasible solution to the dual corresponds to an upper bound on any solution to the primal (this is known as the weak duality property). Also, we saw that in this case, there is a feasible solution to the dual whose value is equal to the optimal value for the primal (and therefore, we know that this feasible solution to the dual is in fact optimal for the dual, because if there were a better solution for the dual, it would no longer be an upper bound for the primal). It turns out that this was no accident: in fact, the values of the optimal solutions to the primal and dual are always equal. This property is known as strong duality. It should be emphasized that we are only talking about linear programs here. If we are considering a (mixed) integer program (say, a maximization problem), we can take its linear program relaxation, and then take the dual of that. Because the optimal value of the LP relaxation is no lower than that of the original MIP, feasible solutions to the dual will in fact correspond to upper bounds for the original MIP. However, in general, the optimal value for the dual of the LP relaxation will not be equal to the optimal value for the original MIP.

Finally, let us consider what happens if we take the dual of the dual. We have not yet shown how to take the dual of a minimization problem. One way to do this is to convert it to maximization form first, but it is not difficult to reason about the minimization problem directly.
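Strong duality for this pair can be checked numerically. A minimal sketch, assuming SciPy is available (linprog minimizes, so the primal objective is negated, and the dual's ≥-constraints are rewritten as negated ≤-constraints):

```python
# Sketch: solve the primal and the dual with scipy.optimize.linprog and
# check that their optimal values coincide (strong duality).
from scipy.optimize import linprog

primal = linprog(c=[-3, -2],                       # maximize 3*x1 + 2*x2
                 A_ub=[[4, 2], [1, 2], [1, 1]],
                 b_ub=[16, 8, 5],
                 bounds=[(0, None), (0, None)])

dual = linprog(c=[16, 8, 5],                       # minimize 16*y1 + 8*y2 + 5*y3
               A_ub=[[-4, -1, -1], [-2, -2, -1]],  # 4*y1 + y2 + y3 >= 3, etc.
               b_ub=[-3, -2],
               bounds=[(0, None)] * 3)

print(-primal.fun, dual.fun)  # both optimal values are 13
```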
Again, we want to put nonnegative weights on the constraints; let us call the weights x1 and x2 for the above minimization problem. This time, we want to find lower bounds. For example, if we set x1 = 3, x2 = 2, we obtain the relationship 16y1 + 8y2 + 5y3 ≥ (3·4 + 2·2)y1 + (3·1 + 2·2)y2 + (3·1 + 2·1)y3 ≥ 3·3 + 2·2 = 13. That is, we now need to make sure that the coefficients resulting from our linear combination of constraints are less than or equal to those in the objective, and subject to this we want to maximize the linear combination of the right-hand sides. This results in the following linear program:

    maximize 3x1 + 2x2
    subject to
        4x1 + 2x2 ≤ 16
        x1 + 2x2 ≤ 8
        x1 + x2 ≤ 5
        x1 ≥ 0, x2 ≥ 0

This is just the primal problem again! Thus, the dual of the dual is the primal.
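This symmetry can also be seen mechanically. The sketch below (the dual helper and the triple representation are my own) encodes an LP in standard maximization form, max c·x subject to Ax ≤ b and x ≥ 0, as a triple (c, A, b). Its dual, min b·y subject to Aᵀy ≥ c and y ≥ 0, is put back into standard max form as max (-b)·y subject to (-Aᵀ)y ≤ -c and y ≥ 0, so dualizing is a single transform that can be applied twice:

```python
# Sketch: taking the dual mechanically, twice, recovers the original LP.
def transpose(A):
    return [list(col) for col in zip(*A)]

def dual(lp):
    """Dual of max c.x s.t. A x <= b, x >= 0, expressed again in
    standard max form: max (-b).y s.t. (-A^T) y <= (-c), y >= 0."""
    c, A, b = lp
    return ([-bi for bi in b],
            [[-aij for aij in row] for row in transpose(A)],
            [-ci for ci in c])

primal = ([3, 2], [[4, 2], [1, 2], [1, 1]], [16, 8, 5])
print(dual(dual(primal)) == primal)  # the dual of the dual is the primal
```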

