15-780: Grad AI
Lecture 7: Optimization
Geoff Gordon

Last time, on Grad AI

Plan graphs
• A way to do propositional planning
• How to build them
  • mutex conditions for literals, actions
• How to use them
  • conversion to SAT

Optimization & Search
• Classes of optimization problem
  • LP, CP, ILP, MILP
  • algorithms for solving, and complexity
  • constraints, objective, integrality
• How to use DFID, etc. for optimization
• Definition of convexity

Bounds
• Pruning search w/ lower bounds on objective
• Stopping early w/ upper bounds
• Getting bounds from a relaxation of an optimization problem (increase the feasible region)
  • particularly the LP relaxation of an ILP

Duality
• How to find the dual of an LP or ILP
• Interpretations of the dual
  • linearly combine constraints to get a new constraint orthogonal to the objective
  • find prices for scarce resources
  • game between primal and dual players
  • correspondence faces ↔ vertices of feasible regions (doesn't include objective)

Constrained optimization

Minimization
• Unconstrained: set ∇f(x) = 0
• E.g., minimize f(x, y) = x² + y² + 6x - 4y + 5
  ∇f(x, y) = (2x + 6, 2y - 4) = 0 at (x, y) = (-3, 2)

Equality constraints
• Equality constraint: minimize f(x) s.t. g(x) = 0
• Can't just set ∇f to 0 (might violate the constraint)
• Instead, want the gradient along the constraint's normal direction: then any motion that decreases the objective will violate the constraint

Example
• E.g., minimize x² + y² subject to x + y = 2
[Figure, from lp.nb: Lagrange multipliers are a way to solve constrained optimization problems. For example, suppose we want to minimize the function f(x, y) = x² + y² subject to the constraint 0 = g(x, y) = x + y - 2. Shown are the constraint surface, the contours of f, and the solution.]

Lagrange multipliers
• Minimize f(x) s.t. g(x) = 0
• Constraint normal is ∇g
  • (1, 1) in our example
• Want ∇f parallel to ∇g
• Equivalently, want ∇f = λ∇g
• λ is a Lagrange multiplier

More than one constraint
• With multiple constraints, use multiple multipliers:
  min x² + y² + z²  s.t.  x + y = 2,  x + z = 2
  (2x, 2y, 2z) = λ(1, 1, 0) + µ(1, 0, 1)

Two constraints: the picture
[Figure, from lp.nb: the solution to the above equations is p = -4/3, q = -4/3, x = 4/3, y = 2/3, z = 2/3. Shown are the two constraints together with a level surface of the objective function. Neither constraint is tangent to the level surface; instead, the normal to the level surface is a linear combination of the normals to the two constraint surfaces (with coefficients p and q).]
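
To check the multiplier conditions numerically, here is a small sketch (my own, not from the slides) that solves the two-constraint example above with NumPy: the stationarity condition ∇f = λ∇g₁ + µ∇g₂ together with the two constraints forms one linear system in (x, y, z, λ, µ).

    import numpy as np

    # f(x,y,z) = x^2 + y^2 + z^2,  g1: x + y = 2,  g2: x + z = 2
    # Stationarity (2x, 2y, 2z) = lam*(1,1,0) + mu*(1,0,1) plus the constraints,
    # with unknowns ordered [x, y, z, lam, mu]:
    A = np.array([
        [2, 0, 0, -1, -1],   # 2x = lam + mu
        [0, 2, 0, -1,  0],   # 2y = lam
        [0, 0, 2,  0, -1],   # 2z = mu
        [1, 1, 0,  0,  0],   # x + y = 2
        [1, 0, 1,  0,  0],   # x + z = 2
    ], dtype=float)
    b = np.array([0, 0, 0, 2, 2], dtype=float)

    x, y, z, lam, mu = np.linalg.solve(A, b)
    print(x, y, z, lam, mu)   # 4/3, 2/3, 2/3, 4/3, 4/3

The notebook's p and q are the same multipliers under the opposite sign convention (∇f + p∇g₁ + q∇g₂ = 0), which is why it reports p = q = -4/3.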
What about inequalities?
• Two cases: if the minimum is in the interior, can get it by setting ∇f = 0
• But if the minimum is on the boundary, treat the boundary as if it were an equality constraint (use a Lagrange multiplier)
• The minimum could also be at a corner: two boundary constraints are active
• In n dims, up to n linear inequalities may be active (more in case of degeneracy)

Back to LP
[Figure: feasible region with Widgets and Doodads on the axes, constraints w + d ≤ 4 and 2w + 5d ≤ 12, and profit = w + 2d]

Back to LP
max w + 2d  s.t.
  w + d ≤ 4
  2w + 5d ≤ 12
  w, d ≥ 0
• In LP we're optimizing a linear function subject to linear constraints
• So gradients are really easy to compute

Back to LP
• The optimum can't* occur in the interior of the feasible region
• In fact we can assume it's at a vertex
• So to find it, we only need to check vertices
• How many boundary vertices could there be?

Bases
• With m constraints and n variables, any subset of n constraints might be active
• So up to (m choose n) possibilities
• Given a subset, it's easy to find the corresponding vertex (solve a linear system)
• Subset = basis

Search
• This is a combinatorial optimization problem, so we could use one of our standard search algorithms
• Search space:
  • node = basis
  • objective = linear function of the vertex
  • neighbor = ?

Neighboring bases
• Two bases are neighbors if they share (n - 1) of their n constraints
• Expanding a node in our search picks one constraint to add and another to delete

Neighbors
[Three figure slides: successive neighboring vertices on the Widgets/Doodads feasible region]

Simplex
• Notice that the objective increased monotonically throughout the search
• Turns out, this is always possible, which leads to a lot of pruning!
• We have just defined the simplex algorithm
  • if we pretend that arbitrary vertices are feasible, with an objective that penalizes infeasibility heavily

Duality example

Path planning LP
• Find the min-cost path: variables
[Figure: graph with nodes s, x, y, g and one variable per edge]

Path planning LP
• Optimal solution: p_sy = p_yg = 1, p_sx = p_xg = 0, cost 3

Matrix form
[Slide: the path-planning LP in matrix form]

Dual
[Slide: the dual of the path-planning LP]

Dual objective
• To get the tightest bound, maximize: [the dual objective shown on the slide]

Whole thing
[Slide: the complete dual LP]

Optimal dual solution
[Figure: node prices λ of 0, 1, 3, 3 shown on the graph]
• Any solution which adds a constant to all λs also works
• Similarly, could reduce λ_x as far as 2

Interpretation
• Dual variables are prices on nodes: how much does it cost to start there?
• Dual constraints are local price constraints: edge xg (cost 3) means that node x can't cost more than 3 + the price of node g
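
Before turning to ILPs, here is a quick numerical check of the widget/doodad LP from the "Back to LP" slides (my own sketch, not part of the lecture), using SciPy's linprog. linprog minimizes, so the profit is negated; reading the dual prices from res.ineqlin.marginals assumes a recent SciPy with the HiGHS-based solvers.

    from scipy.optimize import linprog

    # max  w + 2d   s.t.   w + d <= 4,   2w + 5d <= 12,   w, d >= 0
    # linprog minimizes, so minimize -(w + 2d).
    c = [-1, -2]
    A_ub = [[1, 1], [2, 5]]
    b_ub = [4, 12]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

    print(res.x, -res.fun)            # optimum (w, d) = (8/3, 4/3), profit 16/3
    # Both inequalities are tight at the optimum, so it sits at a vertex where
    # n = 2 constraints are active, as argued above.
    if hasattr(res, "ineqlin"):       # HiGHS-based solvers expose dual information
        print(res.ineqlin.marginals)  # dual values on the two resource constraints

The two dual values (up to the sign convention of the minimization form) are the "prices for scarce resources" from the duality review: how much one more unit of each resource would be worth.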
Search in ILPs

Simple search algorithm (from last class)
• Run DFS
  • node = partial assignment
  • neighbor = set one variable
• Prune if a constraint becomes unsatisfiable
  • E.g., in a 0/1 problem, setting y = 0 in x + 3y ≥ 4 (then x ≥ 4 is impossible)
• If we reach a feasible full assignment, calculate its value, keep the best

Pruning
• Suggested increasing pruning by adding constraints
• Constraint from the best solution so far: objective ≥ M (for a maximization problem)
• Constraint from the optimal dual solution: objective ≤ M
• Can we find more pruning to do?

First idea
• Analogue of constraint propagation or unit resolution
• When we set a variable x, check constraints containing x to see if they imply a restriction on the domain of some other variable y
• E.g., setting x to 1 in the implication constraint (1 - x) + y ≥ 1 forces y = 1

Example
• 0/1 variables x, y, z
• maximize x subject to
  2x + 2y - z ≤ 2
  2x - y + z ≤ 2
  -x + 2y - z ≤ 0

Example search
[Slide: search tree for this example]

Problem w/ constraint propagation
• Constraint propagation doesn't prune as early as it could:
  2x + 2y - z ≤ 2
  2x - y + z ≤ 2
  -x + 2y - z ≤ 0
• Consider z = 1

Branch and bound
• Each time we fix a variable, solve the resulting LP
• Gives a tighter upper bound on the value of the objective in this branch
• If this upper bound < value of a previous solution, we can prune
• Called fathoming the branch (a short code sketch appears at the end of these notes)

Can we do more?
• Yes:
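
As promised above, here is a minimal branch-and-bound sketch (my own illustration, not from the slides) for the 0/1 example, using SciPy's linprog for the LP relaxation at each node; fixing a variable is modeled by collapsing its bounds to a single value.

    from scipy.optimize import linprog

    # 0/1 example from the slides:  maximize x  subject to
    #   2x + 2y - z <= 2,   2x - y + z <= 2,   -x + 2y - z <= 0
    c = [-1, 0, 0]                           # minimize -x  ==  maximize x
    A_ub = [[2, 2, -1], [2, -1, 1], [-1, 2, -1]]
    b_ub = [2, 2, 0]

    best_val, best_sol = float("-inf"), None

    def branch_and_bound(fixed):
        """DFS over partial 0/1 assignments, bounding each node by its LP relaxation."""
        global best_val, best_sol
        bounds = [(v, v) if v is not None else (0, 1) for v in fixed]
        lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not lp.success:                   # LP infeasible: no completion can be feasible
            return
        ub = -lp.fun                         # LP optimum = upper bound for this branch
        if ub <= best_val + 1e-9:            # fathom: cannot beat the incumbent
            return
        if None not in fixed:                # full 0/1 assignment that beat the incumbent
            best_val, best_sol = ub, list(fixed)
            return
        i = fixed.index(None)                # branch on the first unfixed variable
        for v in (1, 0):
            branch_and_bound(fixed[:i] + [v] + fixed[i + 1:])

    branch_and_bound([None, None, None])
    print(best_sol, best_val)                # expect (x, y, z) = (1, 0, 0) with value 1

Here the x = 1 branch produces the incumbent (1, 0, 0), and the entire x = 0 branch is then fathomed immediately because its LP bound of 0 cannot beat it.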

