15.083 Integer Programming and Combinatorial Optimization, Fall 2009

Enumeration and Heuristics

Dynamic Programming

• Consider min{cx : ax ≥ β, x ∈ {0, 1}^n}.
• Let S = max{|ai| : i = 1, . . . , n}.
• Define a directed graph D = (V, A) with vertex set V = {0, . . . , n} × {−nS, . . . , nS}
• and arc set A defined by ((j, δ), (i, δ′)) ∈ A ⇔ j = i − 1 and δ′ − δ ∈ {0, ai}.
• The length of arc ((i − 1, δ), (i, δ)) is 0.
• The length of arc ((i − 1, δ), (i, δ + ai)) is ci.
• Any directed path P in D from (0, 0) to (n, β′) for some β′ ≥ β yields a feasible solution x:
  – xi = 0 if ((i − 1, δ), (i, δ)) ∈ P for some δ;
  – xi = 1 if ((i − 1, δ), (i, δ + ai)) ∈ P for some δ.
• The length of P is equal to cx.
• So we can solve the problem by finding a shortest path from (0, 0) to (n, β′) for some β′ ≥ β.
  (A short code sketch of this shortest-path computation appears at the end of these notes.)

Primal algorithms

• Improving search
  – begin at a feasible solution x(0)
  – advance along a sequence of feasible solutions x(1), x(2), x(3), . . . with ever-improving objective value
  – move between feasible solutions via improving and feasible move directions Δx: x(t+1) ← x(t) + λΔx
  – guaranteed to find local optimal solutions under mild conditions

Improving search for discrete optimization

• Success of branch-and-bound for ILP depends largely on the quality of LP relaxations
• But we also need “good” feasible solutions
• Some ILPs (and discrete optimization models in general) may be especially resistant to branch-and-bound-type techniques
• What can we do? Improving search can help us find good feasible solutions

Discrete neighborhoods and move sets

• Optimization model with discrete variables ⇒ want the neighborhood of a current solution to be binary/integer
• We can define neighborhoods and control candidates for improving and feasible directions
• Example:
    max 20x1 − 4x2 + 14x3
    s.t. 2x1 + x2 + 4x3 ≤ 5
         x1, x2, x3 ∈ {0, 1}
• Suppose the current solution is x(t) = (1, 1, 0)
• Suppose the neighborhood consists of all feasible solutions that differ in at most one component
• Neighborhood of x(t) = (1, 1, 0):
    (1, 1, 0) + (1, 0, 0) = (2, 1, 0)
    (1, 1, 0) + (−1, 0, 0) = (0, 1, 0)   feasible
    (1, 1, 0) + (0, 1, 0) = (1, 2, 0)
    (1, 1, 0) + (0, −1, 0) = (1, 0, 0)   feasible and improving
    (1, 1, 0) + (0, 0, 1) = (1, 1, 1)
    (1, 1, 0) + (0, 0, −1) = (1, 1, −1)

Discrete improving search

0. Initialization.
   • Choose any starting feasible solution x(0)
   • Set solution index t ← 0
1. Stopping.
   • If no neighboring solution of x(t) is both improving and feasible, stop ⇒ x(t) is a local optimal solution
2. Move. Choose some improving feasible move as Δx(t+1)
3. Step. Update x(t+1) ← x(t) + Δx(t+1)
4. Increment. Increment t ← t + 1, return to Step 1
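The following is a minimal Python sketch, not part of the original notes, of the discrete improving search above applied to the small example max 20x1 − 4x2 + 14x3. The function names are chosen for illustration, and the move set is the flip-one-component neighborhood described earlier (the zero move is omitted since it can never be improving).

```python
# Data for the example:  max 20x1 - 4x2 + 14x3  s.t.  2x1 + x2 + 4x3 <= 5,  x in {0,1}^3
c = [20, -4, 14]
a = [2, 1, 4]
b = 5

def objective(x):
    return sum(cj * xj for cj, xj in zip(c, x))

def feasible(x):
    return sum(aj * xj for aj, xj in zip(a, x)) <= b

def neighbors(x):
    """Solutions differing from x in exactly one component (flip one coordinate)."""
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield y

def discrete_improving_search(x):
    """Steps 0-4: keep taking improving feasible moves until none exists (local optimum)."""
    while True:
        move = next((y for y in neighbors(x)
                     if feasible(y) and objective(y) > objective(x)), None)
        if move is None:        # Step 1: no improving feasible neighbor -> local optimum
            return x
        x = move                # Steps 2-4: take the move and repeat

print(discrete_improving_search([1, 1, 0]))   # moves to [1, 0, 0], which is locally optimal
```

Starting from x(t) = (1, 1, 0), the only improving feasible neighbor is (1, 0, 0), after which no improving feasible neighbor remains, matching the neighborhood table above.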
The art of choosing a neighborhood

• The solution produced by local search depends on the neighborhood, i.e., the move set employed
• Larger neighborhoods generally result in superior local optimal solutions, but take longer to examine

Multistart search

• Different initial solutions lead to different local optimal solutions
• All globally optimal solutions are local optimal solutions
• Idea: start improving search from different initial solutions and take the best one

Escape from local optima: allow nonimproving moves

• Another idea: allow nonimproving feasible moves
• Rationale: we might be able to “escape” local optimal solutions and move to a better region
• Problem: if we don’t “escape” far enough, we will just cycle back to the same local optimal solution
• Three popular methods:
  – Tabu search
  – Simulated annealing
  – Genetic algorithms

Tabu search

• Tabu search allows nonimproving moves and deals with cycling by temporarily forbidding moves that would return to a recently visited solution
• Makes certain solutions “tabu” (“taboo”?)

0. Initialization.
   • Choose any starting feasible solution x(0)
   • Choose iteration limit tmax
   • Set incumbent solution x̂
   • Set solution index t ← 0
   • No moves are tabu
1. Stopping.
   • If no non-tabu move Δx in move set M leads to a feasible neighbor of x(t), or if t = tmax, stop ⇒ the incumbent solution x̂ is an approximate optimum
2. Move. Choose some non-tabu move Δx ∈ M as Δx(t+1)
3. Step. Update x(t+1) ← x(t) + Δx(t+1)
4. Incumbent solution. If the objective function value of x(t+1) is superior to that of the incumbent solution x̂, replace x̂ ← x(t+1)
5. Tabu list.
   • Remove from the tabu list any moves that have been on it for a sufficient number of iterations
   • Add a collection of moves that includes any returning from x(t+1) to x(t)
6. Increment. Increment t ← t + 1, return to Step 1

Simulated annealing

• Simulated annealing accepts nonimproving moves with a certain probability
• The name comes from the annealing process of slowly cooling metals to improve strength
• Suppose we are maximizing
• If the move is improving (Δobj > 0), it is accepted
• If the move is nonimproving (Δobj ≤ 0), it is accepted with
      probability of acceptance = e^(Δobj/q),
  where q ≥ 0 is the temperature parameter
• The probability of accepting a nonimproving move declines the more it worsens the objective function
• The probability of accepting a nonimproving move declines as the temperature cools
• As the algorithm progresses, the temperature cools (q → 0)
• This description is for maximization problems

0. Initialization.
   • Choose any starting feasible solution x(0)
   • Choose iteration limit tmax
   • Set a large initial temperature q
   • Set incumbent solution x̂
   • Set solution index t ← 0
1. Stopping.
   • If no move Δx in move set M leads to a feasible neighbor of x(t), or if t = tmax, stop ⇒ the incumbent solution x̂ is an approximate optimum
2. Provisional move.
   • Randomly choose a feasible move Δx ∈ M as Δx(t+1)
   • Compute Δobj = (obj. val. at x(t) + Δx(t+1)) − (obj. val. at x(t))
3. Acceptance. If Δx(t+1) improves (Δobj > 0), or with probability e^(Δobj/q) if Δobj ≤ 0, accept Δx(t+1) and update x(t+1) ← x(t) + Δx(t+1); otherwise, return to Step 2
4. Incumbent solution. If the objective function value of x(t+1) is superior to that of the incumbent solution x̂, replace x̂ ← x(t+1)
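Below is a minimal Python sketch, not part of the original notes, of the simulated annealing loop just described for a maximization problem. The helpers objective and feasible_neighbors are assumed to be supplied by the user, and the geometric cooling schedule (q multiplied by a constant each iteration) is an illustrative assumption; the notes only state that q → 0 as the algorithm progresses.

```python
import math
import random

def simulated_annealing(x0, objective, feasible_neighbors, q0=10.0, cooling=0.95, t_max=1000):
    """Sketch of the simulated annealing loop above (maximization).

    x0                    -- starting feasible solution
    objective(x)          -- objective value of x
    feasible_neighbors(x) -- list of feasible neighbors of x (move set M applied at x)
    q0, cooling, t_max    -- illustrative temperature schedule and iteration limit
    """
    x, q = x0, q0
    incumbent = x0                                    # incumbent solution x_hat
    for t in range(t_max):                            # Step 1: stop when t = tmax
        candidates = feasible_neighbors(x)
        if not candidates:                            # Step 1: no feasible neighbor of x(t)
            break
        y = random.choice(candidates)                 # Step 2: provisional move
        delta_obj = objective(y) - objective(x)       # Step 2: Delta obj
        # Step 3: accept improving moves; accept nonimproving moves with probability e^(Delta obj / q)
        if delta_obj > 0 or random.random() < math.exp(delta_obj / q):
            x = y
            if objective(x) > objective(incumbent):   # Step 4: update the incumbent
                incumbent = x
        q *= cooling                                  # the temperature cools: q -> 0
    return incumbent
```

For instance, with the flip-one-component neighborhood from the earlier example, feasible_neighbors(x) would return the feasible 0/1 vectors that differ from x in exactly one position.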

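As referenced in the Dynamic Programming section above, here is a small Python sketch, not part of the original notes, of the shortest-path computation for min{cx : ax ≥ β, x ∈ {0, 1}^n}. Because D is layered by i, a single pass over i = 1, . . . , n suffices; the function name and the tiny numeric example are made up for illustration.

```python
import math

def dp_shortest_path(c, a, beta):
    """Sketch of the layered shortest-path DP for min{cx : ax >= beta, x in {0,1}^n}.

    dist[d] is the length of a shortest path from (0, 0) to the current layer's vertex
    (i, delta), where delta = d - offset and delta ranges over {-nS, ..., nS}.
    """
    n = len(c)
    S = max(abs(ai) for ai in a)
    offset = n * S                       # shift delta into the index range 0, ..., 2nS
    size = 2 * n * S + 1
    dist = [math.inf] * size
    dist[offset] = 0.0                   # source vertex (0, 0)
    choices = []                         # choices[i][d] = chosen value of x_{i+1} at (i+1, delta)

    for i in range(n):
        new_dist = [math.inf] * size
        choice = [None] * size
        for d in range(size):
            if dist[d] == math.inf:
                continue
            # arc of length 0: x_{i+1} = 0, delta unchanged
            if dist[d] < new_dist[d]:
                new_dist[d], choice[d] = dist[d], 0
            # arc of length c_{i+1}: x_{i+1} = 1, delta increases by a_{i+1}
            d1 = d + a[i]
            if 0 <= d1 < size and dist[d] + c[i] < new_dist[d1]:
                new_dist[d1], choice[d1] = dist[d] + c[i], 1
        dist = new_dist
        choices.append(choice)

    # cheapest endpoint (n, beta') with beta' >= beta
    endpoints = [d for d in range(size) if d - offset >= beta and dist[d] < math.inf]
    if not endpoints:
        return None                      # no feasible solution
    best = min(endpoints, key=lambda d: dist[d])

    # trace the path back from (n, beta') to (0, 0) to recover x
    x, d = [0] * n, best
    for i in reversed(range(n)):
        x[i] = choices[i][d]
        d -= a[i] * x[i]
    return dist[best], x

# Tiny illustrative instance: min 3x1 + 2x2 + 4x3  s.t.  2x1 + 3x2 + x3 >= 4
print(dp_shortest_path([3, 2, 4], [2, 3, 1], 4))   # -> (5.0, [1, 1, 0])
```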
