MIT 6.854J - Approximation Algorithms

Approximation Algorithms

What do you do when a problem is NP-complete, or when the "polynomial time solution" is impractically slow?
• assume the input is random and analyze "expected performance." E.g., Hamiltonian path in random graphs. Problem: agreeing on a good distribution.
• run a nonpolynomial (hopefully only slightly) algorithm such as branch and bound. Usually no proven bound on runtime, but sometimes there is.
• settle for a heuristic, but prove it does well enough (our focus).

Definitions:
• optimization problem: instances I, solutions S(I) with values f : S(I) → R
• maximization/minimization: find a solution in S(I) maximizing/minimizing f
• the optimum value is called OPT(I)
• e.g., bin packing: the instance is a set of sizes s_i ∈ (0, 1]; partition it into as few subsets as possible so that no subset's total exceeds 1

Technical assumptions we'll often make:
• all inputs and the range of f are integers/rationals (we can't represent reals, and this allows, e.g., LP and binary search)
• f(σ) is a polynomial-size (number of bits) value (else writing the output alone takes too long)
• look for polytime in bit complexity

NP-hardness:
• an optimization problem is NP-hard if an NP-hard decision problem reduces to it
• (e.g., the decision problem "is the opt solution to this instance ≤ k?")
• but we use the more general notion of Turing reducibility (see Garey & Johnson)

Approximation algorithm:
• any algorithm that gives a feasible answer
• e.g., for bin packing: each item in its own bin
• of course, we want a good algorithm. How do we measure?

Absolute Approximations

Definition: A is a k-absolute approximation if on any instance I we have |A(I) − OPT(I)| ≤ k.

Example: planar graph coloring.
• NP-complete to decide if 3-colorable
• we know every planar graph is 4-colorable
• easy to 5-color, so we get a 2-absolute approximation
• ditto for edge coloring: Vizing's theorem says opt is ∆ or ∆ + 1 (and the proof is constructive)

Absolute approximations are known only for trivial cases, where OPT is bounded by a constant. Often, we can show they are impossible by "scaling" the problem.
• E.g., knapsack:
  – profits p_i, sizes s_i, sack capacity B
  – wlog, all integers
  – suppose we had a k-absolute approximation
  – multiply all p_i by k + 1, solve, scale back down: an answer within additive error k of the scaled optimum must be exactly optimal
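The scaling argument can be sketched as a reduction wrapper. Here `approx_knapsack` stands for a hypothetical k-absolute-error solver (an assumption for illustration; no such polytime algorithm exists unless P = NP). The point is only that the wrapper would make it exact:

```python
def exact_from_absolute(approx_knapsack, profits, sizes, B, k):
    """Turn any hypothetical k-absolute-error knapsack algorithm
    into an exact one.

    After scaling every profit by (k + 1), all achievable values are
    multiples of k + 1, so a feasible answer within additive error k
    of the scaled optimum must equal it exactly.
    """
    scaled = [(k + 1) * p for p in profits]
    value = approx_knapsack(scaled, sizes, B)  # within k of scaled OPT
    return value // (k + 1)                    # exact original OPT
```

So a k-absolute approximation for knapsack is as hard as exact solution, for any constant k.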
• E.g., independent set (clique):
  – take k + 1 disjoint copies of G; a k-absolute approximation must then solve one copy exactly

Relative Approximation

Definitions:
• an α-approximate solution has value at most α times optimum for minimization, at least 1/α times optimum for maximization
• an algorithm has approximation ratio α if on any input it outputs an α-approximate feasible solution
• such an algorithm is called an α-approximation algorithm

How do we prove an algorithm achieves a relative approximation?
• we can't describe OPT, so we can't compare to it directly
• instead, we compare to computable lower bounds (for minimization; upper bounds for maximization)

Greedy Algorithms

Do the obvious thing at each step.
• the hard part is proving it works
• usually, by attention to the right upper/lower bound

Max cut:
• the upper bound is trivial: OPT is at most the total number of edges
• greedy max-cut: place each vertex on the side that cuts more of its edges to already-placed vertices; this cuts at least half of all edges, a 2-approximation

Min-diameter clustering (k clusters):
• Gonzalez's algorithm: repeatedly pick the point farthest from the existing centers as the next center
• distances to existing centers keep dropping
• suppose that after k centers are chosen, the farthest remaining point is at distance d
• then OPT ≥ d:
  – the k centers plus that point are k + 1 points at mutual distance ≥ d
  – some two of them must share a cluster
• now assign each point to its closest center
• max distance from a center (the radius) is d
• so max diameter is 2d
• hence a 2-approximation

Set cover:
• n items, suppose OPT = k
• at each step, the remaining uncovered items can still be covered by k sets
• so some set covers at least a 1/k fraction of the remainder, and greedy picks one at least that good
• the remainder shrinks by a (1 − 1/k) factor per step, so greedy finishes within about k ln n sets: an O(log n)-approximation

Vertex cover:
• define the problem: find a minimum set of vertices touching every edge
• repeatedly picking one endpoint of an uncovered edge gives no approximation ratio
• picking an uncovered edge and taking both endpoints is a 2-approximation (any cover must include one of the two)
• sometimes, you need to be extra greedy
• explicit attention to where the lower bound comes from: the lower bound informs the algorithm

Graham's rule for P||Cmax is a (2 − 1/m)-approximation algorithm.
• the problem: m machines, n jobs with processing times p_j; minimize the makespan (the maximum machine load)
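A minimal sketch of Graham's greedy rule (list scheduling), which assigns each job in turn to the currently least-loaded machine:

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling for P||Cmax.

    Assign each job, in the order given, to the machine with the
    smallest current load; return the resulting makespan.
    Guaranteed at most (2 - 1/m) times the optimum makespan.
    """
    loads = [0] * m          # min-heap of current machine loads
    heapq.heapify(loads)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)
```

On the tight instance below (m(m − 1) unit jobs followed by one size-m job), this returns 2m − 1 while the optimum is m.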
• can also think of it as minimizing the maximum load of continuously running jobs
• use a greedy algorithm: assign each job in turn to the currently least-loaded machine
• proof by comparison to lower bounds:
  – first lower bound, the average load: OPT ≥ (1/m) Σ p_j
  – second lower bound: OPT ≥ max_j p_j
• suppose machine M1 has the maximum runtime L at the end
• suppose j was the last job added to M1
• then just before j arrived, M1 had load L − p_j, which was the minimum load
• so Σ p_i ≥ m(L − p_j) + p_j
• so OPT ≥ (1/m) Σ p_i ≥ L − (1 − 1/m) p_j
• so L ≤ OPT + (1 − 1/m) p_j ≤ (2 − 1/m) OPT, using the second lower bound

Notice:
• this algorithm is online, with competitive ratio 2 − 1/m
• we had no idea of the optimum schedule; we just used lower bounds
• we used a greedy strategy
• the bound is tight: consider m(m − 1) size-1 jobs followed by one size-m job (greedy gets 2m − 1, while OPT is m)
• where was the problem? The last job might be big
• LPT (longest processing time first) achieves 4/3, but is not online
• newer online algorithms achieve about 1.8

Approximation Schemes

So far, we've seen various constant-factor approximations.
• what is the best constant achievable?
• lower bounds: APX-hardness / Max-SNP

An approximation scheme is a family of algorithms {A_ε} such that:
• each algorithm runs in polynomial time
• A_ε achieves a 1 + ε approximation
But note: the runtime might be awful as a function of ε.

FPAS, Pseudopolynomial Algorithms

Knapsack:
• dynamic program for bounded profits: B(j, s) = best profit achievable using items 1, …, j with total size ≤ s
• rounding:
  – let the maximum profit be P
  – scale profits to ⌊(n/εP) p_i⌋
  – the new opt is at least n/ε − n = (1 − ε) n/ε
  – so we find a solution within a factor 1 − ε of the original opt
  – but now the table size is polynomial in n and 1/ε
• did this prove P = NP? No
• recall pseudopolynomial algorithms: pseudopoly gives an FPAS, and the converse is almost true
• knapsack is only weakly NP-hard
• strong NP-hardness (define) means no pseudopoly algorithm unless P = NP

Enumeration

A more powerful idea: k-enumeration.
• return to P||Cmax
• schedule the k largest jobs optimally
• schedule the remainder greedily
• analysis: note A(I) ≤ OPT(I) + p_{k+1}
  – consider the job with maximum completion time c_j
  – if it is one of the k largest, we are done and at opt
  – else, it was assigned to the min-load machine, so c_j ≤ OPT + p_j ≤ OPT + p_{k+1}
  – so we are done if p_{k+1} is small
  – but note OPT(I) ≥ (k/m) p_{k+1}
  – deduce A(I) ≤ (1 + m/k) OPT(I)
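The k-enumeration scheme can be sketched as follows. This variant tries all m^k placements of the k largest jobs and completes each greedily, keeping the best result; that is at least as good as the scheme analyzed above, so the same (1 + m/k) bound applies:

```python
from itertools import product
import heapq

def k_enum_schedule(jobs, m, k):
    """k-enumeration for P||Cmax (a sketch).

    Try every placement of the k largest jobs (m**k choices),
    complete each partial schedule with greedy list scheduling,
    and return the best makespan found: at most (1 + m/k) * OPT.
    """
    jobs = sorted(jobs, reverse=True)
    big, rest = jobs[:k], jobs[k:]
    best = None
    for placement in product(range(m), repeat=len(big)):
        loads = [0] * m
        for p, machine in zip(big, placement):
            loads[machine] += p
        heapq.heapify(loads)
        for p in rest:   # greedy completion on least-loaded machine
            heapq.heappush(loads, heapq.heappop(loads) + p)
        makespan = max(loads)
        if best is None or makespan < best:
            best = makespan
    return best
```

Note the runtime is exponential in k, which is why this only gives a scheme for fixed m.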
  – so, for fixed m, we can get any desired approximation ratio

Scheduling on any number of machines:
• combine enumeration and rounding
• suppose there are only k distinct job sizes
  – the vector of "number of jobs of each type" on a given machine gives a "machine type"
  – only n^k distinct vectors/machine types
  – so we need to find how many machines of each type to use
  – use a dynamic program:
    ∗ enumerate all job profiles that can be completed by j machines in time T
    ∗ a profile is in the j-machine set if it is the sum of a (j − 1)-machine profile and a profile that one machine can finish in time T
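Returning to the knapsack FPAS above, the profit-rounding scheme can be sketched end to end. This is a sketch under the standard formulation: the DP here is indexed by scaled profit (min size achieving each profit) rather than by size, which is the usual way to make the table polynomial after rounding:

```python
def fptas_knapsack(profits, sizes, B, eps):
    """Knapsack FPAS by profit rounding (a sketch).

    Scale each profit p to floor(p / K) with K = eps * P / n
    (P = max profit), then run an exact DP over scaled profits:
    min_size[q] = smallest total size achieving scaled profit q.
    The returned true profit is at least (1 - eps) * OPT.
    """
    n = len(profits)
    P = max(profits)
    K = eps * P / n
    scaled = [int(p // K) for p in profits]
    top = sum(scaled)
    INF = float("inf")
    min_size = [0] + [INF] * top
    chosen = [()] + [None] * top
    for i in range(n):
        if scaled[i] == 0:
            continue  # rounds to zero profit; dropping it costs < K
        for q in range(top, scaled[i] - 1, -1):
            prev = min_size[q - scaled[i]]
            if prev + sizes[i] < min_size[q] and prev + sizes[i] <= B:
                min_size[q] = prev + sizes[i]
                chosen[q] = chosen[q - scaled[i]] + (i,)
    best_q = max(q for q in range(top + 1) if min_size[q] <= B)
    return sum(profits[i] for i in chosen[best_q])
```

The table has O(n^2/ε) entries, so the total time is polynomial in n and 1/ε, matching the "table size polynomial" remark above.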

