MIT OpenCourseWare
http://ocw.mit.edu

6.854J / 18.415J Advanced Algorithms, Fall 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

18.415/6.854 Advanced Algorithms                                    November 7, 2001
Lecture 16
Lecturer: Michel X. Goemans          Scribes: Nicole Immorlica and Mana Taghdiri

Today we begin our discussion of approximation algorithms. We will talk about:

- NP-hard Optimization Problems
- Inapproximability
- Analysis of Approximation Algorithms

1 NP-hard Optimization Problems

Many of the algorithms in this world of ours deal with optimization. Some of these problems we can solve exactly in polynomial time; we spent the whole first half of this term dealing with problems of this type (linear programming, circulations). Other optimization problems are impossible to solve in polynomial time unless P = NP. We now focus our attention on handling these NP-hard optimization problems.

Definition 1 An optimization problem is NP-hard if the corresponding decision problem is NP-hard, i.e. a polynomial-time algorithm finding the exact optimum would imply P = NP.

There are many examples of NP-hard optimization problems. In fact, you can find a rather comprehensive and up-to-date list of these problems along with known results at:

We will see three examples of NP-hard optimization problems today.

- Vertex Coloring: Given a graph G = (V, E), assign colors to the vertices so that for all (u, w) ∈ E, u is not the same color as w. Minimize the number of colors used.

- LIN-k-MOD-q: All computation in this problem takes place in Z_q. Given a set of variables X and a set of equations E, each in k of the variables in X, find an assignment of X. Maximize the number of equations in E that are satisfied by this assignment of X.

- Scheduling: Given a set of jobs, a processing time for each job, and a set of processors, assign the jobs to the processors. Minimize the time it takes to complete all the jobs, assuming a processor can only process one job at a time.
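To make two of these objectives concrete, here is a minimal Python sketch (not part of the original notes) that evaluates a candidate solution for Vertex Coloring and for Scheduling; the data representations and all names are illustrative assumptions.

def coloring_cost(edges, color):
    """Return the number of colors used by `color` (a dict vertex -> color)
    if it is a proper coloring of the edge list `edges`, and None if some
    edge has both endpoints the same color."""
    for u, w in edges:
        if color[u] == color[w]:          # adjacent vertices share a color
            return None                   # infeasible coloring
    return len(set(color.values()))       # objective: number of colors used

def makespan(assignment, processing_time):
    """Return the completion time of `assignment` (a dict job -> processor),
    assuming each processor runs its jobs one after another."""
    load = {}
    for job, proc in assignment.items():
        load[proc] = load.get(proc, 0) + processing_time[job]
    return max(load.values())             # objective: time to finish all jobs

# Example: a triangle needs three colors; two jobs of lengths 2 and 3
# placed on the same processor finish at time 5.
print(coloring_cost([(0, 1), (1, 2), (0, 2)], {0: "r", 1: "g", 2: "b"}))  # -> 3
print(makespan({"j1": "p1", "j2": "p1"}, {"j1": 2, "j2": 3}))             # -> 5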
In order to prove that these are NP-hard optimization problems, we must prove that their corresponding decision problems are NP-complete. For example, to prove that vertex coloring is NP-hard, we must show that deciding whether a given graph is colorable with less than k colors is NP-complete. As complexity theory is not a prerequisite for this course, we will just take these facts on faith. However, the interested reader can find a nice treatment of NP-completeness in [2, 1, 3]. A proof that k-colorability is NP-complete can be found in the exercises in each of these books.

As these problems are NP-hard, we must abandon all hope of solving them exactly. Instead, we find ways to cope with their intractability. A common method used in practice is to design heuristic algorithms and analyze their performance empirically. As the definitions below of the words "heuristic" and "empirical" indicate, we won't be able to study this method in this theoretical class. To see a treatment of this approach, take artificial intelligence.

heu.ris.tic \hyu-'ris-tik\ adj [G heuristisch, fr. NL heuristicus, fr. Gk heuriskein to discover; akin to OIr fu-ar I have found] : serving to guide, discover, or reveal; specif : valuable for empirical research but unproved or incapable of proof

em.pir.i.cal or em.pir.ic \-i-k*l\ \-ik\ adj 1 : relying on experience or observation alone often without due regard for system and theory 2 : originating in or based on observation or experience 3 : capable of being verified or disproved by observation or experiment (~ laws) - em.pir.i.cal.ly \-i-k(*-)le-\ adv

Instead, we will design polynomial-time algorithms that approximately solve these NP-hard optimization problems. We will analyze the solutions returned by these algorithms in the worst case (i.e. on the input for which the returned solution is farthest from the optimum solution). These algorithms are called approximation algorithms.

Definition 2 Let c_OPT(x) be the value of an optimal solution of a minimization problem P on input x. An α-approximation algorithm for P is a polynomial-time algorithm that is guaranteed to deliver, for every input x, a solution whose value c_H(x) satisfies c_H(x) ≤ α c_OPT(x). If P is a maximization problem, we require c_H(x) ≥ α c_OPT(x).

2 Inapproximability

The problems we are trying to approximate are NP-hard, so we cannot hope to find an α-approximation algorithm with α = 1. But this does not imply that α must be bounded away from one; in fact, for some problems we can find α = 1 + ε for every ε > 0 (or α = 1 − ε if it is a maximization problem). Of course, in these cases the running time often depends on ε. However, we are not always so lucky. Sometimes we can prove that it is NP-hard to find an approximation algorithm with α < p(n) for some function p(n) (often a constant). Such results are called inapproximability results.

Traditionally, the inapproximability of an NP-hard optimization problem is derived from the NP-completeness proof of the decision problem. We can use this method to prove that vertex coloring is inapproximable within 4/3. This is because k-colorability (for k > 2) is NP-complete. Therefore, given a graph G, it is hard to distinguish whether it can be colored with at most 3 colors or whether it needs at least 4 colors. If we were able to find a (4/3 − ε)-approximation algorithm, we could call this algorithm on G. If G needs at most three colors, our algorithm would find a coloring that uses at most α c_OPT ≤ 3(4/3 − ε) < 4 colors. But if G needs more than three colors, our algorithm must return a coloring that uses at least four colors. Therefore, this polynomial-time algorithm would solve the problem we claimed is NP-complete! This cannot be, unless P = NP. There are actually stronger (non-constant) inapproximability results for this problem.

Since 1992, another approach to proving inapproximability has emerged. This approach uses PCPs (i.e. probabilistically checkable proofs, not the Communist Party of Peru or phencyclidine or People for a Clearer Phish). As an example, consider the LIN-3-MOD-2 problem. Recall that in this problem we have a set of variables x_i and m equations, each in three of these variables, with all computation taking place in Z_2. For example, we might have a small system of such equations; we try to maximize the number of equations that are satisfied.
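To make the LIN-3-MOD-2 objective concrete, here is a minimal Python sketch (not from the original notes) that encodes a small, purely illustrative system of three-variable equations over Z_2 and counts how many of them a given 0/1 assignment satisfies; the particular equations and all names are assumptions made for illustration, not the instance used in lecture.

# Each equation is a tuple (i, j, k, b) meaning x_i + x_j + x_k = b (mod 2).
# This tiny system is illustrative only.
equations = [
    (1, 2, 3, 1),   # x1 + x2 + x3 = 1 (mod 2)
    (1, 4, 5, 0),   # x1 + x4 + x5 = 0 (mod 2)
    (2, 4, 6, 1),   # x2 + x4 + x6 = 1 (mod 2)
]

def satisfied(equations, x):
    """Count how many equations the 0/1 assignment x (a dict i -> x_i) satisfies."""
    return sum((x[i] + x[j] + x[k]) % 2 == b for i, j, k, b in equations)

# Example: the all-ones assignment satisfies the first and third equations.
x = {i: 1 for i in range(1, 7)}
print(satisfied(equations, x))   # -> 2

The maximization problem asks for the assignment of the x_i that makes this count as large as possible.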

