
Fields Summer School on Approximability of CSPs


Contents

  Introduction to approximability
  Slight approximation and inapproximability
  Slight inapproximability
  Slight approximability
  The Dream Goal attained
  Linear programming
  Example: Max-Sat
  Connections to algebra
  LP gaps
  Semidefinite programming
  The Max-Cut SDP
  SDP gaps
  The Basic SDP relaxation for general CSPs
  Rounding SDP solutions and connections to algebra
  Exercises

Fields Summer School on Approximability of CSPs

Ryan O'Donnell
June 26, 2011

1  Introduction to approximability

These lectures will be about the approximability of constraint satisfaction problems (CSPs).

Definition 1.1. Let D be a domain of cardinality q. We usually write D = {0, 1, 2, ..., q − 1} and often call the elements of D labels. Let Γ be a nonempty finite set of nontrivial relations (predicates) over D, with each R ∈ Γ having arity ar(R) ≤ k. We usually write an r-ary relation as R : D^r → {0, 1}. Together, D and Γ form an algorithmic problem called CSP(Γ).

Definition 1.2. An instance P of CSP(Γ) over the n variables V consists of a multiset of m constraints. Each constraint C ∈ P is a pair (R, S), where R ∈ Γ and S (the scope of C) is a list of ar(R) many distinct variables from V (we make this not-necessarily-standard distinctness assumption throughout). We assume that each variable is in the scope of at least one constraint, so m ≥ n/k.

The algorithmic goal, given an instance P, is to find an assignment for P which is as "good" as possible.

Definition 1.3. An assignment for an instance P of CSP(Γ) is any mapping F : V → D.
The assignment satisfies a constraint C = (R, S) if R(F(S)) = 1, where the notation F(S) means (F(S_1), ..., F(S_r)). The value of the assignment, Val_P(F) ∈ [0, 1], is the fraction of constraints it satisfies; this can be written

    Val_P(F) = avg_{C=(R,S)∈P} [R(F(S))].

If there is an F such that Val_P(F) = 1 we call F a satisfying assignment and say that P is satisfiable. More generally, the optimum value of the instance is Opt(P) = max_F {Val_P(F)}.

Examples:

• k-Sat: D = {0, 1}, Γ is all disjunctions of up to k literals.
• Ek-Sat: Same, but each disjunction is on exactly k literals.
• Horn-k-Sat: Same as k-Sat, but each disjunction contains at most one positive literal.
• Cut: D = {0, 1}, Γ = {≠}. Usually thought of as partitioning a graph into two pieces, with the goal of having all edges "cut" (crossing the partition).
• q-Cut: D = {0, 1, ..., q − 1}, Γ again just contains binary disequality. Also known as q-Colouring.
• Ek-Lin(q): D = Z_q, Γ is all linear equations of the form ±v_1 ± v_2 ± ··· ± v_k = c, c ∈ Z_q.
• 3-CSP: D = {0, 1}, Γ is all relations on up to three variables.
• L-Retraction, where L is a lattice poset: D = L, Γ contains the binary relation ≤ and all the unary relations =_ℓ for ℓ ∈ L.
• Bijection(q), AKA Unique-Games(q): |D| = q, Γ is all binary bijections.
• Projection(q), AKA Label-Cover(q): |D| = q, Γ is all binary projections R, meaning that there is a π : D → D such that R = {(b, π(b)) : b ∈ D}.

From the point of view of computational complexity, the first thing to ask about CSP(Γ) is whether there is an efficient (i.e., poly(m)-time) algorithm which, when given a satisfiable instance P, is guaranteed to find a satisfying assignment. E.g., for the Cut problem (q = 2, Γ = {≠}), there is such an algorithm; for the 3-Sat problem, it's NP-hard.

But that is not the end of the story. Say you are given an instance P of the Cut problem and Opt(P) ≠ 1. You still might want to find an assignment with as large a value as possible.
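To make Definitions 1.2 and 1.3 concrete, here is a small Python sketch (not from the notes; all names are illustrative): it represents a Cut instance as a list of (R, S) pairs, computes Val_P(F), and computes Opt(P) by brute force over all q^n assignments.

```python
from itertools import product

# A constraint is a pair (R, S): R maps a tuple of labels to 0/1 and
# S (the scope) is a tuple of distinct variables.  Here D = {0, 1} and
# the only relation is the binary disequality of the Cut problem.
def neq(a, b):
    return int(a != b)

# A 4-cycle as a Cut instance: variables 0..3, one constraint per edge.
variables = range(4)
P = [(neq, (0, 1)), (neq, (1, 2)), (neq, (2, 3)), (neq, (3, 0))]

def value(P, F):
    """Val_P(F): the fraction of constraints the assignment F satisfies."""
    return sum(R(*(F[v] for v in S)) for R, S in P) / len(P)

def opt(P, variables, D=(0, 1)):
    """Opt(P), by brute force over all |D|^n assignments F : V -> D."""
    return max(value(P, dict(zip(variables, a)))
               for a in product(D, repeat=len(variables)))

F = {0: 0, 1: 1, 2: 0, 3: 1}  # alternate labels around the cycle
print(value(P, F))            # 1.0 -- every edge is cut, so P is satisfiable
print(opt(P, variables))      # 1.0
```

An odd cycle shows the other case: the triangle [(neq, (0, 1)), (neq, (1, 2)), (neq, (2, 0))] has Opt(P) = 2/3, since at most two of its three edges can be cut.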
Ideally, you would want to find an assignment achieving the optimal value. But, as is well known, the "Max-Cut" problem is NP-hard. (We usually add the prefix "Max-" to a CSP when considering it as an optimization problem.) More precisely, if you look at the classic reduction from 3-Sat to Max-Cut you see it proves something like:

Theorem 1.4. Given an instance P of Max-Cut with Opt(P) ≥ 3/4, it is NP-hard to find an assignment F with Val_P(F) ≥ 3/4.

This is still not the end of the story. For example, if there were an efficient algorithm which was guaranteed to find an assignment of value at least 2/3, or even .7499, such an approximation algorithm could still be quite worthwhile "in practice". Let's make some definitions to help investigate these issues.

Definition 1.5. For real numbers 0 ≤ α ≤ β ≤ 1, we say an algorithm (α, β)-approximates CSP(Γ) (pronounced "α out of β approximates") if it outputs an assignment with value at least α on any input instance with value at least β. (Mnemonic: the αlgorithm gets ≥ α when the βest is ≥ β.) We also consider an easier task: we say an algorithm (α, β)-distinguishes CSP(Γ) if it outputs 'NO' on instances P with Opt(P) < α and 'YES' on instances P with Opt(P) ≥ β.

Remark 1.6. We prefer to give algorithms for the approximation problem and NP-hardness results for the distinguishing problem. Regarding reductions between distinguishing and approximating, see the exercises. Also note that a single algorithm may (α, β)-approximate CSP(Γ) for many different values of (α, β).

Remark 1.7. We also allow randomized algorithms; in this case we study the expected value of the solutions they produce. As an aside, all randomized algorithms known for CSPs are also known to be derandomizable.

For a given CSP, the question of when (α, β)-approximating is in P and when it is NP-hard can be rather fascinating. Here is the state of affairs just for the Max-Cut problem with β = 3/4:

Theorem 1.8.

1. (3/4, 3/4)-distinguishing Max-Cut is NP-hard. (Karp, 1972.)

2.
(3/4 − ε_0, 3/4)-distinguishing Max-Cut is NP-hard, where ε_0 > 0 is a certain universal constant. (Follows from the "PCP Theorem" of Feige–Goldwasser–Lovász–Safra–Szegedy 1991, Arora–Safra 1992, Arora–Lund–Motwani–Sudan–Szegedy 1999.)

3. (.878(3/4), 3/4)-approximating Max-Cut is in P. (Goemans–Williamson 1994.) Remark: .878(3/4) ≈ .659, and here .878 is shorthand for 2/(π sin θ*), where θ* is the positive root of θ* = tan(θ*/2).

4. (11/16 + ε, 3/4)-distinguishing Max-Cut is NP-hard for all ε > 0. (Håstad 1997, plus Trevisan–Sorkin–Sudan–Williamson.) Remark: 11/16 = .6875.

5. (2/3, 3/4)-distinguishing Max-Cut is not known to be in P and is not known to be NP-hard.

6. (.878(3/4) + ε, 3/4)-distinguishing Max-Cut is NP-hard for all ε > 0, assuming the controversial "Unique-Games Conjecture" ("UGC"). (Khot–Kindler–Mossel–O'Donnell 2004 plus Mossel–O'Donnell–Oleszkiewicz 2005.)

Comparing results 3
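The constant in items 3 and 6 is easy to check numerically. The following sketch (illustrative, standard library only) finds the positive root of θ* = tan(θ*/2) by bisection and evaluates 2/(π sin θ*); at the root this agrees with the equivalent form (2/π)·θ*/(1 − cos θ*) of the Goemans–Williamson constant.

```python
import math

def solve_theta(lo=2.0, hi=3.0, iters=200):
    """Bisection for the positive root of theta = tan(theta/2) in (2, 3).

    f(theta) = tan(theta/2) - theta is continuous on [2, 3] (the pole of
    tan(theta/2) is at theta = pi > 3) and increasing, with f(2) < 0 < f(3).
    """
    f = lambda t: math.tan(t / 2) - t
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

theta = solve_theta()                    # theta* ~ 2.3311
alpha = 2 / (math.pi * math.sin(theta))  # ~ 0.8786, the ".878" above
# Same value via the usual closed form of the Goemans-Williamson
# constant, (2/pi) * theta / (1 - cos(theta)), minimized at theta*:
print(round(alpha, 4))                   # 0.8786
```

Multiplying by 3/4 gives the ≈ .659 quoted in item 3.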

