15.083J Integer Programming and Combinatorial Optimization                Fall 2009

Approximation Algorithms III

Maximum Satisfiability

Input: A set C of clauses over n Boolean variables, with a nonnegative weight w_c for each clause c ∈ C.
Output: A truth assignment to the Boolean variables that maximizes the total weight of satisfied clauses.

• Special case: MAX-kSAT (each clause contains at most k literals).
• Even MAX-2SAT is NP-hard.

A first algorithm:
1. Set each Boolean variable to True independently with probability 1/2.
2. Output the resulting truth assignment.

Lemma 1. Let W_c be the random variable denoting the weight contributed by clause c. If c contains k literals, then E[W_c] = (1 − 2^{-k}) w_c.

Proof:
• Clause c is not satisfied iff all of its literals are set to False.
• The probability of this event is 2^{-k}.
• E[W_c] = w_c · Pr[c is satisfied].

Theorem 2. The first algorithm has an expected performance guarantee of 1/2.

Proof:
• By linearity of expectation,

  E[W] = Σ_{c∈C} E[W_c] ≥ (1/2) Σ_{c∈C} w_c ≥ (1/2) OPT.

Derandomizing via the method of conditional expectations:
• Note that E[W] = (1/2) E[W | x_1 = T] + (1/2) E[W | x_1 = F].
• Also, both E[W | x_1 = T] and E[W | x_1 = F] can be computed in polynomial time.
• We choose the truth assignment with the larger conditional expectation, and continue in this fashion:

  E[W | x_1 = a_1, …, x_i = a_i] = (1/2) E[W | x_1 = a_1, …, x_i = a_i, x_{i+1} = T]
                                 + (1/2) E[W | x_1 = a_1, …, x_i = a_i, x_{i+1} = F].

• After n steps, we obtain a deterministic truth assignment of weight at least (1/2) OPT.

An integer programming formulation (c⁺ and c⁻ denote the variables appearing unnegated and negated in clause c):

  max   Σ_{c∈C} w_c y_c
  s.t.  Σ_{i∈c⁺} x_i + Σ_{i∈c⁻} (1 − x_i) ≥ y_c     for all c ∈ C
        y_c ∈ {0, 1}                                 c ∈ C
        x_i ∈ {0, 1}                                 i = 1, …, n

And its linear programming relaxation:

  max   Σ_{c∈C} w_c y_c
  s.t.  Σ_{i∈c⁺} x_i + Σ_{i∈c⁻} (1 − x_i) ≥ y_c     for all c ∈ C
        0 ≤ y_c ≤ 1                                  c ∈ C
        0 ≤ x_i ≤ 1                                  i = 1, …, n

Randomized rounding:
1. Solve the LP relaxation. Let (x*, y*) denote an optimal solution.
2. FOR i = 1 TO n:
3.   Independently set variable x_i to True with probability x_i*.
4. Output the resulting truth assignment.

Lemma 3.
If c contains k literals, then E[W_c] ≥ (1 − (1 − 1/k)^k) w_c y_c*.

Proof:
• We may assume that c = (x_1 ∨ … ∨ x_k).
• The probability that not all of x_1, …, x_k are set to False is

  1 − Π_{i=1}^k (1 − x_i*) ≥ 1 − ( (Σ_{i=1}^k (1 − x_i*)) / k )^k          (1)
                           = 1 − ( 1 − (Σ_{i=1}^k x_i*) / k )^k            (2)
                           ≥ 1 − ( 1 − y_c*/k )^k,                         (3)

  where (1) follows from the arithmetic-geometric mean inequality and (3) follows from the LP constraint for c.
• The function g(y) := 1 − (1 − y/k)^k is concave.
• In addition, g(0) = 0 and g(1) = 1 − (1 − 1/k)^k.
• Therefore, for y ∈ [0, 1], g(y) ≥ (1 − (1 − 1/k)^k) y.
• Hence, Pr[c is satisfied] ≥ (1 − (1 − 1/k)^k) y_c*.

Thus,
• Randomized rounding is a (1 − (1 − 1/k)^k)-approximation algorithm for MAX-kSAT.
• Randomized rounding is a (1 − 1/e)-approximation algorithm for MAX-SAT.

  k                      1      2      3      4      5
  Simple algorithm       0.5    0.75   0.875  0.938  0.969
  Randomized rounding    1.0    0.75   0.704  0.684  0.672

Theorem 4. Given any instance of MAX-SAT, run both algorithms and choose the better solution. The (expected) performance guarantee of the solution returned is 3/4.

Proof:
• It suffices to show that (1/2)(E[W_c^1] + E[W_c^2]) ≥ (3/4) w_c y_c* for every clause c.
• Assume that c has k literals.
• By the first lemma, E[W_c^1] = (1 − 2^{-k}) w_c ≥ (1 − 2^{-k}) w_c y_c*, since y_c* ≤ 1.
• By the second lemma, E[W_c^2] ≥ (1 − (1 − 1/k)^k) w_c y_c*.
• Hence, (1/2)(E[W_c^1] + E[W_c^2]) ≥ (3/4) w_c y_c*, since (1 − 2^{-k}) + (1 − (1 − 1/k)^k) ≥ 3/2 for all k ≥ 1.
• Note that this argument also shows that the integrality gap is no worse than 3/4.
• The following example shows that this is tight:
• Consider (x_1 ∨ x_2) ∧ (x̄_1 ∨ x_2) ∧ (x_1 ∨ x̄_2) ∧ (x̄_1 ∨ x̄_2), with unit weights.
• Setting x_i = 1/2 and y_c = 1 for all i and c is an optimal LP solution, of value 4.
• On the other hand, OPT = 3.

Bin Packing

Input: n items of sizes a_1, …, a_n ∈ (0, 1].
Output: A packing of the items into unit-sized bins that minimizes the number of bins used.

Theorem 5. The Bin-Packing Problem is NP-complete.

Proof:
• Reduction from Partition:
  Input: n numbers b_1, …, b_n ≥ 0.
  Question: Does there exist S ⊆ {1, …
, n} such that Σ_{i∈S} b_i = Σ_{i∉S} b_i?
• Define a_i := 2 b_i / (Σ_{j=1}^n b_j), for i = 1, …, n.
• Then there exists a partition iff all items can be packed into two bins.

Corollary 6. There is no α-approximation algorithm for Bin Packing with α < 3/2, unless P = NP.

First Fit:
• "Put the next item into the first bin where it fits. If it does not fit in any bin, open a new bin."
• This is an obvious 2-approximation algorithm:
• If m bins are used, then at least m − 1 bins are more than half full. Therefore,

  Σ_{i=1}^n a_i > (m − 1)/2.

  Since Σ_{i=1}^n a_i is a lower bound on OPT, m − 1 < 2·OPT. The result follows.

Theorem 7. For any 0 < ε < 1/2, there is an algorithm that runs in time polynomial in n and finds a packing using at most (1 + 2ε)·OPT + 1 bins.

Step 1:

Lemma 8. Let ε > 0 and K ∈ Z_+ be fixed. The bin-packing problem with items of size at least ε and with at most K different item sizes can be solved in polynomial time.

Proof:
• Let the different item sizes be s_1, …, s_m, for some m ≤ K.
• Let b_i be the number of items of size s_i.
• Let T_1, …, T_N be all ways in which a single bin can be packed:

  {T_1, …, T_N} = { (k_1, …, k_m) ∈ Z_+^m : Σ_{i=1}^m k_i s_i ≤ 1 }.

• We write T_j = (t_{j1}, …, t_{jm}).
• Then bin packing is equivalent to the following …
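The First Fit rule quoted above can be sketched in a few lines of Python. This is a minimal illustration, not the notes' own pseudocode: the function name `first_fit` and the representation of each bin by its current load are assumptions made here for clarity.

```python
def first_fit(sizes):
    """Pack items of the given sizes (each in (0, 1]) into unit-sized bins:
    put each item into the first open bin where it fits; if it fits in no
    open bin, open a new one. Returns the number of bins used."""
    bins = []  # bins[i] is the total size already packed into bin i
    for a in sizes:
        for i, load in enumerate(bins):
            if load + a <= 1.0:
                bins[i] = load + a   # item fits: place it in the first such bin
                break
        else:
            bins.append(a)           # item fits in no open bin: open a new one
    return len(bins)
```

For instance, on sizes [0.6, 0.6, 0.4, 0.4] the rule opens a second bin for the second item and then fills both bins exactly, using 2 bins, which matches the 2·OPT bound from the half-full argument above.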
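Returning to the MAX-SAT material earlier in these notes, the derandomization by the method of conditional expectations is concrete enough to sketch in code. The sketch below assumes a DIMACS-style clause representation (literal +i for x_i, −i for x̄_i) and hypothetical function names; it implements the recurrence E[W | x_1, …, x_i fixed] and always takes the branch with the larger conditional expectation.

```python
def expected_weight(clauses, weights, fixed):
    """E[W] given that the variables in `fixed` (var -> bool) are set and
    every remaining variable is True independently with probability 1/2."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        p_unsat = 1.0
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in fixed:
                if fixed[var] == want:   # a fixed literal is True: clause satisfied
                    p_unsat = 0.0
                    break
                # a fixed literal that is False contributes a factor of 1
            else:
                p_unsat *= 0.5           # a free literal is False with prob. 1/2
        total += w * (1.0 - p_unsat)
    return total

def derandomized_maxsat(clauses, weights, n):
    """Fix x_1, ..., x_n one by one, each time choosing the value with the
    larger conditional expectation; the final assignment has weight at least
    the initial E[W], hence at least half the total clause weight."""
    fixed = {}
    for i in range(1, n + 1):
        e_true = expected_weight(clauses, weights, {**fixed, i: True})
        e_false = expected_weight(clauses, weights, {**fixed, i: False})
        fixed[i] = e_true >= e_false
    return fixed
```

On the tight example from Theorem 4, (x_1 ∨ x_2) ∧ (x̄_1 ∨ x_2) ∧ (x_1 ∨ x̄_2) ∧ (x̄_1 ∨ x̄_2) with unit weights, any assignment satisfies exactly 3 of the 4 clauses, and the sketch indeed returns an assignment of weight 3 ≥ (1/2)·4.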