6.856 — Randomized Algorithms, David Karger
Handout #10, 2002 — Homework 4 Solutions

Problem 1

(a) MR Exercise 4.2.¹ Each node $a_ib_i$ sends a packet to node $b_ia_i$ through node $b_ib_i$. There are $2^{n/2} = \sqrt{N}$ packets that have to be routed through a given node $xx$. Half of these packets have to flip the $(\frac{n}{2}+1)$-st bit. All these messages have to cross the one edge that connects $xx$ to the node with a different $(\frac{n}{2}+1)$-st bit. Therefore, at least $\sqrt{N}/2 = \Omega(\sqrt{N})$ steps are needed.

(b) MR 4.9. Consider the transpose permutation again, and again restrict attention to packets with $b_i = 0^{n/2}$. We show that with high probability, $2^{\Omega(n)}$ packets go through vertex $0^n$, which means we take time at least $2^{\Omega(n)}/n = 2^{\Omega(n)}$.

For the proof, fix attention on packets whose $a_i$ have exactly $k$ ones (we'll fix $k$ later). Note that the bit-fixing algorithm must change these $k$ ones to zeroes, and must change a corresponding $k$ zeroes to ones. We go through vertex $0^n$ if all $k$ ones in $a_i$ are corrected to zeroes before any of the zeroes in $b_i$ are corrected to ones. Since the corrections are in random order, meaning that the first $k$ bits to be fixed are a random subset of the $2k$ that must be fixed, the probability that this happens is $\binom{2k}{k}^{-1}$.

It follows that the expected number of packets hitting $0^n$ is
$$\binom{n/2}{k}\binom{2k}{k}^{-1} \ge \left(\frac{n}{2k}\right)^k \left(\frac{1}{2e}\right)^k = \left(\frac{n}{4ek}\right)^k,$$
using $\binom{n/2}{k} \ge (\frac{n}{2k})^k$ and $\binom{2k}{k} \le (2e)^k$. Now suppose we take $k = n/8e$. Then we get an expected packet number of $2^{n/8e} = 2^{\Omega(n)}$. Since each packet is deciding independently whether to go through $0^n$, we can apply the Chernoff bound to deduce that at least $\frac{1}{2} \cdot 2^{n/8e}$ packets go through $0^n$ with high probability.

¹MR refers to this text: Motwani, Rajeev, and Prabhakar Raghavan. Randomized Algorithms. Cambridge: Cambridge University Press, 1995.

Problem 2

1. As mentioned in the problem statement, every $X_i$ has a distribution equal to the length of a sequence of coin flips until we see the first heads.
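As a quick sanity check (a sketch, not part of the original solution; the function name and seed are my own), one can simulate these coin-flip lengths and confirm that each $X_i$ has mean 2:

```python
import random

def flips_until_head(rng: random.Random) -> int:
    """Length of a fair-coin flip sequence up to and including the first head."""
    count = 1
    while rng.random() >= 0.5:  # tails with probability 1/2
        count += 1
    return count

rng = random.Random(0)
trials = 200_000
mean = sum(flips_until_head(rng) for _ in range(trials)) / trials
print(mean)  # sample mean; should be close to E[X_i] = 2
```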
Therefore $X = \sum_{i=1}^n X_i$ has the same distribution as the length of a sequence of coin flips until we see the $n$-th head. Imagine having an infinite sequence of coin flips; then $X$ gives the position of the $n$-th head. The event $X > (1+\delta)2n$ is therefore the same as saying that the $n$-th head does not occur among the first $(1+\delta)2n$ coin flips. Let $Y$ be the random variable giving the number of heads among the first $(1+\delta)2n$ coin flips. Then we have
$$\Pr[X > (1+\delta)2n] = \Pr[Y < n].$$
Since $Y$ is the sum of independent Poisson trials, we can apply a Chernoff bound on $Y$ to bound the above probability. Noting that $\mu_Y = (1+\delta)n$, we have
$$\Pr[X > (1+\delta)2n] = \Pr[Y < n] = \Pr\left[Y < \left(1 - \frac{\delta}{1+\delta}\right)(1+\delta)n\right] \le \exp\left(-(1+\delta)n \cdot \frac{\delta^2}{2(1+\delta)^2}\right) = \exp\left(-\frac{n\delta^2}{2(1+\delta)}\right).$$

2. (optional) Instead of considering $\mathrm{E}[X]$ directly, we consider
$$\mathrm{E}[e^{tX}] = \mathrm{E}\left[\exp\left(t\textstyle\sum X_i\right)\right] = \mathrm{E}\left[\textstyle\prod e^{tX_i}\right] = \textstyle\prod \mathrm{E}[e^{tX_i}],$$
where we fix $t$ later. By applying a Markov bound, we obtain
$$\Pr[X > (1+\delta)2n] = \Pr[e^{tX} > e^{t(1+\delta)2n}] \le \frac{\mathrm{E}[e^{tX}]}{e^{t(1+\delta)2n}} = \frac{\prod \mathrm{E}[e^{tX_i}]}{e^{t(1+\delta)2n}}. \quad (*)$$
Now we have (assuming $e^t < 2$):
$$\mathrm{E}[e^{tX_i}] = \frac{1}{2}e^t + \frac{1}{4}e^{2t} + \frac{1}{8}e^{3t} + \cdots = \sum_{k=1}^{\infty}\left(\frac{e^t}{2}\right)^k = \frac{e^t/2}{1 - e^t/2} = \frac{e^t}{2 - e^t}.$$
Substitution in $(*)$ yields:
$$\Pr[X > (1+\delta)2n] \le \frac{e^{tn}}{(2-e^t)^n\, e^{t(1+\delta)2n}} = \left(\frac{1}{(2-e^t)\,e^{t(1+2\delta)}}\right)^n.$$
Taking the derivative by $t$ and setting it equal to zero shows that this term takes its minimum for $t = \ln(1 + \delta/(1+\delta))$, which implies $e^t < 2$ as desired. We therefore have the bound
$$\Pr[X > (1+\delta)2n] \le \left[\left(1 - \frac{\delta}{1+\delta}\right)\left(1 + \frac{\delta}{1+\delta}\right)^{1+2\delta}\right]^{-n}. \quad (**)$$
This becomes a bit tighter than the result from part 1 if $\delta$ becomes small. Let $\varepsilon > 0$ be some small constant. Then there is some $\delta_0$ such that for all $\delta < \delta_0$, we have:
$$1 - \delta/(1+\delta) > \exp(-\varepsilon),$$
$$(1 + \delta/(1+\delta))^{(1+\delta)/\delta} > \exp(1-\varepsilon),$$
$$\delta^2/(1+\delta) + \delta > \delta^2.$$
Writing the exponent of $(**)$ as $1+2\delta = \frac{1+\delta}{\delta}\cdot\frac{\delta(1+2\delta)}{1+\delta}$ and noting that $\frac{\delta(1+2\delta)}{1+\delta} = \frac{\delta^2}{1+\delta} + \delta$, we can use these to bound $(**)$:
$$(**) \le \left[\exp\left(-\varepsilon + (1-\varepsilon)\left(\frac{\delta^2}{1+\delta} + \delta\right)\right)\right]^{-n} \le \exp(-n((1-\varepsilon)\delta^2 - \varepsilon)).$$
Thus, we come arbitrarily close to $\exp(-n\delta^2)$ as $\varepsilon$ tends to 0.
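The two tail bounds derived above can be compared numerically; this sketch (function names and the test point $n=100$, $\delta=0.1$ are my own choices) evaluates the part 1 bound and the optimized bound $(**)$:

```python
import math

def bound_part1(n: int, d: float) -> float:
    """Chernoff bound from part 1: exp(-n d^2 / (2 (1 + d)))."""
    return math.exp(-n * d * d / (2 * (1 + d)))

def bound_part2(n: int, d: float) -> float:
    """Optimized MGF bound (**): [(1 - u) (1 + u)^(1 + 2d)]^(-n), u = d / (1 + d)."""
    u = d / (1 + d)
    return ((1 - u) * (1 + u) ** (1 + 2 * d)) ** (-n)

n, d = 100, 0.1
print(bound_part1(n, d), bound_part2(n, d))
# Part 2 should be the smaller (tighter) bound, approaching exp(-n d^2) for small d.
```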
Problem 3

(a) The probability of a sample yielding the mean of $n/2$ can be approximated by Stirling's formula as follows:
$$\binom{n}{n/2}2^{-n} = \frac{n!}{((n/2)!)^2\, 2^n} \approx \frac{\sqrt{2\pi n}\,(n/e)^n}{\pi n\,\left(\frac{n}{2e}\right)^n 2^n} = \sqrt{\frac{2}{\pi n}} = \Theta(1/\sqrt{n}).$$
It is also not hard to see that all other outcomes have lower probability. It follows that the range of values in $n/2 \pm c\sqrt{n}$ has total probability $O(c)$, so by appropriate choice of the constant, one can achieve the desired probability of being outside the range.

(b) We use the set notation, where we have a universe $U = \{1, \ldots, n\}$, subsets $S_1, S_2, \ldots, S_n \subseteq U$, and a partition of $U$ into $(P, \bar{P})$, which we identify with the set $P \subseteq U$. We call a partition good for $S_i$ if the discrepancy of $S_i$ is at most $\frac{2}{3}c\sqrt{n}$, where $c$ is chosen according to part (a). A partition is good for a choice of sets $\bar{S} = (S_i)_{i=1,\ldots,n}$ if it is good for all sets.

We choose the sets as follows: we set $S_1 = U$, and let the other $S_i$ be independent random sets that include each element with probability $1/2$. We want to show that there exists a choice of sets $\bar{S}$ so that no partition is good for it. Using the probabilistic method, this amounts to showing:
$$\Pr_{\bar{S}}[\exists \text{ good partition } P \text{ for } \bar{S}] < 1.$$
We have
$$\Pr_{\bar{S}}[\exists \text{ good partition } P \text{ for } \bar{S}] \le \sum_P \Pr_{\bar{S}}[P \text{ is good for } \bar{S}] = \sum_P \prod_{i=1}^{n} \Pr_{S_i}[P \text{ is good for } S_i],$$
by a union bound and the independence of the $S_i$.

Let $P$ be fixed in the following, and we will estimate $\prod_{i=1}^n \Pr_{S_i}[P \text{ is good for } S_i]$. Since $S_1 = U$, this quantity is non-zero only if
$$|P| \in [n/2 - c\sqrt{n}/3,\ n/2 + c\sqrt{n}/3], \quad (*)$$
since $P$ would otherwise split $S_1$ too unevenly. So we can assume that $P$ is in this range for the following.

Consider some set $S_i$ for $i \ge 2$. Let $X$, $Y$, $Z$ be the following random variables: $X = |S_i \cap P|$, $Y = |S_i \cap \bar{P}|$, $Z = |\bar{S}_i \cap P|$. Note …
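Part (a)'s estimate $\binom{n}{n/2}2^{-n} = \Theta(1/\sqrt{n})$ can be checked numerically; this sketch (my own, assuming even $n$) compares the exact central probability against the Stirling approximation $\sqrt{2/(\pi n)}$:

```python
import math

def central_prob(n: int) -> float:
    """Pr[Binomial(n, 1/2) = n/2], computed exactly via the central binomial coefficient."""
    return math.comb(n, n // 2) / 2 ** n

def stirling_estimate(n: int) -> float:
    """The approximation sqrt(2 / (pi n)) from part (a)."""
    return math.sqrt(2 / (math.pi * n))

for n in (10, 100, 1000):
    print(n, central_prob(n), stirling_estimate(n))
# The ratio of the two values tends to 1 as n grows, confirming the Theta(1/sqrt(n)) rate.
```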