U.C. Berkeley Handout N20
CS294: Pseudorandomness and Combinatorial Constructions        November 8, 2005
Professor Luca Trevisan        Scribe: Alexandra Kolla

Notes for Lecture 20

In the previous two lectures we have seen that we can use the analysis of the Nisan-Wigderson generator to argue that, starting from a distribution $X$ of min-entropy at least $k$ over strings of length $n$, if our construction is not a $(k, \epsilon)$ extractor, then we obtain a short description for a non-negligible fraction of $X$. For this to be a contradiction, we need $k > O(m^2) + \log \frac{1}{\epsilon}$. This forces $k$ to be large ($\Omega(n)$) if we want to use only $t = O(\log n)$ truly random bits. In order to achieve extractors for distributions having smaller min-entropy $k$, we will need to pre-process the input random source using a condenser and output a shorter string close to a distribution of the same min-entropy as the original one.

To do that, we apply again the same construction used in the previous lecture based on the NW generator, but now we pick the output length $m$ to be much bigger than $k$. Therefore the output cannot be close to uniform, our construction cannot be an extractor, and we have a short description of a non-negligible fraction of $X$. Now we can proceed as follows: on input $x$, we output the string that corresponds to the short description of $x$. This string will be of length $m \cdot 2^a = \sqrt{n}$, and it will enable us to reconstruct $x$ w.h.p. (since we are using a suitable ECC for $x$), thus preserving the entropy.

Formally, in the NW generator we use the following parameters:

- $m$ subsets $S_1, \ldots, S_m$ of $\{1, \ldots, d\}$;
- $|S_i| = l$;
- $|S_i \cap S_j| \leq a$.

For $f : \{0,1\}^l \to \{0,1\}$ we denote $NW^f(z) = f(z|_{S_1}) \cdots f(z|_{S_m})$.

We use an error-correcting code $ECC : \{0,1\}^n \to \{0,1\}^{\bar{n}}$. For $\bar{n} = 2^l$ we view $ECC(x) \in \{0,1\}^{\bar{n}}$ as a function $f_x : \{0,1\}^l \to \{0,1\}$. We denote $NWE(x, z) = NW^{f_x}(z) = f_x(z|_{S_1}) \cdots f_x(z|_{S_m})$.

We will first consider the case where $X$ is uniform over a set of size $2^k$, and therefore has min-entropy exactly $k$. In the following lectures we will generalize to min-entropy $\geq k$.

For $m \gg k$ the output cannot be close to uniform, therefore there is a statistical test that distinguishes it from uniform. By a hybrid argument, we can conclude that there is an $i$ such that $f_x(z|_{S_i})$ can be predicted given $f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})$. Equivalently, for $w \in \{0,1\}^l$, setting $z|_{S_i} = w$ and $z|_{[d] - S_i}$ random, $f_x(w)$ can be predicted given $f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})$. Each of those functions depends only on at most $a$ bits of $w$, and so needs at most $2^a$ values to be stored in a table. This gives us a total of $m \cdot 2^a$ bits of information, as promised.

Idea: on input $x$, output $m \cdot 2^a$ bits of information. Since we are using $ECC(x)$ (the appropriate choice is to be specified later), $x$ can be reconstructed from $m \cdot 2^a$ bits, which will ensure that we have almost the same entropy in the output. However, there could be a catch: it could be the case that only a small fraction of the $z$ allow us to predict $x$ w.h.p.

In order for our condenser to succeed, we want to look at the output and be able to reconstruct the input w.h.p. for almost every choice of $i$ and $z$.
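Before making this formal, here is a minimal Python sketch of the $NWE(x, z)$ evaluation defined above, just to fix the picture: the seed $z$ is restricted to each set of the design, and the codeword $ECC(x)$ is read as a truth table. The helper names, the toy design, and the stand-in codeword are illustrative assumptions, not the specific constructions used in the course.

```python
# Minimal sketch of evaluating NWE(x, z) = f_x(z|S_1) ... f_x(z|S_m).
# The design S_1, ..., S_m and the "codeword" below are toy placeholders;
# no specific design construction or error-correcting code is assumed.
from typing import Callable, List, Sequence


def restrict(z: Sequence[int], S: Sequence[int]) -> tuple:
    """Return z restricted to the coordinates in S, i.e. z|_S."""
    return tuple(z[c] for c in S)


def nw_generator(f: Callable[[tuple], int], z: Sequence[int],
                 design: List[List[int]]) -> List[int]:
    """NW^f(z): apply f to each restriction z|_{S_i} of the seed z."""
    return [f(restrict(z, S)) for S in design]


def nwe(x_codeword: Sequence[int], z: Sequence[int],
        design: List[List[int]], l: int) -> List[int]:
    """NWE(x, z) = NW^{f_x}(z), where f_x : {0,1}^l -> {0,1} is the truth
    table given by the codeword ECC(x), of length 2^l."""
    assert len(x_codeword) == 2 ** l

    def f_x(w: tuple) -> int:
        # Interpret the l-bit string w as an index into the codeword.
        return x_codeword[int("".join(map(str, w)), 2)]

    return nw_generator(f_x, z, design)


# Toy usage: l = 2, d = 3, two sets with |S_1 ∩ S_2| = 1 (so a = 1).
design = [[0, 1], [1, 2]]
codeword = [0, 1, 1, 0]                        # stand-in for ECC(x), NOT a real code
print(nwe(codeword, [1, 0, 1], design, l=2))   # -> [1, 1]
```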
Formally: take $m > 10k/\epsilon$. We want to predict $f_x(z|_{S_i})$ with probability $\geq 1 - \epsilon/10$, given $f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})$.

To achieve our goal, we would like a predictor function as follows:

- Fix the random source $X$, uniform over a set of size $2^k$.
- Fix a particular $z$.
- Let $x \sim X$, $x \in \{0,1\}^n$, $f_x = ECC(x)$, and $i \sim [1 \cdots m]$ u.a.r.
- (*) Given $f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})$, we want to compute $f_x(z|_{S_i})$ with probability $\geq 1 - \epsilon/10$ over the distribution of $x$ and the choice of $i$.

In order to be able to accomplish (*), let us first look at the Shannon entropy of the distribution $NW^{f_x}(z)$. By definition,

\[ H(Y) = \sum_{a : \Pr[Y = a] \neq 0} \Pr[Y = a] \log \frac{1}{\Pr[Y = a]}. \]

Since NW is a deterministic procedure, it can only decrease the entropy (the probabilities of the events can only get larger). Therefore,

\[ k \geq H(f_x(z|_{S_1}) \cdots f_x(z|_{S_m})) = H(f_x(z|_{S_1})) + H(f_x(z|_{S_2}) \mid f_x(z|_{S_1})) + \cdots + H(f_x(z|_{S_m}) \mid f_x(z|_{S_1}) \cdots f_x(z|_{S_{m-1}})). \]

The right-hand side is a sum of $m$ terms, each measuring how much "fresh" information there is given the previous bits. On average, this information is only $k/m \leq \epsilon/10$:

\[ \mathbb{E}_{i \sim [m]} \, H(f_x(z|_{S_i}) \mid f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})) \leq k/m \leq \epsilon/10. \]

Now we are ready to define the predictor that will allow us to accomplish (*) above. When we want to compute $f_x(z|_{S_i})$:

- output 1 if $\Pr[f_x(z|_{S_i}) = 1 \mid f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})] > \Pr[f_x(z|_{S_i}) = 0 \mid f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})]$;
- output 0 otherwise.

In the above, the probabilities are taken over the distribution of $x$. (Python sketches of this predictor and of the condenser defined below are given at the end of these notes.)

Now suppose that $f_x(z|_{S_1}) = b_1, \ldots, f_x(z|_{S_{i-1}}) = b_{i-1}$. Only some of the original $x$ can lead to these values. Over the distribution of those $x$'s, let

\[ \Pr[f_x(z|_{S_i}) = 1 \mid f_x(z|_{S_1}) = b_1, \ldots, f_x(z|_{S_{i-1}}) = b_{i-1}] = p_{b_1, \ldots, b_{i-1}} = p. \]

It follows that, conditioning on those values, the predictor is wrong with probability $\min\{p, 1-p\} \leq p \log \frac{1}{p} + (1-p) \log \frac{1}{1-p} = H(p)$.

Let $\mathcal{W}(i, z)$ be the event that the predictor is wrong for the specific $i$ and $z$. Taking the probability over the distribution $X$:

\[ \Pr_{x \sim X}[\mathcal{W}(i, z)] \leq \sum_{b_1, \ldots, b_{i-1}} \Pr[f_x(z|_{S_1}) = b_1, \ldots, f_x(z|_{S_{i-1}}) = b_{i-1}] \cdot H(p_{b_1, \ldots, b_{i-1}}) = H(f_x(z|_{S_i}) \mid f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})). \]

If we choose $i$ at random as well:

\[ \Pr_{x \sim X, \, i \sim [m]}[\mathcal{W}(i, z)] \leq \mathbb{E}_i \sum_{b_1, \ldots, b_{i-1}} \Pr[f_x(z|_{S_1}) = b_1, \ldots, f_x(z|_{S_{i-1}}) = b_{i-1}] \cdot H(p_{b_1, \ldots, b_{i-1}}) = \mathbb{E}_i \left[ H(f_x(z|_{S_i}) \mid f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})) \right] \leq k/m \leq \epsilon/10. \]

Therefore the algorithm we specified above is correct with probability $\geq 1 - \epsilon/10$ over the choice of $i$ and $x$. We can now conclude that for every $z$ there is a function $p_z$ (the predictor defined above) such that

\[ \Pr[p_z(f_x(z|_{S_1}) \cdots f_x(z|_{S_{i-1}})) = f_x(z|_{S_i})] \geq 1 - \epsilon/10. \]

We are now ready to define our condenser:

Cond(x, z, i), with $z \in \{0,1\}^d$, $i \in [m]$:
    compute $f_x = ECC(x)$ and view $f_x$ as a function $f_x : \{0,1\}^l \to \{0,1\}$;
    for $j = 1, \ldots, i-1$:
        for every $z'$ that differs from $z$ only in $S_i \cap S_j$, output $f_x(z'|_{S_j})$;
    output $z, i$.

In the rest of the lecture, we will present the main lemma, which will later allow us to prove that the output of the condenser is indeed $\epsilon$-close to a distribution with the same min-entropy as the original one. Intuitively, we want to prove that the output of the condenser does not lose much entropy; to prove this we will need a deterministic reconstruction procedure that can reconstruct the input $x$ of the condenser with high probability. More precisely:

Lemma 1 (Main Lemma) Assuming that the ECC has min-distance $> \bar{n}/5$, there is a
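The predictor $p_z$ above is an information-theoretic object: it outputs whichever value of $f_x(z|_{S_i})$ is more likely given the observed prefix, over the conditional distribution of $x$. As a sanity check, here is a minimal Python sketch that realizes it by brute-force enumeration over the support of $X$; the names (`predict_bit`, `support`, `design`, `observed`) and the 0-indexing of $i$ are illustrative assumptions, and the enumeration is purely for illustration, not an efficient procedure.

```python
# Brute-force sketch of the maximum-likelihood predictor p_z: output 1 iff
# Pr[f_x(z|S_i) = 1 | observed prefix] > Pr[f_x(z|S_i) = 0 | observed prefix],
# with x uniform over `support` (the support of X). Illustrative only.
from typing import List, Sequence


def predict_bit(support: List[Sequence[int]], z: Sequence[int],
                design: List[List[int]], i: int,
                observed: Sequence[int]) -> int:
    """support: codewords ECC(x) of length 2^l (truth tables of f_x);
    observed: the bits f_x(z|S_1) ... f_x(z|S_{i-1}); i is 0-indexed here."""

    def bit(codeword: Sequence[int], S: List[int]) -> int:
        # Evaluate f_x on the restriction z|_S by indexing into the truth table.
        w = [z[c] for c in S]
        return codeword[int("".join(map(str, w)), 2)]

    # Keep only the x's that are consistent with the observed prefix ...
    consistent = [cw for cw in support
                  if all(bit(cw, design[j]) == observed[j] for j in range(i))]
    # ... and output the majority value of f_x(z|S_i) among them (ties -> 0).
    ones = sum(bit(cw, design[i]) for cw in consistent)
    return 1 if 2 * ones > len(consistent) else 0
```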
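Similarly, the pseudocode for Cond(x, z, i) can be transcribed almost literally. The sketch below is only a minimal illustration under the same assumptions as the earlier sketches (0-indexed $i$, the codeword given as a truth table, an unspecified design and ECC); note that it outputs at most $m \cdot 2^a$ bits plus the seed and the index, matching the count in the discussion above.

```python
# Direct transcription of Cond(x, z, i) under assumed parameter conventions:
# for each j < i, output f_x(z'|S_j) for every z' that agrees with z outside
# S_i ∩ S_j, then output (z, i).
from itertools import product
from typing import List, Sequence, Tuple


def condense(x_codeword: Sequence[int], z: Sequence[int], i: int,
             design: List[List[int]]) -> Tuple[Tuple[int, ...], Tuple[int, ...], int]:
    """x_codeword: ECC(x) viewed as the truth table of f_x : {0,1}^l -> {0,1};
    i is 0-indexed here, so design[i] plays the role of S_{i+1} in the notes."""

    def f_x(w: Sequence[int]) -> int:
        return x_codeword[int("".join(map(str, w)), 2)]

    out_bits = []
    S_i = set(design[i])
    for j in range(i):
        # z'|_{S_j} depends on z' only through the coordinates in S_i ∩ S_j,
        # so there are at most 2^a strings to output for this j.
        overlap = sorted(S_i & set(design[j]))
        for assignment in product([0, 1], repeat=len(overlap)):
            z_prime = list(z)
            for coord, bit in zip(overlap, assignment):
                z_prime[coord] = bit
            out_bits.append(f_x([z_prime[c] for c in design[j]]))
    return tuple(out_bits), tuple(z), i
```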