COMPSCI 102 (Duke CPS 102): Introduction to Discrete Mathematics
Lecture 11 (October 3, 2007)

Today, we will learn about a formidable tool in probability that will allow us to solve problems that seem really, really messy.

If I randomly put 100 letters into 100 addressed envelopes, on average how many letters will end up in their correct envelopes?

Hmm… Σ_k k · Pr(exactly k letters end up in their correct envelopes) = (…aargh!!…)

On average, in a class of size m, how many pairs of people will have the same birthday?

Σ_k k · Pr(exactly k collisions) = (…aargh!!!!…)

The new tool is called "Linearity of Expectation".

Random Variables

To use this new tool, we will also need to understand the concept of a Random Variable. Today's lecture: not too much material, but you need to understand it well.

Let S be the sample space of a probability distribution. A random variable is a real-valued function on S.

Examples (for a roll of a white die and a black die):

  X = value of the white die:  X(3,4) = 3, X(1,6) = 1
  Y = sum of the values of the two dice:  Y(3,4) = 7, Y(1,6) = 7
  W = (value of white die)^(value of black die):  W(3,4) = 3^4 = 81, W(1,6) = 1^6 = 1
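
Seen in code, a random variable is just an ordinary function on outcomes. A minimal Python sketch (not from the slides; X, Y, W follow the dice examples above):

    from itertools import product

    # Sample space S: all 36 equally likely (white, black) outcomes
    S = list(product(range(1, 7), repeat=2))

    # Random variables are just real-valued functions on outcomes
    X = lambda w, b: w          # value of the white die
    Y = lambda w, b: w + b      # sum of the two dice
    W = lambda w, b: w ** b     # white die raised to the black die

    print(X(3, 4), Y(3, 4), W(3, 4))   # 3 7 81
    print(X(1, 6), Y(1, 6), W(1, 6))   # 1 7 1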

Tossing a Fair Coin n Times

S = all sequences in {H, T}^n
D = the uniform distribution on S:  D(x) = (½)^n for all x ∈ S

Random variables (say n = 10):

  X = # of heads:  X(HHHTTHTHTT) = 5
  Y = 1 if #heads = #tails, 0 otherwise:  Y(HHHTTHTHTT) = 1, Y(THHHHTTTTT) = 0

Notational Conventions

Use letters like A, B, E for events, and letters like X, Y, f, g for random variables (R.V.s).

Two Views of Random Variables

Think of an R.V. as a function from S to the reals, or think of the distribution it induces on the reals. The input to the function is random; the randomness is "pushed" to the values of the function.

Two coins tossed: X: {TT, TH, HT, HH} → {0, 1, 2} counts the number of heads. Each outcome in S has probability ¼, and the induced distribution on the reals is Pr(X = 0) = ¼, Pr(X = 1) = ½, Pr(X = 2) = ¼.

It's a Floor Wax AND a Dessert Topping

An R.V. is a function on the sample space S, and it's a variable with a probability distribution on its values. You should be comfortable with both views.
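
The induced-distribution view can be checked mechanically — a short sketch (not from the slides) of the two-coin example, collecting probability mass by value of X:

    from itertools import product
    from collections import Counter

    S = list(product("HT", repeat=2))     # HH, HT, TH, TT, each with prob 1/4
    X = lambda x: x.count("H")            # number of heads

    induced = Counter(X(x) for x in S)    # how many outcomes map to each value
    dist = {k: cnt / len(S) for k, cnt in induced.items()}
    print(dist)                           # {2: 0.25, 1: 0.5, 0: 0.25}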

From Random Variables to Events

For any random variable X and value a, we can define the event A that X = a:

  Pr(A) = Pr(X = a) = Pr({x ∈ S | X(x) = a})

In the two-coin example: Pr(X = 1) = Pr({x ∈ S | X(x) = 1}) = Pr({TH, HT}) = ½.

From Events to Random Variables

For any event A, we can define the indicator random variable X_A for A:

  X_A(x) = 1 if x ∈ A, 0 if x ∉ A

Definition: Expectation

The expectation, or expected value, of a random variable X is written E[X], and is

  E[X] = Σ_{x ∈ S} Pr(x) · X(x) = Σ_k k · Pr[X = k]

A Quick Calculation…

What if I flip a coin 3 times? What is the expected number of heads?

  E[X] = (1/8)·0 + (3/8)·1 + (3/8)·2 + (1/8)·3 = 1.5

But Pr[X = 1.5] = 0. Moral: don't always expect the expected. Pr[X = E[X]] may be 0!

Type Checking

A Random Variable is the type of thing you might want to know an expected value of. If you are computing an expectation, the thing whose expectation you are computing is a random variable.

Indicator R.V.s: E[X_A] = Pr(A)

For the indicator random variable X_A of any event A:

  E[X_A] = 1 × Pr(X_A = 1) = Pr(A)
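
Both sums in the definition are easy to verify by brute force on the three-flip example — a short sketch (the fair coin and the value 1.5 come from the calculation above):

    from itertools import product
    from collections import Counter

    S = list(product("HT", repeat=3))      # 8 equally likely outcomes
    pr = 1 / len(S)                        # uniform: Pr(x) = 1/8
    X = lambda x: x.count("H")             # number of heads

    # E[X] as a sum over the sample space
    e1 = sum(pr * X(x) for x in S)

    # E[X] as a sum over the values k of X
    counts = Counter(X(x) for x in S)      # induced distribution, up to the 1/8
    e2 = sum(k * cnt * pr for k, cnt in counts.items())

    print(e1, e2)                          # 1.5 1.5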

Adding Random Variables

If X and Y are random variables (on the same sample space S), then Z = X + Y is also a random variable: Z(x) = X(x) + Y(x). E.g., rolling two dice: X = 1st die, Y = 2nd die, Z = sum of the two dice.

Another example: consider picking a random person in the world. Let X = the length of the person's left arm in inches and Y = the length of the person's right arm in inches, and let Z = X + Y. Z measures the combined arm lengths.

Independence

Two random variables X and Y are independent if for every a, b, the events X = a and Y = b are independent. How about the case of X = 1st die, Y = 2nd die? (Independent.) X = left arm, Y = right arm? (Very much not.)

Linearity of Expectation

If Z = X + Y, then E[Z] = E[X] + E[Y], even if X and Y are not independent.

  E[Z] = Σ_{x ∈ S} Pr[x] · Z(x)
       = Σ_{x ∈ S} Pr[x] · (X(x) + Y(x))
       = Σ_{x ∈ S} Pr[x] · X(x) + Σ_{x ∈ S} Pr[x] · Y(x)
       = E[X] + E[Y]

E.g., 2 fair flips: X = 1st coin, Y = 2nd coin, Z = X + Y = total # of heads.

  outcome   X, Y, Z
  HH        1, 1, 2
  HT        1, 0, 1
  TH        0, 1, 1
  TT        0, 0, 0

What is E[X]? E[Y]? E[Z]?

E.g., 2 fair flips: X = "at least one coin is heads", Y = "both coins are heads", Z = X + Y.

  outcome   X, Y, Z
  HH        1, 1, 2
  HT        1, 0, 1
  TH        1, 0, 1
  TT        0, 0, 0

Are X and Y independent? What is E[X]? E[Y]? E[Z]? (See the check below.)

By Induction

  E[X_1 + X_2 + … + X_n] = E[X_1] + E[X_2] + … + E[X_n]

The expectation of the sum = the sum of the expectations.
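
In the second two-flip example, X and Y are certainly not independent, yet linearity still holds — a quick brute-force check (a sketch; the indicator definitions come from the table above):

    from itertools import product

    S = list(product("HT", repeat=2))        # HH, HT, TH, TT, each with prob 1/4
    pr = 1 / len(S)

    X = lambda x: int("H" in x)              # at least one coin is heads
    Y = lambda x: int(x == ("H", "H"))       # both coins are heads

    EX = sum(pr * X(x) for x in S)           # 0.75
    EY = sum(pr * Y(x) for x in S)           # 0.25
    EZ = sum(pr * (X(x) + Y(x)) for x in S)  # 1.0 = EX + EY, despite dependence
    print(EX, EY, EZ)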

It is finally time to show off our probability prowess…

If I randomly put 100 letters into 100 addressed envelopes, on average how many letters will end up in their correct envelopes? Computing Σ_k k · Pr(exactly k letters end up in their correct envelopes) directly is still (…aargh!!…). Use Linearity of Expectation instead.

Let A_i be the event that the i-th letter ends up in its correct envelope, and let X_i be the indicator R.V. for A_i:

  X_i = 1 if A_i occurs, 0 otherwise

Let Z = X_1 + … + X_100. We are asking for E[Z].

  E[X_i] = Pr(A_i) = 1/100, so E[Z] = E[X_1] + … + E[X_100] = 100 · (1/100) = 1

So, in expectation, 1 letter will be in its correct envelope. Pretty neat: it doesn't depend on how many letters there are!

Question: were the X_i independent? No! E.g., think of n = 2: if the first letter lands in its correct envelope, the second letter must as well.
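
A Monte Carlo sanity check of this answer — a sketch (not from the slides) assuming the letters are assigned by a uniformly random permutation, which is what random.shuffle produces; the helper fixed_points is illustrative:

    import random

    def fixed_points(n):
        """Put n letters into n envelopes at random; count correct matches."""
        envelopes = list(range(n))
        random.shuffle(envelopes)
        return sum(1 for letter, env in enumerate(envelopes) if letter == env)

    trials = 20_000
    avg = sum(fixed_points(100) for _ in range(trials)) / trials
    print(avg)   # ≈ 1, and it stays ≈ 1 for any number of letters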

Use Linearity of Expectation

General approach:

  1. View the thing you care about as the expected value of some R.V.
  2. Write this R.V. as a sum of simpler R.V.s (typically indicator R.V.s).
  3. Solve for their expectations and add them up!

Example

We flip n coins of bias p. What is the expected number of heads? We could do this by summing: let X = the number of heads when n independent coins of bias p are flipped, so

  E[X] = Σ_k k · Pr(X = k) = Σ_k k · (n choose k) · p^k · (1 − p)^(n−k)

But now we know a better way! Break X into n simpler R.V.s:

  X_j = 1 if the j-th coin is heads, 0 if the j-th coin is tails

Then, by Linearity of Expectation:

  E[X] = E[Σ_j X_j] = Σ_j E[X_j] = np
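
The messy sum and the linearity shortcut agree — a short check using math.comb for the binomial coefficient (n = 10 and p = 0.3 are arbitrary example values, not from the slides):

    from math import comb

    n, p = 10, 0.3

    # The messy way: sum k * Pr(X = k) over the binomial distribution
    messy = sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

    # The linearity way: n indicators, each with E[X_j] = p
    slick = n * p

    print(messy, slick)   # both 3.0, up to floating-point error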

What About Products?

If Z = XY, is E[Z] = E[X] × E[Y]? No! Take X = the indicator for "1st flip is heads" and Y = the indicator for "1st flip is tails". Then E[XY] = 0, while E[X] × E[Y] = ½ × ½ = ¼.

But It's True If the R.V.s Are Independent

If X and Y are independent, then E[XY] = E[X] × E[Y].

Proof:

  E[X] = Σ_a a · Pr(X = a)
  E[Y] = Σ_b b · Pr(Y = b)
  E[XY] = Σ_c c · Pr(XY = c)
        = Σ_{a,b} a·b · Pr(X = a and Y = b)
        = Σ_{a,b} a·b · Pr(X = a) · Pr(Y = b)        (by independence)
        = (Σ_a a · Pr(X = a)) · (Σ_b b · Pr(Y = b))
        = E[X] × E[Y]
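
Both halves of the story check out by enumeration — a sketch using the dependent pair above, plus an independent pair (1st-flip heads vs. 2nd-flip heads; that pairing is a natural choice here, not from the slides):

    from itertools import product

    # Dependent case: one fair flip; X = "heads", Y = "tails"
    S1 = ["H", "T"]
    EX  = sum(0.5 * (x == "H") for x in S1)               # 0.5
    EY  = sum(0.5 * (x == "T") for x in S1)               # 0.5
    EXY = sum(0.5 * (x == "H") * (x == "T") for x in S1)  # 0.0, not 0.25

    # Independent case: two fair flips; X = 1st is heads, Y = 2nd is heads
    S2 = list(product("HT", repeat=2))
    pr = 0.25
    EX2  = sum(pr * (a == "H") for a, b in S2)                # 0.5
    EY2  = sum(pr * (b == "H") for a, b in S2)                # 0.5
    EXY2 = sum(pr * (a == "H") * (b == "H") for a, b in S2)   # 0.25 = EX2 * EY2

    print(EXY, EX * EY)     # 0.0 0.25  -> product rule fails when dependent
    print(EXY2, EX2 * EY2)  # 0.25 0.25 -> and holds when independent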