CS 70 Discrete Mathematics for CS
Fall 2006
Papadimitriou & Vazirani

Lecture 18: Conditional Probability

A pharmaceutical company is marketing a new test for a certain medical condition. According to clinical trials, the test has the following properties:

1. When applied to an affected person, the test comes up positive in 90% of cases, and negative in 10% (these are called "false negatives").

2. When applied to a healthy person, the test comes up negative in 80% of cases, and positive in 20% (these are called "false positives").

Suppose that the incidence of the condition in the US population is 5%. When a random person is tested and the test comes up positive, what is the probability that the person actually has the condition? (Note that this is presumably not the same as the simple probability that a random person has the condition, which is just 1/20.)

This is an example of a conditional probability: we are interested in the probability that a person has the condition (event A) given that he/she tests positive (event B). Let's write this as Pr[A|B].

How should we compute Pr[A|B]? Well, since event B is guaranteed to happen, we need to look not at the whole sample space Ω, but at the smaller sample space consisting only of the sample points in B. What should the probabilities of these sample points be? If they all simply inherit their probabilities from Ω, then the sum of these probabilities will be ∑_{ω∈B} Pr[ω] = Pr[B], which in general is less than 1. So we need to scale the probability of each sample point by 1/Pr[B].
I.e., for each sample point ω ∈ B, the new probability becomes

    Pr[ω|B] = Pr[ω] / Pr[B].

Now it is clear how to compute Pr[A|B]: namely, we just sum up these scaled probabilities over all sample points that lie in both A and B:

    Pr[A|B] = ∑_{ω∈A∩B} Pr[ω|B] = ∑_{ω∈A∩B} Pr[ω] / Pr[B] = Pr[A∩B] / Pr[B].

Definition 18.1 (conditional probability): For events A, B in the same probability space, such that Pr[B] > 0, the conditional probability of A given B is

    Pr[A|B] = Pr[A∩B] / Pr[B].

Let's go back to our medical testing example. The sample space here consists of all people in the US — denote their number by N (so N ≈ 250 million). The population consists of four disjoint subsets:

    TP: the true positives (90% of N/20 = 9N/200 of them);
    FP: the false positives (20% of 19N/20 = 19N/100 of them);
    TN: the true negatives (80% of 19N/20 = 76N/100 of them);
    FN: the false negatives (10% of N/20 = N/200 of them).

Now let A be the event that a person chosen at random is affected, and B the event that he/she tests positive. Note that B is the union of the disjoint sets TP and FP, so

    |B| = |TP| + |FP| = 9N/200 + 19N/100 = 47N/200.

Thus we have Pr[A] = 1/20 and Pr[B] = 47/200.

Now when we condition on the event B, we focus in on the smaller sample space consisting only of those 47N/200 individuals who test positive. To compute Pr[A|B], we need to figure out Pr[A∩B] (the part of A that lies in B). But A∩B is just the set of people who are both affected and test positive, i.e., A∩B = TP. So we have

    Pr[A∩B] = |TP|/N = 9/200.

Finally, we conclude from Definition 18.1 that

    Pr[A|B] = Pr[A∩B] / Pr[B] = (9/200)/(47/200) = 9/47 ≈ 0.19.

This seems bad: if a person tests positive, there's only about a 19% chance that he/she actually has the condition! This sounds worse than the original claims made by the pharmaceutical company, but in fact it's just another view of the same data.

[Incidentally, note that Pr[B|A] = (9/200)/(1/20) = 9/10; so Pr[A|B] and Pr[B|A] can be very different.
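As a sanity check, both conditional probabilities can be recomputed from the four counts with exact arithmetic. Here is a minimal Python sketch; the concrete population size N is an arbitrary illustrative choice, since it cancels out of every ratio:

```python
from fractions import Fraction

N = Fraction(1000)  # hypothetical population size; any N > 0 gives the same ratios

# The four disjoint groups, from 5% incidence and the 90%/80% test accuracies.
TP = Fraction(9, 10) * (N / 20)        # affected, tests positive: 9N/200
FN = Fraction(1, 10) * (N / 20)        # affected, tests negative: N/200
FP = Fraction(2, 10) * (19 * N / 20)   # healthy, tests positive: 19N/100
TN = Fraction(8, 10) * (19 * N / 20)   # healthy, tests negative: 76N/100

pr_A_and_B = TP / N                    # Pr[A ∩ B] = 9/200
pr_B = (TP + FP) / N                   # Pr[B] = 47/200
pr_A = (TP + FN) / N                   # Pr[A] = 1/20

print(pr_A_and_B / pr_B)   # Pr[A|B] -> 9/47
print(pr_A_and_B / pr_A)   # Pr[B|A] -> 9/10
```

Using `Fraction` rather than floats keeps the answers as exact ratios, matching the hand computation.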
Of course, Pr[B|A] is just the probability that a person tests positive given that he/she has the condition, which we knew from the start was 90%.]

To complete the picture, what's the (unconditional) probability that the test gives a correct result (positive or negative) when applied to a random person? Call this event C. Then

    Pr[C] = (|TP| + |TN|)/N = 9/200 + 76/100 = 161/200 ≈ 0.8.

So the test is about 80% effective overall, a more impressive statistic.

But how impressive is it? Suppose we ignore the test and just pronounce everybody to be healthy. Then we would be correct on 95% of the population (the healthy ones), and wrong on the affected 5%. I.e., this trivial test is 95% effective! So we might ask if it is worth running the test at all. What do you think?

Here are a couple more examples of conditional probabilities, based on some of our sample spaces from the previous lecture.

1. Balls and bins. Suppose we toss m = 3 balls into n = 3 bins; this is a uniform sample space with 3^3 = 27 points. We already know that the probability the first bin is empty is (1 − 1/3)^3 = (2/3)^3 = 8/27. What is the probability of this event given that the second bin is empty? Call these events A, B respectively. To compute Pr[A|B] we need to figure out Pr[A∩B]. But A∩B is the event that both the first two bins are empty, i.e., all three balls fall in the third bin. So Pr[A∩B] = 1/27 (why?). Therefore,

       Pr[A|B] = Pr[A∩B] / Pr[B] = (1/27)/(8/27) = 1/8.

   Not surprisingly, 1/8 is quite a bit less than 8/27: knowing that bin 2 is empty makes it significantly less likely that bin 1 will be empty.

2. Dice. Roll two fair dice. Let A be the event that their sum is even, and B the event that the first die is even. By symmetry it's easy to see that Pr[A] = 1/2 and Pr[B] = 1/2. Moreover, a little counting gives us that Pr[A∩B] = 1/4. What is Pr[A|B]?
Well,

    Pr[A|B] = Pr[A∩B] / Pr[B] = (1/4)/(1/2) = 1/2.

In this case, Pr[A|B] = Pr[A], i.e., conditioning on B does not change the probability of A.

Independent events

Definition 18.2 (independence): Two events A, B in the same probability space are independent if Pr[A|B] = Pr[A].

Note that independence is symmetric: i.e., if Pr[A|B] = Pr[A] then it must also be the case that Pr[B|A] = Pr[B]. To see this, use the definition of conditional probabilities:

    Pr[B|A] = Pr[A∩B] / Pr[A] = (Pr[A∩B] / Pr[B]) × (Pr[B] / Pr[A]) = (Pr[A|B] / Pr[A]) × Pr[B] = Pr[B].

In the last step here, we used our assumption that Pr[A|B] = Pr[A]. [We are assuming here that Pr[A] and Pr[B] are both > 0. Otherwise the conditional probabilities are not defined.]

Examples: In the balls and bins example above, events A, B are not independent. In the dice example, events A, B are independent.

Knowing that events are independent is very useful, because of the following simple observation:

Theorem 18.1: If A, B are independent, then Pr[A∩B] = Pr[A] Pr[B].

Proof: From the definition of conditional probability we have

    Pr[A∩B] = Pr[A|B] Pr[B] = Pr[A] Pr[B],

where in the second step we have used independence. □

Note that the condition in Theorem 18.1
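Both numbered examples above — and the product rule of Theorem 18.1 for the dice events — can be verified by exhaustively enumerating the uniform sample spaces. A minimal sketch (the helper `cond_prob` is ours, not part of the lecture):

```python
from itertools import product
from fractions import Fraction

def cond_prob(event_a, event_b, space):
    """Pr[A|B] on a uniform space: count points in A∩B, renormalize by |B|."""
    b_points = [w for w in space if event_b(w)]
    return Fraction(sum(1 for w in b_points if event_a(w)), len(b_points))

# Balls and bins: 3 balls into bins 1..3, so 3^3 = 27 equally likely outcomes.
bins_space = list(product([1, 2, 3], repeat=3))
bin1_empty = lambda w: 1 not in w
bin2_empty = lambda w: 2 not in w
print(cond_prob(bin1_empty, bin2_empty, bins_space))   # 1/8, versus Pr[A] = 8/27

# Dice: A = sum even, B = first die even. Here Pr[A|B] = Pr[A] = 1/2, so A and B
# are independent, and indeed Pr[A∩B] = 1/4 = Pr[A]·Pr[B].
dice_space = list(product(range(1, 7), repeat=2))
even_sum = lambda w: (w[0] + w[1]) % 2 == 0
first_even = lambda w: w[0] % 2 == 0
print(cond_prob(even_sum, first_even, dice_space))     # 1/2
```

Brute-force enumeration like this is a useful habit for small finite sample spaces: it checks a hand computation in a few lines, and the renormalization in `cond_prob` is exactly the scaling by 1/Pr[B] used to define conditional probability.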