CS 70: Discrete Mathematics for CS
Fall 2003, Wagner, Lecture 17

Introduction to Probability

The topic for the third and final major portion of the course is Probability. We will aim to make sense of statements such as the following:

1. "There is a 30% chance of a magnitude 8 earthquake in Northern California before 2030."
2. "The average time between system failures is about three days."
3. "The chance of getting a flush in a five-card poker hand is about 2 in 1000."
4. "In this load-balancing scheme, the probability that any processor has to deal with more than twelve requests for service is negligible."

Implicit in all such statements is the notion of an underlying probability space. This may be the result of some model we build of the real world (as in 1 and 2 above), or of a random experiment that we have ourselves constructed (as in 3 and 4 above). None of these statements makes sense unless we specify the probability space we are talking about: for this reason, statements like 1 (which are typically made without this context) are almost content-free.

Probability spaces

Every probability space is based on a random experiment, such as rolling a die, shuffling a deck of cards, picking a number, assigning jobs to processors, running a system, etc. Rather than attempt to define "experiment" directly, we shall define it by its set of possible outcomes, which we call a "sample space." Note that all outcomes must be disjoint, and they must cover all possibilities.

Definition 17.1 (sample space): The sample space of an experiment is the set of all possible outcomes. An outcome is often called a sample point or atomic event.

Definition 17.2 (probability space): A probability space is a sample space Ω, together with a probability Pr[ω] for each sample point ω, such that

• 0 ≤ Pr[ω] ≤ 1 for all ω ∈ Ω.
• ∑_{ω ∈ Ω} Pr[ω] = 1, i.e., the sum of the probabilities of all outcomes is 1.

[Strictly speaking, what we have defined above is a restricted set of probability spaces known as discrete spaces: this means that the set of sample points is either finite or countably infinite (such as the natural numbers, or the integers, or the rationals, but not the real numbers). Later, we will talk a little about continuous sample spaces, but for now we assume everything is discrete.]

Here are some examples of (discrete) probability spaces:

1. Flip a fair coin. Here Ω = {H, T}, and Pr[H] = Pr[T] = 1/2.

2. Flip a fair coin three times. Here Ω = {(t_1, t_2, t_3) : t_i ∈ {H, T}}, where t_i gives the outcome of the i-th toss. Thus Ω consists of 2^3 = 8 points, each with equal probability 1/8. More generally, if we flip the coin n times, we get a sample space of size 2^n (corresponding to all words of length n over the alphabet {H, T}), each point having probability 1/2^n.

3. Flip a biased coin three times. Suppose the bias is two-to-one in favor of Heads, i.e., it comes up Heads with probability 2/3 and Tails with probability 1/3. The sample space here is exactly the same as in the previous example. However, the probabilities are different. For example, Pr[HHH] = 2/3 × 2/3 × 2/3 = 8/27, while Pr[THH] = 1/3 × 2/3 × 2/3 = 4/27. [Note: We have cheerfully multiplied probabilities here; we'll explain why this is OK later. It is not always OK!] More generally, if we flip a biased coin with Heads probability p (and Tails probability 1 − p) n times, the probability of a given sequence is p^r (1 − p)^(n−r), where r is the number of H's in the sequence. Biased coin-tossing sequences show up in many contexts: for example, they might model the behavior of n trials of a faulty system, which fails each time with probability p.
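To make Definition 17.2 concrete, here is a minimal Python sketch (ours, not part of the original notes; the helper name coin_space is hypothetical) that builds the biased-coin sample space of example 3 explicitly and checks both conditions of the definition. It uses exact fractions so that the sum of probabilities comes out to exactly 1.

from itertools import product
from fractions import Fraction

def coin_space(n, p):
    """Sample space for n flips of a coin with Heads-probability p:
    maps each length-n H/T string to its probability p^r * (1-p)^(n-r),
    where r is the number of Heads in the string."""
    space = {}
    for seq in product("HT", repeat=n):
        r = seq.count("H")
        space["".join(seq)] = p**r * (1 - p)**(n - r)
    return space

p = Fraction(2, 3)                 # bias two-to-one in favor of Heads
space = coin_space(3, p)
print(space["HHH"], space["THH"])  # 8/27 4/27

# Definition 17.2: every probability lies in [0, 1] and they sum to 1.
assert all(0 <= q <= 1 for q in space.values())
assert sum(space.values()) == 1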
4. Roll two dice (one red, one blue). Then Ω = {(i, j) : 1 ≤ i, j ≤ 6}. Each of the 36 outcomes has equal probability, 1/36.

5. Roll two indistinguishable dice (i.e., both blue). Here Ω = {(i, j) : 1 ≤ i ≤ j ≤ 6}. There are 21 outcomes (why?); those of the form (i, i) have probability 1/36, and the rest have probability 2/36 = 1/18. [How did we derive these probabilities?] When the dice are indistinguishable, we are forced to use this sample space. We might also use it, for example, when modeling a game of backgammon, even when the two dice are distinguishable: for we are only interested in the two numbers that come up, not in their order.

6. Card shuffling. Shuffle a deck of cards. Here Ω consists of the 52! permutations of the deck, each with equal probability 1/52!. [Note that we're really talking about an idealized mathematical model of shuffling here; in real life, there will always be a bit of bias in our shuffling. However, the mathematical model is close enough to be useful.]

7. Poker hands. Shuffle a deck of cards, and then deal a poker hand. Here Ω consists of all possible five-card hands, each with equal probability (because the deck is assumed to be randomly shuffled). The number of such hands is (52 choose 5), i.e., the number of ways of choosing five cards from the deck of 52 (without worrying about the order).[1] Recall that (n choose k) = n!/(k!(n−k)!). Consequently, (52 choose 5) = 52!/(5! 47!) = (52 × 51 × 50 × 49 × 48)/(5 × 4 × 3 × 2 × 1) = 2598960. Such experiments, in which some number of distinct elements are drawn from a set of size n, are usually known as "sampling without replacement."

8. Prize draws. n contestants enter a prize draw; three distinct winners are picked at random, in order (first, second, third). Here Ω consists of all triples of the form (i, j, k), where i, j, k ∈ {1, . . . , n} and i, j, k are distinct. Each sample point has equal probability. The number of sample points, |Ω|, is equal to the number of ways of picking three elements, in order, out of a set of size n: this was denoted P(n, 3) in Lecture 9, where we also saw that P(n, k) = n!/(n−k)!. So we have |Ω| = n!/(n−3)! = n(n−1)(n−2). This is another example of sampling without replacement, except that in this case the order of the chosen elements is significant.

9. Balls and bins. Throw 20 (distinguishable) balls into 10 (distinguishable) bins, so that each ball is equally likely to land in any bin, regardless of what happens to the other balls. Here Ω = {(b_1, b_2, . . . , b_20) : 1 ≤ b_i ≤ 10}; the component b_i denotes the bin in which ball i lands. There are 10^20 possible outcomes (why?), each with probability 1/10^20. More generally, if we throw m balls into n bins, we get a sample space of size n^m, each outcome having probability 1/n^m.

[1] The notation (n choose k) is pronounced "n choose k"; some writers use the entirely equivalent notation C(n, k).
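As a sanity check on the counting in examples 7 to 9, the following Python sketch (again ours, not from the notes; it assumes Python 3.8+ for math.comb and math.perm) computes each sample-space size from the formula and, where cheap enough, cross-checks it by direct enumeration.

import math
from itertools import combinations

# Example 7: poker hands are unordered samples without replacement,
# so |Omega| = (52 choose 5).
hands = math.comb(52, 5)
print(hands)                # 2598960
# Cross-check by brute-force enumeration of all 5-card subsets of a
# deck labelled 0..51 (about 2.6 million iterations, still fast).
assert hands == sum(1 for _ in combinations(range(52), 5))

# Example 8: ordered samples without replacement,
# so |Omega| = P(n, 3) = n!/(n-3)! = n(n-1)(n-2).
n = 10                      # an arbitrary number of contestants
assert math.perm(n, 3) == n * (n - 1) * (n - 2)
assert math.perm(n, 3) == math.factorial(n) // math.factorial(n - 3)

# Example 9: 20 distinguishable balls into 10 distinguishable bins.
print(10**20)               # |Omega|; each outcome has probability 1/10**20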

