Math 52 — Notes on Probability

(These are sketchy notes to accompany our discussion in lecture and section, and the text, pages 361-2. Please let us know of any typos.)

1. Random Variables and PDFs in R^1.

We say a function p(x) is a PDF (probability density/distribution function) if p(x) ≥ 0 for all x, and

    \int_R p(x) \, dx = 1.

Consider a random occurrence with an outcome we call X, some real number. We say "X is a random variable with distribution p(x)" if for every a, b the probability that X lies between a and b is

    Prob(a ≤ X ≤ b) = \int_a^b p(x) \, dx.

The interval [a, b] ⊂ R is also called an event. Notice that the only events we measure are sets (usually ranges) of values of X, not individual values of X.

[Think of the x-axis as the set of possible values of the random variable X, and the graph of p(x) as the continuous analogue of a histogram. This falls in line with the interpretation of p(x) as a probability density: the density principle dictates that the probability that X falls in a small range of width Δx near X = x_0 is approximately p(x_0) Δx. Furthermore, probability is an aggregating quantity (like mass or area), so we may add together disjoint measurements and approximate the total probability via a Riemann sum. In the limit as Δx → 0, the Riemann sum approaches the above integral.]

Expectation. The mean value or expected value of X, written E(X), is defined to be the first moment of x with respect to p(x):

    E(X) = \int_R x \, p(x) \, dx.

(Note this corresponds to a weighted average of x, since

    E(X) = \frac{E(X)}{1} = \frac{\int_R x \, p(x) \, dx}{\int_R p(x) \, dx}.)

In general, the expected value of some function f of X is the first moment of f(x), i.e.

    E(f(X)) = \int_R f(x) \, p(x) \, dx.

(For example, if X is the random variable giving the noontime temperature, in degrees Fahrenheit, on a January day in Palo Alto, then E(X) is the expected value of this temperature, and E((5/9)(X − 32)) is the expected value of this temperature measured in degrees Celsius.)

Class exercise. Show that expectation is linear, i.e. that for constants c and d,

    E(c f(X) + d g(X)) = c E(f(X)) + d E(g(X)).

Variance and Standard Deviation. The variance of a random variable X is the second moment of X − E(X), i.e.

    Var(X) = E((X − E(X))^2).

The variance is also called the "second moment of X about the mean." The standard deviation of X is the square root of Var(X).

Class exercise. Show that Var(X) = E(X^2) − E(X)^2.
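
As a quick sanity check on these definitions (not part of the original notes), here is a short Python sketch that verifies the PDF condition and computes E(X) and Var(X) by numerical integration. The density p(x) = 2x on [0, 1] is a made-up example, and SciPy is an assumed dependency.

```python
# Check that a function is a PDF and compute its mean and variance numerically.
# The density p(x) = 2x on [0, 1] is a hypothetical example, not from the notes.
from scipy.integrate import quad

def p(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

total, _ = quad(p, 0, 1)                       # should be 1, so p is a PDF
mean, _ = quad(lambda x: x * p(x), 0, 1)       # E(X) = 2/3
second, _ = quad(lambda x: x**2 * p(x), 0, 1)  # E(X^2) = 1/2
variance = second - mean**2                    # Var(X) = E(X^2) - E(X)^2 = 1/18

print(total, mean, variance)                   # approx. 1.0, 0.6667, 0.0556
```

The same three integrals apply to every example below; only the density p and the limits of integration change.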
Examples of PDFs

1a. For constants σ > 0 and µ, let

    p(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-(x-\mu)^2 / 2\sigma^2},

which is called the normal (Gaussian, bell-shaped) distribution with center µ and width σ. (You can use single-variable calculus to verify that the graph of p is symmetric about x = µ, has one local max at x = µ, and has two inflection points at x = µ ± σ.)

By calculating the area under the curve using a trick, we proved in class last week that in the case µ = 0, σ = 1, p(x) is indeed a PDF. (See also Problem 33 on page 361.) Exercise: verify this for arbitrary values of µ, σ.

Other facts: if X is a random variable with the above PDF, then E(X) = µ and Var(X) = σ^2. Prove this! (Hint: to compute the integrals, first make the substitution t = (x − µ)/σ.)

Furthermore, anyone who's taken a stats course knows that

    Prob(µ − σ ≤ X ≤ µ + σ) ≈ 0.68, and Prob(µ − 2σ ≤ X ≤ µ + 2σ) ≈ 0.95.

(You need a calculator to get these approximations, but how can you use the integrals to tell that the quantities don't depend on µ and σ?)

1b. For a constant λ > 0, let

    p(x) = \begin{cases} \frac{1}{\lambda} e^{-x/\lambda} & x \geq 0 \\ 0 & x < 0 \end{cases}

This is usually called the exponential-type distribution with width λ. It is often used to model the time spent waiting in a queue, or perhaps the lifespan of a light bulb.

Exercises: show that p(x) is a PDF! What are the mean and the variance?

1c. For any a, b with a < b, the uniform distribution on [a, b] is

    p(x) = \begin{cases} \frac{1}{b-a} & a \leq x \leq b \\ 0 & \text{otherwise} \end{cases}

You should check that p(x) is a PDF, and compute the mean and variance. If X is a random variable with this distribution, what is the probability that X lies in [a, b]? Outside this interval?
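
The following Python sketch (not from the notes; SciPy assumed) checks numerically that the one- and two-σ probabilities for the normal distribution come out the same for several choices of µ and σ, and that the exponential-type distribution has mean λ and variance λ^2.

```python
# Numerical checks for examples 1a and 1b.
import math
from scipy.integrate import quad

def normal(x, mu, sigma):
    # the normal PDF from 1a
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# The one- and two-sigma probabilities do not depend on (mu, sigma):
for mu, sigma in [(0.0, 1.0), (10.0, 3.0), (-2.0, 0.5)]:
    one, _ = quad(normal, mu - sigma, mu + sigma, args=(mu, sigma))
    two, _ = quad(normal, mu - 2 * sigma, mu + 2 * sigma, args=(mu, sigma))
    print(round(one, 4), round(two, 4))      # 0.6827 and 0.9545 every time

# Exponential-type distribution from 1b: mean lambda, variance lambda^2.
lam = 3.0                                    # arbitrary positive value
p = lambda x: math.exp(-x / lam) / lam       # the PDF for x >= 0
mean, _ = quad(lambda x: x * p(x), 0, math.inf)
second, _ = quad(lambda x: x**2 * p(x), 0, math.inf)
print(mean, second - mean**2)                # approx. 3.0 and 9.0
```

This is the numerical counterpart of the substitution t = (x − µ)/σ suggested in the hint: after substituting, the integrals no longer involve µ or σ, which is why the printed probabilities agree across all three rows.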


2. Random Variables in R^2 and Joint PDFs.

A two-variable function p(x, y) is a joint PDF (or 2-dim PDF) if p(x, y) ≥ 0 for all (x, y), and

    \iint_{R^2} p(x, y) \, dA = 1.

Consider a random occurrence with an outcome some ordered pair X⃗ ∈ R^2. We say X⃗ is a random variable with distribution p(x, y) if for every region D ⊂ R^2,

    Prob(\vec{X} ∈ D) = \iint_D p(x, y) \, dA.

Analogously to the R^1 case, the region D ⊂ R^2 is also called an event.

For a random variable X⃗ = (X, Y) with joint distribution p(x, y), the expected (mean) value of some function f of X and Y is computed analogously to the R^1 case, as the first moment of f:

    E(f(X, Y)) = \iint_{R^2} f(x, y) \, p(x, y) \, dA.

In particular, one could ask for the expected value of X, or of Y, alone, etc.

Examples (via Text Problems)

2a. Throw a dart at a dartboard; what are the (x, y) coordinates of the spot where the dart lands? (See also Problem 41: given a joint distribution on (x, y), compute the probability that the dart lands inside a given region in R^2, etc.)

2b. Two lightbulbs, A and B; say the lifespan of bulb A is X hours and the lifespan of bulb B is Y hours, each of which is a random variable. We could package this information together into a single 2-dimensional random variable X⃗ = (X, Y). See also Problem 42: if a single bulb has PDF p(x), an exponential-type distribution with λ = 2000, then under certain circumstances (independence; see below) it makes sense to say that the joint PDF of two bulbs is p(x, y) = p(x)p(y). What is the probability that both bulbs fail within 2000 hours? What is the probability that bulb A fails before bulb B, but both fail within 1000 hours?

Marginal Probabilities and Independence

Given a random variable X⃗ = (X, Y) in R^2, it makes sense that we might want to focus our attention on X alone, or on Y alone, as examples of random variables in R^1. The basic question is: given the joint PDF p(x, y) for X⃗, what would be the PDF for either X alone or Y alone? These are the marginal distributions: let

    p_x(x) = \int_R p(x, y) \, dy,  and  p_y(y) = \int_R p(x, y) \, dx.

Then p_x (sometimes written p_1) is the single-variable PDF for X alone, and p_y (sometimes written p_2) is the single-variable PDF for Y alone. (Why are p_x and p_y necessarily PDFs?)

We say the two random variables X and Y in R^1 are independent if the joint PDF p(x, y) for the random variable X⃗ = (X, Y) satisfies the property that

    p(x, y) = p_x(x) p_y(y),

where p_x and p_y are the marginal distributions. (See also Problem
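
To make example 2b concrete, here is a Python sketch (not part of the original notes; SciPy is an assumed dependency) that evaluates both bulb probabilities under the independence assumption p(x, y) = p(x)p(y), and spot-checks the marginal formula.

```python
# Sketch for example 2b: two independent exponential-type lifespans, lambda = 2000.
import math
from scipy.integrate import quad, dblquad

lam = 2000.0

def p1(t):
    # single-bulb PDF from 1b
    return math.exp(-t / lam) / lam if t >= 0 else 0.0

def joint(y, x):
    # joint PDF under independence; dblquad expects the integrand as f(y, x)
    return p1(x) * p1(y)

# Prob(both bulbs fail within 2000 hours): integrate over the square [0, 2000]^2.
# Exact value: (1 - e^{-1})^2.
both, _ = dblquad(joint, 0, 2000, lambda x: 0, lambda x: 2000)
print(both, (1 - math.exp(-1))**2)   # approx. 0.3996 both ways

# Prob(bulb A fails first and both fail within 1000 hours):
# the region is 0 <= x <= y <= 1000, so y runs from x up to 1000.
a_first, _ = dblquad(joint, 0, 1000, lambda x: x, lambda x: 1000)
print(a_first)                       # approx. 0.0774

# Marginal check: integrating out y should recover the single-bulb PDF.
px_500, _ = quad(lambda y: joint(y, 500.0), 0, math.inf)
print(px_500, p1(500.0))             # the two numbers agree
```

Note that the second probability is exactly half of Prob(both fail within 1000 hours): since the bulbs have the same distribution, the events X < Y and Y < X are symmetric, and ties have probability zero.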
