GEOS 36501/EVOL 33001, 6 January 2012

II. Introduction to Probability, 2

1 Random Variables

1.1 Definition:
A random variable is a function defined on a sample space. In other words, it is a mapping of events to numbers.

1.2 This can be a simple one-to-one mapping...

1.2.1 Example:
Throw a six-sided die, and let the random variable X be the face value.

1.3 ...or any specified function.

1.3.1 Example:
In our previous 8-point sample space of the outcomes of three coin tosses, we can define a random variable X as the number of heads. We then have:

Event   X
HHH     3
HHT     2
HTH     2
HTT     1
TTT     0
TTH     1
THT     1
THH     2

1.4 Random variables and probabilities (illustrate with sample space of discrete points):
Let x_1, x_2, ... be all the values that the random variable X can take on. Then we denote the probability that X takes on the value x_j as P(X = x_j) = f(x_j). In the previous example of the die, assuming it is fair, we have:

x_j   f(x_j)
1     1/6
2     1/6
3     1/6
4     1/6
5     1/6
6     1/6

And in the example of the number of heads in three coin tosses, we have:

x_j   f(x_j)
0     1/8
1     3/8
2     3/8
3     1/8

2 Density and Distribution

2.1 Density:
The density function f(x) is proportional to the probability that a random variable will take on a value between x and x + δx, where δx is an infinitesimal increment.
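The δx definition can be checked numerically: for small δx, the probability of landing in (x, x + δx] is approximately f(x)·δx. A minimal Python sketch (the notes use R; the density f(x) = 2x on (0, 1) is a stand-in chosen purely for illustration, not an example from the notes):

```python
# Probability interpretation of a density: P(x < X <= x + dx) is about f(x) * dx.
# Stand-in density (not from the notes): f(x) = 2x on (0, 1), so F(x) = x^2.
def f(x):
    return 2 * x

def F(x):
    return x ** 2  # exact distribution function for this density

x, dx = 0.3, 1e-6
exact = F(x + dx) - F(x)    # exact probability of the interval (x, x + dx]
approx = f(x) * dx          # density-times-increment approximation

assert abs(exact - approx) < 1e-11  # agreement to order dx^2
```

The discrepancy between the two quantities is of order δx², which is why the approximation becomes exact in the infinitesimal limit.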
(Strictly speaking, with a continuous distribution, the probability of taking on exactly a particular value is vanishingly small.)

2.1.1 By definition:
∫_{-∞}^{∞} f(x) dx = 1 if f(x) is a density function.

2.2 Distribution:
The distribution function F(x) is the probability that the random variable will take on a value less than or equal to x.

2.2.1
F(x) = P(X ≤ x) = ∫_{-∞}^{x} f(y) dy, and f(x) = d[F(x)]/dx.
So, if we know the density we can determine the distribution function, and vice versa.

2.3 Discrete case:
f(x_j) = P(X = x_j) is the probability distribution, and F(x) = P(X ≤ x) = Σ_{x_j ≤ x} f(x_j) is the distribution function.

2.4 Examples

2.4.1 Exponential with parameter r
density: f(x) = r e^{-rx}
distribution: F(x) = 1 - e^{-rx}

[Figure: density f(x) and distribution function F(x) of the exponential with parameter r = 1, plotted for x from 0 to 3.]

Derivation of these functions (in relation to waiting times):
• Suppose we have a Poisson process, with uniform rate or probability of occurrence. Then the waiting times between successive events are exponentially distributed.
• To see why this should be the case, suppose the process goes on at an instantaneous rate r acting over a span of time t.
• Imagine that we subdivide t into n fine increments of length t/n each.
• There will be on average rt events over the span of time t.
• The probability that an event will occur in one of these fine increments is rt/n (assuming the increments are short enough that the probability of multiple events is negligible, i.e.
the only two possibilities are zero or one events in an increment).
• The probability that an event will not occur in one of these increments is (1 - rt/n).
• The probability that it will not occur in n successive increments is therefore (1 - rt/n)^n.
• As n → ∞, so that we are now dealing with a continuous-time process, we have Pr(no events in t) = e^{-rt}.
• This is the same as the probability that the waiting time is greater than t.
• Thus the probability that the waiting time is less than or equal to t is 1 - e^{-rt}.
• This last quantity is the distribution function F(t), and its first derivative, f(t) = r e^{-rt}, is the density.

2.4.2 Uniform density and distribution on (a, b)
density: f(x) = 1/(b - a)
distribution: F(x) = ∫_{a}^{x} 1/(b - a) dy = (x - a)/(b - a)
applications: spatial and temporal pattern of events "dropped" with constant probability

2.4.3 Normal
density: f(x) = (1/√(2π)) e^{-x²/2}
distribution: F(x) = (1/√(2π)) ∫_{-∞}^{x} e^{-y²/2} dy

2.5 Relevant R functions for the foregoing distributions

2.5.1 rexp(n,a)
returns n numbers drawn from the exponential distribution with parameter (rate) a.

2.5.2 dexp(x,a)
returns the density of the exponential distribution with parameter a at X = x.

2.5.3 pexp(x,a)
returns the cumulative probability of the exponential distribution with parameter a at X = x.

2.5.4 rnorm(n)
returns n numbers drawn from the standard normal distribution (with zero mean and unit variance).

2.5.5 dnorm(x)
returns the normal density at X = x.

2.5.6 pnorm(x)
returns the normal distribution function (cumulative probability) at X = x.

2.5.7 runif(), dunif(), punif():
These are like rnorm(), dnorm(), pnorm(), but for the uniform distribution on (0,1).

2.6 Discrete functions we have already considered: binomial, multinomial, Poisson

3 Expectation (i.e. mean)

3.1 continuous:
E(X) = ∫_{-∞}^{∞} x f(x) dx

3.2 discrete:
E(X) = Σ_j x_j f(x_j)

I.e.
the expectation is the sum of all values of a random variable, weighted by the density or probability.

3.3 Examples

3.3.1 Exponential with rate r: E(X) = 1/r
3.3.2 Binomial: E(k) = np; E(k/n) = p
3.3.3 Poisson: E(k) = λ
3.3.4 Uniform on (a, b): E(X) = (b + a)/2

3.4 Working with expectations (illustrate with discrete case)

3.4.1 Let g(x) be some function of x.
Then E[g(x)] = Σ_j g(x_j) f(x_j). For example, E(X²) = Σ_j x_j² f(x_j).

3.4.2 Sums:
Suppose there are several random variables X, Y, Z, etc. Then E(X + Y + Z) = E(X) + E(Y) + E(Z).

3.4.3 Let a be a constant.
Then E(aX) = aE(X).

3.4.4 Products:
In general, E(XY) ≠ E(X)E(Y). Exception: E(XY) = E(X)E(Y) if X and Y are mutually independent random variables.

4 Median:
Value of x at which F(x) = 0.5.

4.1 Example:
Exponential: Med(X) = ln(2)/r

5 Mode:
Value of x with maximal value of f(x).

5.1 Example:
Exponential: Mode(X) = 0.

5.1.1 Comment:
Gaps between fossil finds are exponentially distributed. A modal gap of zero implies that the single most probable outcome is an infinitesimally small offset between origin and first appearance, or between extinction and last appearance. But this is still an exceedingly improbable outcome when compared with the sum of all possible alternatives.

6 Variance

6.1 Definition: V(X) = E(X²) - [E(X)]² = E{[X - E(X)]²}

6.2 What it captures:
Average squared deviation between a random variable and its mean.

6.3 Examples

6.3.1 Binomial: V(k) = np(1 - p); V(k/n) = p(1 - p)/n

6.3.2 Poisson: V(k) = λ
NB mean = variance for the Poisson. This can be useful in testing whether data agree with the model of a Poisson process. This variance may seem different from that in the binomial, but with λ = np and p small, np(1 - p) ≈ np = λ: the binomial variance converges to the Poisson variance in the same limit that yields the Poisson from the binomial.
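The expectation rules in 3.4 can be verified exactly on a small discrete case. The sketch below uses two independent fair dice (an illustrative choice, not an example from the notes) with exact rational arithmetic, in Python rather than the R used elsewhere in the notes:

```python
from fractions import Fraction

# Joint distribution of two independent fair dice: each of the
# 36 equally likely pairs (x, y) has probability 1/36.
pairs = [(x, y) for x in range(1, 7) for y in range(1, 7)]
p = Fraction(1, 36)

E_X = sum(x * p for x, y in pairs)           # E(X) = 7/2
E_Y = sum(y * p for x, y in pairs)           # E(Y) = 7/2
E_sum = sum((x + y) * p for x, y in pairs)   # E(X + Y)
E_prod = sum((x * y) * p for x, y in pairs)  # E(XY)

assert E_sum == E_X + E_Y                        # expectations add (3.4.2)
assert sum(3 * x * p for x, y in pairs) == 3 * E_X   # E(aX) = aE(X) (3.4.3)
assert E_prod == E_X * E_Y                       # product rule holds because X, Y are independent (3.4.4)
```

The product-rule check succeeds only because the dice are independent; with a dependent pair (e.g. Y = X) the last assertion would fail.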
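The NB in 6.3.2 (mean = variance for the Poisson) can be checked numerically. The Python sketch below builds the Poisson probabilities recursively and truncates the series where the tail is negligible; the rate λ = 2.5 is an arbitrary illustrative choice:

```python
import math

lam = 2.5  # arbitrary illustrative rate

# Poisson pmf via the recursion f(k) = f(k-1) * lam / k, starting from
# f(0) = e^{-lam}; truncated at k = 59, where the tail is negligible.
f = [math.exp(-lam)]
for k in range(1, 60):
    f.append(f[-1] * lam / k)

mean = sum(k * fk for k, fk in enumerate(f))
var = sum(k**2 * fk for k, fk in enumerate(f)) - mean**2  # V = E(k^2) - [E(k)]^2

assert abs(mean - lam) < 1e-9
assert abs(var - lam) < 1e-9   # mean = variance = lambda
```

The same comparison on real count data (sample mean vs. sample variance) is the basis of the dispersion test for agreement with a Poisson process mentioned in the notes.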

