UCLA STATS 100A - BEAMER-lesson8-student

Random Variables: Discrete Random Variables II
Juana Sanchez, UCLA Department of Statistics

This lecture
Recommended reading: Ross, Chapter 4 (Sections 4.1-4.7) and Chapter 7, Section 7.7 (discrete-variable cases only).
I. Solution to the problem from the last lecture notes, and a review of expectations and of expectations of functions of random variables.
II. Some proofs of the expectation and variance of linear functions of a random variable. Other proofs are left as exercises.
III. An alternative way of computing the variance.
IV. The moment generating function of a discrete random variable.
V. The Poisson random variable.
VI. The Bernoulli random variable.

Problem from the last lecture
Example. The manager of a stock room in a factory knows from his study of records that the daily demand (number of times used) for a certain tool has the following probability distribution:

    Quantity demanded (x)    0     1     2
    Probability P(X = x)     0.1   0.5   0.4

(In other words, 50% of the daily records show that the tool was used one time.) Let X denote the daily demand.
(a) How much can he expect the tool to be used tomorrow?
(b) By how much could he be off?
(c) Suppose that it costs the factory $10 each time the tool is used. What is the expected daily cost of using the tool? How much is this cost expected to vary from one day to another?

Solution to (a)
E(X) = \sum_x x P(X = x) = 0(0.1) + 1(0.5) + 2(0.4) = 1.3
The expected value of X is 1.3, so the manager should expect the tool to be used 1.3 times tomorrow.

Solution to (b)
Var(X) = E[(X - \mu)^2] = \sum_x (x - E(X))^2 P(X = x)
       = (0 - 1.3)^2 (0.1) + (1 - 1.3)^2 (0.5) + (2 - 1.3)^2 (0.4) = 0.41
SD(X) = \sqrt{0.41} = 0.6403
Because the standard deviation of X is 0.6403, he may be off by 0.6403. That is, the tool can be expected to be used 1.3 times, give or take 0.6403.

Solution to (c)
Let C denote the daily cost. Then C = 10X, a linear function of X. Use the expectation of a function of a random variable (although we do not need to):
E(10X) = \sum_x 10x P(X = x) = 10 \sum_x x P(X = x) = 10 E(X) = 10(1.3) = $13
Var(10X) = 10^2 Var(X) = 100(0.41) = 41,  so  SD(C) = \sqrt{41} = 6.4031
The expected cost is $13 per day, give or take $6.4031.

In this example we used the definitions of the expected value, the variance, and the standard deviation of a discrete random variable and of a function of a random variable. These are reviewed next.

Review: Definition of the expected value of a discrete random variable
The expected value of a discrete random variable X with probability distribution P(x) is given by
E(X) = \sum_x x P(X = x),
where the sum is over all values of X. We denote E(X) by the Greek letter μ.
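A minimal sketch of this definition in code, applied to the tool-demand distribution above (the names pmf and expectation are illustrative, not from the slides):

```python
# Expected value of a discrete random variable from its probability mass function.
# pmf maps each value x to P(X = x); here it is the tool-demand distribution.
pmf = {0: 0.1, 1: 0.5, 2: 0.4}

def expectation(pmf):
    """E(X) = sum over all x of x * P(X = x)."""
    return sum(x * p for x, p in pmf.items())

print(expectation(pmf))  # 1.3, matching part (a) of the worked example
```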
Definition of the variance of a discrete random variable
Let g(X) = (X - \mu)^2 represent the squared distance of a random variable from its expected value. This is a function of X. To find the variance, apply the definition of expectation to g(X). Then we define
Var(X) = E(g(X)) = E[(X - \mu)^2] = \sum_x (x - \mu)^2 P(X = x).
We denote the variance by the symbol σ².

Definition of the standard deviation of a discrete random variable
The standard deviation is the square root of the variance, so that its units are the same as the units of the random variable:
σ = \sqrt{\sigma^2}

Expectation and variance of a function of a random variable (also used in the last problem)
Let X be a random variable with probability mass function P(x), and let g(X) be a function of X. A function of a random variable is itself a random variable. Then:
E[g(X)] = \sum_x g(x) P(x)
Var[g(X)] = \sum_x (g(x) - E[g(X)])^2 P(x)

II. Some proofs of the expectation and variance of linear functions of a random variable (other proofs are left as exercises)
Expectation and variance of linear functions of X. If X is a random variable and k and c are constants, consider the following functions of X:
If g(X) = k, then E(g(X)) = k and Var(g(X)) = 0.
If g(X) = kX, then E(g(X)) = k E(X) and Var(g(X)) = k^2 Var(X).
If g(X) = kX + c, then E(g(X)) = k E(X) + c and Var(g(X)) = k^2 Var(X).

When g(X) = k
Prove that if g(X) = k, a constant, then E(g(X)) = k and Var(g(X)) = 0.
Always use the definitions to prove. Let Y = g(X) = k. Then
E(Y) = \sum_x k P(X = x) = k \sum_x P(X = x) = k(1) = k,
Var(Y) = E[(Y - E(Y))^2] = \sum_x (k - k)^2 P(X = x) = 0.

When g(X) = kX
Prove that if g(X) = kX, with k a constant, then E(g(X)) = k E(X) and Var(g(X)) = k^2 Var(X).
Always use the definitions to prove. Let Y = g(X). Then
E(Y) = \sum_x k x P(X = x) = k \sum_x x P(X = x) = k E(X),
Var(Y) = E[(Y - E(Y))^2] = \sum_x (k x - k E(X))^2 P(X = x) = k^2 \sum_x (x - E(X))^2 P(X = x) = k^2 Var(X).

When g(X) = c + kX
Prove that if Y = g(X) = c + kX, with c and k constants, then E(Y) = c + k E(X) and Var(Y) = k^2 Var(X). (Left as an exercise.)

III. Alternative way of computing the variance of a random variable
By definition,
Var(X) = E[(X - E(X))^2] = \sum_x (x - E(X))^2 P(x).
But
\sum_x (x - E(X))^2 P(x) = \sum_x (x^2 + (E(X))^2 - 2 x E(X)) P(x)
  = \sum_x x^2 P(x) + \sum_x (E(X))^2 P(x) - \sum_x 2 x E(X) P(x)
  = E(X^2) + (E(X))^2 - 2 (E(X))^2
  = E(X^2) - (E(X))^2.
So
Var(X) = E(X^2) - (E(X))^2.

Note: You can make the notation neater by writing \mu_X for E(X), \sigma^2_X for Var(X), and \sigma_X for SD(X). Under this notation the shortcut reads
\sigma^2_X = E(X^2) - \mu_X^2,
that is, the variance of a random variable can be computed as E(X^2) minus the square of its mean.

IV. The moment generating function of a discrete random variable
A very special function of a random variable is g(X) = e^{tX}, which is not linear. Using this function we define another very special function, the moment generating function of a random variable, which helps us find the moments of the distribution.
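The sketch below ties these pieces together for the tool-demand distribution. It is an illustration built on the formulas above, assuming the standard definition of the moment generating function, M(t) = E[e^{tX}] (the names pmf, expect, and mgf are illustrative): it verifies the shortcut Var(X) = E(X^2) - (E(X))^2 and checks numerically that the derivative of M at t = 0 reproduces E(X).

```python
import math

# Tool-demand distribution from the worked example: P(X = x).
pmf = {0: 0.1, 1: 0.5, 2: 0.4}

def expect(g, pmf):
    """E[g(X)] = sum over all x of g(x) * P(X = x)."""
    return sum(g(x) * p for x, p in pmf.items())

mean = expect(lambda x: x, pmf)                # E(X) = 1.3
var = expect(lambda x: x**2, pmf) - mean**2    # shortcut: E(X^2) - (E(X))^2 = 2.1 - 1.69 = 0.41

def mgf(t):
    """M(t) = E[e^{tX}], assuming the standard MGF definition."""
    return expect(lambda x: math.exp(t * x), pmf)

# M'(0) equals E(X); approximate the derivative with a central difference.
h = 1e-6
mean_from_mgf = (mgf(h) - mgf(-h)) / (2 * h)

print(mean, round(var, 4), round(mean_from_mgf, 4))  # 1.3 0.41 1.3
```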

