Massachusetts Institute of Technology
Department of Electrical Engineering & Computer Science
6.041/6.431: Probabilistic Systems Analysis (Spring 2006)

Problem Set 8
Topics: Covariance, Estimation, Limit Theorems
Due: April 26, 2006

1. Consider n independent tosses of a k-sided fair die. Let X_i be the number of tosses that result in i. Show that X_1 and X_2 are negatively correlated (i.e., a large number of ones suggests a smaller number of twos).

2. Oscar's dog has, yet again, run away from him. But this time, Oscar will be using modern technology to aid him in his search: Oscar uses his pocket GPS device to help him pinpoint the distance between him and his dog, X miles. The reported distance has a noise component, and since Oscar bought a cheap GPS device the noise is quite significant. The measurement that Oscar reads on his display is the random variable

    Y = X + W (in miles),

where W is independent of X and has the uniform distribution on [−1, 1]. Having knowledge of the distribution of X lets Oscar do better than just use Y as his guess of the distance to the dog. Oscar somehow knows that X is a random variable with the uniform distribution on [5, 10].

    (a) Determine an estimator g(Y) of X that minimizes E[(X − g(Y))²] for all possible measurement values Y = y. Provide a plot of this optimal estimator as a function of y.
    (b) Determine the linear least squares estimator of X based on Y. Plot this estimator and compare it with the estimator from part (a). (For comparison, just plot the two estimators on the same graph and make some comments.)

3. (a) Given the information E[X] = 7 and var(X) = 9, use the Chebyshev inequality to find a lower bound for P(4 ≤ X ≤ 10).
    (b) Find the smallest and largest possible values of P(4 < X < 10), given the mean and variance information from part (a).

4. Investigate whether the Chebyshev inequality is tight. That is, for every µ, σ ≥ 0, and c ≥ σ, does there exist a random variable X with mean µ and standard deviation σ such that

    P(|X − µ| ≥ c) = σ²/c² ?

5. Define X as the height in meters of a randomly selected Canadian, where the selection probability is equal for each Canadian, and denote E[X] by h. Bo is interested in estimating h. Because he is sure that no Canadian is taller than 3 meters, Bo decides to use 1.5 meters as a conservative (large) value for the standard deviation of X. To estimate h, Bo averages the heights of n Canadians that he selects at random; he denotes this quantity by H.

    (a) In terms of h and Bo's 1.5-meter bound for the standard deviation of X, determine the expectation and standard deviation of H.
    (b) Help Bo by calculating a minimum value of n (with n > 0) such that the standard deviation of Bo's estimator, H, will be less than 0.01 meters.
    (c) Say Bo would like to be 99% sure that his estimate is within 5 centimeters of the true average height of Canadians. Using the Chebyshev inequality, calculate the minimum value of n that will make Bo happy.
    (d) If we agree that no Canadians are taller than three meters, why is it correct to use 1.5 meters as an upper bound on the standard deviation of X, the height of any Canadian selected at random?

6. Let X_1, X_2, ... be independent, identically distributed, continuous random variables with E[X] = 2 and var(X) = 9. Define Y_i = (0.5)^i X_i, i = 1, 2, .... Also define T_n and A_n to be the sum and the average, respectively, of the terms Y_1, Y_2, ..., Y_n.

    (a) Is Y_n convergent in probability? If so, to what value? Explain.
    (b) Is T_n convergent in probability? If so, to what value? Explain.
    (c) Is A_n convergent in probability? If so, to what value? Explain.
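As an aside on problem 6, a quick simulation can make the behavior of Y_n, T_n, and A_n visible before any proof is attempted. The sketch below is only an illustration: it assumes, purely for concreteness, that the X_i are normal with mean 2 and standard deviation 3 (one distribution consistent with E[X] = 2 and var(X) = 9); the problem itself does not specify a distribution.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n, trials=10_000):
        # Draw `trials` independent sequences X_1, ..., X_n and form
        # Y_i = (0.5)^i * X_i, T_n = Y_1 + ... + Y_n, A_n = T_n / n.
        X = rng.normal(loc=2.0, scale=3.0, size=(trials, n))
        Y = (0.5 ** np.arange(1, n + 1)) * X
        T = Y.sum(axis=1)
        A = T / n
        return Y[:, -1], T, A

    for n in (5, 20, 80):
        Yn, Tn, An = simulate(n)
        print(f"n={n:3d}  mean(Y_n)={Yn.mean():+.4f}  var(Y_n)={Yn.var():.6f}  "
              f"var(T_n)={Tn.var():.4f}  mean(A_n)={An.mean():+.4f}  var(A_n)={An.var():.6f}")

Tabulating the empirical means and variances for a few values of n gives a feel for which of the three sequences is settling down, and to what; the proofs in parts (a) through (c) should then confirm or refute that impression.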
7. There are various senses of convergence for sequences of random variables. We have defined in lecture "convergence in probability." In this exercise, we will define "convergence in mean of order p." (In the case p = 2, it is called "mean square convergence.") The sequence of random variables Y_1, Y_2, ... is said to converge in mean of order p (p > 0) to the real number a if

    lim_{n→∞} E[|Y_n − a|^p] = 0.

    (a) Prove that convergence in mean of order p (for any given positive value of p) implies convergence in probability.
    (b) Give a counterexample that shows that the converse is not true, i.e., convergence in probability does not imply convergence in mean of order p.

G1†. (Required for 6.431; optional for 6.041.) One often needs to use sample data to estimate unknown parameters of the underlying distribution from which samples are drawn. Examples of underlying parameters of interest include the mean and variance of the distribution. In this problem, we look at estimators for the mean and variance based on a set of n observations X_1, X_2, ..., X_n. If needed, assume that the first, second, and fourth moments of the distribution are finite.

Denote an unknown parameter of interest by θ. An estimator is a function of the observed sample data, θ̂(X_1, X_2, ..., X_n), that is used to estimate θ. An estimator is a function of random samples and, hence, a random variable itself. To simplify the notation, we drop the argument of the estimator function.

One desired property of an estimator is unbiasedness. An estimator θ̂ is said to be unbiased when E[θ̂] = θ.

    (a) Show that

        µ̂ = (1/n)(X_1 + ··· + X_n)

        is an unbiased estimator for the true mean µ.
    (b) Now suppose that the mean µ is known but the variance σ² must be estimated from the sample. (The more realistic situation with both µ and σ² unknown is considered below.) Show that

        σ̂² = (1/n) Σ_{i=1}^n (X_i − µ)²

        is an unbiased estimator for σ².

It is more realistic to have to estimate both µ and σ² from the same set of n observations. This is developed in the following parts.

    (c) Use basic algebra to show that

        Σ_{i=1}^n (X_i − µ̂)² = Σ_{i=1}^n (X_i − µ)² − n(µ̂ − µ)².

    (d) Show that

        E[ Σ_{i=1}^n (X_i − µ̂)² ] = (n − 1)σ².

    (e) What is an unbiased estimator for σ² (using only the data sample, not µ)?

Another desired property for an estimator is asymptotic consistency. An estimator θ̂ is called asymptotically consistent when it converges in probability to the true parameter θ as the observation sample size n → ∞.

    (f) Show that var(µ̂) = σ²/n and use this to argue that µ̂ is asymptotically consistent.
    (g)
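As an aside on problem G1, the bias discussed in parts (b) through (e) is easy to see numerically. The sketch below assumes, purely for illustration, samples from an exponential distribution with mean 1 (so the true variance is σ² = 1), and compares the 1/n and 1/(n − 1) normalizations of the sum of squared deviations about the sample mean.

    import numpy as np

    rng = np.random.default_rng(1)

    n, trials = 10, 200_000
    # `trials` independent samples of size n from an (arbitrarily chosen)
    # exponential distribution with mean 1, hence true variance sigma^2 = 1.
    X = rng.exponential(scale=1.0, size=(trials, n))
    mu_hat = X.mean(axis=1, keepdims=True)        # sample mean of each size-n sample
    ss = ((X - mu_hat) ** 2).sum(axis=1)          # sum of squared deviations about mu_hat

    print("average of (1/n)     * sum_i (X_i - mu_hat)^2 :", (ss / n).mean())
    print("average of (1/(n-1)) * sum_i (X_i - mu_hat)^2 :", (ss / (n - 1)).mean())
    print("true variance sigma^2                         :", 1.0)

With part (d) in mind, the first printed value should hover near (n − 1)σ²/n while the second stays near σ².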

