Asymptotic Normality




Lecture number: 5
Pages: 2
Type: Lecture Note
School: Cornell University
Course: Econ 3120 - Applied Econometrics
Edition: 1


Lecture 5

Outline of Last Lecture
I. Marginal Distributions
II. Expectations and Variance
III. Covariance

Outline of Current Lecture
IV. Asymptotic Normality
V. Central Limit Theorem
VI. Distribution of Difference in Means

Current Lecture

3.2 Asymptotic Normality and the Central Limit Theorem

While the probability limit tells us whether an estimator converges to the true parameter as the sample size gets large, we would also like to know something about the behavior of the estimator's distribution as n becomes large. Estimators are asymptotically normal if their distribution becomes normal as the sample size gets large. Formally, if, for a sequence of random variables Z_1, ..., Z_n,

P(Z_n \leq z) \rightarrow \Phi(z) \quad \text{as } n \rightarrow \infty,

then Z_n has an asymptotic standard normal distribution, written Z_n \overset{a}{\sim} N(0,1). We can also say that Z_n "converges in distribution" to a standard normal.

The central limit theorem states that, regardless of the underlying distribution, a "standardized" sample mean has a standard normal distribution as the sample size gets large. This very important result will come in handy for conducting inference on means of random samples with nonnormal or unknown distributions. Formally, the central limit theorem states that if Y_1, Y_2, ..., Y_n are a random sample with mean \mu and variance \sigma^2, then

Z_n = \frac{\bar{Y}_n - \mu}{\sigma / \sqrt{n}}

approaches a standard normal distribution as the sample size gets large.

A remark: the sample mean \bar{Y}_n itself converges to the true mean in large samples, with zero variance. Similarly, (\bar{Y}_n - \mu)/\sigma also converges to a single point in large samples. Dividing the denominator by \sqrt{n} keeps the variance of Z_n constant as the sample size gets large.

The central limit theorem also applies when \sigma^2 must be estimated (i.e., in most cases). In particular,

\frac{\bar{Y}_n - \mu}{s_n / \sqrt{n}}

has an asymptotic standard normal distribution as well. This follows because s_n is a consistent estimator of \sigma: as the sample size gets large, s_n converges to \sigma.

Example: Monte Carlo simulation. The graphs below illustrate the central limit theorem using samples drawn from a Bernoulli distribution. The Bernoulli distribution is a discrete distribution that describes a random variable taking the value 1 with probability p and 0 with probability 1 - p. First, let's compute the variance of a Bernoulli random variable: since E(Y) = p and E(Y^2) = p,

Var(Y) = E(Y^2) - [E(Y)]^2 = p - p^2 = p(1 - p).

Back to the simulation. I chose p = 0.1. I then estimated a distribution for \bar{Y}_n, using different values of n, by creating 500 samples of size n from the Bernoulli distribution with mean 0.1 and plotting the resulting (standardized) means as histograms. That is, for each sample, I computed the standardized random variable

Z_n = \frac{\bar{Y}_n - \mu}{\sigma / \sqrt{n}}

where \sigma = \sqrt{0.1 \cdot (1 - 0.1)}. Note that I could have replaced \sigma with s = \sqrt{\frac{n}{n-1}\, \bar{Y}_n (1 - \bar{Y}_n)}.
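To make the Monte Carlo exercise concrete, here is a minimal Python sketch of the simulation described above. The values p = 0.1 and 500 replications come from the text; the particular sample sizes, the random seed, and the use of numpy/matplotlib are illustrative assumptions, not part of the original notes.

```python
import numpy as np
import matplotlib.pyplot as plt

# Parameters taken from the text: Bernoulli(p = 0.1), 500 samples of size n.
p = 0.1
reps = 500
mu = p
sigma = np.sqrt(p * (1 - p))     # Var(Y) = p(1 - p) for a Bernoulli(p)

rng = np.random.default_rng(0)   # seed chosen arbitrarily
n_values = [5, 30, 100, 1000]    # assumed sample sizes, for illustration only

fig, axes = plt.subplots(1, len(n_values), figsize=(16, 3))
for ax, n in zip(axes, n_values):
    # Draw `reps` samples of size n, take each sample mean, and standardize it.
    ybar = rng.binomial(1, p, size=(reps, n)).mean(axis=1)
    z = (ybar - mu) / (sigma / np.sqrt(n))
    ax.hist(z, bins=30, density=True)
    ax.set_title(f"n = {n}")

plt.tight_layout()
plt.show()
```

As n grows, the histograms of Z_n should look more and more like the standard normal density. Replacing sigma with the estimate s = sqrt(n/(n-1) * ybar * (1 - ybar)) inside the loop gives essentially the same picture for large n, since s is consistent for sigma.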


