CORNELL ECON 3120 - Exam 1 Study Guide
Econ 3120, 1st Edition. Lectures: 1-12

1 Random sampling

This lecture discusses how to estimate the mean of a population of interest. Normally it is impractical or impossible to examine the whole population; if we could, we would simply compute the population mean and be done. Given that we can only examine a sample, we have to use statistical inference to 1) estimate the parameters that we care about (in this case the mean) and 2) test hypotheses about these parameters.

To estimate the mean (or any other parameter of interest), we focus on random samples from the population. Formally, a random sample is a set of independent and identically distributed (i.i.d.) random variables {Y1, Y2, ..., Yn} that share a probability density function f(y; θ). For the first part of the lecture, we assume the population is distributed Normal(µ, σ²), an assumption we relax when we discuss large-sample properties later in the lecture.

2 Estimators

Once we have our random sample, we can use it to estimate the mean (or any other parameter of interest). An estimator is a rule (a function) that assigns a value to the parameter based on the outcome of random sampling:

    θˆ = h(Y1, Y2, ..., Yn)

For the mean, the most obvious estimator is the sample mean, or sample average:

    Y¯ = (1/n) ∑_{i=1}^{n} Yi

We often write estimates by putting a "hat" on top of the parameter, e.g., µˆ1 = Y¯. Note that the sample mean is just one possible way to estimate the mean. We could instead estimate the mean by simply taking the first observation, µˆ2 = Y1. In fact, an estimator does not even need to depend on the random sample at all: µˆ3 = 4 is a perfectly valid estimator. We will see, though, that the sample mean is preferable because it is unbiased and efficient.

Estimators, like the samples they come from, have their own distributions, called sampling distributions.
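The three estimators above can be computed on a simulated sample. A minimal sketch (the population values µ = 10, σ = 2 and the sample size are illustrative choices, not values from the lecture):

```python
import random

random.seed(3120)

# An i.i.d. sample from Normal(mu, sigma); mu, sigma, n are illustrative.
mu, sigma, n = 10.0, 2.0, 1000
sample = [random.gauss(mu, sigma) for _ in range(n)]

# Three estimators of the population mean discussed above:
mu_hat_1 = sum(sample) / n   # the sample mean Y-bar
mu_hat_2 = sample[0]         # the first observation Y_1
mu_hat_3 = 4                 # a constant: still a valid estimator

print(mu_hat_1, mu_hat_2, mu_hat_3)
```

Each estimator is just a function of the sample (or, for µˆ3, a constant function), which is why each has its own sampling distribution.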
Estimators have distributions because they are simply functions of realizations of random variables, and we have seen that functions of random variables have distributions.

2.1 Unbiasedness

An estimator is unbiased if its expectation equals the parameter of interest, that is,

    E(θˆ) = θ

The bias of an estimator is the difference between its expectation and the true parameter:

    Bias(θˆ) = E(θˆ) − θ

Example: Show that µˆ1 = Y¯ and µˆ2 = Y1 are unbiased estimators of the mean, and that µˆ3 = 4 is biased.

2.2 Sample variance and sampling variance of estimators

In order to conduct inference (that is, to say something about how accurate our estimator is), we need to be able to estimate the variance of a population. We start by introducing the sample variance, an unbiased estimator of the population variance:

    s² = (1/(n−1)) ∑_{i=1}^{n} (Yi − Y¯)²

Note that for s² to be unbiased, we need to divide by n−1 instead of n. The reason for this is a bit subtle, but it comes from the fact that Y¯ is an estimate and not the true parameter µ. If we knew µ, then an unbiased estimator of the variance would be (1/n) ∑ (Yi − µ)².

As suggested above, the sampling variance is the variance of an estimator (which is based on a sample).

Example: What is the sampling variance of our estimators µˆ1 = Y¯, µˆ2 = Y1, and µˆ3 = 4?

The sample analog of the standard deviation for estimators is called the standard error, which we denote

    se(θˆ) = √Var(θˆ)

Note that if we have the sampling variance of Y¯, we can fully characterize its distribution. Since Y¯ is simply a linear combination of normally distributed random variables, its distribution will be Normal(µ, σ²/n). This implies that

    (Y¯ − µ)/(σ/√n) ∼ N(0, 1),

a fact that we will use later on.

2.3 Efficiency

The relative efficiency of an estimator is a measure of how close our estimate will be to the true parameter. We measure efficiency by comparing variances.
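The variance comparison can be seen by simulation: drawing many samples, computing Y¯ and Y1 for each, and comparing the spread of the two resulting sampling distributions. A sketch, with µ = 10, σ = 2, n = 25, and the number of replications all illustrative choices rather than values from the lecture:

```python
import random

random.seed(3120)

# Monte Carlo comparison of the sampling variances of Y-bar and Y_1.
mu, sigma, n, reps = 10.0, 2.0, 25, 20000
draws_ybar, draws_y1 = [], []
for _ in range(reps):
    ys = [random.gauss(mu, sigma) for _ in range(n)]
    draws_ybar.append(sum(ys) / n)   # mu_hat_1 = Y-bar for this sample
    draws_y1.append(ys[0])           # mu_hat_2 = Y_1 for this sample

def var(xs):
    """Sample variance with the n-1 divisor, as defined in section 2.2."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(var(draws_ybar))  # near sigma^2 / n = 0.16
print(var(draws_y1))    # near sigma^2 = 4
```

The simulated variance of Y¯ is close to σ²/n while that of Y1 is close to σ², consistent with Y¯ being the more efficient of the two unbiased estimators.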
Suppose we have two unbiased estimators θˆ1 and θˆ2. The estimator θˆ1 is more efficient when Var(θˆ1) ≤ Var(θˆ2).

Example: Which is a more efficient estimator of the mean, Y¯ or Y1?

We usually care about unbiasedness first, then efficiency; that is, we typically compare efficiency among unbiased estimators. µˆ3 = 4 has a variance of 0, but it is not preferable to µˆ1 = Y¯, since µˆ3 will almost certainly be biased.

3 Large-sample Properties of Estimators

Now let's introduce a few concepts that help us describe the behavior of estimators as the sample size becomes large. For our purposes, we will treat n = 30 as "large," although in practice it depends on the estimator and on how accurate you want to be.

3.1 Consistency

Consistency tells us whether an estimator converges to the true parameter as the sample size grows large. An estimator θˆ is consistent if, for every ε > 0,

    P(|θˆ − θ| > ε) → 0 as n → ∞

A shorthand way of writing this is plim(θˆ) = θ. If θˆ is consistent, we say it converges in probability to θ. The definition above looks complicated, but it says that a consistent estimator becomes "arbitrarily close" to the true parameter as the sample size grows: you can set ε as small as you want, and at a large enough sample size the estimator will be within ε of the true parameter (with probability approaching 1).

One important consistency result is the law of large numbers: if Y1, Y2, ..., Yn are i.i.d. random variables with mean µ, then

    plim(Y¯) = µ

Note that unbiasedness and consistency are related concepts, but neither implies the other. µˆ2 = Y1 is unbiased but not consistent, and we will see below that there are other estimators that are consistent but not unbiased.

3.1.1 Properties of plims

It turns out that plims are somewhat easier to work with than expectations because they "pass through" nonlinear functions. Suppose we have two estimators θˆ1 and θˆ2.
1. plim(g(θˆ1)) = g(plim(θˆ1)) for any continuous function g(·)
2. plim(θˆ1 + θˆ2) = plim(θˆ1) + plim(θˆ2)
3. plim(θˆ1 · θˆ2) = plim(θˆ1) · plim(θˆ2)
4. plim(θˆ1 / θˆ2) = plim(θˆ1) / plim(θˆ2), provided plim(θˆ2) ≠ 0

Example 1: Is 1/Y¯ an unbiased and/or consistent estimator of 1/µ?

Example 2: Is s² = (1/(n−1)) ∑ (Yi − Y¯)² a consistent estimator of σ²?

3.2 Asymptotic Normality and the Central Limit Theorem

While the probability limit tells us whether an estimator converges to the true parameter for a
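The consistency examples above can be checked numerically: as n grows, Y¯ settles near µ (the law of large numbers) and s² settles near σ², as Example 2 suggests. A sketch with illustrative values µ = 5, σ = 2 (not values from the lecture):

```python
import random

random.seed(1)

# As n grows, Y-bar approaches mu and s^2 approaches sigma^2,
# illustrating consistency; mu and sigma are illustrative choices.
mu, sigma = 5.0, 2.0
for n in (10, 1000, 100000):
    ys = [random.gauss(mu, sigma) for _ in range(n)]
    ybar = sum(ys) / n
    s2 = sum((y - ybar) ** 2 for y in ys) / (n - 1)  # sample variance
    print(n, ybar, s2)
```

By the plim rules, 1/Y¯ then also converges to 1/µ (rule 1 with the continuous function g(y) = 1/y, valid when µ ≠ 0), even though 1/Y¯ is not unbiased.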

