# PSU STAT 544 - Intervals and Tests (22 pages)


## Intervals and Tests



**School:** Pennsylvania State University
**Course:** STAT 544 - Categorical Data
**Pages:** 22


**Stat 544, Lecture 2: Likelihood-Based Intervals and Tests**

Readings: Agresti (2002), Sections 1.3–1.4.

**Review.** Last time we defined the loglikelihood and the score function, and asserted that the score function has mean zero. If $y_1, y_2, \ldots, y_n$ is a random sample from a distribution $f(y;\theta)$, then the score function is

$$u(\theta) = \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f(y_i;\theta).$$

If there are $k$ parameters, $\theta = (\theta_1, \theta_2, \ldots, \theta_k)^T$, then the score function is a vector of length $k$, and each element of this vector has mean zero. In regular problems we can find the ML estimate by setting the score function $u(\theta)$ to zero and solving for $\theta$. The equations $u(\theta) = 0$ are called the score equations. More generally, they can be called estimating equations, because their solution is the estimate for $\theta$.

This approach to estimating $\theta$ can be regarded as method-of-moments (MOM) estimation applied to the scores. Recall that in MOM we equate statistics to their expectations and solve for the parameters. MOM produces estimates that are $\sqrt{n}$-consistent and asymptotically unbiased.

In addition to the scores, it is possible to find other functions of $\theta$ that have zero expectation. These other functions, which we might call quasi-scores, can also be used to form estimating equations. The solutions to those estimating equations will likewise be $\sqrt{n}$-consistent and asymptotically unbiased.

Last time we also discussed two different ways to approximate the variance of an ML estimate. We defined the Fisher information $i(\theta)$ as the variance of the score function, or, in the multiparameter case, the covariance matrix of the score vector. The Fisher information is often computed using the information identity

$$i(\theta) = -E\left[\, l''(\theta) \,\right].$$

The first way to approximate the variance of an ML estimate is to plug $\hat\theta$ into $i$ and invert:

$$\hat{V}(\hat\theta) = i(\hat\theta)^{-1}.$$

The second way is to invert minus one times the actual second derivative of the loglikelihood at $\hat\theta$:

$$\hat{V}(\hat\theta) = \left[\, -l''(\hat\theta) \,\right]^{-1}.$$

The first method is called expected information; the second is called observed information. Sometimes the two methods give the same answer; when they do not, they tend to give similar answers in large samples.
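As a minimal numerical sketch of the score function and score equation above, consider a Bernoulli sample (the sample, the true parameter, and the seed below are all hypothetical choices, not from the lecture). Solving $u(p) = 0$ recovers the sample proportion, and a Monte Carlo check illustrates the mean-zero property of the score:

```python
import numpy as np

# Hypothetical Bernoulli sample; p_true, n, and the seed are illustrative.
rng = np.random.default_rng(0)
p_true, n = 0.3, 1000
y = rng.binomial(1, p_true, size=n)

def score(p):
    # u(p) = sum_i d/dp log f(y_i; p), with f(y; p) = p^y (1 - p)^(1 - y)
    return y.sum() / p - (n - y.sum()) / (1 - p)

# Solving the score equation u(p) = 0 analytically gives the sample proportion.
p_hat = y.mean()
print(score(p_hat))  # essentially zero

# Monte Carlo check that the score has mean zero at the true parameter:
reps = rng.binomial(1, p_true, size=(2000, n)).sum(axis=1)
mean_score = (reps / p_true - (n - reps) / (1 - p_true)).mean()
```

The second block averages the score of 2000 replicated samples, each evaluated at the true $p$; the average is close to zero, consistent with $E[u(p)] = 0$.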

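The two variance approximations can also be compared numerically. The sketch below uses a Cauchy location model, which is not from the lecture; it is chosen because observed and expected information genuinely differ there (for a canonical exponential family, such as the binomial, the two coincide at the MLE). All names and sample settings are hypothetical:

```python
import numpy as np

# Hypothetical Cauchy location sample; theta_true, n, and seed are illustrative.
rng = np.random.default_rng(1)
theta_true, n = 0.0, 2000
y = rng.standard_cauchy(n) + theta_true

def score(t):
    d = y - t
    return np.sum(2 * d / (1 + d**2))              # u(theta) = l'(theta)

def loglik_second_deriv(t):
    d = y - t
    return np.sum(2 * (d**2 - 1) / (1 + d**2)**2)  # l''(theta)

# Newton-Raphson on the score equation, starting from the sample median.
t = np.median(y)
for _ in range(100):
    step = score(t) / loglik_second_deriv(t)
    t -= step
    if abs(step) < 1e-12:
        break
theta_hat = t

# Expected information: i(theta) = n/2 for the Cauchy location model.
var_expected = 1.0 / (n / 2)
# Observed information: minus the actual second derivative at theta_hat.
var_observed = 1.0 / (-loglik_second_deriv(theta_hat))
```

For this model the two variance estimates are close but not identical, and both shrink at rate $1/n$, illustrating the large-sample agreement noted above.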