PSU STAT 544 - ML Estimation for Restricted Models


Stat 544 Lecture 3: ML Estimation for Restricted Models

Readings: Agresti 14.1–14.3

Last time we reviewed properties of the multinomial distribution $X = (X_1, \dots, X_k)^T \sim \mathrm{Mult}(n, \pi)$. The parameter $\pi = (\pi_1, \dots, \pi_k)^T$ lies within the simplex

$$S = \Bigl\{ \pi : \pi_j \ge 0, \ \sum_j \pi_j = 1 \Bigr\}.$$

If we assume nothing about $\pi$ other than $\pi \in S$, then the ML estimate for $\pi$ is the vector of sample proportions $p = n^{-1} X$. This estimate has limiting distribution

$$\sqrt{n}\,(p - \pi) \xrightarrow{D} N(0, \Sigma),$$

where $\Sigma = \mathrm{Diag}(\pi) - \pi\pi^T$ is rank-deficient. An approximate covariance matrix for $p$ is

$$\hat{V}(p) = n^{-1}\bigl[\, \mathrm{Diag}(p) - pp^T \,\bigr].$$

When $\pi$ is allowed to lie anywhere in $S$, we say that the model is saturated. The saturated model has $k - 1$ free parameters. Often we will suppose that $\pi \in S_0$, where $S_0$ is a lower-dimensional subset of $S$. We will suppose that the elements of $\pi$ are functions of unknown parameters $\theta = (\theta_1, \dots, \theta_t)^T$, where $t < k - 1$.

Example 1: Independence in a 2 x 2 table. Suppose that $X = (X_{11}, X_{12}, X_{21}, X_{22})^T$ are cell counts from a 2 x 2 table,

              B = 1      B = 2
    A = 1     X_{11}     X_{12}
    A = 2     X_{21}     X_{22}

where the row and column classifications correspond to binary variables $A \in \{1, 2\}$ and $B \in \{1, 2\}$, and $X_{ij}$ is the number of subjects having $A = i$, $B = j$. If $A$ and $B$ are unrelated, then

$$\pi_{ij} = P(A = i, B = j) = P(A = i)\,P(B = j)$$

will hold for all cells. If we define $\alpha = P(A = 1)$ and $\beta = P(B = 1)$, then

$$\pi = \begin{bmatrix} \pi_{11} \\ \pi_{12} \\ \pi_{21} \\ \pi_{22} \end{bmatrix} = \begin{bmatrix} \alpha\beta \\ \alpha(1 - \beta) \\ (1 - \alpha)\beta \\ (1 - \alpha)(1 - \beta) \end{bmatrix}.$$

This is a restricted (i.e., non-saturated) model with free parameters $\theta = (\alpha, \beta)^T$, where $0 \le \alpha \le 1$ and $0 \le \beta \le 1$.

Example 2: A 2 x 2 table with symmetry. Let $X = (X_{11}, X_{12}, X_{21}, X_{22})^T$ again be cell counts from a 2 x 2 table. But now suppose that $A$ and $B$ represent the same characteristic measured at two points in time. For example, $A$ could be the response to "Do you approve of George W.'s performance as President?" asked last month (1 = yes, 2 = no), and $B$ could be the answer to the same question asked this month.

When the same characteristic is measured twice, it is usually because the researcher wants to detect change over time. If there has been no overall change over time, then the probability of a yes response last month, $P(A = 1) = \pi_{11} + \pi_{12}$, will equal the probability of a yes response this month, $P(B = 1) = \pi_{11} + \pi_{21}$. Notice that $P(A = 1) = P(B = 1)$ if and only if $\pi_{12} = \pi_{21}$, a condition known as symmetry. Under symmetry, we can express the elements of $\pi$ as functions of two free parameters:

$$\pi_{11} = \theta_1, \quad \pi_{12} = \theta_2, \quad \pi_{21} = \theta_2, \quad \pi_{22} = 1 - \theta_1 - 2\theta_2.$$

In both of these examples, ML estimates of $\theta$ are available in closed form. In Example 1, the estimates are

$$\hat{\alpha} = \frac{X_{11} + X_{12}}{n}, \qquad \hat{\beta} = \frac{X_{11} + X_{21}}{n},$$

which will be demonstrated next week. In Example 2, they are

$$\hat{\theta}_1 = \frac{X_{11}}{n}, \qquad \hat{\theta}_2 = \frac{X_{12} + X_{21}}{2n}$$

(homework problem). In more complicated models, the ML estimates cannot be written down in closed form and must be computed by an iterative technique (e.g., Newton-Raphson). We will spend a fair amount of time discussing methods for finding ML estimates under various models.

Even though the details of calculating $\hat{\theta}$ will vary from one model to the next, we can say something general about the properties of $\hat{\theta}$ in regular problems. In regular problems, $\hat{\theta}$ is the solution to the score equations

$$\frac{\partial\, l(\theta; X)}{\partial \theta_j} = 0, \qquad j = 1, \dots, t,$$

and $\hat{\theta}$ is a smooth function of the sample proportions $p = n^{-1} X$.

Following Agresti's Section 14.2, let's assume: that $\pi$ is a function of $\theta = (\theta_1, \dots, \theta_t)^T$, i.e. $\pi(\theta) = (\pi_1(\theta), \dots, \pi_k(\theta))^T$; that $\theta_0$, the true value of $\theta$, does not lie on the boundary of its parameter space; that the corresponding true value $\pi_0 = \pi(\theta_0)$ lies in the interior of $S$ (i.e., it has no zero elements); that the derivatives of $\pi$ with respect to $\theta$,

$$\frac{\partial \pi}{\partial \theta^T} = \begin{bmatrix} \dfrac{\partial \pi_1}{\partial \theta_1} & \dfrac{\partial \pi_1}{\partial \theta_2} & \cdots & \dfrac{\partial \pi_1}{\partial \theta_t} \\ \vdots & & \vdots \\ \dfrac{\partial \pi_k}{\partial \theta_1} & \dfrac{\partial \pi_k}{\partial \theta_2} & \cdots & \dfrac{\partial \pi_k}{\partial \theta_t} \end{bmatrix},$$

are continuous in a neighborhood of $\theta_0$; and that this matrix has full rank $t$ when evaluated at $\theta_0$.

The requirement that $\partial \pi / \partial \theta^T$ has full rank ensures that the model is locally identifiable. The condition would be violated, for example, if $\theta = (\theta_1, \dots, \theta_t)^T$ contains elements that are redundant. Without this identifiability, we can't consistently estimate $\theta$ from $X$ alone; we would need additional information beyond $X$ to estimate $\theta$.

Key result: Under these regularity conditions, the ML estimate $\hat{\theta}$ has the property

$$\sqrt{n}\,(\hat{\theta} - \theta_0) \xrightarrow{D} N\bigl(0,\ (A^T A)^{-1}\bigr),$$

where

$$A = \mathrm{Diag}(\pi_0)^{-1/2} \left. \frac{\partial \pi}{\partial \theta^T} \right|_{\theta_0}.$$

In practice, we could use this result to obtain confidence intervals and regions for elements (or functions) of $\theta$ if we replace $A$ by

$$\hat{A} = \mathrm{Diag}(\hat{\pi})^{-1/2} \left. \frac{\partial \pi}{\partial \theta^T} \right|_{\hat{\theta}},$$

where $\hat{\pi}$ is shorthand for $\pi$ evaluated at $\hat{\theta}$. The approximation becomes

$$\hat{\theta} \approx N\bigl(\theta_0,\ n^{-1} (\hat{A}^T \hat{A})^{-1}\bigr).$$

Establishing the key result. Agresti derives this result by expressing $\hat{\theta}$ as a linearized function of $p$ and then applying the delta method. Recall that by the CLT,

$$\sqrt{n}\,(p - \pi_0) \xrightarrow{D} N(0, \Sigma), \qquad \Sigma = \mathrm{Diag}(\pi_0) - \pi_0 \pi_0^T.$$

Suppose that $g(\pi)$ is a real-valued function of $\pi$ with two derivatives at $\pi_0$. Then by the usual delta method (reviewed in Section 14.1),

$$\sqrt{n}\,\bigl(g(p) - g(\pi_0)\bigr) \xrightarrow{D} N\bigl(0,\ \nabla g(\pi_0)^T\, \Sigma\, \nabla g(\pi_0)\bigr),$$

where $\nabla g = (\partial g / \partial \pi_1, \dots, \partial g / \partial \pi_k)^T$. More generally, if $G$ is a vector-valued function of $\pi$ with two derivatives at $\pi_0$, then

$$\sqrt{n}\,\bigl(G(p) - G(\pi_0)\bigr) \xrightarrow{D} N\bigl(0,\ \nabla G(\pi_0)^T\, \Sigma\, \nabla G(\pi_0)\bigr),$$

where $\nabla G$ is the matrix of partial derivatives of $G$ with respect to $\pi$. If we take $G(p) = \hat{\theta}$ and apply a few algebraic tricks, the key result follows.

Why is the key result useful?

1. It allows us to compute an approximate covariance matrix for $\hat{\theta}$ without having to directly find the second derivatives of the loglikelihood with respect to $\theta$. Sometimes, depending on how $\hat{\theta}$ is computed, we may need these second derivatives anyway (e.g., if we are using Newton-Raphson), but in other cases we don't.

2. This result holds under slightly more general conditions than the usual asymptotic result for ML estimation,

$$\sqrt{n}\,(\hat{\theta} - \theta_0) \xrightarrow{D} N\bigl(0,\ I(\theta_0)^{-1}\bigr),$$

where $I(\theta)$ is the Fisher information. The conditions for general ML estimation are given by Rao (1973).

Example: Let's return to the 2 x 2 table with independence of the row and column classifications. We said that $\pi_{11} = \alpha\beta$, $\pi_{12} = \alpha(1-\beta)$, $\pi_{21} = (1-\alpha)\beta$, and $\pi_{22} = (1-\alpha)(1-\beta)$, and that …
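To make the key result concrete, here is a small numerical sketch (in Python with NumPy) for the independence model of Example 1. The cell counts are made up for illustration, and the Jacobian $\partial\pi/\partial\theta^T$ is written out by hand from the four cell probabilities; this is a sketch of the computation, not code from the lecture.

```python
import numpy as np

# Hypothetical 2x2 cell counts, flattened as X = (X11, X12, X21, X22).
X = np.array([30.0, 10.0, 15.0, 45.0])
n = X.sum()

# Closed-form ML estimates under independence:
# alpha-hat = (X11 + X12)/n,  beta-hat = (X11 + X21)/n
alpha = (X[0] + X[1]) / n
beta = (X[0] + X[2]) / n

# Fitted cell probabilities pi(theta-hat)
pi = np.array([alpha * beta,
               alpha * (1 - beta),
               (1 - alpha) * beta,
               (1 - alpha) * (1 - beta)])

# Jacobian d pi / d theta^T (k x t, here 4 x 2), obtained by
# differentiating each cell probability w.r.t. theta = (alpha, beta).
D = np.array([[beta,         alpha],
              [1 - beta,    -alpha],
              [-beta,        1 - alpha],
              [-(1 - beta), -(1 - alpha)]])

# A-hat = Diag(pi-hat)^(-1/2) * (d pi / d theta^T); the approximate
# covariance matrix of theta-hat is then n^(-1) (A^T A)^(-1).
A = D / np.sqrt(pi)[:, None]
cov = np.linalg.inv(A.T @ A) / n

se_alpha, se_beta = np.sqrt(np.diag(cov))
print(cov)
```

For this particular model the matrix $n^{-1}(\hat{A}^T\hat{A})^{-1}$ works out to be diagonal, with entries $\hat\alpha(1-\hat\alpha)/n$ and $\hat\beta(1-\hat\beta)/n$ — the familiar binomial proportion variances — which makes a convenient sanity check on the computation.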

