Yale STAT 619 - Estimation of Large Covariance Matrices: Lower bound (II)

Week 11, Spring 2009

Lecture 21. Estimation of Large Covariance Matrices: Lower bound (II)

Observe $X_1, X_2, \ldots, X_n$ i.i.d. from a $p$-variate Gaussian distribution $N(\mu, \Sigma_{p\times p})$. We assume that the covariance matrix $\Sigma_{p\times p} = (\sigma_{ij})_{1\le i,j\le p}$ is contained in the following parameter space,
$$
\mathcal{F}(\alpha, \varepsilon, M) = \left\{ \Sigma : |\sigma_{ij}| \le M\, |i-j|^{-(\alpha+1)} \text{ for all } i \ne j, \ \lambda_{\max}(\Sigma) \le 1/\varepsilon \right\}. \tag{1}
$$

Theorem 1. Under the assumption (1), we have
$$
\inf_{\hat\Sigma} \sup_{\mathcal{F}} E\|\hat\Sigma - \Sigma\|^2 \ge c\, n^{-\frac{2\alpha}{2\alpha+1}} + c\, \frac{\log p}{n}. \tag{2}
$$

Last time we showed
$$
\inf_{\hat\Sigma} \sup_{\mathcal{F}} E\|\hat\Sigma - \Sigma\|^2 \ge c\, \frac{\log p}{n}.
$$
In this lecture we will show
$$
\inf_{\hat\Sigma} \sup_{\mathcal{F}} E\|\hat\Sigma - \Sigma\|^2 \ge c\, n^{-\frac{2\alpha}{2\alpha+1}}
$$
by Assouad's lemma.

We shall now define a parameter space that is appropriate for the minimax lower bound argument. For given positive integers $k$ and $m$ with $2k \le p$ and $1 \le m \le k$, define the $p \times p$ matrix $B(m,k) = (b_{ij})_{p\times p}$ with
$$
b_{ij} = I\{\, i = m \text{ and } m+1 \le j \le 2k, \ \text{or} \ j = m \text{ and } m+1 \le i \le 2k \,\}.
$$
Set $k = n^{1/(2\alpha+1)}$ and $a = k^{-(\alpha+1)}$. We then define the collection of $2^k$ covariance matrices as
$$
\mathcal{H} = \left\{ \Sigma(\theta) : \Sigma(\theta) = I_p + \tau a \sum_{m=1}^{k} \theta_m B(m,k), \ \theta = (\theta_m) \in \{0,1\}^k \right\} \tag{3}
$$
where $I_p$ is the $p \times p$ identity matrix and $\tau$ is a constant. It is easy to check that as long as $0 < \tau < \min\{M, (1-\varepsilon)/2\}$ the collection $\mathcal{H} \subset \mathcal{F}(\alpha, \varepsilon, M)$. We will show
$$
\inf_{\hat\Sigma} \sup_{\mathcal{H}} E\|\hat\Sigma - \Sigma\|^2 \ge c\, n^{-\frac{2\alpha}{2\alpha+1}}. \tag{4}
$$

1. A Lower Bound by Assouad's Lemma

We first prove equation (4). Let $X_1, X_2, \ldots, X_n$ be i.i.d. $N(0, \Sigma(\theta))$ with $\Sigma(\theta) \in \mathcal{H}$. Denote the joint distribution by $P_\theta$. We apply Assouad's lemma to the parameter space $\mathcal{H}$:
$$
\max_{\theta \in \{0,1\}^k} 2^2\, E\|\hat\Sigma - \Sigma(\theta)\|^2 \ \ge\ \min_{H(\theta,\theta') \ge 1} \frac{\|\Sigma(\theta) - \Sigma(\theta')\|^2}{H(\theta,\theta')} \cdot \frac{k}{2} \cdot \min_{H(\theta,\theta') = 1} \|P_\theta \wedge P_{\theta'}\|,
$$
where $H(\theta,\theta') = \sum_{m=1}^{k} |\theta_m - \theta'_m|$ is the Hamming distance. From Lemma 2 we have
$$
\min_{H(\theta,\theta') \ge 1} \frac{\|\Sigma(\theta) - \Sigma(\theta')\|^2}{H(\theta,\theta')} \ge c\, k a^2
$$
and from Lemma 3,
$$
\min_{H(\theta,\theta') = 1} \|P_\theta \wedge P_{\theta'}\| \ge c > 0,
$$
thus
$$
\max_{\Sigma(\theta) \in \mathcal{H}} E\|\hat\Sigma - \Sigma(\theta)\|^2 \ge c'\, k^2 a^2 \ge c_1\, n^{-\frac{2\alpha}{2\alpha+1}},
$$
since $k^2 a^2 = k^{-2\alpha} = n^{-2\alpha/(2\alpha+1)}$. Now we give proofs of the auxiliary lemmas.

Lemma 2. For $\Sigma(\theta)$ defined in (3) we have
$$
\min_{H(\theta,\theta') \ge 1} \frac{\|\Sigma(\theta) - \Sigma(\theta')\|^2}{H(\theta,\theta')} \ge c\, k a^2.
$$

Proof of Lemma 2: We define $v = (v_i)$ with $v_i = I\{k \le i \le 2k\}$. Let $(\Sigma(\theta) - \Sigma(\theta'))\, v = (w_i)$. There are exactly $H(\theta,\theta')$ coordinates $w_i$ with $|w_i| \ge \tau k a$ (just consider the upper half of the matrix), which implies
$$
\|(\Sigma(\theta) - \Sigma(\theta'))\, v\|_2^2 \ge H(\theta,\theta')\, (\tau k a)^2,
$$
and so
$$
\|\Sigma(\theta) - \Sigma(\theta')\|^2 \ge \frac{\|(\Sigma(\theta) - \Sigma(\theta'))\, v\|_2^2}{\|v\|_2^2} \ge \frac{H(\theta,\theta')\, (\tau k a)^2}{\|v\|_2^2} \ge c\, H(\theta,\theta')\, k a^2,
$$
since $\|v\|_2^2 \le 2k$. Dividing by $H(\theta,\theta')$ gives the lemma.

Lemma 3. Let $P_\theta$ be the joint distribution of $n$ i.i.d. observations $X_1, X_2, \ldots, X_n$ with $X_1 \sim N(0, \Sigma(\theta))$ and $\Sigma(\theta) \in \mathcal{H}$. Then for some $c_1 > 0$ we have
$$
\min_{H(\theta,\theta') = 1} \|P_\theta \wedge P_{\theta'}\| \ge c_1.
$$

Proof of Lemma 3: When $H(\theta,\theta') = 1$, we will show
$$
\|P_{\theta'} - P_\theta\|_1^2 \le 2\, K(P_{\theta'} \,|\, P_\theta) = 2n \left[ \tfrac{1}{2} \operatorname{tr}\!\left( \Sigma(\theta')\, \Sigma^{-1}(\theta) \right) - \tfrac{1}{2} \log \det\!\left( \Sigma(\theta')\, \Sigma^{-1}(\theta) \right) - \tfrac{p}{2} \right] \le c\, n\, k a^2 = c
$$
for some small $c > 0$ (note that $n k a^2 = n\, k^{-(2\alpha+1)} = 1$ by the choice of $k$), where $K(\cdot\,|\,\cdot)$ is the Kullback–Leibler divergence and the first inequality follows from the well-known Pinsker inequality (see, e.g., Csiszár (1967)). This immediately implies that the $L_1$ distance between the two measures is bounded away from $2$, hence the affinity $\|P_\theta \wedge P_{\theta'}\|$ is bounded away from $0$, and the lemma follows.

Write $\Sigma(\theta') = D_1 + \Sigma(\theta)$. Then
$$
\tfrac{1}{2} \operatorname{tr}\!\left( \Sigma(\theta')\, \Sigma^{-1}(\theta) \right) - \tfrac{p}{2} = \tfrac{1}{2} \operatorname{tr}\!\left( D_1\, \Sigma^{-1}(\theta) \right).
$$
Let $\lambda_i$ be the eigenvalues of $D_1 \Sigma^{-1}(\theta)$. Since $D_1 \Sigma^{-1}(\theta)$ is similar to the symmetric matrix $\Sigma^{-1/2}(\theta)\, D_1\, \Sigma^{-1/2}(\theta)$, and
$$
\left\| \Sigma^{-1/2}(\theta)\, D_1\, \Sigma^{-1/2}(\theta) \right\| \le \left\| \Sigma^{-1}(\theta) \right\| \|D_1\| \le c_1 \|D_1\| \le c_1 \|D_1\|_1 \le c_2\, k a,
$$
all eigenvalues $\lambda_i$ are real and lie in the interval $[-c_2 k a,\ c_2 k a]$, where $ka = k \cdot k^{-(\alpha+1)} = k^{-\alpha} \to 0$. Note that the Taylor expansion yields
$$
\log \det\!\left( \Sigma(\theta')\, \Sigma^{-1}(\theta) \right) = \log \det\!\left( I + D_1 \Sigma^{-1}(\theta) \right) = \operatorname{tr}\!\left( D_1 \Sigma^{-1}(\theta) \right) - R_3,
$$
where
$$
|R_3| \le c_3 \sum_{i=1}^{p} \lambda_i^2
$$
for some $c_3 > 0$.

Write $\Sigma^{-1/2}(\theta) = U V^{1/2} U^T$, where $U U^T = I$ and $V$ is a diagonal matrix. It follows from the fact that the Frobenius norm of a matrix is invariant under orthogonal transformations that
$$
\sum_{i=1}^{p} \lambda_i^2 = \left\| \Sigma^{-1/2}(\theta)\, D_1\, \Sigma^{-1/2}(\theta) \right\|_F^2 \le \|V\|^2 \left\| U^T D_1 U \right\|_F^2 = \left\| \Sigma^{-1}(\theta) \right\|^2 \|D_1\|_F^2 \le c_4\, k a^2.
$$
Combining the last two displays, $2 K(P_{\theta'} \,|\, P_\theta) = n R_3 \le c\, n k a^2 = c$, which completes the proof.

Lecture 22. Estimation of Large Covariance Matrices: Discussions

Topics:
1. Adaptive estimation
2. Estimation under different matrix norms
3. Estimating functionals of the covariance matrix
4. Sparse covariance estimation (graphical models)
5. Estimation of the covariance function with functional data and its connection to functional data analysis
6. Toeplitz matrix estimation
7. All interactions
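The construction in (3) can be checked numerically. The sketch below (a minimal illustration assuming NumPy; the helper names `B` and `Sigma` and the constants `alpha = 0.5`, `n = 1000`, `tau = 0.1` are illustrative choices, not from the notes) builds $B(m,k)$ and $\Sigma(\theta)$, then verifies that flipping a single coordinate of $\theta$ (Hamming distance $1$) shifts $\Sigma(\theta)$ by exactly $\tau a \sqrt{2k-1}$ in operator norm when $m = 1$, the order of separation that drives Lemma 2:

```python
# Numerical sketch of the construction in (3); all constants are illustrative.
import numpy as np

def B(m, k, p):
    """B(m,k): ones in row m at columns m+1..2k, and symmetrically (1-indexed)."""
    M = np.zeros((p, p))
    M[m - 1, m:2 * k] = 1.0   # row m, columns m+1..2k (0-indexed slicing)
    M[m:2 * k, m - 1] = 1.0   # symmetric lower half
    return M

def Sigma(theta, k, p, tau, a):
    """Sigma(theta) = I_p + tau * a * sum_m theta_m B(m,k)."""
    S = np.eye(p)
    for m, t in enumerate(theta, start=1):
        if t:
            S += tau * a * B(m, k, p)
    return S

alpha, n = 0.5, 1000
k = int(n ** (1 / (2 * alpha + 1)))   # k = n^{1/(2*alpha+1)}
a = k ** (-(alpha + 1))               # a = k^{-(alpha+1)}
p, tau = 4 * k, 0.1                   # 2k <= p as required

rng = np.random.default_rng(0)
theta = rng.integers(0, 2, size=k)
theta_prime = theta.copy()
theta_prime[0] ^= 1                   # flip one bit: H(theta, theta') = 1

S = Sigma(theta, k, p, tau, a)
Sp = Sigma(theta_prime, k, p, tau, a)
gap = np.linalg.norm(S - Sp, ord=2)   # operator-norm separation
print(gap, tau * a * np.sqrt(2 * k - 1))
```

The agreement of the two printed numbers reflects that flipping $\theta_1$ changes $\Sigma(\theta)$ by the rank-two "cross" matrix $\tau a\, B(1,k)$, whose nonzero eigenvalues are $\pm \tau a$ times the Euclidean norm of its off-diagonal row, i.e. $\pm \tau a \sqrt{2k-1}$.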

