Lecture 17 – Regression Models with ARCH(m) Errors: Estimation and Inference

Assume

    y_t = x_t'β + ε_t,   t = 1, 2, …

where
- x_t is a k×1 vector of weakly exogenous regressors (which allows for lagged dependent variables and includes the AR(p) model as a special case)
- ε_t is an ARCH(m) process, i.e.,

    ε_t = h_t^{1/2} v_t,   v_t ~ i.i.d. N(0,1)
    h_t = η + δ_1 ε_{t-1}² + … + δ_m ε_{t-m}²
    η > 0;  δ_i ≥ 0, i = 1, …, m;  and the δ's satisfy the stationarity condition.

In this case the OLS estimator of β is consistent and asymptotically normal, but it is not asymptotically efficient. In addition, the usual OLS confidence intervals and test procedures are invalid. Nonparametric corrections to the OLS covariance matrix to account for conditional heteroskedasticity? Bootstrap methods? See Gonçalves and Kilian, Journal of Econometrics, 2004, "Bootstrapping Autoregressions with Conditional Heteroskedasticity of Unknown Form."

Efficient estimation – MLE

The (conditional) MLE:

[Digression – Consider an AR(1) model with i.i.d. N(0,σ²) innovations. The OLS estimator is the conditional MLE, conditional on y_1. The unconditional MLE would include a term associated with the unconditional distribution of y_1 and would have to be computed numerically. Since the effect of y_1 disappears asymptotically, the conditional MLE is asymptotically equivalent to the unconditional MLE. In finite samples, however, particularly when the AR parameter is large (i.e., close to 1), the conditional and unconditional MLEs can differ.]

Suppose that x_t includes p lagged values of y (p < k). We will condition the likelihood function on the first p+m observations of y: y_1, …, y_{p+m}.

Let θ be the (k+m+1)×1 parameter vector θ = [β' η δ']', where δ = [δ_1 … δ_m]'.

Then the conditional log-likelihood function is

    L(θ) = -((T-p-m)/2) log(2π) - (1/2) Σ_{t=p+m+1}^{T} [ log h_t(θ) + (y_t - x_t'β)²/h_t(θ) ]

where

    h_t(θ) = η + Σ_{i=1}^{m} δ_i (y_{t-i} - x_{t-i}'β)².

(Derivation: L(θ) = Σ_{t=p+m+1}^{T} log f(y_t | y_{t-1}, …, y_1, x_t, …, x_1; θ), where f(·) = (2π h_t)^{-1/2} exp[-(y_t - x_t'β)²/(2h_t)].)

How to compute the MLE of θ –
1. Numerical optimization
2.
The Method of Scoring, Engle 1982 (which takes advantage of the special form of the information matrix: it is block diagonal between the regression parameters and the ARCH parameters):

i. Estimate β by OLS (a consistent estimator of β): β̂, with residuals ε̂_t.
ii. Fit the squared OLS residuals ε̂_t² to an AR(m) by OLS: η̂, δ̂.
iii. Use η̂ and δ̂ to re-estimate β:

    β̂_new = β̂ + (X̃'X̃)^{-1} X̃'ε̃

where

    x̃_t = x_t r̂_t
    ε̃_t = ε̂_t ŝ_t / r̂_t
    r̂_t = [ 1/ĥ_t + 2 Σ_{i=1}^{m} δ̂_i² ε̂_t² / ĥ_{t+i}² ]^{1/2}
    ŝ_t = 1/ĥ_t - Σ_{i=1}^{m} δ̂_i (ε̂_{t+i}²/ĥ_{t+i} - 1)/ĥ_{t+i}

iv. Given the updated β̂, fit the squared residuals of ε̂ = Y - Xβ̂ to an AR(m) by OLS to get updated η̂, δ̂.
v. Return to step (iii) and iterate until θ̂ = (β̂, η̂, δ̂) converges.

Under appropriate regularity conditions on {y_t, x_t, ε_t},

    √T (β̂ - β) →_d N(0, Σ_ββ)   and   √T (δ̂* - δ*) →_d N(0, Σ_δ*δ*),

where δ* = [η δ_1 … δ_m]'.

To apply this in practice we note that, approximately,

    β̂ ~ N(β, (1/T) Σ̂_ββ)   and   δ̂* ~ N(δ*, (1/T) Σ̂_δ*δ*),

where Σ̂_ββ and Σ̂_δ*δ* are consistent estimators of Σ_ββ and Σ_δ*δ*.

Consistent estimators:

    Σ̂_ββ = [ (1/T) Σ_t x_t x_t' r̂_t² ]^{-1}

    Σ̂_δ*δ* = 2 [ (1/T) Z̃'Z̃ ]^{-1},  where Z̃ = [z̃_{p+m+1} … z̃_T]'  and  z̃_t = [1 ε̂_{t-1}² … ε̂_{t-m}²]' / ĥ_t.

Some Complications –
- How can we ensure that the AR(p) coefficients satisfy the stationarity conditions, and that the ARCH(m) coefficients satisfy the stationarity and nonnegativity conditions?
  - constrained optimization (problematic if m+p is "large")
  - reparameterize: h_t = η² + Σ_i δ_i² ε_{t-i}² (Engle 1982), so nonnegativity holds automatically
- How should we select m and p?
  - ACF and PACF of the y_t's and the ε̂_t²'s
  - LR tests (restricted vs. unrestricted model); AIC, SIC
  - diagnostic checking: does the estimated ε_t/σ_t sequence, i.e., v̂_t = ε̂_t/ĥ_t^{1/2}, behave like i.i.d. N(0,1)?
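As a concrete illustration of the conditional log-likelihood defined above, here is a minimal sketch in Python (my own illustration, not the lecture's code; the function name `arch_loglik` and its interface are invented). For simplicity it conditions on the first m observations and treats any lags of y as already included among the columns of X:

```python
import numpy as np

def arch_loglik(theta, y, X, m):
    """Conditional log-likelihood of y_t = x_t'beta + eps_t with ARCH(m) errors.

    theta = [beta' eta delta']' with delta = [delta_1 ... delta_m]'.
    Conditions on the first m observations (lags of y, if any, are
    assumed to be columns of X already)."""
    k = X.shape[1]
    beta, eta, delta = theta[:k], theta[k], theta[k + 1:]
    eps = y - X @ beta                       # eps_t = y_t - x_t'beta
    ll = 0.0
    for t in range(m, len(y)):
        # h_t = eta + sum_i delta_i * eps_{t-i}^2
        h_t = eta + np.dot(delta, eps[t - m:t][::-1] ** 2)
        ll -= 0.5 * (np.log(2 * np.pi) + np.log(h_t) + eps[t] ** 2 / h_t)
    return ll
```

Maximizing this function over θ, subject to η > 0 and δ_i ≥ 0, yields the conditional MLE.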
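To illustrate route 1 (numerical optimization) together with the reparameterization h_t = η² + Σ δ_i² ε²_{t-i} mentioned under the complications, here is a hedged sketch on simulated ARCH(1) data (all function and variable names are my own inventions, and scipy is assumed available):

```python
import numpy as np
from scipy.optimize import minimize

def simulate_arch1(beta0, eta, delta, T, rng):
    """Simulate y_t = beta0 + eps_t with eps_t = h_t^{1/2} v_t,
    h_t = eta + delta * eps_{t-1}^2 (intercept-only regression)."""
    eps = np.zeros(T + 100)                  # 100 burn-in draws
    for t in range(1, len(eps)):
        h_t = eta + delta * eps[t - 1] ** 2
        eps[t] = np.sqrt(h_t) * rng.standard_normal()
    return beta0 + eps[100:]

def neg_loglik(psi, y, X, m):
    """Negative conditional log-likelihood with eta = psi[k]^2 and
    delta_i = psi[k+1+i-1]^2, so nonnegativity holds for any unconstrained psi."""
    k = X.shape[1]
    beta, eta, delta = psi[:k], psi[k] ** 2, psi[k + 1:] ** 2
    eps = y - X @ beta
    nll = 0.0
    for t in range(m, len(y)):
        h_t = max(eta + np.dot(delta, eps[t - m:t][::-1] ** 2), 1e-12)  # numerical guard
        nll += 0.5 * (np.log(2 * np.pi) + np.log(h_t) + eps[t] ** 2 / h_t)
    return nll

rng = np.random.default_rng(0)
y = simulate_arch1(beta0=2.0, eta=1.0, delta=0.5, T=2000, rng=rng)
X = np.ones((len(y), 1))
res = minimize(neg_loglik, x0=np.array([y.mean(), 1.0, 0.7]),
               args=(y, X, 1), method="Nelder-Mead")
beta_hat = res.x[0]
eta_hat, delta_hat = res.x[1] ** 2, res.x[2] ** 2   # map back to (eta, delta)
```

Because the optimizer works in the squared parameterization, the recovered η̂ and δ̂ are automatically nonnegative, with no explicit constraints needed.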



ISU ECON 674 - lecture 17