
Lecture 17 – Regression Models with ARCH(m) Errors: Estimation and Inference

Assume

  y_t = x_t'β + ε_t,  t = 1, 2, …

where

• x_t is a k×1 vector of weakly exogenous regressors (which allows for lagged dependent variables and includes the AR(p) model as a special case)
• ε_t is an ARCH(m) process, i.e.,

  ε_t = h_t^{1/2} v_t,  v_t ~ i.i.d. N(0,1)
  h_t = η + δ_1 ε_{t-1}² + … + δ_m ε_{t-m}²,  η > 0; δ_i > 0, i = 1, …, m

  and the δ's satisfy the stationarity condition.

In this case, the OLS estimator of β is consistent and asymptotically normal, but it is not asymptotically efficient. In addition, OLS confidence-interval construction and test procedures will be invalid. Nonparametric corrections to the OLS covariance matrix to account for conditional heteroskedasticity? Bootstrap methods? See Gonçalves and Kilian, Journal of Econometrics, 2004, "Bootstrapping Autoregressions with Conditional Heteroskedasticity of Unknown Form."

Efficient estimation – MLE

The (conditional) MLE:

[Digression – Consider an AR(1) model with i.i.d. N(0, σ²) innovations. The OLS estimator is the conditional MLE, conditional on y_1. The unconditional MLE would include a term associated with the unconditional distribution of y_1 and would have to be computed numerically. Since the effects of y_1 disappear asymptotically, the conditional MLE is asymptotically equivalent to the unconditional MLE. In finite samples, however, particularly when the AR parameter is large (i.e., close to 1), the conditional and unconditional MLE estimators can differ.]

Suppose that x_t includes p lagged values of y (p < k). We will condition the likelihood function on the first p+m observations of y: y_1, …, y_{p+m}. Let θ be the (k+m+1)×1 parameter vector θ = [β' η δ']', where δ = [δ_1 … δ_m]'. Then the conditional log-likelihood function is

  L(θ) = −[(T − (p+m))/2] log(2π) − (1/2) Σ_{t=p+m+1}^{T} [log h_t + (y_t − x_t'β)²/h_t]

where

  h_t = η + Σ_{i=1}^{m} δ_i (y_{t-i} − x_{t-i}'β)².

(Derivation? L(θ) = Σ_{t=p+m+1}^{T} log f(y_t | y_{t-1}, …, y_1, x_t, …, x_1), where f(y_t | ·) = (2πh_t)^{−1/2} exp[−(y_t − x_t'β)²/(2h_t)].)

How to compute the MLE of θ –

1. Numerical optimization
2.
The Method of Scoring, Engle 1982 (which takes advantage of the special form of the information matrix: block diagonal between β and (η, δ)):

i. Estimate β by OLS (a consistent estimator of β): β̂, ε̂.
ii. Fit the squared OLS residuals ε̂_t² to an AR(m) by OLS: η̂, δ̂.
iii. Use η̂ and δ̂ to re-estimate β:

  β̂_new = β̂ + (X̃'X̃)^{−1} X̃'ε̃

where

  x̃_t = x_t r̂_t
  ε̃_t = ε̂_t ŝ_t / r̂_t
  r̂_t = [ĥ_t^{−1} + 2 Σ_{i=1}^{m} δ̂_i² ε̂_{t+i}² ĥ_{t+i}^{−2}]^{1/2}
  ŝ_t = ĥ_t^{−1} − Σ_{i=1}^{m} (δ̂_i/ĥ_{t+i}) (ε̂_{t+i}²/ĥ_{t+i} − 1)

iv. Given β̂, fit the squared elements of ε̂ = Y − Xβ̂ to an AR(m) by OLS to get η̂, δ̂.
v. Return to step (iii) and iterate until θ̂ = (β̂', η̂, δ̂')' converges.

Under appropriate regularity conditions on {y_t, x_t, ε_t},

  √T (β̂ − β) →_D N(0, Σ_ββ)  and  √T (δ̂* − δ*) →_D N(0, Σ_δ*δ*),

where δ* = [η δ_1 … δ_m]'. To apply this in practice we note that, approximately,

  β̂ ~ N(β, (1/T) Σ̂_ββ)  and  δ̂* ~ N(δ*, (1/T) Σ̂_δ*δ*),

where Σ̂_ββ and Σ̂_δ*δ* are consistent estimators of Σ_ββ and Σ_δ*δ*.

Consistent estimators:

  Σ̂_ββ = [(1/T) Σ_t x_t x_t' r̂_t²]^{−1}
  Σ̂_δ*δ* = 2 (Z̃'Z̃)^{−1},  where Z̃' = [z̃_{p+m+1} … z̃_T] and z̃_t = [1 ε̂_{t-1}² … ε̂_{t-m}²]'/ĥ_t.

Some complications –

• How do we ensure that the AR(p) coefficients satisfy the stationarity conditions, and that the ARCH(m) coefficients satisfy the stationarity and nonnegativity conditions?
  - constrained optimization (problematic if m+p is "large")
  - reparameterize: h_t = η² + Σ_i δ_i² ε_{t-i}² (Engle 1982)
• How do we select m and p?
  - ACF and PACF of the y_t's and of the squared residuals ε̂_t²
  - LR tests (restricted vs. unrestricted models); AIC, SIC
  - diagnostic checking: does the estimated ε_t/σ_t sequence behave like an i.i.d. N(0,1) sequence?
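The numerical-optimization route can be sketched in a few lines. This is a hypothetical illustration, not the lecture's code: the simulated data, parameter values, and starting values are all invented, and the objective is the conditional log-likelihood L(θ) above, written as a negative log-likelihood for the minimizer.

```python
# Hypothetical sketch: conditional MLE for a regression with ARCH(m) errors
# via numerical optimization. All data-generating values are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, k, m = 500, 2, 1
beta_true = np.array([1.0, 0.5])
eta_true, delta_true = 0.2, 0.4

# Simulate y_t = x_t'beta + eps_t with ARCH(1) errors
X = np.column_stack([np.ones(T), rng.normal(size=T)])
eps = np.zeros(T)
for t in range(T):
    # h_t = eta + delta * eps_{t-1}^2; start at the unconditional variance
    h_t = eta_true + delta_true * eps[t - 1] ** 2 if t > 0 else eta_true / (1 - delta_true)
    eps[t] = np.sqrt(h_t) * rng.normal()
y = X @ beta_true + eps

def neg_loglik(theta):
    """Negative conditional log-likelihood, conditioning on the first m obs."""
    beta, eta, delta = theta[:k], theta[k], theta[k + 1:]
    e = y - X @ beta
    # h_t = eta + sum_i delta_i * e_{t-i}^2 for t = m, ..., T-1 (0-based)
    h = eta + sum(delta[i] * e[m - 1 - i:T - 1 - i] ** 2 for i in range(m))
    if np.any(h <= 0):
        return np.inf  # reject parameter values that violate positivity
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + e[m:] ** 2 / h)

# Start from OLS for beta and small positive values for (eta, delta)
theta0 = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [0.1, 0.1]])
res = minimize(neg_loglik, theta0, method="Nelder-Mead")
```

Nelder-Mead is derivative-free, which sidesteps coding the score; a constrained or reparameterized optimizer (e.g., h_t = η² + Σ δ_i² ε_{t-i}², as in the complications above) is the usual remedy when the unconstrained search wanders into η ≤ 0 or δ_i < 0.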
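Steps (i)–(ii) of the scoring algorithm, OLS for β followed by an AR(m) regression of the squared residuals, can also be sketched directly; the same auxiliary regression yields Engle's T·R² LM statistic for ARCH, one practical way to choose m. Again, the data-generating values below are invented for the sketch.

```python
# Hypothetical sketch of scoring steps (i)-(ii) and Engle's LM test.
import numpy as np

rng = np.random.default_rng(1)
T, m = 1000, 1

# Simulate a regression with ARCH(1) errors (invented parameter values)
X = np.column_stack([np.ones(T), rng.normal(size=T)])
eps = np.zeros(T)
for t in range(T):
    h_t = 0.2 + 0.4 * eps[t - 1] ** 2 if t > 0 else 0.2 / (1 - 0.4)
    eps[t] = np.sqrt(h_t) * rng.normal()
y = X @ np.array([1.0, 0.5]) + eps

# (i) OLS estimate of beta and its residuals
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat
e2 = e ** 2

# (ii) regress e_t^2 on a constant and e_{t-1}^2, ..., e_{t-m}^2
Z = np.column_stack([np.ones(T - m)] + [e2[m - 1 - i:T - 1 - i] for i in range(m)])
coef, *_ = np.linalg.lstsq(Z, e2[m:], rcond=None)
eta_hat, delta_hat = coef[0], coef[1:]

# T*R^2 from this auxiliary regression is Engle's LM statistic for ARCH(m)
u = e2[m:] - Z @ coef
R2 = 1 - (u @ u) / np.sum((e2[m:] - e2[m:].mean()) ** 2)
lm_stat = (T - m) * R2  # compare with a chi-squared(m) critical value
```

(η̂, δ̂) from step (ii) then feed step (iii)'s transformed regression; in practice one also inspects the ACF/PACF of e2 before fixing m.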


ISU ECON 674 - lecture 17
