STOR 556: Time Series Data Analysis                                                      Spring 2022
Lecture 6
Lecture date: January 27                                                      Scribe: Younghoon Kim

Autoregressive models:

Last time, we derived candidate causal (and non-causal) solutions to the AR(1) equation. We now show that these solutions are stationary. Consider more generally
$$X_t = \sum_{j=-\infty}^{\infty} a_j Z_{t-j}, \qquad \{Z_t\} \sim \mathrm{WN}(0, \sigma_Z^2). \tag{1}$$
Obviously, $E X_t = \sum_j a_j E Z_{t-j} = 0$. For the autocovariance function (ACVF),
$$\mathrm{Cov}(X_t, X_{t+h}) = E(X_t X_{t+h}) = \sum_{j_1=-\infty}^{\infty} \sum_{j_2=-\infty}^{\infty} a_{j_1} a_{j_2}\, E(Z_{t-j_1} Z_{t+h-j_2}) = \sigma_Z^2 \sum_{j=-\infty}^{\infty} a_j a_{j+h} =: \gamma_X(h), \tag{2}$$
since
$$E(Z_{t-j_1} Z_{t+h-j_2}) = \begin{cases} 0, & \text{if } t - j_1 \neq t + h - j_2, \\ \sigma_Z^2, & \text{if } t - j_1 = t + h - j_2 \;\Leftrightarrow\; j_2 = j_1 + h. \end{cases}$$
Note that (2) does not depend on $t$. The model (1) is called a linear time series model.

Now, let's go back to the causal solution of the AR(1) equation when $|\phi| < 1$: $X_t = \sum_{j=0}^{\infty} \phi^j Z_{t-j}$. Here $a_j = 0$ for $j < 0$. So, for $h \geq 0$,
$$\gamma_X(h) = \sigma_Z^2 \sum_{j=0}^{\infty} \underbrace{\phi^j}_{=a_j}\, \underbrace{\phi^{j+h}}_{=a_{j+h}} = \sigma_Z^2 \phi^h \sum_{j=0}^{\infty} \phi^{2j} = \frac{\sigma_Z^2 \phi^h}{1 - \phi^2}, \tag{3}$$
since $\frac{1}{1-a} = 1 + a + a^2 + \dots$ if $|a| < 1$. Also, $\rho_X(h) = \frac{\gamma_X(h)}{\gamma_X(0)} = \phi^h$. For the non-causal solution when $|\phi| > 1$, rewrite the solution as $X_t = -\sum_{k=1}^{\infty} \phi^{-k} Z_{t+k} = -\sum_{j=-\infty}^{-1} \phi^{j} Z_{t-j}$, so that $a_j = -\phi^j$ for $j \leq -1$ and $a_j = 0$ otherwise. This leads to, for $h \geq 0$,
$$\gamma_X(h) = \sigma_Z^2 \sum_{j=-\infty}^{-1-h} \phi^{j}\,\phi^{j+h} = \frac{\sigma_Z^2\, \phi^{-h}\phi^{-2}}{1 - \phi^{-2}} = (\sigma_Z^2 \phi^{-2}) \cdot \frac{(\phi^{-1})^{|h|}}{1 - (\phi^{-1})^2}. \tag{4}$$
Note that by replacing $\phi$ by $\phi^{-1}$ and $\sigma_Z^2$ by $\sigma_Z^2 \phi^{-2}$, one has the equivalence of (3) and (4). For this reason, we focus on the $|\phi| < 1$ case for identification purposes.

Recall that a polynomial of order $n$ has $n$ roots in general, but they may be complex-valued. For a complex number $z = z_1 + i z_2$, one can visualize it on a plane as in the left panel of Figure 1. The circle of radius one centered at the origin is called the unit circle.

[Figure 1: Illustration of the representation on a complex plane.]

Now, think about the linear polynomial $\phi(z) = 1 - \phi z$ associated with the AR(1) model. Its root is $1/\phi$. So, for $|\phi| < 1$ the root lies outside the unit circle, while for $|\phi| > 1$ the root lies inside the unit circle. This is illustrated in the right panel of Figure 1.

Here is another (simpler) derivation of the autocovariance function of the AR(1) model. Consider
$$X_t = \phi X_{t-1} + Z_t,$$
multiply both sides by $X_{t-1}$ and take expectations. This gives us
$$\gamma_X(1) = \phi \gamma_X(0), \tag{5}$$
since $X_{t-1} = Z_{t-1} + \phi Z_{t-2} + \dots$ is uncorrelated with $Z_t$. Likewise, multiplying both sides by $X_{t-2}$ yields
$$\gamma_X(2) = \phi \gamma_X(1) = \phi^2 \gamma_X(0).$$
In general, $\gamma_X(h) = \phi^h \gamma_X(0)$. Finally, multiplying both sides by $X_t$ leads to
$$\gamma_X(0) = \phi \gamma_X(1) + \sigma_Z^2. \tag{6}$$
Combining (5) and (6) gives us $\gamma_X(0) = \frac{\sigma_Z^2}{1 - \phi^2}$. Finally, we get $\gamma_X(h) = \frac{\sigma_Z^2 \phi^h}{1 - \phi^2}$.

We introduce another perspective on the solution of the AR(1) model, using the backshift operator $B$ (so that $B X_t = X_{t-1}$). Consider the case $|\phi| < 1$:
$$X_t = \phi X_{t-1} + Z_t \;\Leftrightarrow\; X_t - \phi X_{t-1} = Z_t \;\Leftrightarrow\; (I - \phi B) X_t = Z_t$$
$$\Leftrightarrow\; X_t = \frac{1}{I - \phi B}\, Z_t = (I + \phi B + \phi^2 B^2 + \dots) Z_t = Z_t + \phi Z_{t-1} + \phi^2 Z_{t-2} + \dots$$
For $|\phi| > 1$,
$$(I - \phi B) X_t = Z_t \;\Leftrightarrow\; X_t = \frac{1}{I - \phi B}\, Z_t = \frac{-\phi^{-1} B^{-1}}{I - \phi^{-1} B^{-1}}\, Z_t = -\phi^{-1} B^{-1} (I + \phi^{-1} B^{-1} + \phi^{-2} B^{-2} + \dots) Z_t$$
$$= -\phi^{-1} Z_{t+1} - \phi^{-2} Z_{t+2} - \phi^{-3} Z_{t+3} - \dots$$
One can use this argument for more general models. E.g., consider AR(2),
$$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + Z_t.$$
Then one has
$$\underbrace{(I - \phi_1 B - \phi_2 B^2)}_{\phi(B)}\, X_t = Z_t, \tag{7}$$
where $\phi(z) = 1 - \phi_1 z - \phi_2 z^2$. If we denote the roots of this polynomial by $r_1, r_2$, then $\phi(z) = (1 - \frac{1}{r_1} z)(1 - \frac{1}{r_2} z)$. This gives us $\phi(B) = (I - \frac{1}{r_1} B)(I - \frac{1}{r_2} B)$, so that (7) becomes
$$X_t = \frac{1}{(I - \frac{1}{r_1} B)(I - \frac{1}{r_2} B)}\, Z_t. \tag{8}$$
If the roots of $\phi(z)$ are outside the unit circle, one can show that (8) defines a causal stationary solution to the AR(2) equation.
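As a quick numerical sanity check (not part of the original lecture; the parameter values $\phi = 0.7$, $\phi_1 = 0.5$, $\phi_2 = -0.4$, the noise variance, and the sample size are illustrative assumptions), the following Python sketch simulates an AR(1) path, compares sample autocovariances with formula (3), and then checks causality of an AR(2) model by locating the roots of $\phi(z)$ relative to the unit circle.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- AR(1): simulate X_t = phi*X_{t-1} + Z_t and compare sample ACVF with formula (3) ---
phi, sigma_z, T = 0.7, 1.0, 50_000   # illustrative choices
z = rng.normal(0.0, sigma_z, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + z[t]

def sample_acvf(series, h):
    """Sample autocovariance at lag h (normalized by the series length)."""
    centered = series - series.mean()
    return np.dot(centered[: len(centered) - h], centered[h:]) / len(centered)

for h in range(4):
    theory = sigma_z**2 * phi**h / (1 - phi**2)  # formula (3)
    print(f"h={h}: sample {sample_acvf(x, h):6.3f}   theory {theory:6.3f}")

# --- AR(2): causality check via the roots of phi(z) = 1 - phi1*z - phi2*z^2 ---
phi1, phi2 = 0.5, -0.4
roots = np.roots([-phi2, -phi1, 1.0])  # coefficients ordered from highest degree to constant
print("roots of phi(z):", roots, "| causal (all |root| > 1):", bool(np.all(np.abs(roots) > 1)))
```

With these particular AR(2) coefficients the two roots are complex-valued and lie outside the unit circle, which ties in with the cyclical behavior discussed next.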
AR models can also be viewed from the perspective of differential equations. Consider the AR(1) model $x_t = \phi x_{t-1} + z_t$ first. If the time steps are small, one can think of $x_t - x_{t-1}$ as a derivative of $x_t$ with respect to $t$. That is, subtracting $x_{t-1}$ from both sides, we get the analogue of a first-order differential equation:
$$x_t - x_{t-1} = (\phi - 1) x_{t-1} + z_t \;\Rightarrow\; \dot{x}_t = (\phi - 1) x_t \ (+ z_t).$$
The solution of a first-order differential equation behaves like an (increasing or decreasing) exponential function. However, the second-order differential equation obtained from an AR(2) model,
$$\ddot{x}_t = a_1 \dot{x}_t + a_2 x_t \ (+ z_t),$$
can behave differently from the solution of the first-order equation. In particular, if the two roots are complex-valued and lie outside the unit circle, the solution exhibits a harmonic oscillation. As we can see through the lecture material, AR(2) can exhibit not only both positive and negative correlations but also cyclical behavior.

Autoregressive moving average models:

Now we look at the ARMA(p, q) model, where $q$ refers to the order of the moving average part. For AR(p), we take $q = 0$. The ARMA(p, q) equation is
$$X_t - \phi_1 X_{t-1} - \dots - \phi_p X_{t-p} = Z_t + \theta_1 Z_{t-1} + \theta_2 Z_{t-2} + \dots + \theta_q Z_{t-q}.$$
But why don't we focus on the moving average model alone? This relates to the homework problem. Let's consider the MA(1) model with $|\theta| < 1$:
$$X_t = Z_t + \theta Z_{t-1} = (I + \theta B) Z_t \tag{9}$$
$$\Leftrightarrow\; \frac{1}{I + \theta B}\, X_t = Z_t \;\Leftrightarrow\; X_t - \theta X_{t-1} + \theta^2 X_{t-2} - \underbrace{\theta^3 X_{t-3}}_{(*)} + \dots = Z_t.$$
In practice, if we truncate the infinite number of terms to an AR(3) as in $(*)$, the difference from the MA(1) will be small. Thus, an autoregressive model is often sufficient, but note that the MA model may be more parsimonious; the simpler model is better. The autocorrelation function of (9) can be shown to be
$$\rho_X(h) = \begin{cases} 1, & h = 0, \\ \dfrac{\theta}{1 + \theta^2}, & h = 1, \\ 0, & h = 2, 3, \dots, \end{cases} \tag{10}$$
which looks simple. However, while estimation of $\phi$ in the AR(1) model $X_t = \phi X_{t-1} + Z_t$ is straightforward, estimation of $\theta$ in the MA(1) model $X_t = Z_t + \theta Z_{t-1}$ is more delicate. This is because $z_t$ is latent. For example, suppose we have observations $x_1, x_2, \dots, x_T$ and we want to estimate $\theta$. Then
$$x_1 = z_1 + \theta z_0 \;\Rightarrow\; z_1 = x_1 \quad (\text{take } z_0 = 0),$$
$$x_2 = z_2 + \theta z_1 \;\Rightarrow\; z_2 = x_2 - \theta x_1,$$
$$x_3 = z_3 + \theta z_2 \;\Rightarrow\; z_3 = x_3 - \theta (x_2 - \theta x_1),$$
$$\vdots$$
$$x_T = z_T + \theta z_{T-1} \;\Rightarrow\; z_T = x_T - \theta (\dots),$$
so that each $z_t(\theta)$ is a polynomial in $\theta$ of degree $t - 1$, and solving $\min_\theta \sum_{t=1}^{T} z_t(\theta)^2$ means minimizing a polynomial of degree $2(T-1)$ in $\theta$, which is numerically difficult. In contrast, for the AR(1) model, $\sum_{t=2}^{T} (x_t - \phi x_{t-1})^2$ is a quadratic polynomial in $\phi$.

Note that a simple estimator of $\theta$ for the MA(1) model is the one solving
$$\frac{\theta}{1 + \theta^2} = \hat{\rho}(1),$$
where $\hat{\rho}(1)$ is the sample ACF at lag 1 (see (10)). It turns out that this estimator of $\theta$ is worse, in terms of estimation quality, than the one minimizing $\sum_{t=1}^{T} z_t(\theta)^2$ above.
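To make the contrast concrete, here is a minimal Python sketch (not from the notes; the true $\theta = 0.6$, the sample size, and the grid-search minimizer are illustrative assumptions). It recovers the residuals $z_t(\theta)$ by the recursion above, minimizes $\sum_t z_t(\theta)^2$ over a grid of $\theta \in (-1, 1)$, and also computes the moment estimator solving $\theta/(1+\theta^2) = \hat{\rho}(1)$, keeping the root with $|\theta| < 1$.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Simulate an MA(1) path X_t = Z_t + theta*Z_{t-1} (theta_true is an illustrative choice) ---
theta_true, sigma_z, T = 0.6, 1.0, 2000
z = rng.normal(0.0, sigma_z, T + 1)
x = z[1:] + theta_true * z[:-1]

def residuals(x, theta):
    """Recover z_t(theta) recursively via z_t = x_t - theta*z_{t-1}, taking z_0 = 0."""
    eps = np.zeros(len(x))
    eps[0] = x[0]
    for t in range(1, len(x)):
        eps[t] = x[t] - theta * eps[t - 1]
    return eps

# Conditional least squares: minimize sum_t z_t(theta)^2 over a grid of theta in (-1, 1).
grid = np.linspace(-0.99, 0.99, 199)
sse = [np.sum(residuals(x, th) ** 2) for th in grid]
theta_ls = grid[int(np.argmin(sse))]

# Moment estimator: solve theta/(1 + theta^2) = rho_hat(1), keeping the root with |theta| < 1.
xc = x - x.mean()
rho1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)  # sample ACF at lag 1
disc = 1.0 - 4.0 * rho1**2
theta_mom = (1.0 - np.sqrt(disc)) / (2.0 * rho1) if disc >= 0 and rho1 != 0 else np.nan

print(f"true theta {theta_true:.2f} | least squares {theta_ls:.3f} | moment {theta_mom:.3f}")
```

The grid search stands in for the numerical optimization of the degree-$2(T-1)$ polynomial objective; the guard on the discriminant reflects that the moment equation has no real solution when $|\hat{\rho}(1)| > 1/2$.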