Maximum Entropy Bootstrap Inference for Time Series: Class Notes
by H. D. Vinod, May 2, 2005

Contents:
A Description of the ME density and the Ergodic Theorem
Ordered Quantiles x_{j,(t),me} from the ME density
Recovery of the original time order in the x_{j,t} quantiles
Discussion of assumptions behind the ME boot for time series inference

We have seen from the extended Nelson-Plosser data that unit root testing does not give conclusive results. The testing compares I(1) with I(0), but completely ignores I(d) or long-memory models with fractional d, which are quite realistic for econometrics. It is nearly impossible to make every variable in a regression be of the same order of integration, I(0), I(1), etc., as the unit root theorists suggest. An important motivation behind unit root models is the desire to avoid spurious regression: if we regress one I(1) variable on another I(1) variable, the usual t tests are unreliable. But we do not reliably know that the variables are indeed I(1), and we do not have to use the over-optimistic t tests. If only we have an alternative inference method, we can avoid most of the unit root methods.

The trick described below is to use the power of modern computers (the bootstrap) instead of the usual t tests for time series inference. I have written Gauss and R programs to implement the ideas described here.

Time Series Inference from the 1930s: Note that I(0) means stationary. Why bother with stationarity? Wiener, Kolmogorov and Khintchine (WKK), and others, developed statistical inference techniques for time series in the 1930s, before the era of modern computers. They relied heavily on the stationarity assumption, which permits a hypothetical construction of an 'ensemble' (containing possibly infinitely many time series) as the 'population' for formal inference. Stationarity permits the lag operator-based invariance (the joint density is unchanged when all time subscripts are shifted by L^k) of the underlying joint density f(x_1, x_2, ..., x_T) = Π_{t=1}^{T} f_{t|past}, a product of all conditional densities. For the natural sciences it makes sense to think of stationarity: heat and waves can genuinely be assumed to revert to some kind of stationary behavior.

Vinod (2003-2004, or "V04") argues that economic data are evolutionary, not stationary. He proposes the "maximum entropy density based dependent data bootstrap" (hereafter "ME boot") to construct a large number of evolving time series indexed by j = 1, 2, ..., J (= 999, say), approximating the contents of the ensemble. The evolution modeled by Vinod is not entirely arbitrary, but relative to the economists' idea of "ordinal utility" based on the order in the data. He replaces the L^k invariance by the sorting map Ord x_t = x_(t), which finds the usual order statistics x_(t), and its inverse map Orev x_(t) = x_t. As in the usual "independent and identically distributed density bootstrap" (iid boot), we use the 999 estimates of time series parameters for statistical inference.

For many economic data sets the WKK stationary model is highly unrealistic, for the following reasons. (i) Resource endowments do matter. (ii) Preliminary testing for stationarity, as needed by the WKK theory, is not definitive with only a few data points. (iii) Observed series have distinct orders of integration d, as in I(d), with possibly zero, nonzero and fractional d. (iv) Some series (like GDP) are assumed to be I(1), implying that past errors have infinite memory. Such memory is unrealistic when one notes that definitions (e.g., of GDP) change. (v) The L^k of the WKK theory often assumes series that are reversible in time, whereas our data are obviously evolving over time, and we cannot undo legal, institutional and technological changes. Hence our motivation for using the ME boot for inference is obvious.

Given any observed time series {x_t, t = 1, ..., T}, the iid boot uses the 'empirical density function' (edf) to shuffle the data J times with replacement, so that each shuffle chooses its elements x_{j,t} from the observed values. Consider a simple example with T = 5 in Table 1, where column 1 has time and column 2 has the data: x_1 = 4, x_2 = 12, x_3 = 36, x_4 = 20, x_5 = 8, representing a firm's profits (in millions of dollars) over five time periods. We view the familiar order statistics x_(t) in col. 4 as arising from sorting the T × 2 matrix comprising the first two columns of Table 1 on its second column. The sorted T × 2 matrix is placed in columns 3 and 4. The construction of the vector of T numbers (denoted by Irev) in col. 3 is critically important for our ME boot, because we use Irev to recover the evolving time series even after the bootstrap shuffling.

Table 1

Col. 1  Col. 2    Col. 3  Col. 4             Col. 5        Col. 6    Col. 7         Col. 8
time    original  order   sort cols 1 & 2    intermediate  interval  quantile       sort cols 7 & 3
        data      = Irev  on col. 2          points        mean      j = 1          on col. 3
t       x_t       (t)     x_(t)              z_t           m_t       x_{j,(t),me}   x_{j,t}
1       4         1       4 = xmin = x_(1)   6             5         -8             -8
2       12        5       8                  10            8         -1             7
3       36        2       12                 16            13        7              36
4       20        4       20                 28            22        17             17
5       8         3       36 = xmax = x_(T)                32        36             -1
sum     80                80                               80        51             51

The iid shuffling from the edf permits only the original x_t values to appear (with repetitions/exclusions) in each and every one of the J shuffles. Profits of $3.2 million (< xmin) or $41 million (> xmax) are assumed to be impossible under the iid boot. Since the edf is not flexible enough for creating the J time series of the ensemble, the ME boot uses the ME density, which is "maximally non-committal with regard to the missing information," Jaynes (1957). [We commit to normality when we assume normally distributed errors.]
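The notes refer to the author's Gauss and R programs; the following minimal R sketch is only an illustration of the Table 1 mechanics (variable names such as Irev, xq and xjt follow the table's notation, not any published code). It reproduces Cols. 3, 4 and 8, showing how Irev restores the time order of a sorted set of bootstrap quantiles:

```r
# Illustrative sketch of the sort and time-order recovery in Table 1.
x <- c(4, 12, 36, 20, 8)     # observed profits, T = 5 (Col. 2)

Irev <- order(x)             # Col. 3: 1 5 2 4 3, time index of each order statistic
xord <- x[Irev]              # Col. 4: 4 8 12 20 36, the order statistics x_(t)

# Col. 7: one draw of sorted ME-density quantiles (values copied from Table 1)
xq <- c(-8, -1, 7, 17, 36)

# Col. 8: recover the evolving time series by putting the t-th sorted
# quantile back at its original time position ("sort cols 7 & 3 on col 3")
xjt <- numeric(length(x))
xjt[Irev] <- xq
xjt                          # -8  7 36 17 -1, matching Col. 8
```

Because xjt inherits the rank pattern of the original data (its smallest value sits where x had its smallest, and so on), each bootstrap replicate evolves over time the way the observed series does.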
A Description of the ME density and the Ergodic Theorem

A brief description of the 'ME density', based on Theil and Laitinen (1980), is given here using the T = 5 example, without loss of generality. The entropy is defined as the mathematical expectation of the Shannon information, E(-log f(x)). The ME density is the unique f(x) which maximizes this entropy, or our ignorance, subject to certain constraints; finding it is called a characterization problem in Kagan et al. (1973). The solution states that if the "support" of f(x) is finite and known to be [a, b], then f(x) is uniform, denoted by U[a, b], whereas when either a or b is infinite, f(x) is exponential, denoted here by expo[a, b]. Define consecutive averages as intermediate points z_t = 0.5(x_(t) + x_(t+1)), t = 1, ..., T-1, given in col. 5 of Table 1. For example, z_1 = 6 = 0.5(x_(1) + x_(2)). The ME density derived in V04 is constructed by dividing the assumed range into intervals at these intermediate points.
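A short R sketch of these quantities may help (again illustrative; the 0.75/0.25 weights for the interval means are inferred here only because they reproduce Col. 6 of Table 1, and are an assumption about the V04 construction rather than a statement of it):

```r
# Illustrative sketch: intermediate points z_t and interval means m_t
# of Cols. 5-6 in Table 1.
x    <- c(4, 12, 36, 20, 8)
xord <- sort(x)                       # order statistics x_(t): 4 8 12 20 36
T    <- length(xord)

# Col. 5: z_t = 0.5 * (x_(t) + x_(t+1)), t = 1, ..., T-1
z <- 0.5 * (xord[-T] + xord[-1])      # 6 10 16 28

# Col. 6: interval means m_t; the weights below reproduce Table 1 and
# preserve the overall mean, sum(m) == sum(x) == 80 (see the column sums)
m <- c(0.75 * xord[1] + 0.25 * xord[2],
       0.25 * xord[1:(T - 2)] + 0.50 * xord[2:(T - 1)] + 0.25 * xord[3:T],
       0.25 * xord[T - 1] + 0.75 * xord[T])
m                                     # 5 8 13 22 32, matching Col. 6
```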

