Statistical analysis of a spatio-temporal model with location dependent parameters and a test for spatial stationarity

Suhasini Subba Rao
Department of Statistics, Texas A&M University, College Station, TX 77843-3143, USA
Email: [email protected]

29 September, 2007

Abstract

In this paper we define a spatio-temporal model with location dependent parameters to describe temporal variation and spatial nonstationarity. We consider the prediction of observations at unknown locations using known neighbouring observations. Further, we propose a local least squares based method to estimate the parameters at unobserved locations. The sampling properties of these estimators are investigated. We also develop a statistical test for spatial stationarity. In order to derive the asymptotic results we show that the spatially nonstationary process can be locally approximated by a spatially stationary process. We illustrate the methods of estimation with some simulations.

Key words: Autoregressive process, ground ozone data, kriging, local least squares, local stationarity, polynomial interpolation, spatio-temporal models, testing for spatial stationarity.

1 Introduction

The modelling of spatial data has been an active area of research because of its vast potential in applications to ecology, the environmental sciences and finance, amongst others. If at a given time we have observations over various locations (equally or unequally spaced), we can find a suitable spatial model or covariance function to describe the dependence over space (cf. Whittle (1954), Mardia and Marshall (1984), Cressie (1993), Johns et al. (2003), Hallin et al. (2004), Guan et al. (2004) and Lu et al. (2004)). In the situation where time is not fixed, we have observations over both space and time, and there are various ways to model this type of data. For instance, it is often assumed that the observations are Gaussian; therefore, to model the dependence, a covariance is often fitted.
Generally it is supposed that the process is spatially stationary and the covariance has a particular structure (usually isotropic or anisotropic). In this case likelihood methods are often used to estimate the parameters (cf. Cressie and Huang (1999), Shitan and Brockwell (1995), Matsuda and Yajima (2004), Zhang (2004) and Jun and Stein (2007)). However, many factors could cause the process to be spatially nonstationary; therefore it would be of interest to develop methods of estimation and theory for such processes.

We observe that if we fix a location, the observations at that location can be considered as a time series. This inspires us to define a spatio-temporal process in terms of its time dynamics. Let us suppose that for every fixed location the resulting time series has an AR representation, where the innovations are samples from a spatial process. By assuming the innovations are observations on a spatial process, dependence between two observations in space can be modelled. More precisely, we define the location dependent autoregressive (AR) process {X_t(u) : u ∈ [0,1]^2}_t, where X_t(u) satisfies the representation

    X_t(u) = Σ_{j=1}^{p} a_j(u) X_{t-j}(u) + σ(u) ξ_t(u),    t = 1, ..., T,    (1)

with u = (x, y) ∈ [0,1]^2, and where {a_j(·); j = 1, ..., p} and σ(·) are nonparametric functions. We suppose the innovations {ξ_t(u) : u ∈ [0,1]^2} are independent over time and are spatially stationary processes, with E[ξ_t(u)] = 0 and var[ξ_t(u)] = 1. We observe that if the {a_j(·)} are not constant over space, then {X_t(u)} is a spatially nonstationary process. We mention that the location dependent AR process is used to fit ozone and house price data in Gilleland and Nychka (2005) and Gelfand et al. (2003), respectively. An integrated spatially stationary AR process is considered in Storvik et al. (2002). We note that the results in this paper do not rely on any distributional assumptions on ξ_t(u).

In Section 2 we consider the prediction of observations at unknown locations, using known neighbouring observations.
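As an illustration, model (1) is straightforward to simulate. The sketch below is a hypothetical configuration, not the paper's simulation setup: it takes the AR(1) special case, with illustrative choices of the coefficient surface a_1(u), the surface σ(u) and an exponential innovation covariance, none of which are specified at this point in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: m locations on [0,1]^2, AR(1) special case of model (1).
m, T, burn = 50, 500, 100
locs = rng.uniform(0.0, 1.0, size=(m, 2))        # u_s = (x_s, y_s)

def a1(u):
    # Illustrative location-dependent AR coefficient; any smooth |a_1(u)| < 1 works.
    return 0.3 + 0.4 * u[:, 0]                   # varies with the x-coordinate

def sigma(u):
    return 0.5 + 0.5 * u[:, 1]                   # illustrative sigma(u) > 0

# Spatially stationary innovations: Gaussian with exponential covariance
# c_xi(u) = exp(-||u||/0.2), independent over time, mean 0 and variance 1.
dists = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=2)
C = np.exp(-dists / 0.2)
L = np.linalg.cholesky(C + 1e-10 * np.eye(m))    # jitter for numerical stability

a, s = a1(locs), sigma(locs)
X = np.zeros((T + burn, m))
for t in range(1, T + burn):
    xi = L @ rng.standard_normal(m)              # xi_t(u_s), spatially correlated
    X[t] = a * X[t - 1] + s * xi
X = X[burn:]                                     # drop burn-in; T x m panel
```

Since a_1(·) here is non-constant over space, the simulated panel is spatially nonstationary in the sense discussed above.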
The predictor requires an estimate of a_j(·) at the unobserved location. In Section 2 we propose two methods for estimating the AR functions {a_j(·)}. Both methods are based on a localised least squares criterion. The first estimator is a localised least squares estimator with constant regressors, whereas the second estimator is a local linear least squares estimator. In Section 3 we consider the sampling properties of both estimators. We consider the two cases where (i) the number of locations is kept fixed and time T → ∞, and (ii) both the number of locations and T → ∞. In the case that the number of locations is fixed, we show that both estimators are asymptotically normal but biased (in probability). However, if the parameters are sufficiently smooth, the linear interpolating least squares estimator yields a smaller bias than the constant interpolating least squares estimator. In the case that the number of locations (m) also grows, the estimators are asymptotically consistent.

In Section 4 we develop a test for spatial stationarity, which is based on testing for homogeneity. We evaluate the limiting distribution of the test statistic under the null and alternative hypotheses of spatial stationarity and nonstationarity. We note that the 'roughness' of the parameters {a_j(·)} determines the power of the test.

To illustrate the methods and the test for spatial stationarity, in Section 5 we consider some simulations. If a_j(·) is smooth, we show that the local linear estimator is better than the local least squares estimator. However, when the parameter a_j(·) is relatively rough (its first derivative does not exist everywhere), the two estimation methods are comparable. An outline of the proofs can be found in the Appendix.
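The general idea behind the first (constant-regressor) estimator can be sketched as follows: give each observed location a kernel weight according to its distance from the target location u0, and fit one set of AR coefficients to the pooled, weighted lagged regressions. The kernel choice, weighting scheme and bandwidth below are illustrative assumptions, not the paper's precise criterion (which is developed in Section 2).

```python
import numpy as np

def local_ls_ar(X, locs, u0, p=1, bandwidth=0.3):
    """Sketch of a kernel-weighted (local constant) least squares estimate of
    (a_1(u0), ..., a_p(u0)).

    X is a (T x m) array of observations, locs is (m x 2), u0 the target
    location. Each location s receives weight K(||u_s - u0|| / b) and
    contributes its lagged-regression normal equations to a pooled fit.
    """
    T, m = X.shape
    d = np.linalg.norm(locs - u0, axis=1) / bandwidth
    w = np.where(d <= 1.0, 0.75 * (1.0 - d**2), 0.0)  # Epanechnikov kernel
    A = np.zeros((p, p))
    b = np.zeros(p)
    for s in range(m):
        if w[s] == 0.0:
            continue
        # Design matrix of lagged values X_{t-1}, ..., X_{t-p} at location s.
        Z = np.column_stack([X[p - j:T - j, s] for j in range(1, p + 1)])
        y = X[p:, s]
        A += w[s] * (Z.T @ Z)
        b += w[s] * (Z.T @ y)
    return np.linalg.solve(A, b)
```

The local linear variant would additionally include the regressors (x_s - x_0) and (y_s - y_0) in the weighted fit, which is what reduces the bias when the parameter surfaces are smooth.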
The full details and some additional results can be found in the accompanying technical report Subba Rao (2007).

2 Estimation of location parameters at an unobserved location

2.1 The model and assumptions

Throughout this paper we let u = (x, y), and for s = 0, ..., m we suppose u_s = (x_s, y_s). We note that (x, y) can denote the spatial coordinates, but it is straightforward to generalise the results in this paper to higher dimensions. Let {ξ_t(u) : u ∈ [0,1]^2} be a spatially stationary process and c_ξ(u) = cov{ξ_t(0), ξ_t(u)}. Let ‖·‖_∞ denote the sup-norm of a vector, ‖·‖_2 denote the Euclidean norm and ‖·‖_1 the ℓ_1-norm. Suppose A is a p × p matrix; then ‖A‖_spec denotes the spectral norm of A, A_{ij} denotes the (i, j)th element of A, A_{·,j} the jth column of A, and ...
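These norms are all standard, and it can help to fix ideas with a small numerical check; the sketch below uses numpy and is not specific to this paper.

```python
import numpy as np

v = np.array([3.0, -4.0, 1.0])
assert np.linalg.norm(v, np.inf) == 4.0        # sup-norm: max_i |v_i|
assert np.linalg.norm(v, 2) == np.sqrt(26.0)   # Euclidean norm: sqrt(sum v_i^2)
assert np.linalg.norm(v, 1) == 8.0             # l1-norm: sum |v_i|

A = np.array([[2.0, 0.0], [0.0, 3.0]])
# Spectral norm: the largest singular value of A (here max(2, 3) = 3).
assert np.isclose(np.linalg.norm(A, 2), 3.0)
```

Note that for a matrix argument `np.linalg.norm(A, 2)` returns the spectral norm (largest singular value), not an elementwise norm.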