UIUC GE 423 - An Introduction to the Kalman Filter

Contents
1  The Discrete Kalman Filter
2  The Extended Kalman Filter (EKF)
3  A Kalman Filter in Action: Estimating a Random Constant

An Introduction to the Kalman Filter

Greg Welch [1] and Gary Bishop [2]
TR 95-041
Department of Computer Science
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3175
Updated: Monday, April 5, 2004

Abstract

In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process, in a way that minimizes the mean of the squared error. The filter is very powerful in several aspects: it supports estimation of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown.

The purpose of this paper is to provide a practical introduction to the discrete Kalman filter. This introduction includes a description and some discussion of the basic discrete Kalman filter, a derivation, description and some discussion of the extended Kalman filter, and a relatively simple (tangible) example with real numbers and results.

1. [email protected], http://www.cs.unc.edu/~welch
2. [email protected], http://www.cs.unc.edu/~gb

1  The Discrete Kalman Filter

In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem [Kalman60]. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation.
A very “friendly” introduction to the general idea of the Kalman filter can be found in Chapter 1 of [Maybeck79], while a more complete introductory discussion can be found in [Sorenson70], which also contains some interesting historical narrative. More extensive references include [Gelb74; Grewal93; Maybeck79; Lewis86; Brown92; Jacobs93].

The Process to be Estimated

The Kalman filter addresses the general problem of trying to estimate the state x ∈ ℝ^n of a discrete-time controlled process that is governed by the linear stochastic difference equation

    x_k = A x_{k-1} + B u_{k-1} + w_{k-1},    (1.1)

with a measurement z ∈ ℝ^m that is

    z_k = H x_k + v_k.    (1.2)

The random variables w_k and v_k represent the process and measurement noise (respectively). They are assumed to be independent (of each other), white, and with normal probability distributions

    p(w) ~ N(0, Q),    (1.3)
    p(v) ~ N(0, R).    (1.4)

In practice, the process noise covariance Q and measurement noise covariance R matrices might change with each time step or measurement; however, here we assume they are constant.

The n×n matrix A in the difference equation (1.1) relates the state at the previous time step k−1 to the state at the current step k, in the absence of either a driving function or process noise. Note that in practice A might change with each time step, but here we assume it is constant. The n×l matrix B relates the optional control input u ∈ ℝ^l to the state x. The m×n matrix H in the measurement equation (1.2) relates the state to the measurement z_k. In practice H might change with each time step or measurement, but here we assume it is constant.

The Computational Origins of the Filter

We define x̂_k⁻ ∈ ℝ^n (note the “super minus”) to be our a priori state estimate at step k given knowledge of the process prior to step k, and x̂_k ∈ ℝ^n to be our a posteriori state estimate at step k given measurement z_k.
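The process model above can be exercised directly by simulation. The following sketch (not from the paper) assumes NumPy and an illustrative 1-D constant-velocity system; the particular A, B, H, Q, and R values are assumptions chosen only to make the equations concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D constant-velocity model (values are assumptions, not from the paper):
# state x = [position, velocity], control u = acceleration, measurement z = position.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (n x n)
B = np.array([[0.5 * dt**2], [dt]])     # control input model (n x l)
H = np.array([[1.0, 0.0]])              # measurement model (m x n)
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[1e-2]])                  # measurement noise covariance

x = np.zeros((2, 1))                    # initial state
u = np.array([[1.0]])                   # constant acceleration command

states, measurements = [], []
for _ in range(50):
    w = rng.multivariate_normal(np.zeros(2), Q).reshape(2, 1)
    x = A @ x + B @ u + w               # equation (1.1)
    v = rng.multivariate_normal(np.zeros(1), R).reshape(1, 1)
    z = H @ x + v                       # equation (1.2)
    states.append(x)
    measurements.append(z)
```

Each pass through the loop draws w_{k-1} ~ N(0, Q) and v_k ~ N(0, R), matching the independence and normality assumptions of (1.3) and (1.4).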
We can then define a priori and a posteriori estimate errors as

    e_k⁻ ≡ x_k − x̂_k⁻,  and  e_k ≡ x_k − x̂_k.

The a priori estimate error covariance is then

    P_k⁻ = E[e_k⁻ e_k⁻ᵀ],    (1.5)

and the a posteriori estimate error covariance is

    P_k = E[e_k e_kᵀ].    (1.6)

In deriving the equations for the Kalman filter, we begin with the goal of finding an equation that computes an a posteriori state estimate x̂_k as a linear combination of an a priori estimate x̂_k⁻ and a weighted difference between an actual measurement z_k and a measurement prediction H x̂_k⁻, as shown below in (1.7). Some justification for (1.7) is given in “The Probabilistic Origins of the Filter” found below.

    x̂_k = x̂_k⁻ + K (z_k − H x̂_k⁻)    (1.7)

The difference (z_k − H x̂_k⁻) in (1.7) is called the measurement innovation, or the residual. The residual reflects the discrepancy between the predicted measurement H x̂_k⁻ and the actual measurement z_k. A residual of zero means that the two are in complete agreement.

The n×m matrix K in (1.7) is chosen to be the gain or blending factor that minimizes the a posteriori error covariance (1.6). This minimization can be accomplished by first substituting (1.7) into the above definition for e_k, substituting that into (1.6), performing the indicated expectations, taking the derivative of the trace of the result with respect to K, setting that result equal to zero, and then solving for K. For more details see [Maybeck79; Brown92; Jacobs93]. One form of the resulting K that minimizes (1.6) is given by [1]

    K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹
        = P_k⁻ Hᵀ / (H P_k⁻ Hᵀ + R).    (1.8)

Looking at (1.8) we see that as the measurement error covariance R approaches zero, the gain K weights the residual more heavily. Specifically,

    lim_{R_k → 0} K_k = H⁻¹.

On the other hand, as the a priori estimate error covariance P_k⁻ approaches zero, the gain K weights the residual less heavily. Specifically,

    lim_{P_k⁻ → 0} K_k = 0.

Another way of thinking about the weighting by K is that as the measurement error covariance R approaches zero, the actual measurement z_k is “trusted” more and more, while the predicted measurement H x̂_k⁻ is trusted less and less. On the other hand, as the a priori estimate error covariance P_k⁻ approaches zero, the actual measurement z_k is trusted less and less, while the predicted measurement H x̂_k⁻ is trusted more and more.

[1] All of the Kalman filter equations can be algebraically manipulated into several forms. Equation (1.8) represents the Kalman gain in one popular form.
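The two limiting behaviors of the gain can be checked numerically. This is a minimal sketch, assuming NumPy and a scalar system (H, R, P⁻, and the measurement values below are illustrative assumptions, not from the paper):

```python
import numpy as np

# Illustrative scalar values (assumptions, not from the paper).
H = np.array([[1.0]])          # measurement model
z = np.array([[1.2]])          # actual measurement z_k
x_prior = np.array([[1.0]])    # a priori estimate x̂_k⁻

def kalman_gain(P_prior, H, R):
    # Equation (1.8): K = P⁻ Hᵀ (H P⁻ Hᵀ + R)⁻¹
    S = H @ P_prior @ H.T + R
    return P_prior @ H.T @ np.linalg.inv(S)

def update(x_prior, P_prior, z, H, R):
    # Equation (1.7): x̂_k = x̂_k⁻ + K (z_k − H x̂_k⁻)
    K = kalman_gain(P_prior, H, R)
    return x_prior + K @ (z - H @ x_prior), K

# As R → 0 the gain approaches H⁻¹: the measurement is trusted completely.
x_meas, K_small_R = update(x_prior, np.array([[1.0]]), z, H, np.array([[1e-12]]))

# As P⁻ → 0 the gain approaches 0: the prediction is trusted completely.
x_pred, K_small_P = update(x_prior, np.array([[1e-12]]), z, H, np.array([[1.0]]))
```

With a tiny R the gain is essentially H⁻¹ = 1 and the updated estimate lands on the measurement; with a tiny P⁻ the gain is essentially 0 and the updated estimate stays at the prior, mirroring the two limits above.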

