Berkeley COMPSCI C280 - Lecture Notes

C280 Computer Vision
Prof. Trevor Darrell
Lecture 19: Tracking

Tracking scenarios
• Follow a point
• Follow a template
• Follow a changing template
• Follow all the elements of a moving person, and fit a model to it.

Things to consider in tracking
• What are the dynamics of the thing being tracked?
• How is it observed?

Kalman filter graphical model and corresponding factorized joint probability
The hidden states x1, x2, x3 form a Markov chain, and each observation yi depends only on the corresponding state xi:

    P(x1, x2, x3, y1, y2, y3) = P(x1) P(y1|x1) P(x2|x1) P(y2|x2) P(x3|x2) P(y3|x3)

Tracking as induction
• Make a measurement starting in the 0th frame.
• Then: assume you have an estimate at the ith frame, after the measurement step.
• Show that you can do prediction for the (i+1)th frame, and measurement for the (i+1)th frame.

The Kalman filter
• Key ideas:
  – Linear models interact uniquely well with Gaussian noise: make the prior Gaussian, then everything else is Gaussian and the calculations are easy.
  – Gaussians are really easy to represent: once you know the mean and covariance, you're done.

(Ignore data association for now.)
[figure from http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html]

The Kalman filter in 1D
• Dynamic model:

    x_i ~ N(d_i x_{i-1}, σ_{d_i}²)
    y_i ~ N(m_i x_i, σ_{m_i}²)

• Notation: x̄_i⁻ denotes the predicted mean (before the measurement), x̄_i⁺ the corrected mean (after the measurement).

Prediction for the 1D Kalman filter
• The new state is obtained by
  – multiplying the old state by a known constant,
  – adding zero-mean noise.
• Therefore, the predicted mean for the new state is
  – the constant times the mean for the old state.
• The old state is a normal random variable, so
  – its variance is multiplied by the square of the constant,
  – and the variance of the noise is added:

    x̄_i⁻ = d_i x̄_{i-1}⁺
    (σ_i⁻)² = σ_{d_i}² + (d_i σ_{i-1}⁺)²

Measurement update for the 1D Kalman filter

    x̄_i⁺ = (x̄_i⁻ σ_{m_i}² + m_i y_i (σ_i⁻)²) / (σ_{m_i}² + m_i² (σ_i⁻)²)
    (σ_i⁺)² = σ_{m_i}² (σ_i⁻)² / (σ_{m_i}² + m_i² (σ_i⁻)²)

Notice:
– if the measurement noise is small, we rely mainly on the measurement;
– if it is large, mainly on the prediction;
– (σ_i⁺)² does not depend on y.

Kalman filter for computing an on-line average
• What Kalman filter parameters and initial conditions should we pick so that the optimal estimate for x at each iteration is just the average of all the observations seen so far?

Kalman filter model: d_i = 1, σ_{d_i} = 0, m_i = 1, σ_{m_i} = 1. Initial conditions: x̄_0⁻ = 0 with a totally uncertain prior (σ_0⁻ → ∞).

    Iteration    0      1              2
    x̄_i⁺        y₀     (y₀+y₁)/2      (y₀+y₁+y₂)/3
    (σ_i⁺)²      1      1/2            1/3

What happens if the x dynamics are given a non-zero variance? Take d_i = 1, σ_{d_i} = 1, m_i = 1, σ_{m_i} = 1, same initial conditions:

    Iteration    0      1              2
    x̄_i⁺        y₀     (2y₁+y₀)/3     (5y₂+2y₁+y₀)/8
    (σ_i⁺)²      1      2/3            5/8

The filter now weights recent measurements more heavily than old ones.

Linear dynamic models
• A linear dynamic model has the form

    x_i ~ N(D_{i-1} x_{i-1}; Σ_{d_i})
    y_i ~ N(M_i x_i; Σ_{m_i})

• This is much, much more general than it looks, and extremely powerful.

Examples of linear state space models
• Drifting points: assume that the new position of the point is the old one, plus noise. D = Identity.
• Constant velocity: we have

    u_i = u_{i-1} + Δt v_{i-1} + ε_i
    v_i = v_{i-1} + ζ_i

  (the Greek letters denote noise terms). Stack (u, v) into a single state vector:

    (u_i, v_i)ᵀ = [1  Δt; 0  1] (u_{i-1}, v_{i-1})ᵀ + noise

  which is the form we had above.
• Constant acceleration: we have

    u_i = u_{i-1} + Δt v_{i-1} + ε_i
    v_i = v_{i-1} + Δt a_{i-1} + ζ_i
    a_i = a_{i-1} + ξ_i

  (the Greek letters denote noise terms). Stack (u, v, a) into a single state vector:

    (u_i, v_i, a_i)ᵀ = [1  Δt  0; 0  1  Δt; 0  0  1] (u_{i-1}, v_{i-1}, a_{i-1})ᵀ + noise

  which is the form we had above.
• Periodic motion: assume we have a point moving on a line with a periodic movement defined by a differential equation, with state defined as stacked position and velocity u = (p, v). Take a discrete approximation (e.g., forward Euler integration with Δt stepsize) to obtain a linear model of the same form.

n-D
Generalization to n-D is straightforward but more complex.

n-D prediction
• Multiply the estimate at the prior time with the forward model:

    x̄_i⁻ = D_i x̄_{i-1}⁺

• Propagate the covariance through the model and add new noise:

    Σ_i⁻ = D_i Σ_{i-1}⁺ D_iᵀ + Σ_{d_i}

n-D correction
• Update the a priori estimate with the measurement to form the a posteriori estimate. Find the linear filter on the innovations which minimizes the a posteriori error covariance E[(x − x̄)ᵀ(x − x̄)]:

    x̄_i⁺ = x̄_i⁻ + K_i (y_i − M_i x̄_i⁻)
    Σ_i⁺ = (I − K_i M_i) Σ_i⁻

  K is the Kalman gain matrix. A solution is

    K_i = Σ_i⁻ M_iᵀ (M_i Σ_i⁻ M_iᵀ + Σ_{m_i})⁻¹

Kalman gain matrix
• As the measurement becomes more reliable, K weights the residual more heavily:

    lim_{Σ_m → 0} K_i = M⁻¹

• As the prior covariance approaches 0, the measurements are ignored:

    lim_{Σ_i⁻ → 0} K_i = 0

Constant velocity model
This is figure 17.3 of Forsyth and Ponce. The notation is a bit involved, but is logical. We plot the true state as open circles, measurements as x's, predicted means as *'s with three-standard-deviation bars, and corrected means as +'s with three-standard-deviation bars.

