Lecture 10: Observers and Kalman Filters
CS 344R: Robotics
Benjamin Kuipers

Stochastic Models of an Uncertain World
• Actions are uncertain.
• Observations are uncertain.
• The $\varepsilon_i \sim N(0, \sigma_i)$ are random variables.
$$\dot{x} = F(x,u), \quad y = G(x) \qquad\Rightarrow\qquad \dot{x} = F(x,u,\varepsilon_1), \quad y = G(x,\varepsilon_2)$$

Observers
• The state $x$ is unobservable.
• The sense vector $y$ provides noisy information about $x$:
$$\dot{x} = F(x,u,\varepsilon_1), \qquad y = G(x,\varepsilon_2)$$
• An observer is a process that uses the sensory history to estimate $x$: $\hat{x} = \mathrm{Obs}(y)$.
• Then a control law can be written $u = H_i(\hat{x})$.

Kalman Filter: Optimal Observer
[Figure: block diagram with input $u$, noise $\varepsilon_1$, state $x$, noise $\varepsilon_2$, observation $y$, and estimate $\hat{x}$.]

Estimates and Uncertainty
• The estimate is described by a conditional probability density function.

Gaussian (Normal) Distribution
• Completely described by $N(\mu, \sigma)$:
– mean $\mu$
– standard deviation $\sigma$, variance $\sigma^2$
$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / 2\sigma^2}$$

The Central Limit Theorem
• The sum of many random variables
– with the same mean, but
– with arbitrary conditional density functions,
converges to a Gaussian density function.
• If a model omits many small unmodeled effects, then the resulting error should converge to a Gaussian density function.

Estimating a Value
• Suppose there is a constant value $x$.
– Distance to a wall; angle to a wall; etc.
• At time $t_1$, observe value $z_1$ with variance $\sigma_1^2$.
• The optimal estimate is $\hat{x}(t_1) = z_1$, with variance $\sigma_1^2$.

A Second Observation
• At time $t_2$, observe value $z_2$ with variance $\sigma_2^2$.

Merged Evidence

Update Mean and Variance
• Weighted average of estimates (a worked numerical example appears at the end of this deck):
$$\hat{x}(t_2) = A z_1 + B z_2, \qquad A + B = 1$$
• The weights come from the variances: smaller variance means more certainty.
$$\hat{x}(t_2) = \left[\frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\right] z_1 + \left[\frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\right] z_2$$
$$\frac{1}{\sigma^2(t_2)} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$$

From Weighted Average to Predictor-Corrector
• Weighted average:
$$\hat{x}(t_2) = A z_1 + B z_2 = (1-K)\,z_1 + K z_2$$
• Predictor-corrector:
$$\hat{x}(t_2) = z_1 + K(z_2 - z_1) = \hat{x}(t_1) + K\bigl(z_2 - \hat{x}(t_1)\bigr)$$
• This version can be applied "recursively".

Predictor-Corrector
• Update the best estimate given new data:
$$\hat{x}(t_2) = \hat{x}(t_1) + K(t_2)\bigl(z_2 - \hat{x}(t_1)\bigr), \qquad K(t_2) = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}$$
• Update the variance:
$$\sigma^2(t_2) = \sigma^2(t_1) - K(t_2)\,\sigma^2(t_1) = \bigl(1 - K(t_2)\bigr)\,\sigma^2(t_1)$$

Static to Dynamic
• Now suppose $x$ changes according to
$$\dot{x} = F(x,u,\varepsilon) = u + \varepsilon, \qquad \varepsilon \sim N(0, \sigma_\varepsilon)$$

Dynamic Prediction
• At $t_2$ we know $\hat{x}(t_2)$ and $\sigma^2(t_2)$.
• At $t_3^-$, after the change but before an observation:
$$\hat{x}(t_3^-) = \hat{x}(t_2) + u\,[t_3 - t_2]$$
$$\sigma^2(t_3^-) = \sigma^2(t_2) + \sigma_\varepsilon^2\,[t_3 - t_2]$$
• Next, we correct this prediction with the observation at time $t_3$.

Dynamic Correction
• At time $t_3$ we observe $z_3$ with variance $\sigma_3^2$.
• Combine the prediction with the observation:
$$\hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\bigl(z_3 - \hat{x}(t_3^-)\bigr)$$
$$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}$$
$$\sigma^2(t_3) = \bigl(1 - K(t_3)\bigr)\,\sigma^2(t_3^-)$$

Qualitative Properties
$$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}, \qquad \hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\bigl(z_3 - \hat{x}(t_3^-)\bigr)$$
• Suppose measurement noise $\sigma_3^2$ is large.
– Then $K(t_3)$ approaches 0, and the measurement will be mostly ignored.
• Suppose prediction noise $\sigma^2(t_3^-)$ is large.
– Then $K(t_3)$ approaches 1, and the measurement will dominate the estimate.

Kalman Filter
• Takes a stream of observations and a dynamical model.
• At each step, computes a weighted average between
– the prediction from the dynamical model, and
– the correction from the observation.
• The Kalman gain $K(t)$ is the weighting, based on the variances $\sigma^2(t)$ and $\sigma_\varepsilon^2$ (see the code sketch at the end of this deck).
• With time, $K(t)$ and $\sigma^2(t)$ tend to stabilize.

Simplifications
• We have only discussed a one-dimensional system.
– Most applications are higher-dimensional.
• We have assumed the state variable is directly observable.
– In general, sense data give indirect evidence:
$$\dot{x} = F(x,u,\varepsilon_1) = u + \varepsilon_1, \qquad z = G(x,\varepsilon_2)$$
• We will discuss the more complex case next.
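
To make the merge step in "Update Mean and Variance" concrete, here is a small numerical check; the readings are hypothetical (not from the original slides): two range measurements of the same wall, $z_1 = 10$ with $\sigma_1^2 = 4$ and $z_2 = 12$ with $\sigma_2^2 = 1$.

```latex
\begin{align*}
% Hypothetical readings: z_1 = 10 (variance 4), z_2 = 12 (variance 1).
\hat{x}(t_2) &= \frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,z_1
              + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,z_2
             = \tfrac{1}{5}(10) + \tfrac{4}{5}(12) = 11.6 \\
K &= \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2} = 0.8,
\qquad \hat{x}(t_2) = z_1 + K\,(z_2 - z_1) = 10 + 0.8\,(2) = 11.6 \\
\frac{1}{\sigma^2(t_2)} &= \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}
 = \frac{1}{4} + \frac{1}{1} = \frac{5}{4}
\quad\Rightarrow\quad
\sigma^2(t_2) = 0.8 = (1-K)\,\sigma_1^2
\end{align*}
```

Both routes give the same estimate, and the merged variance $0.8$ is smaller than either individual variance, as the deck's "smaller variance means more certainty" point predicts.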
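
The following is a minimal runnable sketch of the one-dimensional predict/correct cycle from the Dynamic Prediction and Dynamic Correction slides. The class name `Kalman1D` and all numbers in the demo are hypothetical illustrations, not code from the course; it assumes the scalar model $\dot{x} = u + \varepsilon$ with additive Gaussian measurement noise.

```python
import random

class Kalman1D:
    """One-dimensional Kalman filter for the model
    x_dot = u + eps, eps ~ N(0, sigma_eps^2), with noisy direct
    observations z of x (a sketch of the lecture's scalar case)."""

    def __init__(self, x0, var0, process_var):
        self.x = x0                      # current estimate x_hat
        self.var = var0                  # current variance sigma^2(t)
        self.process_var = process_var   # sigma_eps^2

    def predict(self, u, dt):
        # Dynamic prediction: x_hat(t-) = x_hat + u*[t3 - t2];
        # the variance grows with the elapsed time.
        self.x += u * dt
        self.var += self.process_var * dt
        return self.x, self.var

    def correct(self, z, meas_var):
        # Dynamic correction: blend the prediction with observation z.
        K = self.var / (self.var + meas_var)   # Kalman gain
        self.x += K * (z - self.x)             # predictor-corrector update
        self.var = (1.0 - K) * self.var        # variance shrinks
        return self.x, self.var

# Hypothetical demo: a robot commanded at u = 1 m/s, with noisy
# motion and noisy range readings to a wall.
if __name__ == "__main__":
    random.seed(0)
    true_x, u, dt = 0.0, 1.0, 1.0
    kf = Kalman1D(x0=0.0, var0=1.0, process_var=0.01)
    for step in range(5):
        true_x += u * dt + random.gauss(0.0, 0.1)   # noisy motion
        z = true_x + random.gauss(0.0, 0.5)          # noisy observation
        kf.predict(u, dt)
        x_hat, var = kf.correct(z, meas_var=0.25)
        print(f"t={step + 1}: z={z:.2f}  x_hat={x_hat:.2f}  var={var:.3f}")
```

Running the demo shows the qualitative properties from the deck: the variance settles toward a steady value, and each correction pulls the estimate only partway toward the measurement, by the factor $K(t)$.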