CMU CS 10708: Kalman Filters, Gaussian MNs (22 pages)
Readings: K&F 6.1, 6.2, 6.3, 14.1, 14.2, 14.3, 14.4

Kalman Filters, Gaussian MNs
Graphical Models 10-708
Carlos Guestrin, Carnegie Mellon University, December 1st, 2008 (slide 1)

Slide 2: Multivariate Gaussian
- Mean vector mu, covariance matrix Sigma: p(x) = N(mu; Sigma).

Slide 3: Conditioning a Gaussian
- Joint Gaussian: p(X, Y) = N(mu; Sigma).
- Conditional linear Gaussian: p(Y | X) = N(mu_{Y|X}; sigma^2_{Y|X}).

Slide 4: Gaussian is a linear model
- Conditional linear Gaussian: p(Y | X) = N(beta_0 + beta X; sigma^2).

Slide 5: Conditioning a Gaussian
- Joint Gaussian: p(X, Y) = N(mu; Sigma).
- Conditional linear Gaussian: p(Y | X) = N(mu_{Y|X}; Sigma_{YY|X}), with
  mu_{Y|X} = mu_Y + Sigma_{YX} Sigma_{XX}^{-1} (x - mu_X) and
  Sigma_{YY|X} = Sigma_{YY} - Sigma_{YX} Sigma_{XX}^{-1} Sigma_{XY}.

Slide 6: Conditional linear Gaussian (CLG), general case
- p(Y | X) = N(beta_0 + B X; Sigma_{YY|X}).

Slide 7: Understanding a linear Gaussian (the 2d case)
- Variance increases over time: motion noise adds up.
- The object doesn't necessarily move in a straight line.

Slide 8: Tracking with a Gaussian (1)
- p(X_0) = N(mu_0; Sigma_0).
- p(X_{i+1} | X_i) = N(B x_i + beta; Sigma_{X_{i+1}|X_i}).

Slide 9: Tracking with Gaussians (2): making observations
- We have p(X_i); the detector observes O_i = o_i.
- Want to compute p(X_i | O_i = o_i): use Bayes rule.
- Requires a CLG observation model: p(O_i | X_i) = N(W x_i + v; Sigma_{O_i|X_i}).

Slide 10: Operations in Kalman filter
- Chain X_1, ..., X_5 with observations O_1, ..., O_5; compute the belief over X_t.
- Start with p(X_1); at each time step t: condition on the observation; prediction (multiply in the transition model); roll up (marginalize out the previous time step).
- I'll describe one implementation of the KF; there are others (e.g., the information filter).

Slide 11: Exponential family representation of Gaussian: canonical form
- N(mu; Sigma) in canonical form uses the precision matrix Lambda = Sigma^{-1} and the potential vector eta = Sigma^{-1} mu.

Slide 12: Canonical form
- The standard and canonical forms are related.
- Conditioning is easy in canonical form; marginalization is easy in standard form.

Slide 13: Conditioning in canonical form
- First multiply; then condition on the observed value y.

Slide 14: Operations in Kalman filter (recap)
- Start with p(X_1); at each time step t: condition on the observation; prediction (multiply in the transition model); roll up (marginalize out the previous time step).

Slide 15: Prediction (roll-up) in canonical form
- First multiply; then marginalize out X_t.

Slide 16: What if observations are not CLG?
- Often observations are not CLG.
- Consider a motion detector: O_i = 1 if the person is likely to be in the region.
- The posterior is not Gaussian.
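The linear-Gaussian operations above (condition on observation, prediction, roll-up) can be sketched in the scalar case. This is a minimal 1-D illustration, not the slides' canonical-form implementation; the parameter names b, beta, q, w, v, r mirror the slides' B, beta, Sigma_x, W, v, Sigma_o.

```python
# Minimal 1-D Kalman filter sketch of the three operations on the slides:
# prediction (multiply transition model), roll-up (marginalize the previous
# time step), and conditioning on an observation.

def kf_predict(mu, var, b, beta, q):
    """Belief over X_{i+1} after multiplying p(X_{i+1}|X_i) = N(b*x + beta; q)
    and marginalizing out X_i (prediction + roll-up in one step)."""
    return b * mu + beta, b * b * var + q

def kf_condition(mu, var, w, v, r, o):
    """Condition on observation o under p(O|X) = N(w*x + v; r), via Bayes rule
    (written here with the usual Kalman gain)."""
    s = w * w * var + r          # innovation variance
    k = var * w / s              # Kalman gain
    return mu + k * (o - (w * mu + v)), (1 - k * w) * var
```

Note that prediction always grows the variance (motion noise adds up, as slide 7 says), while conditioning always shrinks it.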
Slide 17: Linearization: incorporating nonlinear evidence
- p(O_i | X_i) is not CLG, but we can find a Gaussian approximation of p(X_i, O_i) proportional to p(X_i) p(O_i | X_i).
- Instantiate the evidence O_i = o_i and obtain a Gaussian for p(X_i | O_i = o_i).
- Why do we hope this would be any good? Locally, a Gaussian may be OK.

Slide 18: Linearization as integration
- Gaussian approximation of p(X_i, O_i) proportional to p(X_i) p(O_i | X_i).
- Need to compute the moments E[O_i], E[O_i^2], E[O_i X_i].
- Note: each integral is the product of a Gaussian with an arbitrary function.

Slide 19: Linearization as numerical integration
- Product of a Gaussian with an arbitrary function: effective numerical integration with the Gaussian quadrature method.
- Approximate the integral as a weighted sum over integration points; Gaussian quadrature defines the locations of the points and their weights.
- Exact if the arbitrary function is a polynomial of bounded degree.
- The number of integration points is exponential in the number of dimensions d; exactness on monomials requires exponentially fewer points.
- For 2d+1 points, this method is equivalent to the effective unscented Kalman filter; it generalizes to many more points.

Slide 20: Operations in non-linear Kalman filter
- Start with p(X_1); at each time step t: condition on the observation (use numerical integration); prediction: multiply in the transition model (use numerical integration); roll up: marginalize out the previous time step.

Slide 21: Canonical form & Markov nets

Slide 22: What you need to know about Gaussians, Kalman filters, Gaussian MNs
- Kalman filter.
- Non-linear Kalman filter: usually the observation or motion model is not CLG; use numerical integration to find a Gaussian approximation.
- Gaussian Markov nets: probably the most-used BN; assumes Gaussian distributions; equivalent to a linear system; simple matrix operations for computations; sparsity in the precision matrix is equivalent to graph structure.
- Continuous and discrete hybrid models: much harder, but doable and interesting (see book).
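The "linearization as numerical integration" idea can be illustrated in one dimension. The slides only name Gaussian quadrature in general; this sketch uses the standard 3-point Gauss-Hermite rule (exact for polynomials up to degree 5) to approximate moments E[g(X)] of a nonlinear observation function under a Gaussian belief.

```python
import math

# 3-point Gauss-Hermite rule for integrals against exp(-t^2):
# nodes 0 and +/- sqrt(3/2), weights 2*sqrt(pi)/3 and sqrt(pi)/6.
_GH3 = [(-math.sqrt(1.5), math.sqrt(math.pi) / 6),
        (0.0,             2 * math.sqrt(math.pi) / 3),
        (math.sqrt(1.5),  math.sqrt(math.pi) / 6)]

def gauss_hermite_moment(g, mu, sigma):
    """Approximate E[g(X)] for X ~ N(mu, sigma^2).

    The change of variables x = mu + sqrt(2)*sigma*t maps the Gaussian
    expectation onto the Gauss-Hermite weight exp(-t^2), so the integral
    becomes a weighted sum over the quadrature nodes."""
    return sum(w * g(mu + math.sqrt(2) * sigma * t)
               for t, w in _GH3) / math.sqrt(math.pi)
```

For example, with g(x) = x^2 and X ~ N(1, 4), the rule returns exactly mu^2 + sigma^2 = 5, since x^2 is a low-degree polynomial.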
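The claim that sparsity in the precision matrix is equivalent to graph structure can be checked on a small example. This sketch (my own illustration, not from the slides) builds the chain X1 - X2 - X3 from linear Gaussians: X1 ~ N(0,1), X2 = X1 + N(0,1), X3 = X2 + N(0,1). The covariance matrix is dense, but its inverse is tridiagonal: the (1,3) entry is zero because X1 and X3 are non-adjacent in the Markov net (independent given X2).

```python
# Precision-matrix sparsity mirrors Gaussian Markov net structure.

def invert(m):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))  # partial pivot
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

# Covariance of the chain X1 - X2 - X3: every pair is correlated,
# so the covariance matrix has no zeros.
cov = [[1.0, 1.0, 1.0],
       [1.0, 2.0, 2.0],
       [1.0, 2.0, 3.0]]
precision = invert(cov)  # tridiagonal: [[2,-1,0],[-1,2,-1],[0,-1,1]]
```

The zero at precision[0][2] encodes exactly the missing edge X1 - X3, which is the "simple matrix operations" view of structure in a Gaussian MN.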