# Berkeley CS 188 - HMMs

CS 188 Fall 2019 Section Handout 10 Solutions

## HMMs

State variables $W_t$ and observation (evidence) variables $O_t$; the observation variables are shaded in the original figure to indicate that they are observed. Transition model $P(W_{t+1} \mid W_t)$. Sensor model $P(O_t \mid W_t)$. The joint distribution of the HMM factorizes as

$$P(W_1, \ldots, W_T, O_1, \ldots, O_T) = P(W_1) \prod_{t=1}^{T-1} P(W_{t+1} \mid W_t) \prod_{t=1}^{T} P(O_t \mid W_t) \qquad (1)$$

Define the following belief distributions:

- $B(W_t) = P(W_t \mid O_1, \ldots, O_t)$: belief about state $W_t$ given all the observations up until (and including) timestep $t$.
- $B'(W_t) = P(W_t \mid O_1, \ldots, O_{t-1})$: belief about state $W_t$ given all the observations up until (but not including) timestep $t$.

## Forward Algorithm

- Prediction update: $B'(W_{t+1}) = \sum_{w_t} P(W_{t+1} \mid w_t)\, B(w_t)$
- Observation update: $B(W_{t+1}) \propto P(O_{t+1} \mid W_{t+1})\, B'(W_{t+1})$

## Particle Filtering

The Hidden Markov Model analog to Bayes' net sampling is called particle filtering. It involves simulating the motion of a set of particles through a state graph to approximate the probability (belief) distribution of the random variable in question.

Instead of storing a full probability table mapping each state to its belief probability, we store a list of $n$ particles, where each particle is in one of the $d$ possible states in the domain of our time-dependent random variable.

Once we've sampled an initial list of particles, the simulation takes a form similar to the forward algorithm, with a time elapse update followed by an observation update at each timestep:

- Prediction update: update the value of each particle according to the transition model. For a particle in state $w_t$, sample the updated value from the probability distribution given by $P(W_{t+1} \mid w_t)$.
  Note the similarity of the prediction update to prior sampling with Bayes' nets, since the frequency of particles in any given state reflects the transition probabilities.
- Observation update: during the observation update for particle filtering, we use the sensor model $P(O_t \mid W_t)$ to weight each particle according to the probability dictated by the observed evidence and the particle's state. Specifically, for a particle in state $w_t$ with sensor reading $o_t$, assign a weight of $P(o_t \mid w_t)$. The algorithm for the observation update is as follows:
  1. Calculate the weights of all particles as described above.
  2. Calculate the total weight for each state.
  3. If the sum of all weights across all states is 0, reinitialize all particles.
  4. Otherwise, normalize the distribution of total weights over states and resample your list of particles from this distribution.

Note the similarity of the observation update to likelihood weighting, where we again downweight samples based on our evidence.

## 1 HMMs

Consider the following Hidden Markov Model.
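The two forward-algorithm updates defined earlier can be sketched in Python. This is a minimal sketch, not code from the handout: beliefs are plain dictionaries mapping states to probabilities, `transition` maps `(w, w_next)` pairs to $P(w_{\text{next}} \mid w)$, and `sensor` maps `(w, o)` pairs to $P(o \mid w)$; the function names are my own.

```python
def predict(belief, transition):
    """Prediction update: B'(W_{t+1}) = sum over w of P(W_{t+1} | w) * B(w)."""
    return {
        w_next: sum(transition[(w, w_next)] * belief[w] for w in belief)
        for w_next in belief
    }

def observe(belief_prime, sensor, obs):
    """Observation update: B(W_{t+1}) proportional to P(O_{t+1} | W_{t+1}) * B'(W_{t+1})."""
    unnormalized = {w: sensor[(w, obs)] * belief_prime[w] for w in belief_prime}
    total = sum(unnormalized.values())
    return {w: p / total for w, p in unnormalized.items()}
```

Starting from the prior $P(W_1)$ and alternating `observe` and `predict` reproduces the forward algorithm's recursion exactly.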
In the figure (not reproduced here), $O_1$ and $O_2$ are shaded to indicate that they are observed.

| $W_1$ | $P(W_1)$ |
|-------|----------|
| 0     | 0.3      |
| 1     | 0.7      |

| $W_t$ | $W_{t+1}$ | $P(W_{t+1} \mid W_t)$ |
|-------|-----------|-----------------------|
| 0     | 0         | 0.4                   |
| 0     | 1         | 0.6                   |
| 1     | 0         | 0.8                   |
| 1     | 1         | 0.2                   |

| $W_t$ | $O_t$ | $P(O_t \mid W_t)$ |
|-------|-------|-------------------|
| 0     | a     | 0.9               |
| 0     | b     | 0.1               |
| 1     | a     | 0.5               |
| 1     | b     | 0.5               |

Suppose that we observe $O_1 = a$ and $O_2 = b$. Using the forward algorithm, compute the probability distribution $P(W_2 \mid O_1 = a, O_2 = b)$ one step at a time.

**(a)** Compute $P(W_1, O_1 = a)$.

$$P(W_1, O_1 = a) = P(W_1)\, P(O_1 = a \mid W_1)$$
$$P(W_1 = 0, O_1 = a) = (0.3)(0.9) = 0.27$$
$$P(W_1 = 1, O_1 = a) = (0.7)(0.5) = 0.35$$

**(b)** Using the previous calculation, compute $P(W_2, O_1 = a)$.

$$P(W_2, O_1 = a) = \sum_{w_1} P(w_1, O_1 = a)\, P(W_2 \mid w_1)$$
$$P(W_2 = 0, O_1 = a) = (0.27)(0.4) + (0.35)(0.8) = 0.388$$
$$P(W_2 = 1, O_1 = a) = (0.27)(0.6) + (0.35)(0.2) = 0.232$$

**(c)** Using the previous calculation, compute $P(W_2, O_1 = a, O_2 = b)$.

$$P(W_2, O_1 = a, O_2 = b) = P(W_2, O_1 = a)\, P(O_2 = b \mid W_2)$$
$$P(W_2 = 0, O_1 = a, O_2 = b) = (0.388)(0.1) = 0.0388$$
$$P(W_2 = 1, O_1 = a, O_2 = b) = (0.232)(0.5) = 0.116$$

**(d)** Finally, compute $P(W_2 \mid O_1 = a, O_2 = b)$.

Renormalizing the distribution above, we have

$$P(W_2 = 0 \mid O_1 = a, O_2 = b) = 0.0388 / (0.0388 + 0.116) \approx 0.25$$
$$P(W_2 = 1 \mid O_1 = a, O_2 = b) = 0.116 / (0.0388 + 0.116) \approx 0.75$$

## 2 Particle Filtering

Let's use particle filtering to estimate the distribution $P(W_2 \mid O_1 = a, O_2 = b)$. The HMM is the same as in Question 1, with $O_1$ and $O_2$ observed; the tables above for $P(W_1)$, $P(W_{t+1} \mid W_t)$, and $P(O_t \mid W_t)$ still apply.

We start with two particles representing our distribution for $W_1$:

- $P_1$: $W_1 = 0$
- $P_2$: $W_1 = 1$

Use the following random numbers to run particle filtering:

[0.22, 0.05, 0.33, 0.20, 0.84, 0.54, 0.79, 0.66, 0.14, 0.96]

**(a)** Observe: compute the weight of the two particles after evidence $O_1 = a$.

$$w(P_1) = P(O_t = a \mid W_t = 0) = 0.9$$
$$w(P_2) = P(O_t = a \mid W_t = 1) = 0.5$$

**(b)** Resample: using the random numbers, resample $P_1$ and $P_2$ based on the weights. We now sample from the weighted distribution we found above.
Using the first two random numbers, we find:

$$P_1 = \text{sample}(\text{weights}, 0.22) = 0$$
$$P_2 = \text{sample}(\text{weights}, 0.05) = 0$$

(The normalized weights put probability $0.9/1.4 \approx 0.64$ on state 0, so any random number below 0.64 selects state 0.)

**(c)** Predict: sample $P_1$ and $P_2$ by applying the time update.

$$P_1 = \text{sample}(P(W_{t+1} \mid W_t = 0), 0.33) = 0$$
$$P_2 = \text{sample}(P(W_{t+1} \mid W_t = 0), 0.20) = 0$$

(Since $P(W_{t+1} = 0 \mid W_t = 0) = 0.4$, random numbers below 0.4 map to state 0.)

**(d)** Update: compute the weight of the two particles after evidence $O_2 = b$.

$$w(P_1) = P(O_t = b \mid W_t = 0) = 0.1$$
$$w(P_2) = P(O_t = b \mid W_t = 0) = 0.1$$

**(e)** Resample: using the random numbers, resample $P_1$ and $P_2$ based on the weights.

Because both of our particles are in state $W_2 = 0$, resampling still leaves us with two particles in state 0.

$$P_1 = 0$$
$$P_2 = 0$$

**(f)** What is our estimated distribution for $P(W_2 \mid O_1 = a, O_2 = b)$?

$$P(W_2 = 0 \mid O_1 = a, O_2 = b) = 2/2 = 1$$
$$P(W_2 = 1 \mid O_1 = a, O_2 = b) = 0/2 = 0$$
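The particle-filtering run above can be replayed in code. This is a sketch, not the handout's code: the `sample_from` helper, which maps each supplied random number to an outcome through the distribution's cumulative probabilities, is my own naming, and the probability tables are restated from the problem.

```python
def sample_from(dist, r):
    """Map a random number r in [0, 1) to an outcome via the CDF of dist."""
    cumulative = 0.0
    for outcome, p in dist.items():
        cumulative += p
        if r < cumulative:
            return outcome
    return outcome  # guard against floating-point round-off near r = 1

transition = {0: {0: 0.4, 1: 0.6}, 1: {0: 0.8, 1: 0.2}}    # P(W_{t+1} | W_t)
sensor = {0: {'a': 0.9, 'b': 0.1}, 1: {'a': 0.5, 'b': 0.5}}  # P(O_t | W_t)
randoms = iter([0.22, 0.05, 0.33, 0.20, 0.84, 0.54, 0.79, 0.66, 0.14, 0.96])

particles = [0, 1]          # P1: W1 = 0, P2: W1 = 1
observations = ['a', 'b']   # O1 = a, O2 = b

for i, obs in enumerate(observations):
    # Observation update: weight each particle by P(obs | particle's state),
    # total the weights per state, normalize, and resample each particle.
    weights = [sensor[s][obs] for s in particles]
    total = sum(weights)
    dist = {s: sum(w for p, w in zip(particles, weights) if p == s) / total
            for s in sorted(set(particles))}
    particles = [sample_from(dist, next(randoms)) for _ in particles]
    # Prediction update (skipped after the final observation): push each
    # particle through the transition model.
    if i < len(observations) - 1:
        particles = [sample_from(transition[s], next(randoms)) for s in particles]

estimate = {s: particles.count(s) / len(particles) for s in (0, 1)}
print(estimate)  # {0: 1.0, 1: 0.0}, matching the handout's answer
```

The run consumes six of the ten supplied random numbers, two per resampling or prediction step, in the same order as the worked solution above.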
