Berkeley COMPSCI 188 - Lecture 18: HMMs: Intro and Filtering

CS 188: Artificial Intelligence, Fall 2011
Lecture 18: HMMs: Intro and Filtering
11/2/2011
Dan Klein, UC Berkeley (presented by Woody Hoburg)

Announcements
- Midterm back today: solutions online, grades also in glookup
- P4 out Thursday

Reasoning over Time
- Often we want to reason about a sequence of observations: robot localization, medical monitoring, speech recognition, vehicle control
- We need to introduce time into our models
- Basic approach: hidden Markov models (HMMs)
[VIDEO]

Outline
- Markov models (last lecture)
- Hidden Markov models (HMMs): representation and inference
- Forward algorithm (a special case of variable elimination)
- Particle filtering (next lecture)

Markov Models: Recap
- A Markov model is a chain-structured Bayes' net: X1 -> X2 -> X3 -> X4 -> ...
- Each node is identically distributed (stationarity)
- The value of X at a given time is called the state
- Parameters: the transition probabilities (or dynamics) P(X_t | X_{t-1}) specify how the state evolves over time (plus the initial probabilities P(X_1))

Conditional Independence
- Basic conditional independence: the past and the future are independent given the present
- Each time step depends only on the previous one
- This is called the (first-order) Markov property
- Note that the chain is just a (growing) Bayes' net; we can always use generic BN reasoning on it if we truncate the chain at a fixed length

Example: Markov Chain (Weather)
- States: X = {rain, sun}
- Transitions: P(sun | sun) = 0.9, P(rain | sun) = 0.1, P(rain | rain) = 0.9, P(sun | rain) = 0.1
- Initial distribution: 1.0 sun
- What is the probability distribution after one step?
- (The state diagram and the transition table are two new representations of a CPT, not Bayes' nets!)

Mini-Forward Algorithm
- Question: what is P(X) on some day t?
- This is an instance of variable elimination, in the order X1, X2, ...
- Forward simulation: P(x_t) = Σ_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1})  (see the code sketch after the example below)

Example
- From an initial observation of sun: P(X1), P(X2), P(X3), ..., P(X∞)
- From an initial observation of rain: P(X1), P(X2), P(X3), ..., P(X∞)
- In both cases the distribution converges to the same stationary distribution P(X∞)
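To make the mini-forward update concrete, here is a minimal sketch (not from the slides) that forward-simulates the two-state weather chain above; the dictionary representation and the function name mini_forward are illustrative choices.

```python
# Minimal sketch of the mini-forward algorithm on the weather chain above.
# The dictionaries and the function name are illustrative, not from the lecture.

# P(X_t | X_{t-1}): self-transition 0.9, switch 0.1, as in the weather example.
TRANSITIONS = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"rain": 0.9, "sun": 0.1},
}

def mini_forward(prior, steps):
    """Push a distribution over states through the chain for `steps` time steps:
    P(x_t) = sum over x_{t-1} of P(x_t | x_{t-1}) * P(x_{t-1})."""
    belief = dict(prior)
    for _ in range(steps):
        belief = {
            x_t: sum(TRANSITIONS[x_prev][x_t] * p_prev
                     for x_prev, p_prev in belief.items())
            for x_t in TRANSITIONS
        }
    return belief

if __name__ == "__main__":
    # Starting from 1.0 sun, the belief drifts toward the stationary distribution.
    print(mini_forward({"sun": 1.0, "rain": 0.0}, 1))    # {'sun': 0.9, 'rain': 0.1}
    print(mini_forward({"sun": 1.0, "rain": 0.0}, 100))  # roughly {'sun': 0.5, 'rain': 0.5}
```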
Outline (revisited)
- Markov models (last lecture); now Hidden Markov models (HMMs): representation and inference, the forward algorithm, and particle filtering (next lecture)

Hidden Markov Models
- Markov chains are not so useful for most agents: eventually you don't know anything anymore; you need observations to update your beliefs
- Hidden Markov models (HMMs): an underlying Markov chain over states S, and you observe outputs (effects) at each time step
- As a Bayes' net: X1 -> X2 -> X3 -> X4 -> X5, with an emission E_t attached to each state X_t

Example
An HMM is defined by:
- Initial distribution: P(X_1)
- Transitions: P(X_t | X_{t-1})
- Emissions: P(E_t | X_t)

Conditional Independence
- HMMs have two important independence properties:
  - Markov hidden process: the future depends on the past only via the present
  - The current observation is independent of everything else given the current state
- Quiz: does this mean that observations are independent given no evidence? [No, they are correlated by the hidden state]

Real HMM Examples
- Speech recognition HMMs: observations are acoustic signals (continuous valued); states are specific positions in specific words (so, tens of thousands of states)
- Machine translation HMMs: observations are words (tens of thousands); states are translation options
- Robot tracking: observations are range readings (continuous); states are positions on a map (continuous)

Filtering / Monitoring
- Filtering, or monitoring, is the task of tracking the distribution B(X) (the belief state) over time
- We start with B(X) in an initial setting, usually uniform
- As time passes, or as we get observations, we update B(X)
- The Kalman filter was invented in the 1960s and was first implemented as a method of trajectory estimation for the Apollo program

Example: Robot Localization (example from Michael Pfeiffer)
- Sensor model: the robot can read in which directions there is a wall, with never more than 1 mistaken reading
- Motion model: the robot may not execute the action, with small probability
- (Slides show the belief grid at t = 0 through t = 5; at t = 1, lighter grey cells were possible given the reading but are less likely because they would require 1 sensor mistake)

Inference: Base Cases
- Observation: given P(X_1) and P(e_1 | X_1), query P(x_1 | e_1) for all x_1
- Passage of time: given P(X_1) and P(X_2 | X_1), query P(x_2) for all x_2

Passage of Time
- Assume we have the current belief P(X | evidence to date)
- After one time step passes: B'(x_{t+1}) = Σ_{x_t} P(x_{t+1} | x_t) B(x_t)
- Basic idea: beliefs get "pushed" through the transitions
- With the "B" notation, we have to be careful about which time step t the belief is about, and which evidence it includes

Example: Passage of Time
- As time passes, uncertainty "accumulates"
- (Ghost-tracking illustration at T = 1, T = 2, T = 5; the transition model is that ghosts usually go clockwise)

Observation
- Assume we have the current belief P(X | previous evidence)
- Then, after observing e: B'(x) ∝ P(e | x) B(x)
- Basic idea: beliefs are reweighted by the likelihood of the evidence
- Unlike the passage of time, we have to renormalize

Example: Observation
- As we get observations, beliefs get reweighted and uncertainty "decreases"
- (Illustration: belief before vs. after the observation)

Example HMM
- (A concrete HMM with its transition and emission tables is shown as a figure on the slide)

The Forward Algorithm
- We are given evidence at each time step and want to know the belief P(X_t | e_{1:t})
- We can derive the update P(x_t, e_{1:t}) = P(e_t | x_t) Σ_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1}, e_{1:t-1}), which is exactly variable elimination in the order X1, X2, ...
- We can normalize as we go if we want P(x | e) at each time step, or just once at the end

Online Belief Updates
- Every time step, we start with the current P(X | evidence)
- We update for time, then we update for evidence
- The forward algorithm does both at once (and doesn't normalize); a code sketch of one step follows below
- Problem: space is |X| and time is |X|^2 per time step
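The passage-of-time and observation updates above combine into one forward-algorithm step. Below is a minimal sketch of both rules, assuming a discrete state space represented as Python dictionaries; the representation and the function names (elapse_time, observe, forward_step) are illustrative choices, not code from the lecture.

```python
# Minimal sketch of filtering updates for a discrete HMM.
# transition[x_prev][x_next] = P(x_next | x_prev); emission[x][e] = P(e | x).
# The dictionary-based representation and names are illustrative assumptions.

def elapse_time(belief, transition):
    """Passage of time: B'(x') = sum over x of P(x' | x) * B(x)."""
    states = transition.keys()
    return {x_next: sum(transition[x][x_next] * belief[x] for x in states)
            for x_next in states}

def observe(belief, emission, e):
    """Observation: B'(x) is proportional to P(e | x) * B(x), then renormalize."""
    weighted = {x: emission[x].get(e, 0.0) * p for x, p in belief.items()}
    total = sum(weighted.values())
    return {x: w / total for x, w in weighted.items()}

def forward_step(belief, transition, emission, e):
    """One step of the forward algorithm: push through time, then reweight by evidence."""
    return observe(elapse_time(belief, transition), emission, e)
```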

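For a concrete run of these updates, here is a usage example on a small, made-up two-state HMM (the transition probabilities match the weather chain above, but the umbrella emission model is invented for illustration). Elapsing time alone spreads the belief out, and folding in an observation reweights and sharpens it, mirroring the "Example: Passage of Time" and "Example: Observation" slides.

```python
# Usage example on a small, made-up two-state HMM (not the lecture's example),
# reusing elapse_time / observe from the sketch above.

transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"rain": 0.9, "sun": 0.1},
}
emission = {
    "sun":  {"umbrella": 0.2, "no_umbrella": 0.8},  # invented emission probabilities
    "rain": {"umbrella": 0.9, "no_umbrella": 0.1},
}

belief = {"sun": 1.0, "rain": 0.0}

# Passage of time only: uncertainty accumulates as the belief drifts toward uniform.
for t in range(3):
    belief = elapse_time(belief, transition)
    print("t =", t + 1, belief)

# Folding in an observation: the belief is reweighted by the evidence and renormalized.
belief = observe(belief, emission, "umbrella")
print("after observing umbrella:", belief)
```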
