Berkeley COMPSCI 188 - HMMs (6PP)

CS 188: Artificial Intelligence, Fall 2008
Lecture 19: HMMs (11/4/2008)
Dan Klein, UC Berkeley

Announcements
- Midterm solutions are up; submit regrade requests within a week.
- The midterm course evaluation is up on the web; please fill it out!
- The final contest is posted!

VPI Example
- Decision network: Weather (W) with child Forecast (F); action Umbrella (A); utility U depends on A and W.
- Utilities U(A, W):
    leave, sun: 100
    leave, rain: 0
    take, sun: 20
    take, rain: 70
- Forecast distribution P(F): good 0.59, bad 0.41.
- Compute: MEU with no evidence; MEU if the forecast is bad; MEU if the forecast is good.

VPI Properties
- Nonnegative in expectation.
- Nonadditive (consider, e.g., obtaining E_j twice).
- Order-independent.

Reasoning over Time
- Often, we want to reason about a sequence of observations:
    speech recognition, robot localization, user attention, medical monitoring.
- We need to introduce time into our models.
- Basic approach: hidden Markov models (HMMs).
- More general: dynamic Bayes' nets.

Markov Models
- A Markov model is a chain-structured BN: X1 -> X2 -> X3 -> X4 -> ...
- Each node is identically distributed (stationarity).
- The value of X at a given time is called the state.
- Parameters, called transition probabilities or dynamics, specify how the state evolves over time (also, initial probabilities).
[DEMO: Battleship]

Conditional Independence
- Basic conditional independence: past and future are independent given the present.
- Each time step depends only on the previous one: this is called the (first-order) Markov property.
- Note that the chain is just a (growing) BN; we can always use generic BN reasoning on it (if we truncate the chain).

Example: Markov Chain
- Weather: states X = {rain, sun}.
- Transitions: P(sun | sun) = 0.9, P(rain | sun) = 0.1, P(rain | rain) = 0.9, P(sun | rain) = 0.1. (This is a CPT, not a BN!)
- Initial distribution: 1.0 sun.
- What's the probability distribution after one step?

Mini-Forward Algorithm
- Question: what is the probability of being in state x at time t?
- Slow answer: enumerate all sequences of length t that end in x, and add up their probabilities.
- Better way: cached incremental belief updates, an instance of variable elimination!
[Diagram: forward simulation over the rain/sun chain]
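The incremental update behind the mini-forward algorithm can be sketched directly for the rain/sun chain above (a minimal sketch; the dictionary encoding of the CPT is just one convenient representation):

```python
# Mini-forward algorithm on the rain/sun chain from the slides.
# The belief at time t is computed incrementally from the belief at t-1:
#   P(x_t) = sum over x_{t-1} of P(x_t | x_{t-1}) * P(x_{t-1})

# Transition CPT P(next | current), as given on the slide.
T = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"rain": 0.9, "sun": 0.1},
}

def step(belief):
    """Push the current belief through one transition."""
    return {
        nxt: sum(T[cur][nxt] * p for cur, p in belief.items())
        for nxt in ("sun", "rain")
    }

belief = {"sun": 1.0, "rain": 0.0}   # initial distribution: 1.0 sun
belief = step(belief)
print(belief)  # after one step: {'sun': 0.9, 'rain': 0.1}
```

Repeatedly applying `step` carries the belief forward in time without ever enumerating sequences, which is exactly the caching the slide describes.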
simulation14Example From initial observation of sun From initial observation of rainP(X1) P(X2) P(X3) P(X∞)P(X1) P(X2) P(X3) P(X∞)15Stationary Distributions If we simulate the chain long enough: What happens? Uncertainty accumulates Eventually, we have no idea what the state is! Stationary distributions: For most chains, the distribution we end up in is independent of the initial distribution (but not always uniform!) Called the stationary distribution of the chain Usually, can only predict a short time out[DEMO: Battleship]163Web Link Analysis PageRank over a web graph Each web page is a state Initial distribution: uniform over pages Transitions: With prob. c, uniform jump to arandom page (dotted lines) With prob. 1-c, follow a randomoutlink (solid lines) Stationary distribution Will spend more time on highly reachable pages E.g. many ways to get to the Acrobat Reader download page Somewhat robust to link spam (but not immune) Google 1.0 returned the set of pages containing all your keywords in decreasing rank, now all search engines use link analysis along with many other factors17Hidden Markov Models Markov chains not so useful for most agents Eventually you don’t know anything anymore Need observations to update your beliefs Hidden Markov models (HMMs) Underlying Markov chain over states S You observe outputs (effects) at each time step As a Bayes’ net:X5X2E1X1X3X4E2E3E4E5Example An HMM is defined by: Initial distribution: Transitions: Emissions:Conditional Independence HMMs have two important independence properties: Markov hidden process, future depends on past via the present Current observation independent of all else given current state Quiz: does this mean that observations are independent given no evidence? [No, correlated by the hidden state]X5X2E1X1X3X4E2E3E4E5Real HMM Examples Speech recognition HMMs: Observations are acoustic signals (continuous valued) States are specific positions in specific words (so, tens 
of thousands) Machine translation HMMs: Observations are words (tens of thousands) States are translation options Robot tracking: Observations are range readings (continuous) States are positions on a map (continuous)Filtering / Monitoring Filtering, or monitoring, is the task of tracking the distribution B(X) (the belief state) We start with B(X) in an initial setting, usually uniform As time passes, or we get observations, we update B(X)4Example: Robot Localizationt=0Sensor model: never more than 1 mistakeMotion model: may not execute action with small prob.10ProbExample from Michael PfeifferExample: Robot Localizationt=110ProbExample: Robot Localizationt=210ProbExample: Robot Localizationt=310ProbExample: Robot Localizationt=410ProbExample: Robot Localizationt=510Prob5Passage of Time Assume we have current belief P(X | evidence to date) Then, after one time step passes: Or, compactly: Basic idea: beliefs get “pushed” through the transitions With the “B” notation, we have to be careful about what time step t the belief is about, and what evidence it includesExample: Passage of Time As time passes, uncertainty “accumulates”T = 1 T = 2 T = 5Transition model: ships usually go clockwiseObservation Assume we have current belief P(X | previous evidence): Then: Or: Basic idea: beliefs reweighted by likelihood of evidence Unlike passage of time, we have to renormalizeExample: Observation As we get observations, beliefs get reweighted, uncertainty “decreases”Before observation After observationExample HMM Example HMMS SESE6Updates: Time Complexity Every time step, we start with current P(X | evidence) We must update for time: We must update for observation: So, linear in time steps, quadratic in number of states |X| Of course, can do both at once, tooThe Forward Algorithm Can do belief propagation exactly as in previous slides, renormalizing each time step In the standard forward algorithm, we actually calculate P(X,e), without normalizing 
  (it's a special case of variable elimination).

Particle Filtering
- Sometimes |X| is too big to use exact inference:
    |X| may be too big even to store B(X), e.g. when X is continuous;
    |X|² may be too big to do the updates.
- Solution: approximate inference; track samples of X, not all values.
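A particle filter replaces the explicit belief table with samples. A minimal sketch for the same rain/sun chain (the umbrella emission model is again an illustrative assumption, not from the preview):

```python
# Minimal particle-filter sketch: represent the belief B(X) by a list of
# sampled states (particles) rather than an explicit distribution.
import random

random.seed(0)  # fixed seed so the run is reproducible

T = {"sun": {"sun": 0.9, "rain": 0.1}, "rain": {"rain": 0.9, "sun": 0.1}}
E = {"sun":  {"umbrella": 0.2, "no-umbrella": 0.8},   # assumed emissions
     "rain": {"umbrella": 0.9, "no-umbrella": 0.1}}

def sample_next(x):
    """Sample a successor state from the transition model."""
    return "sun" if random.random() < T[x]["sun"] else "rain"

def particle_filter_step(particles, e):
    # Time update: move each particle through the transition model.
    particles = [sample_next(x) for x in particles]
    # Observation update: weight each particle by the evidence likelihood,
    # then resample in proportion to the weights.
    weights = [E[x][e] for x in particles]
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.choice(("sun", "rain")) for _ in range(1000)]
particles = particle_filter_step(particles, "umbrella")
print(particles.count("rain") / len(particles))  # approximate B(rain)
```

Each step costs time linear in the number of particles, regardless of |X|, which is the point of the approximation when the state space is huge or continuous.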

