MSU CSE 847 - Hidden Markov Models
Course: CSE 847
Pages: 65

Outline:
- A Markov System
- Markov Property
- Markov Property: Representation
- A Blind Robot
- Dynamics of System
- Example Question
- What is P(qt = s)? Too Slow
- What is P(qt = s)? Clever Answer
- Hidden State
- Noisy Hidden State
- Noisy Hidden State: Representation
- Are H.M.M.s Useful?
- HMM Notation (from Rabiner's Survey)
- HMM Formal Definition
- Here's an HMM
- State Estimation
- Prob. of a series of observations
- The prob. of a given series of observations, non-exponential-cost-style
- αt(i): easy to define recursively
- In our example
- Easy Question
- Most probable path given observations
- Efficient MPP computation
- The Viterbi Algorithm
- What's Viterbi used for?
- HMMs are used and useful
- Inferring an HMM
- Max likelihood HMM estimation
- HMM estimation
- EM for HMMs
- EM 4 HMMs
- Bad News
- What You Should Know

Hidden Markov Models
Andrew W. Moore
Professor, School of Computer Science, Carnegie Mellon University
www.cs.cmu.edu/~awm
Nov 29th, 2001
Copyright © 2001-2003, Andrew W. Moore

Note to other teachers and users of these slides: Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: http://www.cs.cmu.edu/~awm/tutorials . Comments and corrections gratefully received.

A Markov System

A Markov system (example: N = 3 states, starting at t = 0):
- Has N states, called s1, s2, ..., sN.
- There are discrete timesteps, t = 0, t = 1, ...
- On the t'th timestep the system is in exactly one of the available states. Call it qt. Note: qt ∈ {s1, s2, ..., sN}. (Example: the current state at t = 0 is q0 = s3.)
- Between each timestep, the next state is chosen randomly. (Example: at t = 1, the current state is q1 = s2.)
- The current state determines the probability distribution for the next state.

In the three-state example, the transition probabilities are:

  From s1:  P(qt+1 = s1 | qt = s1) = 0     P(qt+1 = s2 | qt = s1) = 0     P(qt+1 = s3 | qt = s1) = 1
  From s2:  P(qt+1 = s1 | qt = s2) = 1/2   P(qt+1 = s2 | qt = s2) = 1/2   P(qt+1 = s3 | qt = s2) = 0
  From s3:  P(qt+1 = s1 | qt = s3) = 1/3   P(qt+1 = s2 | qt = s3) = 2/3   P(qt+1 = s3 | qt = s3) = 0

A Markov system is often notated with arcs between states, labeled with these probabilities (here 1, 1/2, 1/2, 1/3, 2/3).
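The transition table above can be packed into a row-stochastic matrix and the system simulated by sampling. A minimal sketch (the `A`, `step`, and `simulate` names are illustrative, not from the slides):

```python
import random

# Transition matrix for the 3-state example above:
# A[i][j] = P(q_{t+1} = s_{j+1} | q_t = s_{i+1})
A = [
    [0.0, 0.0, 1.0],   # from s1: always move to s3
    [0.5, 0.5, 0.0],   # from s2: s1 or s2, equally likely
    [1/3, 2/3, 0.0],   # from s3: s1 w.p. 1/3, s2 w.p. 2/3
]

# Each row must sum to 1: it is the next-state distribution for that state.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in A)

def step(i, rng=random):
    """Sample the next state index given current state index i."""
    return rng.choices(range(len(A)), weights=A[i])[0]

def simulate(q0, t, rng=random):
    """Return a sampled path q_0 .. q_t starting from state index q0."""
    path = [q0]
    for _ in range(t):
        path.append(step(path[-1], rng))
    return path
```

Note that only the current state is used when sampling the next one, which is exactly the Markov property discussed next.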
Markov Property

qt+1 is conditionally independent of {qt-1, qt-2, ..., q1, q0} given qt. In other words:

  P(qt+1 = sj | qt = si) = P(qt+1 = sj | qt = si, any earlier history)

(The same three-state example and transition probabilities as above apply.)

Markov Property: Representation

The dependence structure is a chain: q0 → q1 → q2 → q3 → q4 → ...

A Blind Robot

A human (H) and a robot (R) wander around randomly on a grid. The state is q = (location of robot, location of human). Note: N (num. states) = 18 × 18 = 324.

Dynamics of System

Each timestep the human moves randomly to an adjacent cell, and the robot also moves randomly to an adjacent cell. Typical questions:
- "What's the expected time until the human is crushed like a bug?"
- "What's the probability that the robot will hit the left wall before it hits the human?"
- "What's the probability the robot crushes the human on the next timestep?"

Example Question

"It's currently time t, and the human remains uncrushed. What's the probability of crushing occurring at time t + 1?"
- If the robot is blind: we can compute this in advance. (We'll do this first.)
- If the robot is omnipotent (i.e., if the robot knows the state at time t): it can compute the answer directly. (Too easy; we won't do this.)
- If the robot has some sensors, but incomplete state information: Hidden Markov Models are applicable! (The main body of the lecture.)

What is P(qt = s)? Too Slow

Step 1: Work out how to compute P(Q) for any path Q = q0 q1 q2 q3 .. qt. Given we know the start state q0:

  P(q0 q1 .. qt) = P(q0 q1 .. qt-1) P(qt | q0 q1 .. qt-1)
                 = P(q0 q1 .. qt-1) P(qt | qt-1)          (why? the Markov property)
                 = P(q1 | q0) P(q2 | q1) ... P(qt | qt-1)

Step 2: Use this knowledge to get P(qt = s):

  P(qt = s) = Σ P(Q), summed over all paths Q of length t that end in s.

This computation is exponential in t.

What is P(qt = s)? Clever Answer

For each state si, define

  pt(i) = Prob. state is si at time t = P(qt = si).

This is easy to define inductively:

  p0(i) = 1 if si = q0, and 0 otherwise
  pt+1(j) = P(qt+1 = sj) = Σi P(qt+1 = sj | qt = si) pt(i)
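The two approaches above can be put side by side: summing P(Q) over all N**t paths versus iterating the inductive update, which costs only O(t · N²). A sketch using the three-state example (the `p_slow`/`p_fast` names are illustrative, not from the slides):

```python
from itertools import product

# Transition matrix of the 3-state example: A[i][j] = P(q_{t+1}=s_j | q_t=s_i)
A = [
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [1/3, 2/3, 0.0],
]
N = len(A)

def p_slow(q0, t):
    """P(q_t = s) for every s, by summing P(Q) over all N**t paths."""
    p = [0.0] * N
    for path in product(range(N), repeat=t):        # exponential in t
        prob, prev = 1.0, q0
        for q in path:                               # P(q1|q0) P(q2|q1) ...
            prob *= A[prev][q]
            prev = q
        end = path[-1] if path else q0               # state the path ends in
        p[end] += prob
    return p

def p_fast(q0, t):
    """Same quantity via the inductive definition: O(t * N^2)."""
    p = [0.0] * N
    p[q0] = 1.0                                      # p_0(i) = 1 iff s_i = q_0
    for _ in range(t):
        # p_{t+1}(j) = sum_i P(q_{t+1}=s_j | q_t=s_i) p_t(i)
        p = [sum(A[i][j] * p[i] for i in range(N)) for j in range(N)]
    return p

# The clever answer agrees with brute-force path enumeration.
assert all(abs(a - b) < 1e-12 for a, b in zip(p_slow(2, 6), p_fast(2, 6)))
```

The same "push a distribution forward one step at a time" idea reappears later in the lecture as the forward (α) recursion, where each step is additionally weighted by an observation probability.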

