Stanford CS 262 - Hidden Markov Models

Hidden Markov Models

Contents: Instructions for lecture notes • Outline for our next topic • Example: The Dishonest Casino • Question #1 – Evaluation • Question #2 – Decoding • Question #3 – Learning • The dishonest casino model • Definition of a hidden Markov model • An HMM is memory-less • A parse of a sequence • Likelihood of a parse • Example: the dishonest casino • The three main questions on HMMs • Let's not be confused by notation • Problem 1: Decoding • Decoding – main idea • The Viterbi Algorithm • Viterbi Algorithm – a practical detail • Example • Problem 2: Evaluation • Generating a sequence by the model • A couple of questions • Evaluation • The Forward Algorithm • The Forward Algorithm – derivation • Relation between Forward and Viterbi • Motivation for the Backward Algorithm • The Backward Algorithm – derivation • The Backward Algorithm • Computational Complexity • Posterior Decoding

[Figure: HMM trellis – a column of states 1…K at each position, emitting x_1, x_2, x_3, …]

Instructions for lecture notes
• Presentation of material covered in lecture
• Complete sentences & paragraphs, readable text
• Ideally, fill in explanations of confusing points by going to the references
• If in doubt, ask for help

Outline for our next topic
• Hidden Markov models – the theory
• Probabilistic interpretation of alignments using HMMs
Later in the course:
• Applications of HMMs to biological sequence modeling and discovery of features such as genes

Example: The Dishonest Casino
A casino has two dice:
• Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
• Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2
The casino player switches back and forth between the fair and loaded die about once every 20 turns.

Game:
1. You bet $1
2. You roll (always with a fair die)
3. Casino player rolls (maybe with the fair die, maybe with the loaded die)
4. Highest number wins $2

Question #1 – Evaluation
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How likely is this sequence, given our model of how the casino works?
This is the EVALUATION problem in HMMs.

Question #2 – Decoding
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: What portion of the sequence was generated with the fair die, and what portion with the loaded die?
This is the DECODING question in HMMs.

Question #3 – Learning
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How "loaded" is the loaded die? How "fair" is the fair die? How often does the casino player change from fair to loaded, and back?
This is the LEARNING question in HMMs.

The dishonest casino model
Two states, FAIR and LOADED, with transition probabilities
• P(FAIR → FAIR) = 0.95, P(FAIR → LOADED) = 0.05
• P(LOADED → LOADED) = 0.95, P(LOADED → FAIR) = 0.05
and emission probabilities
• P(1|F) = P(2|F) = P(3|F) = P(4|F) = P(5|F) = P(6|F) = 1/6
• P(1|L) = P(2|L) = P(3|L) = P(4|L) = P(5|L) = 1/10, P(6|L) = 1/2
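These tables translate directly into data structures. The following is a minimal sketch (my illustration, not part of the original slides) that encodes the dishonest casino HMM; the names states, start, trans and emit are assumptions of this sketch, and the start probabilities of ½ each follow the worked example later in the notes.

```python
# Minimal sketch (illustrative, not from the slides): the dishonest casino HMM.
# States, start, transition and emission probabilities as dictionaries.

states = ["F", "L"]                      # F = fair die, L = loaded die

start = {"F": 0.5, "L": 0.5}             # a_0i, as in the worked example below

trans = {                                # trans[i][j] = a_ij = P(next state j | current state i)
    "F": {"F": 0.95, "L": 0.05},
    "L": {"F": 0.05, "L": 0.95},
}

emit = {                                 # emit[k][b] = e_k(b) = P(roll b | state k)
    "F": {b: 1 / 6 for b in range(1, 7)},
    "L": {**{b: 1 / 10 for b in range(1, 6)}, 6: 1 / 2},
}
```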
Definition of a hidden Markov model
Definition: A hidden Markov model (HMM) consists of
• An alphabet Σ = { b_1, b_2, …, b_M }
• A set of states Q = { 1, …, K }
• Transition probabilities between any two states:
  a_ij = transition probability from state i to state j, with a_i1 + … + a_iK = 1 for all states i = 1…K
• Start probabilities a_0i, with a_01 + … + a_0K = 1
• Emission probabilities within each state:
  e_k(b) = P( x_i = b | π_i = k ), with e_k(b_1) + … + e_k(b_M) = 1 for all states k = 1…K

An HMM is memory-less
At each time step t, the only thing that affects future states is the current state π_t:
P(π_{t+1} = k | "whatever happened so far")
  = P(π_{t+1} = k | π_1, π_2, …, π_t, x_1, x_2, …, x_t)
  = P(π_{t+1} = k | π_t)

A parse of a sequence
Given a sequence x = x_1……x_N, a parse of x is a sequence of states π = π_1, ……, π_N.
[Figure: HMM trellis – one column of states 1…K per position, emitting x_1, x_2, x_3, …]

Likelihood of a parse
Given a sequence x = x_1……x_N and a parse π = π_1, ……, π_N, how likely is the parse (given our HMM)?
P(x, π) = P(x_1, …, x_N, π_1, ……, π_N)
  = P(x_N, π_N | x_1…x_{N-1}, π_1, ……, π_{N-1}) P(x_1…x_{N-1}, π_1, ……, π_{N-1})
  = P(x_N, π_N | π_{N-1}) P(x_1…x_{N-1}, π_1, ……, π_{N-1})
  = …
  = P(x_N, π_N | π_{N-1}) P(x_{N-1}, π_{N-1} | π_{N-2}) …… P(x_2, π_2 | π_1) P(x_1, π_1)
  = P(x_N | π_N) P(π_N | π_{N-1}) …… P(x_2 | π_2) P(π_2 | π_1) P(x_1 | π_1) P(π_1)
  = a_{0π_1} a_{π_1π_2} …… a_{π_{N-1}π_N} · e_{π_1}(x_1) …… e_{π_N}(x_N)
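The last line of this derivation translates directly into a small helper. This is a minimal sketch, assuming the start/trans/emit dictionaries from the earlier sketch are in scope; joint_likelihood is a hypothetical name, not from the slides.

```python
# Minimal sketch (illustrative): P(x, pi) = a_{0,pi_1} * prod_i a_{pi_{i-1},pi_i} * e_{pi_i}(x_i)
def joint_likelihood(x, pi, start, trans, emit):
    """Probability of emitting sequence x along the parse pi."""
    p = start[pi[0]] * emit[pi[0]][x[0]]
    for i in range(1, len(x)):
        p *= trans[pi[i - 1]][pi[i]] * emit[pi[i]][x[i]]
    return p

# Usage, matching the all-Fair parse worked out on the next slide:
rolls = [1, 2, 1, 5, 6, 2, 1, 6, 2, 4]
print(joint_likelihood(rolls, ["F"] * 10, start, trans, emit))   # ~5.2e-9
```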
Example: the dishonest casino
Let the sequence of rolls be x = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4.
What is the likelihood of π = Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair?
(Say the initial probabilities are a_0,Fair = ½, a_0,Loaded = ½.)
½ × P(1 | Fair) P(Fair | Fair) P(2 | Fair) P(Fair | Fair) … P(4 | Fair)
  = ½ × (1/6)^10 × (0.95)^9 = 0.00000000521158647211 ≈ 5.2 × 10^-9

Example: the dishonest casino
So, the likelihood that the die is fair throughout this run is about 5.2 × 10^-9.
OK, but what is the likelihood of π = Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded?
½ × P(1 | Loaded) P(Loaded | Loaded) … P(4 | Loaded)
  = ½ × (1/10)^8 × (1/2)^2 × (0.95)^9 = 0.00000000078781176215 ≈ 0.79 × 10^-9
Therefore, it is somewhat more likely that the die is fair all the way than that it is loaded all the way.

Example: the dishonest casino
Now let the sequence of rolls be x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6.
What is the likelihood of π = F, F, …, F?
½ × (1/6)^10 × (0.95)^9 ≈ 5.2 × 10^-9, same as before.
What is the likelihood of π = L, L, …, L?
½ × (1/10)^4 × (1/2)^6 × (0.95)^9 = 0.00000049238235134735 ≈ 4.9 × 10^-7
So, it is about 100 times more likely that the die is loaded.

The three main questions on HMMs
1. Evaluation
   GIVEN an HMM M and a sequence x,
   FIND Prob[ x | M ]
2. Decoding
   GIVEN an HMM M and a sequence x,
   FIND the sequence π of states that maximizes P[ x, π | M ]
3. Learning
   GIVEN an HMM M with unspecified transition/emission probabilities, and a sequence x,
   FIND parameters θ = (e_i(·), a_ij) that maximize P[ x | θ ]

Let's not be confused by notation
P[ x | M ] is the probability that sequence x was generated by the model. The model is: architecture (number of states, etc.) + parameters θ = (a_ij, e_i(·)).
So P[ x | M ] is the same as P[ x | θ ] and P[ x ] when the architecture and the parameters, respectively, are implied.
Similarly, P[ x, π | M ], P[ x, π | θ ] and P[ x, π ] are the same when the architecture and the parameters are implied.
In the LEARNING problem we always write P[ x | θ ] to emphasize that we are seeking the θ* that maximizes P[ x | θ ].

Problem 1: Decoding
Find the best parse of a sequence.
GIVEN x = x_1x_2……x_N, find the parse π = π_1, ……, π_N that maximizes P[ x, π ].
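The outline above names the Viterbi Algorithm as the tool for this decoding problem, and its slides are not reproduced here. The following is only a minimal sketch of the standard Viterbi recursion (my illustration, not taken from these slides), reusing the states/start/trans/emit objects from the earlier sketches and working in log space to avoid numerical underflow.

```python
from math import log

def viterbi(x, states, start, trans, emit):
    """Return the most likely parse pi for the observations x (standard Viterbi recursion)."""
    # V[k] = best log-probability of any parse of the prefix seen so far that ends in state k
    V = {k: log(start[k]) + log(emit[k][x[0]]) for k in states}
    back = []                                    # back-pointers for the traceback
    for obs in x[1:]:
        ptr, V_new = {}, {}
        for k in states:
            prev, score = max(
                ((j, V[j] + log(trans[j][k])) for j in states), key=lambda t: t[1]
            )
            ptr[k] = prev                        # best predecessor of state k at this step
            V_new[k] = score + log(emit[k][obs])
        back.append(ptr)
        V = V_new
    # Traceback from the best final state
    last = max(V, key=V.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Usage: decode the 6-heavy roll sequence from the examples above
print("".join(viterbi([1, 6, 6, 5, 6, 2, 6, 6, 3, 6], states, start, trans, emit)))
```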

