CS 188: Artificial Intelligence
Fall 2008
Lecture 21: Speech / Viterbi
11/13/2008
Dan Klein – UC Berkeley

Announcements
- P5 up, due 11/19
- W9 up, due 11/21 (note off-cycle date)
- Final contest: download and get started!
- Homework solution and review sessions coming

Hidden Markov Models
An HMM is defined by:
- Initial distribution: P(X_1)
- Transitions: P(X_t | X_{t-1})
- Emissions: P(E_t | X_t)
- Query: most likely sequence: argmax over x_{1:T} of P(x_{1:T} | e_{1:T})
[Figure: graphical model with hidden states X_1 ... X_5 and evidence variables E_1 ... E_5]

State Path Trellis
- State trellis: graph of states and transitions over time
- Each arc represents some transition
- Each arc has a weight
- Each path is a sequence of states
- The product of the weights on a path is that sequence's probability
- Can think of the Forward (and now Viterbi) algorithms as computing sums of all paths (best paths) in this graph
[Figure: trellis with states sun/rain at each time step]

Viterbi Algorithm
[Figure: sun/rain trellis; the best path is the one maximizing the product of arc weights]

Example

Digitizing Speech

Speech in an Hour
- Speech input is an acoustic waveform
[Figure: waveform of "speech lab" segmented as s p ee ch l a b, with the "l" to "a" transition marked]
- Graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/

Spectral Analysis
- Frequency gives pitch; amplitude gives volume
- Sampling at ~8 kHz for phone, ~16 kHz for mic (kHz = 1000 cycles/sec)
- Fourier transform of the wave displayed as a spectrogram
- Darkness indicates energy at each frequency
[Figure: spectrogram of "s p ee ch l a b", frequency vs. amplitude]

Adding 100 Hz + 1000 Hz Waves
[Figure: summed waveform over 0 to 0.05 s, amplitude ranging from –0.9654 to 0.9901]

Spectrum
[Figure: spectrum with frequency components (100 and 1000 Hz) on the x-axis, amplitude on the y-axis]

Part of [ae] from "lab"
- Note the complex wave repeating nine times in the figure
- Plus smaller waves which repeat 4 times for every large pattern
- The large wave has a frequency of 250 Hz (9 times in .036 seconds)
- The small wave is roughly 4 times this, or roughly 1000 Hz
- Two little tiny waves sit on top of each peak of the 1000 Hz waves

Back to Spectra
- The spectrum represents these frequency components
- Computed by the Fourier transform, an algorithm which separates out each frequency component of a wave
- The x-axis shows frequency; the y-axis shows magnitude (in decibels, a log measure of amplitude)
- Peaks at 930 Hz, 1860 Hz, and 3020 Hz

Acoustic Feature Sequence
- Time slices are translated into acoustic feature vectors (~39 real numbers per slice)
- These are the observations; now we need the hidden states X
[Figure: spectrogram sliced into successive feature vectors e_12, e_13, e_14, e_15, e_16, ...]

State Space
- P(E|X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)
- P(X|X') encodes how sounds can be strung together
- We will have one state for each sound in each word
- From some state x, we can only:
  - Stay in the same state (e.g. speaking slowly)
  - Move to the next position in the word
  - At the end of the word, move to the start of the next word
- We build a little state graph for each word and chain them together to form our state space X

HMMs for Speech

Markov Process with Bigrams
- Figure from Huang et al., page 618

Decoding
- While there are some practical issues, finding the words given the acoustics is an HMM inference problem
- We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}: x*_{1:T} = argmax over x_{1:T} of P(x_{1:T} | e_{1:T})
- From the sequence x, we can simply read off the words

End of Part II!
- Now we're done with our unit on probabilistic reasoning
- Last part of class: machine learning
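The decoding query above (the most likely state sequence given the evidence) is exactly what the Viterbi algorithm computes over the state trellis. Below is a minimal sketch on a toy sun/rain HMM in the spirit of the trellis slides; all the probability numbers are made up for illustration and are not from the lecture.

```python
# Viterbi: most likely hidden-state sequence for a toy sun/rain HMM.
# All probability numbers below are illustrative, not from the lecture.

def viterbi(states, init, trans, emit, evidence):
    """Return (best path probability, best state sequence) for the evidence."""
    # m[x] = probability of the best trellis path ending in state x
    m = {x: init[x] * emit[x][evidence[0]] for x in states}
    back = []  # backpointers: back[t][x] = best predecessor of x at time t+1
    for e in evidence[1:]:
        prev = m
        m, ptrs = {}, {}
        for x in states:
            # Best previous state maximizes (path so far) * (transition weight)
            best_prev = max(states, key=lambda xp: prev[xp] * trans[xp][x])
            m[x] = prev[best_prev] * trans[best_prev][x] * emit[x][e]
            ptrs[x] = best_prev
        back.append(ptrs)
    # Recover the best path by following backpointers from the best final state.
    last = max(states, key=lambda x: m[x])
    path = [last]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    path.reverse()
    return m[last], path

states = ("sun", "rain")
init = {"sun": 0.5, "rain": 0.5}
trans = {"sun": {"sun": 0.7, "rain": 0.3}, "rain": {"sun": 0.3, "rain": 0.7}}
emit = {"sun": {"umbrella": 0.1, "no-umbrella": 0.9},
        "rain": {"umbrella": 0.8, "no-umbrella": 0.2}}

prob, path = viterbi(states, init, trans, emit,
                     ["umbrella", "umbrella", "no-umbrella"])
print(prob, path)  # the single best trellis path and its probability
```

The recurrence is the Forward algorithm with the sum over predecessors replaced by a max, plus backpointers so the best path itself can be read off — which is how the words are recovered in the speech-decoding setting.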
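As a footnote to the spectral-analysis slides, the "Adding 100 Hz + 1000 Hz Waves" example can be checked numerically: projecting the summed wave onto a sinusoid of a chosen frequency (one component of the discrete Fourier transform) shows energy only at 100 Hz and 1000 Hz. This is a from-scratch sketch, not the FFT an actual recognizer would use; the sampling rate and amplitudes are illustrative.

```python
import math

def magnitude_at(samples, rate, freq):
    """Magnitude of the DFT component of `samples` (sampled at `rate` Hz) at `freq` Hz."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / rate) for k, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * k / rate) for k, s in enumerate(samples))
    return math.hypot(re, im) / n  # amplitude/2 for a pure sine at a bin frequency

rate = 8000      # samples/sec, like the ~8 kHz phone rate on the slide
duration = 0.05  # seconds, matching the slide's time axis
wave = [math.sin(2 * math.pi * 100 * k / rate)          # 100 Hz component
        + 0.5 * math.sin(2 * math.pi * 1000 * k / rate)  # quieter 1000 Hz component
        for k in range(int(rate * duration))]

for f in (100, 500, 1000):
    print(f, "Hz:", round(magnitude_at(wave, rate, f), 3))
```

The printed magnitudes peak at 100 Hz and 1000 Hz and are essentially zero at 500 Hz, which is the picture the Spectrum slide shows: the Fourier transform separates out each frequency component of the wave.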