CS 188: Artificial Intelligence
Fall 2009
Lecture 21: Speech Recognition
11/10/2009
Dan Klein – UC Berkeley

Announcements
- Written 3 due on Thursday night
- Extra OHs before then: see web page
- Review session? TBA
- Project 4 up! Due 11/19
- Course contest update: you can qualify for the final tournament starting tonight!

Today
- HMMs: most likely explanation queries
- Speech recognition: a massive HMM! (details of this section not required)
- Start machine learning

Speech and Language
- Speech technologies: automatic speech recognition (ASR), text-to-speech synthesis (TTS), dialog systems
- Language processing technologies: machine translation, information extraction, web search, question answering, text classification, spam filtering, etc.

HMMs: MLE Queries
- HMMs are defined by: states X, observations E, an initial distribution P(X_1), transitions P(X_t | X_{t-1}), and emissions P(E_t | X_t)
- Query: the most likely explanation, i.e. the state sequence maximizing the posterior: argmax over x_{1:t} of P(x_{1:t} | e_{1:t})
- [Figure: HMM graphical model with hidden states X_1 ... X_5 and evidence E_1 ... E_5]

State Path Trellis
- State trellis: graph of states and transitions over time
- Each arc represents some transition, and each arc has a weight P(x_t | x_{t-1}) P(e_t | x_t)
- Each path is a sequence of states; the product of the weights on a path is that sequence's probability
- Can think of the Forward (and now Viterbi) algorithm as computing sums over all paths (or best paths) in this graph
- [Figure: trellis over the states sun and rain, repeated across four time steps]

Viterbi Algorithm
- Dynamic program over the trellis: m_1[x_1] = P(x_1) P(e_1 | x_1), and m_t[x_t] = P(e_t | x_t) max over x_{t-1} of P(x_t | x_{t-1}) m_{t-1}[x_{t-1}]
- [Figure: the sun/rain trellis with the best path highlighted]

Example
- [Figure: worked example of the Viterbi computation on the trellis]
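To make the trellis and Viterbi slides concrete, here is a minimal Python sketch of the Viterbi recursion on a two-state sun/rain chain like the one in the trellis figures. The particular transition and emission probabilities are invented for illustration; only the recursion itself follows the slides.

```python
def viterbi(states, init, trans, emit, evidence):
    """Return the most likely state sequence given the evidence.

    init[x]     = P(X_1 = x)
    trans[x][y] = P(X_t = y | X_{t-1} = x)
    emit[x][e]  = P(E_t = e | X_t = x)
    """
    # m[x] = max over x_{1:t-1} of P(x_{1:t-1}, x_t = x, e_{1:t})
    m = {x: init[x] * emit[x][evidence[0]] for x in states}
    back = []  # backpointers for recovering the argmax path
    for e in evidence[1:]:
        ptr, m_next = {}, {}
        for y in states:
            # Best predecessor of y at this time step
            x_best = max(states, key=lambda x: m[x] * trans[x][y])
            ptr[y] = x_best
            m_next[y] = m[x_best] * trans[x_best][y] * emit[y][e]
        back.append(ptr)
        m = m_next
    # Follow backpointers from the best final state
    last = max(states, key=lambda x: m[x])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Placeholder numbers, not from the lecture:
states = ["sun", "rain"]
init = {"sun": 0.5, "rain": 0.5}
trans = {"sun": {"sun": 0.7, "rain": 0.3}, "rain": {"sun": 0.3, "rain": 0.7}}
emit = {"sun": {"umbrella": 0.1, "none": 0.9},
        "rain": {"umbrella": 0.8, "none": 0.2}}
print(viterbi(states, init, trans, emit, ["umbrella", "umbrella", "none", "umbrella"]))
```

For long sequences the products underflow, so real implementations add log-probabilities instead of multiplying.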
Digitizing Speech
- [Figure: an acoustic waveform being sampled and quantized]

Speech in an Hour
- Speech input is an acoustic wave form
- [Figure: waveform of "speech lab", segmented as s p ee ch l a b, with the "l" to "a" transition marked]
- Graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/

Spectral Analysis
- Frequency gives pitch; amplitude gives volume
- Sampling at ~8 kHz for phone, ~16 kHz for microphone (kHz = 1000 cycles/sec)
- Fourier transform of the wave is displayed as a spectrogram; darkness indicates energy at each frequency
- [Figure: waveform (amplitude vs. time) and spectrogram (frequency vs. time) for "s p ee ch l a b"]

Adding 100 Hz + 1000 Hz Waves
- [Figure: the summed waveform over 0 to 0.05 s, ranging from about -0.9654 to 0.99]
- (A numerical check of this slide and the next appears at the end of these notes.)

Spectrum
- Frequency components (100 and 1000 Hz) on the x-axis
- [Figure: spectrum of the summed wave, amplitude vs. frequency in Hz, with peaks at 100 and 1000 Hz]

Part of [ae] from "lab"
- Note the complex wave repeating nine times in the figure
- Plus smaller waves which repeat 4 times for every large pattern
- The large wave has a frequency of 250 Hz (9 repetitions in .036 seconds)
- The small wave is roughly 4 times this, or roughly 1000 Hz
- Two little tiny waves sit on top of each peak of the 1000 Hz waves
- [Figure: waveform excerpt of the vowel [ae]]

Back to Spectra
- A spectrum represents these frequency components
- Computed by the Fourier transform, an algorithm which separates out each frequency component of a wave
- x-axis shows frequency; y-axis shows magnitude (in decibels, a log measure of amplitude)
- Peaks at 930 Hz, 1860 Hz, and 3020 Hz
- [ demo ]

Resonances of the vocal tract
- The human vocal tract acts as an open tube: closed at the glottal end, open at the lip end, length about 17.5 cm
- Air in a tube of a given length will tend to vibrate at the resonance frequency of the tube
- Constraint: the pressure differential should be maximal at the (closed) glottal end and minimal at the (open) lip end
- [Figure from W. Barry's Speech Science slides]

From Mark Liberman's website
- [Figure]
- [ demo ]

Acoustic Feature Sequence
- Time slices are translated into acoustic feature vectors (~39 real numbers per slice)
- [Figure: spectrogram cut into time slices, yielding feature vectors ..., e_12, e_13, e_14, e_15, e_16, ...]
- These are the observations; now we need the hidden states X

State Space
- P(E|X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)
- P(X|X') encodes how sounds can be strung together
- We will have one state for each sound in each word
- From some state x, we can only: stay in the same state (e.g. speaking slowly), move to the next position in the word, or, at the end of the word, move to the start of the next word
- We build a little state graph for each word and chain them together to form our state space X (see the code sketch at the end of these notes)

HMMs for Speech
- [Figure: a word modeled as a left-to-right HMM over its sounds, with self-loops]

Transitions with Bigrams
- [Figure from Huang et al., page 618: transitions between word models weighted by bigram probabilities]

Decoding
- While there are some practical issues, finding the words given the acoustics is an HMM inference problem
- We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}: argmax over x_{1:T} of P(x_{1:T} | e_{1:T})
- From the sequence x, we can simply read off the words

End of Part II!
- Now we're done with our unit on probabilistic reasoning
- Last part of class: machine learning

Parameter Estimation
- Estimating the distribution of a random variable
- Elicitation: ask a human! Usually needs domain experts and sophisticated ways of eliciting probabilities (e.g. betting games), and there is trouble calibrating
- Empirically: use training data. For each outcome x, look at the empirical rate of that value: P_ML(x) = count(x) / (total samples)
- This is the estimate that maximizes the likelihood of the data
- Example: from the draws r, g, g we get P_ML(r) = 1/3 and P_ML(g) = 2/3

Estimation: Smoothing
- Relative frequencies are the maximum likelihood estimates
- In Bayesian statistics, we think of the parameters as just another random variable, with its own distribution

Estimation: Laplace Smoothing
- Laplace's estimate: pretend you saw every outcome once more than you actually did: P_LAP(x) = (count(x) + 1) / (N + |X|)
- Example: after observing H, H, T, the smoothed estimate is P_LAP(H) = 3/5 rather than the ML estimate 2/3
- Can derive this as a MAP estimate with Dirichlet priors (see cs281a)

Estimation: Laplace Smoothing
- Laplace's estimate (extended): pretend you saw every outcome k extra times: P_LAP,k(x) = (count(x) + k) / (N + k|X|)
- What's Laplace with k = 0? (the relative-frequency ML estimate)
- k is the strength of the prior
- Laplace for conditionals: smooth each condition independently: P_LAP,k(x|y) = (count(x, y) + k) / (count(y) + k|X|)
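Since the deck ends here, a small Python sketch of the two estimators from the last three slides; the function names are my own, and the H/T data is the example from the slides.

```python
from collections import Counter

def ml_estimate(samples):
    """Relative frequencies: the maximum likelihood estimate P_ML(x)."""
    counts = Counter(samples)
    n = len(samples)
    return {x: c / n for x, c in counts.items()}

def laplace_estimate(samples, outcomes, k=1):
    """Laplace smoothing with strength k:
    P_LAP,k(x) = (count(x) + k) / (N + k|X|); k = 0 recovers the ML estimate."""
    counts = Counter(samples)
    denom = len(samples) + k * len(outcomes)
    return {x: (counts[x] + k) / denom for x in outcomes}

flips = ["H", "H", "T"]
print(ml_estimate(flips))                   # P_ML: H -> 2/3, T -> 1/3
print(laplace_estimate(flips, ["H", "T"]))  # P_LAP: H -> 3/5, T -> 2/5
```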
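The "State Space" and "Transitions with Bigrams" slides describe the speech state graph only in prose; here is a rough Python sketch of how such a graph could be wired up. The 0.5 self-loop probability and the tiny pronunciation dictionary are placeholder assumptions, not values from the lecture, and real systems train these quantities.

```python
def build_transitions(word_phones, bigram):
    """Build P(X|X') over (word, position) states, per the State Space slide.

    word_phones: dict word -> list of sounds, e.g. {"lab": ["l", "ae", "b"]}
    bigram: dict (word, next_word) -> P(next_word | word)
    """
    trans = {}  # (word, position) -> {(word, position): probability}
    for w, phones in word_phones.items():
        for i in range(len(phones)):
            arcs = {(w, i): 0.5}            # stay (e.g. speaking slowly)
            if i + 1 < len(phones):
                arcs[(w, i + 1)] = 0.5      # move to the next position
            else:
                # At the end of the word, move to the start of a next
                # word, weighted by the bigram language model.
                for (prev, nxt), p in bigram.items():
                    if prev == w:
                        arcs[(nxt, 0)] = 0.5 * p
            trans[(w, i)] = arcs
    return trans

word_phones = {"speech": ["s", "p", "ee", "ch"], "lab": ["l", "ae", "b"]}
bigram = {("speech", "lab"): 1.0, ("lab", "speech"): 1.0}
trans = build_transitions(word_phones, bigram)
print(trans[("lab", 2)])  # word-final state: self-loop plus a jump to ("speech", 0)
```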
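Finally, the numerical check referenced on the "Adding 100 Hz + 1000 Hz Waves" slide. This sketch assumes NumPy is available; the sampling rate and relative amplitudes are arbitrary choices, and only the two component frequencies come from the slides.

```python
import numpy as np

# Sample one second of a 100 Hz wave plus a 1000 Hz wave.
fs = 8000                     # samples/sec, like telephone-quality audio
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# The Fourier transform separates out each frequency component.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two largest peaks sit at the component frequencies, as on the
# Spectrum slide.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))          # [100.0, 1000.0]
```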