CS 188: Artificial Intelligence
Fall 2009
Lecture 21: Speech Recognition
11/10/2009
Dan Klein – UC Berkeley

Announcements
- Written 3 due on Thursday night
- Extra OHs before then: see web page
- Review session? TBA
- Project 4 up! Due 11/19
- Course contest update: you can qualify for the final tournament starting tonight!

Today
- HMMs: most likely explanation queries
- Speech recognition: a massive HMM! (Details of this section not required)
- Start machine learning

Speech and Language
- Speech technologies
  - Automatic speech recognition (ASR)
  - Text-to-speech synthesis (TTS)
  - Dialog systems
- Language processing technologies
  - Machine translation
  - Information extraction
  - Web search, question answering
  - Text classification, spam filtering, etc.

HMMs: MLE Queries
- HMMs are defined by:
  - States X
  - Observations E
  - Initial distribution: $P(X_1)$
  - Transitions: $P(X_t \mid X_{t-1})$
  - Emissions: $P(E_t \mid X_t)$
- Query: most likely explanation: $\arg\max_{x_{1:T}} P(x_{1:T} \mid e_{1:T})$
[Figure: HMM chain X1 → X2 → X3 → X4 … with emissions E1 … E5]

State Path Trellis
- State trellis: graph of states and transitions over time
- Each arc represents some transition
- Each arc has weight $P(x_t \mid x_{t-1}) \, P(e_t \mid x_t)$
- Each path is a sequence of states
- The product of weights on a path is that sequence's probability
- Can think of the Forward (and now Viterbi) algorithms as computing sums of all paths (best paths) in this graph
[Figure: sun/rain state trellis over four time steps]

Viterbi Algorithm
- $m_t[x_t] = \max_{x_{1:t-1}} P(x_{1:t-1}, x_t, e_{1:t}) = P(e_t \mid x_t) \max_{x_{t-1}} P(x_t \mid x_{t-1}) \, m_{t-1}[x_{t-1}]$
[Figure: sun/rain trellis with the best path highlighted]
(A code sketch of this recurrence appears after the State Space slide below.)

Example
[figure]

Digitizing Speech
[figure]

Speech in an Hour
- Speech input is an acoustic wave form
[Figure: waveform of "s p ee ch l a b", with the "l" to "a" transition shown]
(Graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/)

Spectral Analysis
- Frequency gives pitch; amplitude gives volume
- Sampling at ~8 kHz for phone, ~16 kHz for mic (kHz = 1000 cycles/sec)
- Fourier transform of the wave is displayed as a spectrogram
- Darkness indicates energy at each frequency
[Figure: spectrogram of "s p ee ch l a b"]

Adding 100 Hz + 1000 Hz Waves
[Figure: the summed waveform over 0–0.05 s, and its spectrum showing frequency components at 100 and 1000 Hz on the x-axis]

Part of [ae] from "lab"
- Note the complex wave repeating nine times in the figure
- Plus smaller waves which repeat 4 times for every large pattern
- Large wave has a frequency of 250 Hz (9 times in .036 seconds)
- Small wave is roughly 4 times this, or roughly 1000 Hz
- Two little tiny waves on top of the peaks of the 1000 Hz waves

Back to Spectra
- Spectrum represents these frequency components
- Computed by Fourier transform, an algorithm which separates out each frequency component of a wave
- x-axis shows frequency, y-axis shows magnitude (in decibels, a log measure of amplitude)
- Peaks at 930 Hz, 1860 Hz, and 3020 Hz
[demo]

Resonances of the vocal tract
- The human vocal tract as an open tube: closed at one end, open at the other, length about 17.5 cm
- Air in a tube of a given length will tend to vibrate at the resonance frequency of the tube
- Constraint: pressure differential should be maximal at the (closed) glottal end and minimal at the (open) lip end
(Figure from W. Barry Speech Science slides)

[Figure: spectrogram from Mark Liberman's website] [demo]

Acoustic Feature Sequence
- Time slices are translated into acoustic feature vectors (~39 real numbers per slice)
- These are the observations; now we need the hidden states X
[Figure: observation sequence … e12 e13 e14 e15 e16 …]

State Space
- P(E|X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)
- P(X|X') encodes how sounds can be strung together
- We will have one state for each sound in each word
- From some state x, we can only:
  - Stay in the same state (e.g. speaking slowly)
  - Move to the next position in the word
  - At the end of the word, move to the start of the next word
- We build a little state graph for each word and chain them together to form our state space X
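The Viterbi details aren't required for the course, but as a concrete illustration of the recurrence on the Viterbi Algorithm slide above, here is a minimal Python sketch on a sun/rain-style HMM. The probability tables are made-up illustrative numbers, not values from the slides:

```python
def viterbi(states, init, trans, emit, observations):
    """Most likely state sequence given the observations.

    init[x]     : P(X1 = x)
    trans[x][y] : P(X_t = y | X_{t-1} = x)
    emit[x][e]  : P(E_t = e | X_t = x)
    """
    # m[x] = probability of the best path ending in state x
    m = {x: init[x] * emit[x][observations[0]] for x in states}
    back = []  # backpointers for path recovery

    for e in observations[1:]:
        prev, new_m = {}, {}
        for y in states:
            # Best predecessor of y, maximizing the path probability
            best_x = max(states, key=lambda x: m[x] * trans[x][y])
            prev[y] = best_x
            new_m[y] = m[best_x] * trans[best_x][y] * emit[y][e]
        back.append(prev)
        m = new_m

    # Follow backpointers from the best final state
    path = [max(states, key=lambda x: m[x])]
    for prev in reversed(back):
        path.append(prev[path[-1]])
    return list(reversed(path))

states = ["sun", "rain"]
init = {"sun": 0.5, "rain": 0.5}
trans = {"sun": {"sun": 0.7, "rain": 0.3}, "rain": {"sun": 0.3, "rain": 0.7}}
emit = {"sun": {"umbrella": 0.1, "no-umbrella": 0.9},
        "rain": {"umbrella": 0.8, "no-umbrella": 0.2}}

print(viterbi(states, init, trans, emit,
              ["umbrella", "umbrella", "no-umbrella"]))
# -> ['rain', 'rain', 'sun']
```

Like the Forward algorithm, this sweeps the trellis left to right, but replaces the sum over predecessors with a max plus a backpointer.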
HMMs for Speech
[figure]

Transitions with Bigrams
[figure]
(Figure from Huang et al page 618)

Decoding
- While there are some practical issues, finding the words given the acoustics is an HMM inference problem
- We want to know which state sequence $x_{1:T}$ is most likely given the evidence $e_{1:T}$: $x^*_{1:T} = \arg\max_{x_{1:T}} P(x_{1:T} \mid e_{1:T})$
- From the sequence x, we can simply read off the words

End of Part II!
- Now we're done with our unit on probabilistic reasoning
- Last part of class: machine learning

Parameter Estimation
- Estimating the distribution of a random variable
- Elicitation: ask a human!
  - Usually need domain experts, and sophisticated ways of eliciting probabilities (e.g. betting games)
  - Trouble calibrating
- Empirically: use training data
  - For each outcome x, look at the empirical rate of that value: $P_{ML}(x) = \frac{\mathrm{count}(x)}{\text{total samples}}$
  - This is the estimate that maximizes the likelihood of the data
[Figure: sample of red and green balls, r g g]

Estimation: Smoothing
- Relative frequencies are the maximum likelihood estimates: $\theta_{ML} = \arg\max_\theta P(\mathbf{X} \mid \theta)$
- In Bayesian statistics, we think of the parameters as just another random variable, with its own distribution: $\theta_{MAP} = \arg\max_\theta P(\theta \mid \mathbf{X})$

Estimation: Laplace Smoothing
- Laplace's estimate: pretend you saw every outcome once more than you actually did: $P_{LAP}(x) = \frac{c(x) + 1}{N + |X|}$
- Can derive this as a MAP estimate with Dirichlet priors (see cs281a)
[Figure: coin flips H H T]

Estimation: Laplace Smoothing
- Laplace's estimate (extended): pretend you saw every outcome k extra times: $P_{LAP,k}(x) = \frac{c(x) + k}{N + k|X|}$
- What's Laplace with k = 0?
- k is the strength of the prior
- Laplace for conditionals: smooth each condition independently: $P_{LAP,k}(x \mid y) = \frac{c(x, y) + k}{c(y) + k|X|}$
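These estimators are simple enough to state directly in code. Here is a minimal Python sketch (not from the original slides) of the maximum likelihood and Laplace-smoothed estimates defined above:

```python
from collections import Counter

def ml_estimate(samples):
    """Maximum likelihood: the empirical rate of each observed outcome."""
    counts = Counter(samples)
    n = len(samples)
    return {x: c / n for x, c in counts.items()}

def laplace_estimate(samples, outcomes, k=1):
    """Laplace smoothing: pretend every outcome was seen k extra times."""
    counts = Counter(samples)
    n = len(samples)
    return {x: (counts[x] + k) / (n + k * len(outcomes)) for x in outcomes}
```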
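For instance, applying the sketch above to the r, g, g sample from the Parameter Estimation slide, and answering the "What's Laplace with k = 0?" question:

```python
samples = ["r", "g", "g"]
print(ml_estimate(samples))                        # {'r': 0.333..., 'g': 0.666...}
print(laplace_estimate(samples, ["r", "g"], k=1))  # {'r': 0.4, 'g': 0.6}
print(laplace_estimate(samples, ["r", "g"], k=0))  # k = 0 recovers the ML estimate
```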