Berkeley COMPSCI 188 - Lecture 21: Speech / ML

CS 188: Artificial Intelligence, Fall 2010
Lecture 21: Speech / ML
11/9/2010
Dan Klein – UC Berkeley

Announcements
- Assignments:
  - Project 2: in glookup
  - Project 4: due 11/17
  - Written 3: out later this week
- Contest out now!
- Reminder: surveys (results next lecture)

Contest!
[Figure: contest graphic]

Today
- HMMs: most likely explanation queries
- Speech recognition: a massive HMM! (Details of this section not required.)
- Start machine learning

Speech and Language
- Speech technologies:
  - Automatic speech recognition (ASR)
  - Text-to-speech synthesis (TTS)
  - Dialog systems
- Language processing technologies:
  - Machine translation
  - Information extraction
  - Web search, question answering
  - Text classification, spam filtering, etc.

HMMs: MLE Queries
- HMMs are defined by:
  - States X
  - Observations E
  - Initial distribution: P(X_1)
  - Transitions: P(X_t | X_{t-1})
  - Emissions: P(E_t | X_t)
- Query: the most likely explanation, i.e. the state sequence that is most probable given the evidence:
  x*_{1:t} = argmax_{x_{1:t}} P(x_{1:t} | e_{1:t})
[Figure: HMM as a Bayes net, X_1 -> X_2 -> X_3 -> X_4, each X_t emitting E_t]

State Path Trellis
- State trellis: a graph of states and transitions over time
- Each arc represents some transition; each arc has a weight (a product of transition and emission probabilities)
- Each path is a sequence of states; the product of the weights on a path is that sequence's probability
- Can think of the Forward (and now Viterbi) algorithms as computing sums over all paths (respectively, best paths) in this graph
[Figure: sun/rain trellis, two states per time step]

Viterbi Algorithm
- Like the Forward algorithm, but with a max in place of the sum, plus backpointers to recover the best path:
  m_t[x_t] = P(e_t | x_t) max_{x_{t-1}} P(x_t | x_{t-1}) m_{t-1}[x_{t-1}]
[Figure: sun/rain trellis with the best path highlighted]
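Below is a minimal sketch of Viterbi on a two-state (sun/rain) trellis like the one above. The transition and emission numbers and the umbrella evidence are illustrative placeholders, not values from the lecture.

```python
import numpy as np

# States and (assumed, illustrative) model parameters.
states = ["sun", "rain"]
P_init = np.array([0.5, 0.5])                    # P(X_1)
P_trans = np.array([[0.9, 0.1],                  # P(X_t | X_{t-1}), rows = previous state
                    [0.3, 0.7]])
P_emit = {"no-umbrella": np.array([0.8, 0.1]),   # P(e | sun), P(e | rain)
          "umbrella":    np.array([0.2, 0.9])}

def viterbi(evidence):
    """Most likely state sequence given the evidence (max-product over the trellis)."""
    m = P_init * P_emit[evidence[0]]             # m_1[x] = P(x) P(e_1 | x)
    backpointers = []
    for e in evidence[1:]:
        scores = m[:, None] * P_trans            # scores[i, j] = m_{t-1}[i] P(x_j | x_i)
        backpointers.append(scores.argmax(axis=0))
        m = scores.max(axis=0) * P_emit[e]       # m_t[x_t], the Viterbi recurrence above
    best = [int(m.argmax())]                     # best final state, then walk back
    for bp in reversed(backpointers):
        best.append(int(bp[best[-1]]))
    return [states[i] for i in reversed(best)]

print(viterbi(["umbrella", "umbrella", "no-umbrella"]))  # ['rain', 'rain', 'sun']
```

Replacing the max/argmax with a sum recovers the Forward algorithm on the same trellis.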
Digitizing Speech

Speech in an Hour
- Speech input is an acoustic waveform
[Figure: waveform of "s p ee ch l a b", with a close-up of the "l" to "a" transition; graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/]

Spectral Analysis
- Frequency gives pitch; amplitude gives volume
  - Sampling at ~8 kHz for phone, ~16 kHz for mic (kHz = 1000 cycles/sec)
- The Fourier transform of the wave is displayed as a spectrogram
  - Darkness indicates energy at each frequency
[Figure: spectrogram of "s p ee ch l a b"]

Part of [ae] from "lab"
- A complex wave repeating nine times, plus a smaller wave that repeats 4x for every large cycle
- Large wave: frequency of 250 Hz (9 repetitions in 0.036 seconds)
- Small wave: roughly 4 times this, or roughly 1000 Hz
[demo]

Acoustic Feature Sequence
- Time slices are translated into acoustic feature vectors (~39 real numbers per slice): …, e_12, e_13, e_14, e_15, e_16, …
- These are the observations; now we need the hidden states X

State Space
- Emissions P(E | X) encode which acoustic vectors are appropriate for each phoneme (each kind of sound)
- Transitions P(X_t | X_{t-1}) encode how sounds can be strung together
- We will have one state for each sound in each word
- From some state x, you can only:
  - Stay in the same state (e.g., speaking slowly)
  - Move to the next position in the word
  - At the end of the word, move to the start of the next word
- We build a little state graph for each word and chain them together to form our state space X

HMMs for Speech
[Figure: per-word HMM state graphs chained together]

Transitions with Bigrams
- Training counts (figure from Huang et al., page 618):

     198,015,222   the first
     194,623,024   the same
     168,504,105   the following
     158,562,063   the world
         …
      14,112,454   the door
  ------------------------------
  23,135,851,162   the *

- E.g., P(door | the) = 14,112,454 / 23,135,851,162 ≈ 0.0006

Decoding
- While there are some practical issues, finding the words given the acoustics is an HMM inference problem
- We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}:
  x*_{1:T} = argmax_{x_{1:T}} P(x_{1:T} | e_{1:T})
- From the sequence x, we can simply read off the words

End of Part II!
- Now we're done with our unit on probabilistic reasoning
- Last part of class: machine learning

Machine Learning
- Up until now: how to reason in a model and how to make optimal decisions
- Machine learning: how to acquire a model on the basis of data / experience
  - Learning parameters (e.g., probabilities)
  - Learning structure (e.g., BN graphs)
  - Learning hidden concepts (e.g., clustering)

Parameter Estimation
- Estimating the distribution of a random variable
- Elicitation: ask a human (why is this hard?)
- Empirically: use training data (learning!)
- E.g., for each outcome x, look at the empirical rate of that value:
  P_ML(x) = count(x) / total number of samples
- For the draws r g g r g g r g g r g g r g g r g g: P_ML(r) = 6/18 = 1/3
- This is the estimate that maximizes the likelihood of the data

Estimation: Smoothing
- Relative frequencies are the maximum likelihood estimates:
  θ_ML = argmax_θ P(data | θ)
- In Bayesian statistics, we think of the parameters as just another random variable, with its own distribution, and estimate
  θ_MAP = argmax_θ P(θ | data)

Estimation: Laplace Smoothing
- Laplace's estimate: pretend you saw every outcome once more than you actually did:
  P_LAP(x) = (count(x) + 1) / (N + |X|)
- E.g., for flips H H T: P_LAP(H) = (2 + 1) / (3 + 2) = 3/5
- Can derive this as a MAP estimate with Dirichlet priors (see CS 281A)

Estimation: Laplace Smoothing (extended)
- Laplace's estimate with strength k: pretend you saw every outcome k extra times:
  P_LAP,k(x) = (count(x) + k) / (N + k|X|)
- What's Laplace with k = 0? (Just the maximum likelihood estimate.)
- k is the strength of the prior
- Laplace for conditionals: smooth each condition independently:
  P_LAP,k(x | y) = (count(x, y) + k) / (count(y) + k|X|)
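A short sketch of these estimators in Python; the coin flips (H H T) and the outcome domain come from the slides above, while the function names are mine.

```python
from collections import Counter

def p_ml(samples):
    """Maximum likelihood estimate: empirical relative frequencies."""
    counts = Counter(samples)
    n = len(samples)
    return {x: c / n for x, c in counts.items()}

def p_laplace(samples, domain, k=1):
    """Laplace estimate: pretend every outcome in the domain occurred k extra times."""
    counts = Counter(samples)
    n = len(samples)
    return {x: (counts[x] + k) / (n + k * len(domain)) for x in domain}

flips = ["H", "H", "T"]
print(p_ml(flips))                        # {'H': 0.667, 'T': 0.333} (approximately)
print(p_laplace(flips, ["H", "T"]))       # H: (2+1)/(3+2) = 0.6, T: 0.4
print(p_laplace(flips, ["H", "T"], k=0))  # k = 0 recovers the ML estimate
```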

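For conditionals, the same idea applies per condition. Here is a sketch using the "the door" / "the *" bigram counts from the table above; the vocabulary size V is an illustrative assumption, not a number from the lecture.

```python
# Laplace smoothing for a conditional bigram model P(w' | w = "the"),
# smoothing this condition independently of all others.
count_the_door = 14_112_454        # count("the door"), from the table above
count_the_star = 23_135_851_162    # count("the *"), total bigrams starting with "the"
V = 1_000_000                      # assumed vocabulary size (hypothetical)
k = 1                              # prior strength

p_ml_door = count_the_door / count_the_star
p_lap_door = (count_the_door + k) / (count_the_star + k * V)

print(f"P_ML(door | the)  = {p_ml_door:.6f}")   # ~0.000610
print(f"P_LAP(door | the) = {p_lap_door:.6f}")  # nearly identical: this much data overwhelms the prior
```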
