Columbia COMS W4705 - N-Grams and Corpus Linguistics

CS 4705, Lecture 6: N-Grams and Corpus Linguistics
Guest lecture by Dragomir Radev ([email protected])

Slides in the full deck: Spelling Correction, revisited · Next Word Prediction · Human Word Prediction · Claim · Applications · N-Gram Models of Language · Counting Words in Corpora · Terminology · Corpora · Simple N-Grams · Computing the Probability of a Word Sequence · Bigram Model · Using N-Grams · Training and Testing · A Simple Example · A Bigram Grammar Fragment from BERP · BERP Bigram Counts · BERP Bigram Probabilities · Maximum likelihood estimation (MLE) · What do we learn about the language? · Approximating Shakespeare · Demo · N-Gram Training Sensitivity · Some Useful Empirical Observations · Smoothing Techniques · Add-one Smoothing · Witten-Bell Discounting · Backoff methods (e.g. Katz '87) · More advanced language models · Evaluating language models · LM toolkits · New course to be offered in January 2007!! · Summary

Spelling Correction, revisited
• M$ suggests:
 – ngram: NorAm
 – unigrams: anagrams, enigmas
 – bigrams: begrimes
 – trigrams: ??
 – Markov: Mark
 – backoff: bakeoff
 – wn: wan, wen, win, won
 – Falstaff: Flagstaff

Next Word Prediction
• From a NY Times story...
 – Stocks ...
 – Stocks plunged this ...
 – Stocks plunged this morning, despite a cut in interest rates ...
 – Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall ...
 – Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began ...
 – Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last ...
 – Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last Tuesday's terrorist attacks.

Human Word Prediction
• Clearly, at least some of us have the ability to predict future words in an utterance.
• How?
 – Domain knowledge
 – Syntactic knowledge
 – Lexical knowledge

Claim
• A useful part of the knowledge needed for word prediction can be captured using simple statistical techniques.
• In particular, we'll rely on the notion of the probability of a sequence (a phrase, a sentence).

Applications
• Why do we want to predict a word, given some preceding words?
 – To rank the likelihood of sequences containing various alternative hypotheses, e.g. for ASR:
   Theatre owners say popcorn/unicorn sales have doubled...
 – To assess the likelihood/goodness of a sentence, e.g. for text generation or machine translation:
   The doctor recommended a cat scan.
   El doctor recomendó una exploración del gato. (literally, "the doctor recommended an examination of the cat")

N-Gram Models of Language
• Use the previous N-1 words in a sequence to predict the next word.
• Language Model (LM): unigrams, bigrams, trigrams, ...
• How do we train these models? Very large corpora.

Counting Words in Corpora
• What is a word?
 – e.g., are cat and cats the same word?
 – September and Sept?
 – zero and oh?
 – Is _ a word? *? '('?
 – How many words are there in don't? Gonna?
 – In Japanese and Chinese text, how do we identify a word?

Terminology
• Sentence: unit of written language
• Utterance: unit of spoken language
• Word form: the inflected form that appears in the corpus
• Lemma: an abstract form, shared by word forms having the same stem, part of speech, and word sense
• Types: the number of distinct words in a corpus (vocabulary size)
• Tokens: the total number of words
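To make the type/token distinction and the "what is a word?" question concrete, here is a minimal sketch (not from the slides) that counts tokens and types under two tokenization policies. The sample sentence and the regular expression are illustrative choices; the point is that the answer to "how many words are in don't?" depends on the policy.

```python
import re

# Illustrative sentence (not from the slides).
sentence = "I don't know, don't ask -- gonna eat at Sam's."

# Policy 1: whitespace tokenization ("don't" is a single token).
ws_tokens = sentence.lower().split()

# Policy 2: separate punctuation from word characters
# ("don't" becomes three tokens: "don", "'", "t").
re_tokens = re.findall(r"\w+|[^\w\s]", sentence.lower())

for name, tokens in (("whitespace", ws_tokens), ("regex", re_tokens)):
    print(f"{name}: {len(tokens)} tokens, {len(set(tokens))} types")
```

Under the first policy the sample has 10 tokens and 9 types; under the second, 19 tokens and 14 types. The counts themselves shift with what we decide a word is.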
Corpora
• Corpora are online collections of text and speech, e.g.:
 – Brown Corpus
 – Wall Street Journal
 – AP newswire
 – Hansards
 – DARPA/NIST text/speech corpora (CallHome, ATIS, Switchboard, Broadcast News, TDT, Communicator)
 – TRAINS, Radio News

Simple N-Grams
• Assume a language has V word types in its lexicon. How likely is word x to follow word y?
 – Simplest model of word probability: 1/V
 – Alternative 1: estimate the likelihood of x occurring in new text from its general frequency of occurrence estimated from a corpus (unigram probability): popcorn is more likely to occur than unicorn.
 – Alternative 2: condition the likelihood of x occurring on the context of previous words (bigrams, trigrams, ...): mythical unicorn is more likely than mythical popcorn.

Computing the Probability of a Word Sequence
• Compute the product of component conditional probabilities?
 – P(the mythical unicorn) = P(the) P(mythical | the) P(unicorn | the mythical)
• The longer the sequence, the less likely we are to find it in a training corpus:
 – P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)
• Solution: approximate using N-grams.

Bigram Model
• Approximate P(w_n | w_1 ... w_{n-1}) by P(w_n | w_{n-1}):
 – e.g., approximate P(unicorn | the mythical) by P(unicorn | mythical)
• Markov assumption: the probability of a word depends only on a limited history of preceding words.
• Generalization: the probability of a word depends only on the N-1 previous words:
 – trigrams, 4-grams, ...
 – the higher N is, the more data is needed for training
 – backoff models

Using N-Grams
• For N-gram models:
 – P(w_n | w_1 ... w_{n-1}) ≈ P(w_n | w_{n-N+1} ... w_{n-1})
 – e.g., for bigrams: P(w_{n-1}, w_n) = P(w_n | w_{n-1}) P(w_{n-1})
• By the chain rule we can decompose a joint probability:
 – P(w_1, w_2, ..., w_n) = P(w_1) P(w_2 | w_1) P(w_3 | w_1 w_2) ... P(w_n | w_1 ... w_{n-1})
• For bigrams, then, the probability of a sequence is just the product of the conditional probabilities of its bigrams (taking w_0 = <start>):
 – P(w_1 ... w_n) ≈ ∏_{k=1}^{n} P(w_k | w_{k-1})
 – P(the, mythical, unicorn) = P(unicorn | mythical) P(mythical | the) P(the | <start>)

Training and Testing
• N-gram probabilities come from a training corpus:
 – an overly narrow corpus: probabilities don't generalize
 – an overly general corpus: probabilities don't reflect the task or domain
• A separate test corpus is used to evaluate the model, typically using standard metrics:
 – held-out test set; development test set
 – cross-validation
 – results tested for statistical significance

A Simple Example
• P(I want to eat Chinese food) = P(I | <start>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)

A Bigram Grammar Fragment from BERP (each entry is a bigram and its probability, grouped here by first word)
• Eat: on .16 · some .06 · lunch .06 · dinner .05 · at .04 · a .04 · Indian .04 · today .03 · breakfast .03 · Thai .03 · Chinese .02 · Mexican .02 · in .02 · tomorrow .01 · dessert .007 · British .001
• <start>: Tell .04 · I'm .02
• I: want .32 · would .29 · don't .08 · have .04
• Want: to .65 · a .05 · one further entry (.01) is cut off in the preview
• To: eat .26 · have .14 · spend .09 · be .02
• British: food .60 · restaurant .15 · cuisine .01 · lunch .01
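Tying the last several slides together (chain rule, Markov assumption, training, scoring), here is a minimal sketch, not from the slides: it counts bigrams in a toy corpus standing in for BERP, estimates P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1}) by maximum likelihood (the MLE slide listed in the outline), and scores a sentence as the product of its bigram probabilities. The corpus, the <start> padding convention, and all names are illustrative assumptions.

```python
from collections import Counter

# Toy training corpus standing in for BERP (illustrative, not from the slides).
corpus = [
    "i want to eat chinese food",
    "i want to eat lunch",
    "i would like to eat dinner",
]

bigram_counts = Counter()   # C(w_{n-1} w_n)
context_counts = Counter()  # C(w_{n-1})
for line in corpus:
    words = ["<start>"] + line.split()
    for prev, cur in zip(words, words[1:]):
        bigram_counts[(prev, cur)] += 1
        context_counts[prev] += 1

def p_bigram(prev, cur):
    """MLE estimate: P(cur | prev) = C(prev cur) / C(prev)."""
    if context_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, cur)] / context_counts[prev]

def sentence_prob(sentence):
    """P(w_1 ... w_n) ~= product over k of P(w_k | w_{k-1})."""
    words = ["<start>"] + sentence.split()
    prob = 1.0
    for prev, cur in zip(words, words[1:]):
        prob *= p_bigram(prev, cur)
    return prob

print(sentence_prob("i want to eat lunch"))    # ~0.22 on this toy corpus
print(sentence_prob("i want to eat unicorn"))  # 0.0: bigram never seen
```

Note that a single unseen bigram drives the whole product to zero; that is exactly the problem the deck's later slides on smoothing (add-one, Witten-Bell) and backoff address.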

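The smoothing slides themselves are not part of this preview, but the outline's "Add-one Smoothing" entry connects directly to the zero-probability problem above: add one to every bigram count and add the vocabulary size V to every denominator, so unseen bigrams receive a small nonzero probability. A sketch of that variant, reusing the counts from the previous example; the formula is the standard Laplace estimate, not taken from these slides.

```python
# Vocabulary: every word type seen in training, plus the <start> marker.
vocab = {w for line in corpus for w in line.split()} | {"<start>"}
V = len(vocab)  # 11 for the toy corpus above

def p_bigram_add_one(prev, cur):
    """Add-one (Laplace) estimate: (C(prev cur) + 1) / (C(prev) + V)."""
    return (bigram_counts[(prev, cur)] + 1) / (context_counts[prev] + V)

# "eat food" never occurs in training: 0.0 under MLE, 1/14 ~ 0.07 here.
print(p_bigram_add_one("eat", "food"))
```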

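Finally, the same scoring applied to the BERP fragment itself: the sketch below reads the probabilities for the "A Simple Example" sentence off the table above. Two of the six bigrams it needs, P(I | <start>) and P(food | Chinese), are not visible in this preview, so the values marked as placeholders are illustrative assumptions, not BERP numbers.

```python
import math

# Bigram probabilities from the BERP fragment above; entries marked
# "placeholder" are NOT visible in this preview and are assumed values.
berp = {
    ("<start>", "i"): 0.25,     # placeholder: not shown in the preview
    ("i", "want"): 0.32,
    ("want", "to"): 0.65,
    ("to", "eat"): 0.26,
    ("eat", "chinese"): 0.02,
    ("chinese", "food"): 0.50,  # placeholder: not shown in the preview
}

words = "<start> i want to eat chinese food".split()
prob = math.prod(berp[(prev, cur)] for prev, cur in zip(words, words[1:]))
print(f"P(I want to eat Chinese food) ~= {prob:.2e}")
```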