Lecture 10, May 14, 2001
Prabhakar Raghavan

Topics:
- Centroid/nearest-neighbor classification
- Bayesian classification
- Link-based classification
- Document summarization

Centroid/nearest-neighbor classification

Given the training docs for a topic, compute their centroid; we then have a centroid for each topic (e.g., Government, Science, Arts). Given a query doc, assign it to the topic whose centroid is nearest.

Bayesian classification

As before, train the classifier on exemplary docs from classes c1, c2, ..., cr. Given a test doc d, estimate Pr[d belongs to class cj], i.e., Pr[cj | d]. Since

  Pr[cj | d] . Pr[d] = Pr[d | cj] . Pr[cj],

we have

  Pr[cj | d] = Pr[d | cj] . Pr[cj] / Pr[d].

Express Pr[d] as Sum_{i=1..r} Pr[d | ci] . Pr[ci]. So to compute Pr[cj | d], all we need are Pr[d | ci] and Pr[ci] for all i; we will get these from training.

Computing Pr[ci]

We are given a set of training docs together with a class label for each training doc (e.g., these docs belong to Physics, those others to Astronomy, etc.). Then

  Pr[ci] = fraction of training docs that are labeled ci.

In practice, use more sophisticated smoothing to boost the probabilities of classes under-represented in the sample.

Computing Pr[d | ci]

Basic assumption: each occurrence of each word in each doc is independent of all others. For a word w, estimate from the sample docs

  Pr[w | ci] = frequency of word w amongst all docs labeled ci,

and set

  Pr[d | ci] = Product_{w in d} Pr[w | ci].

Thus the probability of a doc consisting of "Friends, Romans, Countrymen" is Pr[Friends] . Pr[Romans] . Pr[Countrymen]. In implementations, pay attention to precision and underflow. All the probabilities can be extracted from the term-doc matrix.

Putting it together

Training:
- Use the class frequencies in the training data for Pr[ci].
- Estimate the word frequencies for each word and each class to estimate Pr[w | ci].

Given a test doc d:
- Use the Pr[w | ci] values to estimate Pr[d | ci] for each class ci.
- Determine the class cj for which Pr[cj | d] is maximized (a sketch follows).
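A minimal sketch of the training and classification steps just described, assuming docs arrive as (word-list, label) pairs; the log-space scoring addresses the underflow caveat above, and the add-one smoothing is an illustrative stand-in for the "more sophisticated smoothing" the lecture mentions:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (words, label) pairs -- an assumed input format."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)   # class -> word -> occurrence count
    totals = Counter()                   # class -> total word occurrences
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        totals[label] += len(words)
        vocab.update(words)
    priors = {c: n / len(docs) for c, n in class_counts.items()}  # Pr[ci]
    return priors, word_counts, totals, vocab

def classify(words, priors, word_counts, totals, vocab):
    """Return the class cj maximizing Pr[cj | d] for a doc given as a word list."""
    best, best_score = None, float("-inf")
    for c, prior in priors.items():
        # Work with log Pr[cj] + sum of log Pr[w | cj]: the raw product of
        # many small probabilities underflows floating point.
        score = math.log(prior)
        for w in words:
            # Add-one smoothing so unseen words don't zero out the whole product.
            score += math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best
```

Note that Pr[d] is the same for every class, so maximizing Pr[d | cj] . Pr[cj] suffices; the sketch never computes the denominator.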
Abstract features

So far we have relied on word counts as the features to train and classify on. In general, the features could be any statistic:
- terms in boldface count for more;
- authors of cited docs;
- number of equations;
- square of the number of commas; ...

Many improvements are used over the naive version discussed above:
- various models for document generation;
- varying emphasis on words in different portions of docs;
- smoothing statistics for infrequent terms;
- classifying into a hierarchy.

Accuracy depends on the uniformity of the docs in training and test, the quality of authorship, and the volume of training data (around 1000 docs per class). Accuracy ranges from up to 90% in the very best circumstances to below 50% in the worst. SVMs appear to beat a variety of Bayesian approaches, and both beat centroid-based methods. SVMs need quadratic programming, so they require more computation than naive Bayes at training time, but less at classification time. Bayesian classifiers also partition the vector space, but not using linear decision surfaces.

Link-based classification

Given a set of hyperlinked docs in which class labels (say c1, c2, c3, c4) are available for some docs, figure out the class labels for the remaining docs.

Besides the terms in a doc, we can derive cues from linked docs to assign a class to a test doc. The cues could be any abstract features of the doc and its neighbors.

Attempt 1: use the terms in the doc plus those in its neighbors. This generally does worse than using the terms in the doc alone. Why? The neighbors' terms diffuse the focus of the doc's own terms.

Attempt 2: use the terms in the doc plus tagged terms from its neighbors. E.g.:
- car denotes a term occurring in d;
- car@I denotes a term occurring in a doc with a link into d;
- car@O denotes a term occurring in a doc with a link from d.
Generalizations are possible (car@OIOI). But key terms lose density: e.g., car gets split into car, car@I, and car@O.

Attempt 3: use the class labels of the in- and out-neighbors as features in classifying d; e.g., docs about physics tend to point to docs about physics. The setting: some neighbors have pre-assigned labels, and we need to figure out the rest.

Naive Bayes gives Pr[cj | d] based on the words in d. Now consider Pr[cj | N], where N is the set of labels of d's neighbors. We can separate N into in- and out-neighbors, and we can combine the conditional probabilities for cj from the text-based and link-based evidence. As before, use training data to compute Pr[N | cj] etc.; assume the labels of d's neighbors are independent, as we did with word occurrences, and continue to assume that word occurrences within d are independent. We can then invert the probabilities using Bayes' rule to derive Pr[cj | N]. But this requires knowing the class labels of all of d's neighbors.

What if not all the neighbors' class labels are known?
- First, use word content alone to assign a tentative class label to each unlabelled doc.
- Next, iteratively recompute all the tentative labels using word content as well as the neighbors' classes, some of them still tentative (see the sketch below).
This iterative relabeling will converge provided the tentative labels are not too far off; the guarantee requires ideas from Markov random fields, as used in computer vision. Error rates are significantly below text-alone classification.
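A minimal sketch of that relabeling loop, under stated assumptions: docs and neighbor structure as plain dicts, a text_score(d, c) log-score from a text-only classifier (e.g., the naive Bayes sketch earlier), and a link_score over neighbor labels; none of these interfaces are fixed by the lecture.

```python
def relabel(doc_ids, neighbors, preassigned, text_score, link_score, max_iters=20):
    """neighbors[d]: ids of d's in-/out-neighbors; preassigned: dict of known labels.
    text_score(d, c) and link_score(nbr_labels, c) return log-scores (assumed interfaces)."""
    classes = set(preassigned.values())
    # Step 1: tentative labels from word content alone.
    labels = dict(preassigned)
    for d in doc_ids:
        if d not in labels:
            labels[d] = max(classes, key=lambda c: text_score(d, c))
    # Step 2: iteratively recompute tentative labels from text plus neighbor classes.
    for _ in range(max_iters):
        changed = False
        for d in doc_ids:
            if d in preassigned:
                continue  # pre-assigned labels stay fixed
            nbr_labels = [labels[n] for n in neighbors[d]]
            new = max(classes, key=lambda c: text_score(d, c) + link_score(nbr_labels, c))
            if new != labels[d]:
                labels[d], changed = new, True
        if not changed:
            break  # no label moved: the relaxation has converged
    return labels
```

Under the neighbor-independence assumption above, one natural link_score is the sum over neighbor labels of log Pr[label | c], with the conditional probabilities estimated from the training data.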
Document summarization

Moving on to document summarization: given a doc, produce a short summary. The length of the summary is a parameter, set by the application and form factor. Summarization is very sensitive to doc quality, and is typically corpus-independent.

The simplest algorithm: output the first 50 (or however many) words of the doc. This is hard to beat on high-quality docs. For very short summaries (e.g., 5 words), drop stop words.

Slightly more complex heuristics: compute an importance score for each sentence; the summary then contains sentences from the original doc, beginning with the most important (a sketch follows the example article).

Example article from the WSJ:

1. Tandon Corp. will introduce a portable hard disk drive today that will enable personal computer owners to put all of their programs and data on a single transportable cartridge that can be plugged into other computers.
2. Tandon, which has reported big losses in recent quarters as it shifted its product emphasis from disk drives to personal computer systems, asserts that the new device, called Personal Data Pac, could change the way software is sold and even the way computers are used.
3. The company, based in Moorpark, Calif., also will unveil a new personal computer, compatible with International Business Machines Corp.'s PC AT advanced personal computer, that incorporates the new portable hard disks.
4. "It's an idea we've been working on for several years," said Chuck Peddle, president of Tandon's computer systems division.
5. "As the price of hard disks kept coming down, we realized that if we could make them portable, the floppy disk drive would be a useless accessory."
6. "Later we realized it could change the way people use their computers."
7. Each Data Pac cartridge, which will be priced at about $400, is about the size of a thick paperback book and contains a hard disk drive that can hold 30 million pieces of information, or the equivalent of about four Bibles.
8. To use the Data Pacs, they must be plugged into a cabinet called the Ad Pac 2. That device, to be priced at about $500, contains circuitry to connect the cartridge to an IBM-compatible personal computer and holds the cartridge steadily in place.
9. The cartridges, which weigh about two pounds, are
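A minimal sketch of both heuristics above; the specific importance score (average term frequency plus a bonus for the leading sentence) and the tiny stop-word list are illustrative assumptions, since the lecture leaves the scoring open:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "to", "that", "and", "in", "it", "which", "will"}  # tiny illustrative list

def lead_summary(text, n_words=50):
    """Baseline: output the first n_words words of the doc."""
    return " ".join(text.split()[:n_words])

def words_of(sentence):
    """Lower-cased content words of a sentence, stop words dropped."""
    return [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]

def score_summary(text, n_sentences=2):
    """Rank sentences by an importance score; emit the top scorers in their
    original order so the summary still reads like the source doc."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(w for s in sentences for w in words_of(s))
    def importance(i):
        ws = words_of(sentences[i])
        tf = sum(freq[w] for w in ws) / max(len(ws), 1)  # avg frequency of the sentence's terms
        return tf + (1.0 if i == 0 else 0.0)             # position bonus for the lead sentence
    ranked = sorted(range(len(sentences)), key=importance, reverse=True)
    return " ".join(sentences[i] for i in sorted(ranked[:n_sentences]))
```

On news copy like the article above, a score of this shape tends to favor the lead sentence, which is one reason the first-50-words baseline is so hard to beat.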