Matt Ginzton
CS224N, Spring 2001
Final Project
6/11/2001

Project: Usenet (or email) message thread analyzer

1. Description of problem

I wanted to work on live, real-world data, so for my domain I chose threads of Usenet articles. The idea is to examine the parent/child relationships between messages and analyze the degree to which child messages really respond to topics raised in their parent message(s). An approach based on Latent Semantic Analysis (LSA) seemed most promising, especially one based on Pocklington and Best's paper, which used a similar idea to track the "replicatory success" of messages versus the concepts (found via LSA) that they contained.

Related attempts have been made by Berry and Dumais (with their work on LSA in general), and by Pocklington and Best (who actually worked on Usenet articles themselves). My project performs computations similar to those of Pocklington and Best, but uses the results to examine possible conceptual relations between messages (where they used the results to track the "replication" of messages containing conceptual "genes").

2. Algorithms/methods

The core algorithm I use is LSA, Latent Semantic Analysis. I create a term/document matrix where each column corresponds to a term in the whole corpus (several threads from one newsgroup), and each row corresponds to a document (an individual message from the threads). Each entry is a weight derived from the frequency of that term in the current document and the number of documents containing that term relative to the entire corpus (Inverse Document Frequency, or IDF). This matrix is then decomposed via SVD (Singular Value Decomposition), yielding orthonormal sets of singular vectors; the right orthonormal matrix is taken as the semantic subspace matrix, an indication of semantic concepts distilled from the term co-occurrence information via SVD. Projecting the term/document matrix onto the semantic subspace matrix yields the term-subspace/document matrix, showing to what degree each document expresses each semantic concept. (Thus far, the algorithms are derived purely from Pocklington and Best's LSA implementation as described in their paper, except for the IDF, which I calculate slightly differently.)

I then take the entries from the term-subspace/document matrix (henceforth TSD), treating each thread individually, and for each message look for the "most related" documents both older (taken to be possible parents) and newer (taken to be possible children). "Most related" could be calculated in several senses of similarity; I try taking messages whose TSD vector dot product is maximal (corresponding to the smallest angle between their vectors).
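To make the pipeline above concrete, here is a minimal sketch using modern NumPy (standing in for the Numeric package the project actually uses); the tokenizer, the exact IDF formula, and all function names here are illustrative assumptions rather than the project's real code:

    # Minimal sketch of the LSA pipeline described above. Assumptions: NumPy
    # stands in for Numeric; the tokenization rules and IDF formula are
    # illustrative, not the report's exact ones.
    import re
    import numpy as np

    def tokenize(text):
        """Lowercase, split on non-alphanumerics, and apply the filters noted
        in the Discussion below: drop words of 3 letters or fewer and words
        containing digits."""
        words = re.split(r"[^a-z0-9]+", text.lower())
        return [w for w in words if len(w) > 3 and not any(c.isdigit() for c in w)]

    def build_weight_matrix(messages):
        """Rows = documents (messages), columns = terms; entries are TF * IDF."""
        docs = [tokenize(m) for m in messages]
        vocab = sorted({w for d in docs for w in d})
        index = {w: j for j, w in enumerate(vocab)}
        n_docs = len(docs)
        df = np.zeros(len(vocab))
        for d in docs:
            for w in set(d):
                df[index[w]] += 1
        idf = np.log(n_docs / df)       # one possible IDF; the report's differs slightly
        A = np.zeros((n_docs, len(vocab)))
        for i, d in enumerate(docs):
            for w in d:
                A[i, index[w]] += 1     # raw term frequency
            A[i] *= idf                 # weight counts by IDF
        return A, vocab

    def tsd_matrix(A, k):
        """SVD the weight matrix and project it onto the top-k directions of the
        semantic subspace, giving one k-dimensional concept vector per message."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        V_k = Vt[:k].T                  # semantic subspace: terms x k concepts
        return A @ V_k                  # documents x k (the TSD matrix)

    def most_related(tsd, i):
        """For message i, return the earlier and later messages whose TSD vectors
        have the largest dot product with message i (candidate parent/child)."""
        scores = tsd @ tsd[i]
        parent = max(range(i), key=lambda j: scores[j], default=None)
        child = max(range(i + 1, len(tsd)), key=lambda j: scores[j], default=None)
        return parent, child

Given a list of message bodies in thread order, build_weight_matrix and tsd_matrix produce one k-dimensional concept vector per message, and most_related then picks candidate parents and children by maximal dot product, as described above.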
3. Running my program

You invoke the thread analyzer as "thread.py", which takes one optional parameter (the number of dimensions to track during SVD) and one mandatory parameter (the filesystem directory containing the thread data).

Note: you will need the optional "Numeric" package for Python to run this; unfortunately, it doesn't seem to be installed on the Leland workstations. It's necessary for the SVD algorithm, which I didn't want to reimplement. You can find information and downloadables for Numeric at http://numpy.sourceforge.net/.

Datasets in "pseudothreaded" order (complete threads, ordered chronologically but with no parent/child information between messages) were obtained via cut-and-paste from Google's Usenet archive (http://groups.google.com/); I grabbed 3 threads each from the groups sci.skeptic and comp.sys.be.advocacy. The sample datasets are in directories named after the group, each containing one text file per thread.

Example command line: thread.py -k=100 sci.skeptic

This will read all the messages, perform LSA, and create a number of .html files in the data directory (sci.skeptic in this case) with various results of the calculations. Weights.html shows the term/document matrix with weights (after including the IDF). LSI.html shows the U matrix (right orthogonal matrix) output from SVD; each of the k rows here corresponds to one semantic "concept" found by LSA, and for each concept you can see the weight that each possible term contributes, as well as a sorted list of the terms in order of decreasing contribution. TSD.html shows the term-subspace/document matrix, giving for each message (by thread) the degree to which it expresses each semantic concept. Warning: these files (especially weights.html and LSI.html) can be huge, and can cripple many HTML browsers!

Finally, one .html file is output for each thread (with the same base name as the .thread file), detailing an estimated reconstruction of the thread; each message is shown in complete text along with links to possible (deemed to be related) parent and child topics.

4. Testing and results

5. Discussion

This was a hard test domain because the language is real and unprocessed (misspellings, for example, are common) and infested with garbage data (such as .sig files at the end of each posting that will throw off frequency counts). Worse, the individual data samples are quite small (a post could be as short as a few words, and is rarely as long as one page). Beyond that, it's even hard to find threads that continue long enough to be worth analyzing without degenerating into a flame war (Pocklington and Best noted this phenomenon as well).

A strong stemming implementation would also increase the quality of the results, by increasing true collisions between semantically identical words. I tried various naïve implementations of a stemmer but found they didn't help, and a sophisticated approach would have been too complicated for the scope of this project. I do, however, throw out several stop words (as well as all words 3 letters or shorter, and all words containing numeric digits), which helped greatly.

To really track the flow of ideas, it would be interesting to treat each message as a conjunction of possibly separate remarks (perhaps separated by quoted text, so that each unbroken response to a chunk of quoted text is seen individually), instead of lumping all the text of each response into the same document space. However, this would further exacerbate the problem of small data samples and would likely not be helpful in practice.

Overall, this was an interesting project, but the problem domain was hard enough that good results are hard to see through the noise in the Usenet data sets.

6. References

Berry, Michael W. and