CS347 Lecture 4, April 18, 2001
Prabhakar Raghavan

Today's topics
Computing cosine-based ranking.
Speeding up cosine ranking by reducing the number of cosine computations: union of term-wise candidates; sampling and pre-grouping.
Speeding up cosine ranking by reducing the number of dimensions: random projection; latent semantic indexing.

Recall: doc as vector
Each doc j is a vector of tf.idf values, one component for each term.
Can normalize to unit length.
So we have a vector space: terms are the axes and docs live in this space; even with stemming we may have 10,000+ dimensions.

Intuition
[Figure: docs D1-D4 plotted as points in a three-dimensional term space with axes t1, t2, t3.]
Postulate: documents that are "close together" in vector space talk about the same things.

Cosine similarity
Cosine similarity of $D_j$ and $D_k$: $\mathrm{sim}(D_j, D_k) = \sum_{i=1}^{m} w_{ij}\, w_{ik}$.
Aka the normalized inner product.
Can also compute the cosine similarity from a query (a vector of terms, e.g., "truth forever") to each document.

Exercises
How would you augment the inverted index built in lectures 1-3 to support cosine ranking computations?
Walk through the steps of serving a query.

Why use vector spaces?
Key: a user's query can be viewed as a (very) short document.
The query becomes a vector in the same space as the docs.
We can measure each doc's cosine proximity to the query, giving a ranking.

Efficient cosine ranking
Ranking consists of computing the k docs in the corpus "nearest" to the query, i.e., the k largest query-doc cosines.
Efficient ranking means: computing a single cosine efficiently, and choosing the k largest cosine values efficiently.

Computing a single cosine
For every term i and each doc j, store the term frequency tf_ij.
Tradeoffs on whether to store the term count, the term frequency, or the value weighted by idf_i; various coding possibilities.
Accumulate the component-wise sum $\mathrm{sim}(D_j, D_k) = \sum_{i=1}^{m} w_{ij}\, w_{ik}$.
More on speeding up a single cosine later in this lecture.

Computing the k largest cosines: selection vs. sorting
Typically we want to retrieve the top k docs in the cosine ranking for the query, not to totally order all docs in the corpus: just pick off the docs with the k highest cosines.
Use a heap for selecting the top k: a binary tree in which each node's value exceeds the values of its children.
It takes 2n operations to construct; then each of the k "winners" is read off in 2 log n steps.
For n = 1M and k = 100, this is about 10% of the cost of sorting.
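The accumulate-and-select scheme above can be made concrete with a short sketch. This is not from the lecture: it is a minimal Python illustration under the assumption of a toy in-memory index, where `postings` maps each term to (doc_id, weight) pairs holding unit-normalized tf.idf components and the query is supplied the same way; `heapq.nlargest` plays the role of the heap-based selection of the k winners.

```python
import heapq
from collections import defaultdict

def top_k_cosines(query_weights, postings, k=10):
    """Return the k (doc_id, score) pairs with the highest cosine for the query.

    query_weights: dict mapping term -> unit-normalized query weight.
    postings: dict mapping term -> list of (doc_id, doc_weight) pairs, where
              doc_weight is the unit-normalized tf.idf component w_ij.
    With both vectors of unit length, the cosine is just the sum of products
    of matching components, accumulated term by term over the postings lists.
    """
    scores = defaultdict(float)              # doc_id -> partial cosine sum
    for term, q_w in query_weights.items():
        for doc_id, d_w in postings.get(term, []):
            scores[doc_id] += q_w * d_w      # accumulate w_iq * w_ij
    # Heap-based selection of the k largest scores instead of a full sort.
    return heapq.nlargest(k, scores.items(), key=lambda item: item[1])

# Toy usage: the two-term query "truth forever" against three unit-length docs.
postings = {
    "truth":   [("d1", 0.8), ("d2", 1.0)],
    "forever": [("d1", 0.6), ("d3", 1.0)],
}
print(top_k_cosines({"truth": 0.6, "forever": 0.8}, postings, k=2))
```

Note that only docs sharing at least one query term ever receive an accumulator, which is exactly the "select from only the non-zero cosines" point made next.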
Bottleneck
We still need to first compute cosines from the query to each of the n docs: several seconds for n = 1M.
We can select from only the non-zero cosines, which should be far fewer than 1M.
Can we avoid computing all of these? Yes, but we may occasionally get an answer wrong: a doc not in the top k may creep into the answer.

Term-wise candidates
Preprocess: pre-compute, for each term, its k nearest docs.
Treat each term as a 1-term query; this means lots of preprocessing.
Result: a "preferred list" for each term.
Search: for a t-term query, take the union of the t preferred lists and call this set S.
Compute cosines from the query only to the docs in S, and choose the top k.

Exercises
Fill in the details of the calculation: which docs go into the preferred list for a term?
Devise a small example where this method gives an incorrect ranking.

Sampling and pre-grouping
First run a pre-processing phase: pick √n docs at random and call these "leaders".
For each other doc, pre-compute its nearest leader; the docs attached to a leader are its "followers".
Likely: each leader has about √n followers.
Process a query as follows: given query Q, find its nearest leader L, then seek the k nearest docs from among L's followers.

Visualization
[Figure: the query, the leaders, and the followers attached to each leader in the vector space.]

Why use random sampling?
It is fast, and the leaders reflect the data distribution.

General variants
Have each follower attached to its a = 3 (say) nearest leaders.
From the query, find the b = 4 (say) nearest leaders and their followers.
Can recur on the leader/follower construction.

Exercises
To find the nearest leader in step 1, how many cosine computations do we do?
What is the effect of the constants a and b on the previous slide?
Devise an example where this is likely to fail, i.e., where we miss one of the k nearest docs. Is this likely under random sampling?

Dimensionality reduction
What if we could take our vectors and "pack" them into fewer dimensions (say 10,000 → 100) while preserving distances? Well, almost.
This speeds up cosine computations.
Two methods: random projection and latent semantic indexing.

Random projection onto k << m axes
Choose a random direction x_1 in the vector space.
For i = 2 to k, choose a random direction x_i that is orthogonal to x_1, x_2, ..., x_{i-1}.
Project each doc vector into the subspace spanned by x_1, x_2, ..., x_k.

E.g., from 3 to 2 dimensions
[Figure: docs D1 and D2 in (t1, t2, t3) space projected onto new axes x1 and x2, where x1 is a random direction in (t1, t2, t3) space and x2 is chosen randomly but orthogonal to x1.]

Guarantee
With high probability, relative distances are (approximately) preserved by the projection.
Pointer to the precise theorem in Resources.

Computing the random projection
Projecting n vectors from m dimensions down to k dimensions:
Start with the m × n matrix of terms × docs, A.
Find a random k × m orthogonal projection matrix R.
Compute the matrix product W = R A.
The jth column of W is the vector corresponding to doc j, but now in k << m dimensions.

Cost of computation
This takes a total of kmn multiplications.
Expensive: see Resources for ways to do essentially the same thing more quickly.
Exercise: by projecting from 10,000 dimensions down to 100, are we really going to make each cosine computation faster? Why?

Latent semantic indexing (LSI)
Another technique for dimension reduction.
Random projection was data-independent; LSI, on the other hand, is data-dependent: it eliminates redundant axes and pulls together "related" axes, e.g., car and automobile.

Notions from linear algebra
Matrix and vector; matrix transpose and product; rank; eigenvalues and eigenvectors.

Overview of LSI
Pre-process the docs using a technique from linear algebra called the Singular Value Decomposition.
We have control over the granularity of this process, and we create a new vector space (details to follow).
Queries are handled in this new vector space.

Singular Value Decomposition
Recall the m × n matrix of terms × docs, A. A has rank r ≤ min(m, n).
Define the term-term correlation matrix T = A A^t, where A^t denotes the transpose of A; T is a square, symmetric m × m matrix.
Define the doc-doc correlation matrix D = A^t A, a square, symmetric n × n matrix.

Why? Eigenvectors
Denote by P the m × r matrix of eigenvectors of T.
Denote by R the n × r matrix of eigenvectors of D.
It turns out A can be expressed (decomposed) as A = P Q R^t, where Q is a diagonal matrix whose entries are the singular values of A (the square roots of the eigenvalues of A A^t), in sorted order.

Visualization
[Figure: A (m × n) factored as the product of P (m × r), Q (r × r), and R^t (r × n).]

Dimension reduction
For some s << r, zero out all but the s biggest entries in Q; denote by Q_s this new version of Q.
Typically s is in the hundreds, while r could be in the tens of thousands.
Let A_s = P Q_s R^t. It turns out A_s is a pretty good approximation to A; we'll explain what this means.

Visualization
[Figure: A_s expressed as the product of P, Q_s (with all but the top s diagonal entries zeroed out), and R^t.]
The columns of A_s represent the docs, but in s << m dimensions.
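To make the decomposition and truncation concrete, here is a small sketch (not from the lecture) using numpy. numpy's SVD returns A = U diag(sigma) Vt, where U, the diagonal of sigma, and Vt play the roles of P, Q, and R^t above; zeroing all but the s largest singular values yields A_s. The function name and the particular choice of reduced doc representation are illustrative assumptions.

```python
import numpy as np

def lsi_approximation(A, s):
    """Rank-s approximation A_s = P Q_s R^t of the term-doc matrix A.

    A: m x n term-document matrix (terms are rows, docs are columns).
    s: number of singular values to keep (s << rank of A).
    numpy's U, sigma, Vt correspond to P, the diagonal of Q, and R^t.
    """
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    sigma_s = sigma.copy()
    sigma_s[s:] = 0.0                               # keep only the s biggest values
    A_s = U @ np.diag(sigma_s) @ Vt                 # columns approximate the original docs
    docs_reduced = np.diag(sigma[:s]) @ Vt[:s, :]   # one s-dimensional column per doc
    return A_s, docs_reduced
```

Handling queries in the new space then amounts to mapping the query vector into the same s-dimensional representation before computing cosines there.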
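For comparison with the data-dependent LSI sketch just shown, here is a corresponding sketch of the data-independent random projection W = R A described earlier in the lecture. It is an illustrative assumption rather than the lecture's procedure verbatim: instead of picking orthogonal random directions one at a time, it builds the k × m orthogonal matrix R by QR-factorizing a Gaussian random matrix, which yields the same kind of projection.

```python
import numpy as np

def random_projection(A, k, seed=0):
    """Project the m x n term-doc matrix A down to k dimensions: W = R A.

    R is a k x m matrix with orthonormal rows (assumes k <= m), obtained as
    the transpose of the Q factor of the QR decomposition of a random
    m x k Gaussian matrix.
    """
    m, _ = A.shape
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((m, k))      # k random directions in m-dim space
    Q, _ = np.linalg.qr(G)               # m x k matrix with orthonormal columns
    R = Q.T                              # k x m orthogonal projection matrix
    return R @ A                         # column j: doc j in k << m dimensions
```

Forming R @ A here is exactly the kmn multiplications counted under "Cost of computation" above.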