CS 6322: Information Retrieval
Sanda Harabagiu
Lecture 5: Index Compression

Last lecture – index construction
- Sort-based indexing
  - Naïve in-memory inversion
  - Blocked Sort-Based Indexing
    - Merge sort is effective for disk-based sorting (avoid seeks!)
- Single-Pass In-Memory Indexing
  - No global dictionary: generate a separate dictionary for each block
  - Don't sort postings: accumulate postings in postings lists as they occur
- Distributed indexing using MapReduce
- Dynamic indexing: multiple indices, logarithmic merge

Today (Ch. 5)
- Collection statistics in more detail (with RCV1)
  - How big will the dictionary and postings be?
- Dictionary compression
- Postings compression

Why compression (in general)? (Ch. 5)
- Use less disk space: saves a little money
- Keep more stuff in memory: increases speed
- Increase the speed of data transfer from disk to memory
  - [read compressed data | decompress] is faster than [read uncompressed data]
  - Premise: decompression algorithms are fast. True of the decompression algorithms we use.

Why compression for inverted indexes? (Ch. 5)
- Dictionary
  - Make it small enough to keep in main memory
  - Make it so small that you can keep some postings lists in main memory too
- Postings file(s)
  - Reduce the disk space needed
  - Decrease the time needed to read postings lists from disk
  - Large search engines keep a significant part of the postings in memory; compression lets you keep more in memory
- We will devise various IR-specific compression schemes

Recall Reuters RCV1 (Sec. 5.1)

symbol | statistic                                      | value
N      | documents                                      | 800,000
L      | avg. # tokens per doc                          | 200
M      | terms (= word types)                           | ~400,000
       | avg. # bytes per token (incl. spaces/punct.)   | 6
       | avg. # bytes per token (without spaces/punct.) | 4.5
       | avg. # bytes per term                          | 7.5
       | non-positional postings                        | 100,000,000

Index parameters vs. what we index (details: IIR Table 5.1, p. 80) (Sec. 5.1)

                | dictionary             | non-positional index   | positional index
                | size (K)  Δ%  cumul %  | size (K)  Δ%  cumul %  | size (K)  Δ%  cumul %
unfiltered      | 484                    | 109,971                | 197,879
no numbers      | 474       -2   -2      | 100,680   -8   -8      | 179,158   -9   -9
case folding    | 392      -17  -19      |  96,969   -3  -12      | 179,158    0   -9
30 stopwords    | 391       -0  -19      |  83,390  -14  -24      | 121,858  -31  -38
150 stopwords   | 391       -0  -19      |  67,002  -30  -39      |  94,517  -47  -52
stemming        | 322      -17  -33      |  63,812   -4  -42      |  94,517    0  -52

Exercise: give intuitions for all the '0' entries. Why do some zero entries correspond to big deltas in other columns?

Lossless vs. lossy compression (Sec. 5.1)
- Lossless compression: all information is preserved. What we mostly do in IR.
- Lossy compression: discard some information.
  - Several of the preprocessing steps can be viewed as lossy compression: case folding, stop words, stemming, number elimination.
  - Chap/Lecture 7: prune postings entries that are unlikely to turn up in the top k list for any query. Almost no loss of quality for the top k list.

Vocabulary vs. collection size (Sec. 5.1)
- How big is the term vocabulary? That is, how many distinct words are there?
- Can we assume an upper bound? Not really: there are at least 70^20 = 10^37 different words of length 20.
- In practice, the vocabulary will keep growing with the collection size (especially with Unicode ☺).

Heaps' law (Sec. 5.1)
- Heaps' law: M = k * T^b
  - M is the size of the vocabulary, T is the number of tokens in the collection
- Typical values: 30 ≤ k ≤ 100 and b ≈ 0.5
- In a log-log plot of vocabulary size M vs. T, Heaps' law predicts a line with slope of about ½
  - It is the simplest possible relationship between the two in log-log space
- An empirical finding ("empirical law")

Heaps' law for Reuters RCV1 (Fig. 5.1, p. 81) (Sec. 5.1)
- For RCV1, the dashed line log10 M = 0.49 * log10 T + 1.64 is the best least-squares fit.
- Thus M = 10^1.64 * T^0.49, so k = 10^1.64 ≈ 44 and b = 0.49.
- A good empirical fit for Reuters RCV1!
- For the first 1,000,020 tokens, the law predicts 38,323 terms; 38,365 terms were actually observed.
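A quick way to see what this fit claims is to plug the rounded constants k = 10^1.64 and b = 0.49 into Heaps' law; a minimal Python sketch (not part of the original slides):

```python
# Minimal sketch of the RCV1 Heaps' law fit above (not from the slides).

def heaps_vocab_size(T, k=10**1.64, b=0.49):
    """Predicted vocabulary size M = k * T^b for a collection of T tokens."""
    return k * T**b

# The slide's check point: the first 1,000,020 tokens of RCV1.
print(round(heaps_vocab_size(1_000_020)))  # ~38,000 with these rounded constants
```

With the rounded constants this prints about 38,000; the slide's 38,323 comes from the unrounded least-squares parameters, and both are close to the 38,365 terms actually observed.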
collection size How big is the term vocabulary? That is, how many distinct words are there? Can we assume an upper bound? Not really: At least 7020= 1037different words of length 20 In practice, the vocabulary will keep growing with the collection size Especially with Unicode ☺Sec. 5.1CS 6322: Information RetrievalCS 6322: Information RetrievalVocabulary vs. collection size Heaps’ law: M = kTb M is the size of the vocabulary, T is the number of tokens in the collection Typical values: 30 ≤ k ≤ 100 and b ≈ 0.5 In a log-log plot of vocabulary size M vs. T, Heaps’ law predicts a line with slope about ½ It is the simplest possible relationship between the two in log-log space An empirical finding (“empirical law”)Sec. 5.1CS 6322: Information RetrievalCS 6322: Information RetrievalHeaps’ LawFor RCV1, the dashed linelog10M = 0.49 log10T + 1.64is the best least squares fit.Thus, M = 101.64T0.49so k = 101.64 ≈ 44 and b = 0.49.Good empirical fit for Reuters RCV1 !For first 1,000,020 tokens,law predicts 38,323 terms;actually, 38,365 termsFig 5.1 p81Sec. 5.1CS 6322: Information RetrievalCS 6322: Information RetrievalExercises What is the effect of including spelling errors, vs. automatically correcting spelling errors on Heaps’ law? Compute the vocabulary size M for this scenario: Looking at a collection of web pages, you find that there are 3000 different terms in the first 10,000 tokens and 30,000 different terms in the first 1,000,000 tokens. Assume a search engine indexes a total of 20,000,000,000 (2 × 1010) pages, containing 200 tokens on average What is the size of the vocabulary of the indexed collection as predicted by Heaps’ law?Sec. 5.1CS 6322: Information RetrievalCS 6322: Information RetrievalZipf’s law Heaps’ law gives the vocabulary size in collections. We also study the relative frequencies of terms. In natural language, there are a few very frequent terms and very many very rare terms. Zipf’s law: The ith most frequent term has frequency proportional to 1/i . cfi∝ 1/i = K/i where K is a normalizing constant cfiis collection frequency: the number of occurrences of the term tiin the collection.Sec. 5.1CS 6322: Information RetrievalCS 6322: Information RetrievalZipf consequences If the most frequent term (the) occurs cf1times  then the second most frequent term (of) occurs cf1/2 times the third most frequent term (and) occurs cf1/3 times …  Equivalent: cfi= K/i where K is a normalizing factor, so log cfi= log K - log i Linear relationship between log cfiand log i Another power law relationshipSec. 5.1CS 6322 Information RetrievalCS 6322 Information RetrievalZipf’s law for Reuters RCV115Sec. 5.1CS 6322: Information RetrievalCS 6322: Information RetrievalCompression Now, we will consider compressing the space for the dictionary and postings Basic Boolean index only No study of positional indexes, etc. We will consider compression schemesCh. 5CS
Zipf's law (Sec. 5.1)
- Heaps' law gives the vocabulary size in collections.
- We also study the relative frequencies of terms.
- In natural language, there are a few very frequent terms and very many very rare terms.
- Zipf's law: the i-th most frequent term has frequency proportional to 1/i.
  - cf_i ∝ 1/i, i.e., cf_i = K/i where K is a normalizing constant.
  - cf_i is the collection frequency: the number of occurrences of the term t_i in the collection.

Zipf consequences (Sec. 5.1)
- If the most frequent term (the) occurs cf_1 times,
  - then the second most frequent term (of) occurs cf_1/2 times,
  - the third most frequent term (and) occurs cf_1/3 times, ...
- Equivalent: cf_i = K/i where K is a normalizing factor, so log cf_i = log K - log i
  - Linear relationship between log cf_i and log i
  - Another power-law relationship (illustrated in the sketch below)
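A small sketch of these consequences (illustrative only; the count assumed for the top term is made up, not an RCV1 statistic):

```python
# Zipf's law sketch: cf_i = K / i, so log cf_i = log K - log i.
import math

K = 1_000_000  # hypothetical collection frequency cf_1 of the most frequent term

for i, term in enumerate(["the", "of", "and"], start=1):
    cf = K / i                                  # Zipf: cf_i = K / i
    print(term, int(cf), round(math.log10(cf), 3))
# the 1000000 6.0
# of 500000 5.699
# and 333333 5.523
# log cf_i drops by log i, so the points lie on a slope -1 line in log-log space.
```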
Zipf's law for Reuters RCV1 (Sec. 5.1)
[Figure: Zipf's law plot for Reuters RCV1]

Compression (Ch. 5)
- Now, we will consider compressing the space for the dictionary and postings
- Basic Boolean index only
- No study of positional indexes, etc.
- We will consider compression schemes