Index Construction
CS 707 (Prasad). Adapted from lectures by Prabhakar Raghavan (Yahoo! and Stanford) and Christopher Manning (Stanford).

Plan

Last lectures:
- Dictionary data structures
- Tolerant retrieval: wildcards, spell correction, Soundex
[figure: B-tree dictionary from the last lecture]

This time:
- Index construction

Index construction
- How do we construct an index?
- What strategies can we use with limited main memory?

Hardware basics
- Many design decisions in information retrieval are based on the characteristics of hardware, so we begin by reviewing hardware basics.
- Access to data in memory is much faster than access to data on disk.
- Disk seeks: no data is transferred from disk while the disk head is being positioned. Therefore, transferring one large chunk of data from disk to memory is faster than transferring many small chunks.
- Disk I/O is block-based: entire blocks are read and written, as opposed to smaller chunks. Block sizes range from 8 KB to 256 KB.

Hard disk geometry and terminology
[figure: disk platters, tracks, sectors, and read/write heads]

Hardware basics (continued)
- Servers used in IR systems now typically have several GB of main memory, sometimes tens of GB.
- Available disk space is two to three orders of magnitude larger.
- Fault tolerance is very expensive: it is much cheaper to use many regular machines than one fault-tolerant machine.

Hardware assumptions

  symbol   statistic                                            value
  s        average seek time                                    5 ms = 5 x 10^-3 s
  b        transfer time per byte (disk)                        0.02 µs = 2 x 10^-8 s
           processor's clock rate                               10^9 per second
  p        low-level operation (e.g., compare or swap a word)   0.01 µs = 10^-8 s
           size of main memory                                  several GB
           size of disk space                                   1 TB or more
           memory transfer time per byte                        5 ns

RCV1: our corpus for this lecture
- Shakespeare's collected works definitely aren't large enough for this purpose.
- The corpus we'll use isn't really large enough either, but it is publicly available and is at least a more plausible example.
- As an example for applying scalable index construction algorithms, we will use the Reuters RCV1 collection: approximately 1 GB of text, one year of Reuters newswire (part of 1996 and 1997).

A Reuters RCV1 document
[figure: sample RCV1 newswire story]

Reuters RCV1 statistics

  symbol   statistic                                       value
  N        documents                                       800,000
  L        avg. # tokens per document                      200
  M        terms (= word types)                            400,000
           avg. # bytes per token (incl. spaces/punct.)    6
           avg. # bytes per token (without spaces/punct.)  4.5
           avg. # bytes per term                           7.5
           non-positional postings                         100,000,000

Why 4.5 bytes per word token but 7.5 bytes per word type? Short words are by far the most frequent, so they pull down the average over tokens, while each distinct word is counted only once among the types.

Recall IIR1 index construction
- Documents are parsed to extract words, and these are saved with the document ID.

Doc 1: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.

Parsing yields one (term, docID) pair per token, in document order:

  (I,1) (did,1) (enact,1) (julius,1) (caesar,1) (I,1) (was,1) (killed,1) (i',1) (the,1) (capitol,1) (brutus,1) (killed,1) (me,1)
  (so,2) (let,2) (it,2) (be,2) (with,2) (caesar,2) (the,2) (noble,2) (brutus,2) (hath,2) (told,2) (you,2) (caesar,2) (was,2) (ambitious,2)

Key step
- After all documents have been parsed, the inverted file is sorted by terms. We focus on this sort step: we have 100M items to sort. (A runnable sketch of this parse-then-sort step follows below.)
- Sorted by term, then by docID within a term, the pairs above become:

  (ambitious,2) (be,2) (brutus,1) (brutus,2) (capitol,1) (caesar,1) (caesar,2) (caesar,2) (did,1) (enact,1)
  (hath,2) (I,1) (I,1) (i',1) (it,2) (julius,1) (killed,1) (killed,1) (let,2) (me,1)
  (noble,2) (so,2) (the,1) (the,2) (told,2) (you,2) (was,1) (was,2) (with,2)
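The two-document example above is small enough to run directly. A minimal Python sketch, assuming whitespace tokenization and lowercasing (helper names are ours, not from the lecture; a real tokenizer also handles punctuation, stemming, etc.):

```python
# Minimal sketch of the IIR1 construction above: emit one (term, docID)
# pair per token, then sort by term (and by docID within a term).
from itertools import groupby

docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol Brutus killed me",
    2: "So let it be with Caesar The noble Brutus hath told you Caesar was ambitious",
}

pairs = [(token.lower(), doc_id)
         for doc_id, text in docs.items()
         for token in text.split()]

pairs.sort()  # the key step: sort by term, then by docID

# Collapse runs of equal terms into postings lists: term -> [docIDs].
postings = {term: sorted({doc for _, doc in group})
            for term, group in groupby(pairs, key=lambda p: p[0])}

print(postings["caesar"])  # [1, 2]
print(postings["brutus"])  # [1, 2]
```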
Scaling index construction
- In-memory index construction does not scale. How can we construct an index for very large collections, taking into account the hardware constraints we just reviewed (memory, disk speed, etc.)?

Sort-based index construction
- As we build the index, we parse docs one at a time.
- While building the index, we cannot easily exploit compression tricks (you can, but it becomes much more complex).
- The final postings for any term are incomplete until the end.
- At 12 bytes per postings entry, this demands a lot of space for large collections: T = 100,000,000 in the case of RCV1.
- So we can do this in memory in 2008, but typical collections are much larger; e.g., the New York Times provides an index of 150 years of newswire.
- Thus: we need to store intermediate results on disk.

Same algorithm for disk?
- Can we use the same index construction algorithm (internal sorting algorithms) for larger collections, just using disk instead of memory?
- No: sorting T = 100,000,000 records on disk is too slow, because it requires too many disk seeks. We need an external sorting algorithm.

Bottleneck
- Parse and build postings entries one doc at a time.
- Now sort postings entries by term (then by doc within each term).
- Doing this with random disk seeks would be too slow: we must sort T = 100M records.
- If every comparison took 2 disk seeks, and N items could be sorted with N log2 N comparisons, how long would this take? (A worked estimate appears in the first sketch below.)

BSBI: blocked sort-based indexing (sorting with fewer disk seeks)
- 12-byte (4+4+4) records (term, doc, freq). These are generated as we parse docs.
- We must now sort 100M such 12-byte records by term.
- Define a block as 10M such records: we can easily fit a couple into memory, and we will have 10 such blocks to start with.
- Basic idea of the algorithm: accumulate postings for each block, sort the block, and write it to disk; then merge the blocks into one long sorted order. (Both phases are sketched below.)

Sorting 10 blocks of 10M records
- First, read each block and sort within it: quicksort takes 2N ln N expected steps, in our case 2 x (10M ln 10M) steps.
- Exercise: estimate the total time to read each block from disk and quicksort it. Ten times this estimate gives us 10 sorted runs of 10M records each.
- Done straightforwardly, this needs 2 copies of the data on disk, but we can optimize this.

How to merge the sorted runs?
- We can do binary merges, with a merge tree of ceil(log2 10) = 4 layers. During each layer, read runs into memory in blocks of 10M, merge, and write back.
[figure: binary merge tree over runs 1-4, with merged runs written back to disk]

- But it is more efficient to do an n-way merge, where you read from all blocks simultaneously (see the heapq-based sketch below), providing you read decent-sized chunks …
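Plugging the hardware assumptions into the bottleneck question above gives a sense of scale. A quick back-of-the-envelope computation, using the seek time from the hardware assumptions table:

```python
# Back-of-the-envelope for the bottleneck question: sorting N = 100M
# records where every comparison costs 2 disk seeks at s = 5 ms each.
from math import log2

N = 100_000_000                   # T, postings records for RCV1
SEEK = 5e-3                       # average seek time in seconds

comparisons = N * log2(N)         # ~ N log2 N comparisons
seconds = 2 * comparisons * SEEK  # 2 seeks per comparison

print(f"{comparisons:.2e} comparisons")               # ~2.66e+09
print(f"{seconds:.2e} s ~ {seconds/86400:.0f} days")  # ~2.66e+07 s, ~308 days
```

Close to a year of nothing but seek time, which is why the sort must be reorganized around large sequential reads and writes.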

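A minimal sketch of the first BSBI phase (accumulate a block, sort it in memory, write it to disk as one sorted run). Simplifying assumptions of ours, not from the lecture: postings arrive as (term, doc, freq) tuples and runs are pickled lists, standing in for the packed 12-byte binary records with integer termIDs that the real algorithm uses.

```python
# Sketch of BSBI phase 1: accumulate a block of postings, sort it in
# memory, and write it to disk as one sorted run.
import pickle

BLOCK_SIZE = 10_000_000  # ~10M postings per block, as in the lecture

def write_runs(postings_stream, block_size=BLOCK_SIZE):
    """Consume (term, doc, freq) tuples; return the sorted-run filenames."""
    run_names, block = [], []
    for posting in postings_stream:
        block.append(posting)
        if len(block) >= block_size:
            run_names.append(flush_run(block, len(run_names)))
            block = []
    if block:  # final partial block
        run_names.append(flush_run(block, len(run_names)))
    return run_names

def flush_run(block, run_id):
    block.sort()  # in-memory sort of one block (Timsort here; the slides assume quicksort)
    name = f"run{run_id}.pkl"
    with open(name, "wb") as f:
        pickle.dump(block, f)
    return name
```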

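Continuing the sketch, the n-way merge can lean on the standard-library heapq.merge, which keeps one candidate posting per run and yields postings in globally sorted order. For simplicity each pickled run is loaded whole; a real system would stream decent-sized chunks of each run instead:

```python
# Sketch of BSBI phase 2: an n-way merge of the sorted runs, reading
# from all runs simultaneously.
import heapq
import pickle

def merge_runs(run_names):
    """Lazily yield (term, doc, freq) in sorted order across all runs."""
    def read_run(name):
        with open(name, "rb") as f:
            for posting in pickle.load(f):
                yield posting
    yield from heapq.merge(*(read_run(n) for n in run_names))

# Usage with the phase-1 sketch above:
#   runs = write_runs(all_postings)
#   for term, doc, freq in merge_runs(runs):
#       ...append to the final postings file, collapsing equal terms...
```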