Berkeley COMPSCI 186 - Hash-Based Indexes

Hash-Based Indexes
R&G Chapter 10
Lecture 18

Administrivia
• The new Homework 3 is now available
– Due 1 week from Sunday
– Homework 4 available the week after
• Midterm exams available here

Review
• Last time we discussed Tree Indexes
– ISAM: static, efficient if the data never changes
– B-Trees: dynamic, good for changing data
– Both good for range queries, o.k. for equality
• Today: Hash Indexes
– Useless for range queries
– Best for equality queries

Review (2)
• As for any index, 3 alternatives for data entries k*:
– Data record with key value k
– <k, rid of data record with search key value k>
– <k, list of rids of data records with search key k>
– Choice orthogonal to the indexing technique
• Hash-based indexes are best for equality selections. Cannot support range searches.

Today: Hashing
• Static and dynamic hashing techniques exist; trade-offs similar to ISAM vs. B+ trees.
• Static Hashing
– Good if data never changes (like ISAM)
• Extendible Hashing
– Uses a directory to handle changing data
• Linear Hashing
– Avoids a directory, usually faster

Static Hashing
• # primary pages fixed, allocated sequentially, never de-allocated; overflow pages if needed.
• h(k) mod M = bucket to which data entry with key k belongs. (M = # of buckets)
[Figure: h(key) mod M selects one of the M primary bucket pages, 0 through M-1; full buckets chain to overflow pages.]

Static Hashing (Contd.)
• Buckets contain data entries.
• Hash fn works on the search key field of record r. Must distribute values over range 0 ... M-1.
– h(key) = (a * key + b) usually works well.
– a and b are constants; lots is known about how to tune h.
• Long overflow chains can develop and degrade performance.
– Extendible and Linear Hashing: dynamic techniques to fix this problem (see the sketch below).
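To make the static scheme concrete, here is a minimal sketch of a static hash index with overflow chains. PAGE_CAPACITY, the constants a and b, and all class/function names are assumptions for illustration, not code from the lecture.

```python
# A minimal sketch of static hashing with overflow chains.
# Constants and names are illustrative assumptions.

PAGE_CAPACITY = 4      # data entries per page (assumed)
a, b, M = 3, 7, 8      # h(key) = a*key + b; M = # of buckets (fixed)

def h(key):
    return a * key + b        # the slide's h; tune a and b for spread

def bucket_of(key):
    return h(key) % M         # h(key) mod M picks the bucket

class StaticHashIndex:
    def __init__(self):
        # bucket i is a chain of pages: [primary page, overflow pages...]
        self.buckets = [[[]] for _ in range(M)]

    def insert(self, key):
        chain = self.buckets[bucket_of(key)]
        if len(chain[-1]) == PAGE_CAPACITY:
            chain.append([])   # primary page full: allocate an overflow page
        chain[-1].append(key)

    def search(self, key):
        # equality search reads one bucket chain, never the whole file
        return any(key in page for page in self.buckets[bucket_of(key)])
```

Because M is fixed, a growing or skewed workload simply makes the overflow chains longer, which is exactly the problem the dynamic schemes below address.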
Extendible Hashing
• Situation: Bucket (primary page) becomes full.
• Why not re-organize the file by doubling the # of buckets?
– Reading and writing all pages is expensive!
• Idea: Use a directory of pointers to buckets,
– double the # of buckets by doubling the directory,
– splitting just the bucket that overflowed!
– Directory is much smaller than the file, so doubling is much cheaper.
– No overflow pages!
– The trick lies in how the hash function is adjusted!

Extendible Hashing Details
• Need a directory with a pointer to each bucket
• Need a hash function that can incrementally double its range
– can just use increasingly more LSBs of h(key)
• Must keep track of the global "depth"
– how many times the directory has doubled so far
• Must keep track of each bucket's local "depth"
– how many times that bucket has been split

Example
• Directory is an array of size 4.
• To find the bucket for r, take the last `global depth' # of bits of h(r); we denote r by h(r).
– If h(r) = 5 = binary 101, it is in the bucket pointed to by 01.
• Insert: If the bucket is full, split it (allocate a new page, re-distribute). If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing global depth with local depth for the split bucket.)
[Figure: directory of size 4 (slots 00, 01, 10, 11) with global depth 2, pointing to data pages Bucket A (4*, 12*, 32*, 16*), Bucket B (1*, 5*, 21*, 13*), Bucket C (10*), and Bucket D (15*, 7*, 19*), each with local depth 2.]

Insert h(r)=20 (Causes Doubling)
[Figures: before and after inserting 20*. Bucket A splits into Bucket A (32*, 16*) and its `split image' Bucket A2 (4*, 12*, 20*); the directory doubles from 4 slots to 8 (000 through 111), global depth becomes 3, and the local depth of A and A2 becomes 3.]

Now Insert h(r)=9 (Causes Split Only)
[Figure: Bucket B splits into Bucket B (1*, 9*) and its split image Bucket B2 (5*, 21*, 13*), both with local depth 3; the directory stays at 8 slots.]

Points to Note
• 20 = binary 10100. The last 2 bits (00) tell us r belongs in A or A2. The last 3 bits are needed to tell which.
– Global depth of directory: max # of bits needed to tell which bucket an entry belongs to.
– Local depth of a bucket: # of bits used to determine which entries belong to that bucket; comparing it with the global depth tells whether splitting the bucket will also double the directory.
• When does a bucket split cause directory doubling?
– If, before the insert, local depth of the bucket = global depth.
• The insert causes local depth to become > global depth;
• the directory is doubled by copying it over and `fixing' the pointer to the split image page.
• (Use of least significant bits enables efficient doubling via copying of the directory!)

Directory Doubling
• Why use the least significant bits in the directory? ⇒ Allows for doubling via copying!
[Figure: the 4-slot directory (00-11) doubles to 8 slots (000-111) by appending a copy of itself; each new slot initially points to the same bucket as the slot it was copied from.]
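A compact sketch of the insert path may help tie global depth, local depth, splitting, and doubling together. All names (Bucket, ExtendibleHashIndex, PAGE_CAPACITY) and the identity hash are assumptions for illustration; this is not the lecture's or the book's code.

```python
# A compact sketch of extendible hashing insert/search. The directory
# is indexed by the last `global_depth` bits of h(key); each bucket
# remembers its own local depth. Names and constants are assumptions.

PAGE_CAPACITY = 4          # data entries per bucket page (assumed)

def h(key):
    return key             # identity hash, as in the lecture's examples

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.entries = []

class ExtendibleHashIndex:
    def __init__(self):
        self.global_depth = 2
        self.directory = [Bucket(2) for _ in range(4)]   # slots 00..11

    def _slot(self, key):
        # last `global_depth` bits of h(key)
        return h(key) & ((1 << self.global_depth) - 1)

    def search(self, key):
        return key in self.directory[self._slot(key)].entries

    def insert(self, key):
        bucket = self.directory[self._slot(key)]
        if len(bucket.entries) < PAGE_CAPACITY:
            bucket.entries.append(key)
            return
        # Bucket full. If its local depth already equals the global
        # depth, double the directory first (by copying -- LSB trick).
        if bucket.local_depth == self.global_depth:
            self.directory += self.directory
            self.global_depth += 1
        # Split: allocate the split image and redistribute the entries
        # on one additional bit of h(key).
        bucket.local_depth += 1
        bit = bucket.local_depth - 1          # the distinguishing bit
        image = Bucket(bucket.local_depth)
        for i, slot in enumerate(self.directory):
            if slot is bucket and (i >> bit) & 1:
                self.directory[i] = image     # re-point half the slots
        old, bucket.entries = bucket.entries, []
        for k in old:
            (image if (h(k) >> bit) & 1 else bucket).entries.append(k)
        # Retry; note the slides' caveat below: many entries with the
        # same hash value can force repeated doubling.
        self.insert(key)
```

With this sketch, inserting 4, 12, 32, 16 and then 20 reproduces the doubling example above: global depth goes from 2 to 3, and Bucket A splits into (32*, 16*) and split image (4*, 12*, 20*).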
Deletion
• Delete: If the removal of a data entry makes a bucket empty, the bucket can be merged with its `split image'. If each directory element points to the same bucket as its split image, the directory can be halved.
[Figure: Delete 10 — removing 10* empties Bucket C, which is merged with its split image; the merged bucket's local depth drops to 1 while the directory keeps 4 slots.]
[Figure: Delete 15 — removing 15* empties its bucket, which is likewise merged with its split image; every directory slot now points to the same bucket as its split image, so the directory is halved to 2 slots (0 and 1).]

Comments on Extendible Hashing
• If the directory fits in memory, an equality search is answered with one disk access; else two.
– A 100MB file with 100 bytes/rec and 4K pages contains 1,000,000 records (as data entries) and 25,000 directory elements; chances are high that the directory will fit in memory.
– The directory grows in spurts, and, if the distribution of hash values is skewed, the directory can grow large.
• Biggest problem:
– Multiple entries with the same hash value cause problems!
– If a bucket is already full of entries with the same hash value, it will keep doubling forever!

Linear Hashing
• Another dynamic hashing scheme, an alternative to Extendible Hashing.
• LH handles the problem of long overflow chains without using a directory, and handles duplicates.
• Idea: Use a family of hash functions h_0, h_1, h_2, ...
– h_i(key) = h(key) mod (2^i * N); N = initial # of buckets
– h is some hash function (its range is not 0 to N-1)
– If N = 2^d0 for some d0, h_i consists of applying h and looking at the last d_i bits, where d_i = d0 + i.
– h_{i+1} doubles the range of h_i (similar to directory doubling); see the sketch below.
• Let's start with N = 4 buckets
– Start at "round" 0, "next" 0; have 2^round * N buckets
– Each time any bucket fills, split the "next" bucket
– If (0 ≤ h_round(key) < ...
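The hash family itself can be checked in a few lines. N = 4, the identity base hash, and the h_i helper name are assumptions matching the slide's running example.

```python
# A small sketch of the linear-hashing family h_0, h_1, h_2, ...
# h_i(key) = h(key) mod (2^i * N), so h_{i+1} doubles the range of
# h_i; with N = 2^d0 a power of two, h_i just keeps the last
# d0 + i bits of h(key). N and the base hash are assumed values.

N = 4                # initial number of buckets (N = 2^d0 with d0 = 2)

def h(key):
    return key       # stand-in base hash function

def h_i(key, i):
    # equivalently, for power-of-two N: h(key) & ((N << i) - 1)
    return h(key) % (2 ** i * N)

# h_0 keeps the last 2 bits, h_1 the last 3, h_2 the last 4:
assert h_i(13, 0) == 0b01      # 13 = 0b1101
assert h_i(13, 1) == 0b101
assert h_i(13, 2) == 0b1101
```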

