UT Dallas CS 6350 - 05.Lecture_on_HadoopBigData#3
Big Data Mining and Analytics: MapReduce Algorithm Design
Dr. Latifur Khan, Department of Computer Science, University of Texas at Dallas

Based on Chapter 3 of Jimmy Lin and Chris Dyer, Data-Intensive Text Processing with MapReduce, Morgan & Claypool Publishers, 2010. http://lintool.github.com/MapReduceAlgorithms/
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.
Source: Wikipedia (Mahout)

MapReduce Recap
Programmers must specify:
- map(k, v) -> <k', v'>*
- reduce(k', <v'>*) -> <k'', v''>*
All values with the same key are reduced together. Optionally, programmers also specify:
- partition(k', number of partitions) -> partition for k'. Often a simple hash of the key, e.g., hash(k') mod n; divides up the key space for parallel reduce operations.
- combine(k', <v'>*) -> <k', v'>*. "Mini-reducers" that run in memory after the map phase; used as an optimization to reduce network traffic.
The execution framework handles everything else.

[Figure: MapReduce dataflow: map, combine, partition, shuffle and sort (aggregate values by key), reduce.]

"Everything Else"
The execution framework handles everything else:
- Scheduling: assigns workers to map and reduce tasks
- Data distribution: moves processes to data
- Synchronization: gathers, sorts, and shuffles intermediate data
- Errors and faults: detects worker failures and restarts
You have limited control over data and execution flow; all algorithms must be expressed in m, r, c, p (map, reduce, combine, partition). You don't know:
- Where mappers and reducers run
- When a mapper or reducer begins or finishes
- Which input a particular mapper is processing
- Which intermediate key a particular reducer is processing

Tools for Synchronization
- Cleverly constructed data structures: bring partial results together
- Sort order of intermediate keys: control the order in which reducers process keys
- Partitioner: control which reducer processes which keys
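To make the map/combine/reduce contract concrete, here is a minimal word-count sketch in plain Python, simulating the phases outside Hadoop. The function names and the run() driver are illustrative, not part of any Hadoop API.

```python
from itertools import groupby

def map_fn(_, line):
    # map(k, v) -> <k', v'>*: emit (word, 1) for each word in the line
    for word in line.split():
        yield word, 1

def combine_fn(key, values):
    # mini-reducer: pre-sum counts in memory to cut network traffic
    yield key, sum(values)

def reduce_fn(key, values):
    # all values with the same key are reduced together
    yield key, sum(values)

def run(lines):
    intermediate = []
    for line in lines:                         # each line plays one map task
        pairs = sorted(map_fn(None, line))
        # run the combiner on this task's in-memory map output
        for key, group in groupby(pairs, key=lambda kv: kv[0]):
            intermediate.extend(combine_fn(key, [v for _, v in group]))
    intermediate.sort()                        # simulate shuffle and sort
    out = {}
    for key, group in groupby(intermediate, key=lambda kv: kv[0]):
        for k, v in reduce_fn(key, [v for _, v in group]):
            out[k] = v
    return out

print(run(["a b a", "b c"]))  # {'a': 2, 'b': 2, 'c': 1}
```

Note how the combiner and reducer bodies are identical here; that only works because summing is associative and commutative, a point the mean example later makes precise.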
- Preserving state in mappers and reducers: capture dependencies across multiple keys and values

Preserving State
In Hadoop there is one mapper (or reducer) object per task, so state can be preserved across calls:
- configure: API initialization hook
- map: one call per input key-value pair (reduce: one call per intermediate key)
- close: API cleanup hook

Scalable Hadoop Algorithms: Themes
- Avoid object creation: it is an inherently costly operation (garbage collection)
- Avoid buffering: heap size is limited, so buffering works for small datasets but won't scale

Importance of Local Aggregation
Ideal scaling characteristics:
- Twice the data, twice the running time
- Twice the resources, half the running time
Why can't we achieve this? Synchronization requires communication, and communication kills performance. Thus: avoid communication by reducing intermediate data via local aggregation. Combiners can help.

[Figure: mappers emit intermediates to partitioners, which feed reducers (combiners omitted here). Source: redrawn from a slide by Cloudera (cc). The MapReduce dataflow figure is then repeated.]

Word Count: Baseline. What's the impact of combiners?
Word Count: Version 1. Are combiners still needed?
Word Count: Version 2 (key idea: preserve state across input key-value pairs). Are combiners still needed?

Design Pattern for Local Aggregation
In-mapper combining: fold the functionality of the combiner into the mapper by preserving state across multiple map calls.
- Advantages: speed. Why is this faster than actual combiners?
- Disadvantages: explicit memory management is required, and there is potential for order-dependent bugs.

Combiner Design
Combiners and reducers share the same method signature.
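A plain-Python sketch of the in-mapper combining pattern described above, mimicking Hadoop's configure/map/close task lifecycle outside Hadoop. The class and method names are illustrative, not the Hadoop API.

```python
from collections import defaultdict

class InMapperCombiningMapper:
    """Word-count mapper that folds the combiner into the mapper by
    preserving a counts dictionary across map() calls (one object per
    task, as in Hadoop). Caveat from the slides: this requires explicit
    memory management, since the dictionary lives in the task's heap."""

    def __init__(self):
        # state preserved across multiple map() calls (configure hook)
        self.counts = defaultdict(int)

    def map(self, _, line):
        # accumulate locally instead of emitting (word, 1) per token
        for word in line.split():
            self.counts[word] += 1

    def close(self):
        # cleanup hook: emit one (word, partial_count) pair per distinct word
        for word, count in self.counts.items():
            yield word, count

m = InMapperCombiningMapper()
m.map(None, "a b a")
m.map(None, "b c")
print(sorted(m.close()))  # [('a', 2), ('b', 2), ('c', 1)]
```

This is faster than an actual combiner because no intermediate (word, 1) objects are created and serialized at all; the aggregation happens before emission.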
Combiners are optional optimizations:
- Sometimes reducers can serve as combiners; often they cannot
- They should not affect algorithm correctness
- They may be run 0, 1, or multiple times
Example: find the average of all integers associated with the same key.

Computing the Mean: Version 1. Why can't we use the reducer as the combiner?
Computing the Mean: Version 2. Why doesn't this work? (Averaging partial averages is wrong in general.)
Computing the Mean: Version 3. Fixed: emit partial (sum, count) pairs, which combine safely.
Computing the Mean: Version 4. Are combiners still needed?

Algorithm Design: Running Example
Term co-occurrence matrix for a text collection:
- M = N x N matrix (N = vocabulary size)
- Mij = number of times terms i and j co-occur in some context (for concreteness, let's say the context is a sentence)
Why? Distributional profiles are a way of measuring semantic distance, and semantic distance is useful for many language processing tasks.

MapReduce: Large Counting Problems
The term co-occurrence matrix for a text collection is a specific instance of a large counting problem:
- A large event space (number of terms)
- A large number of observations (the collection itself)
- Goal: keep track of interesting statistics about the events
Basic approach: mappers generate partial counts; reducers aggregate partial counts. How do we aggregate partial counts efficiently?

First Try: "Pairs"
- Each mapper takes a sentence: generate all co-occurring term pairs, and for all pairs emit ((a, b), count)
- Reducers sum up the counts associated with these pairs
- Use combiners!

Pairs: Pseudo-Code

Pairs: Analysis
- Advantages: easy to implement, easy to understand
- Disadvantages: lots of pairs to sort and shuffle around (upper bound?); not many opportunities for combiners to work

Another Try: "Stripes"
Idea: group together pairs into an associative array, e.g.
  (a, b) -> 1, (a, c) -> 2, (a, d) -> 5, (a, e) -> 3, (a, f) -> 2
becomes
  a -> { b: 1, c: 2, d: 5, e: 3, f: 2 }
- Each mapper takes a sentence: generate all co-occurring term pairs, and for each term emit a -> { b: count_b, c: count_c, ... }
- Reducers perform an element-wise sum of the associative arrays
(Key idea: cleverly constructed data structures bring partial results together.)
Example:
  a -> { b: 1, d: 5, e: 3 }
  a -> { b: 1, c: 2, d: 2, f: 2 }
  sum: a -> { b: 2, c: 2, d: 7, e: 3, f: 2 }
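A plain-Python sketch of the stripes idea just described: each mapper emits one associative array (stripe) per term, and the reducer performs the element-wise sum. Function names are illustrative, and tokenization is naive whitespace splitting; the whole sentence serves as the co-occurrence context, as in the running example.

```python
from collections import defaultdict

def stripes_map(sentence):
    # for each term, build a stripe: an associative array mapping each
    # co-occurring neighbor (any other token in the sentence) to a count
    terms = sentence.split()
    stripes = defaultdict(lambda: defaultdict(int))
    for i, a in enumerate(terms):
        for j, b in enumerate(terms):
            if i != j:
                stripes[a][b] += 1
    # emit (term, stripe) pairs
    return [(term, dict(stripe)) for term, stripe in stripes.items()]

def stripes_reduce(term, stripes):
    # element-wise sum of all associative arrays emitted for one term
    total = defaultdict(int)
    for stripe in stripes:
        for b, count in stripe.items():
            total[b] += count
    return term, dict(total)

# two "mappers", each handling one sentence
s1 = dict(stripes_map("a b c"))
s2 = dict(stripes_map("a b d"))
print(stripes_reduce("a", [s1["a"], s2["a"]]))
# ('a', {'b': 2, 'c': 1, 'd': 1})
```

Compared with pairs, each emitted value carries more information, so far fewer key-value pairs are shuffled, and a combiner can merge stripes the same way the reducer does.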
Stripes: Pseudo-Code

Stripes: Analysis
- Advantages: far less sorting and shuffling of key-value pairs; can make better use of combiners
- Disadvantages: more difficult to implement; the underlying object is more heavyweight; there is a fundamental limitation in terms of the size of the event space

Experimental setup: cluster of 38 cores. Data source: the Associated Press Worldstream (APW) portion of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed).

Relative Frequencies
How do we estimate relative frequencies from

