Carnegie Mellon
Thread-Level Parallelism
15-213: Introduction to Computer Systems, 26th Lecture, Nov. 30, 2010
Instructors: Randy Bryant and Dave O'Hallaron

Today
- Parallel computing hardware
  - Multicore: multiple separate processors on a single chip
  - Hyperthreading: replicated instruction execution hardware in each processor
  - Maintaining cache consistency
- Thread-level parallelism
  - Splitting a program into independent tasks
  - Example: parallel summation
  - Some performance artifacts
  - Divide-and-conquer parallelism
  - Example: parallel quicksort

Multicore Processor
- Cores 0 through n-1, each with its own registers, L1 d-cache, L1 i-cache, and unified L2 cache
- An L3 unified cache shared by all cores, backed by main memory
- E.g., the Intel Nehalem processor (the Shark machines)
- Multiple processors operating with a coherent view of memory

Memory Consistency

    int a = 1;
    int b = 100;

    Thread1:          Thread2:
    Wa: a = 2;        Wb: b = 200;
    Rb: print(b);     Ra: print(a);

- Thread consistency constraints: Wa before Rb, and Wb before Ra
- What are the possible values printed? Depends on the memory consistency model: the abstract model of how the hardware handles concurrent accesses
- Sequential consistency: the overall effect is consistent with each individual thread's order; otherwise the interleaving is arbitrary

Sequential Consistency Example
Given the constraints Wa before Rb and Wb before Ra, the possible interleavings and printed values are:
- Wa, Rb, Wb, Ra -> prints 100, 2
- Wa, Wb, Rb, Ra -> prints 200, 2
- Wa, Wb, Ra, Rb -> prints 2, 200
- Wb, Wa, Rb, Ra -> prints 200, 2
- Wb, Wa, Ra, Rb -> prints 2, 200
- Wb, Ra, Wa, Rb -> prints 1, 200
Impossible outputs: 100, 1 and 1, 100 -- these would require reaching both Ra and Rb before Wa and Wb.

Non-Coherent Cache Scenario
Write-back caches, without coordination between them:
- Thread1's write a = 2 stays in Thread1's cache, and Thread2's write b = 200 stays in Thread2's cache; main memory still holds a = 1, b = 100
- Thread1 reads the stale b = 100 and prints 100; Thread2 reads the stale a = 1 and prints 1 -- an outcome sequential consistency forbids

Snoopy Caches
- Tag each cache block with a state:
  - Invalid: cannot use value
  - Shared: readable copy
  - Exclusive: writeable copy
- After the writes, Thread1's cache holds a = 2 tagged E, Thread2's cache holds b = 200 tagged E, and main memory still holds a = 1, b = 100
- When a cache sees a request for one of its E-tagged blocks, it supplies the value from its cache and sets the tag to S
- So Thread2's Ra gets a = 2 from Thread1's cache (both copies become S) and prints 2; Thread1's Rb gets b = 200 from Thread2's cache and prints 200 -- a sequentially consistent outcome

Out-of-Order Processor Structure
- Instruction control: instruction cache, PC, instruction decoder, registers, and an operation queue
- Functional units: two integer arithmetic units, an FP arithmetic unit, and a load/store unit attached to the data cache
- Instruction control dynamically converts the program into a stream of operations
- Operations are mapped onto functional units to execute in parallel

Hyperthreading
- Replicate enough instruction control to process K instruction streams: K copies of the PC, operation queue, and all registers (PC A / PC B, Op Queue A / Op Queue B, Reg A / Reg B for K = 2)
- Share the functional units

Summary: Creating Parallel Machines
- Multicore
  - Separate instruction logic and functional units
  - Some shared, some private caches
  - Must implement cache coherency
- Hyperthreading
  - Also called simultaneous multithreading
  - Separate program state; shared functional units and caches
  - No special control needed for coherency
- Combining: the Shark machines have 8 cores, each with 2-way hyperthreading -- a theoretical speedup of 16X, never achieved in our benchmarks

Summation Example
- Sum the numbers 0, ..., N-1; should add up to (N-1)*N/2
- Partition into K ranges of floor(N/K) values each; accumulate leftover values serially
- Method 1: all threads update a single global variable
  - 1A: no synchronization
  - 1B: synchronize with a pthread semaphore
  - 1C: synchronize with a pthread mutex
- "Binary" semaphore: takes only the values 0 and 1
Accumulating in a Single Global Variable: Declarations

```c
typedef unsigned long data_t;

/* Single accumulator */
volatile data_t global_sum;

/* Mutex & semaphore for global sum */
sem_t semaphore;
pthread_mutex_t mutex;

/* Number of elements summed by each thread */
size_t nelems_per_thread;

/* Keep track of thread IDs */
pthread_t tid[MAXTHREADS];

/* Identify each thread */
int myid[MAXTHREADS];
```

Accumulating in a Single Global Variable: Operation

```c
nelems_per_thread = nelems / nthreads;

/* Set global value */
global_sum = 0;

/* Create threads and wait for them to finish */
for (i = 0; i < nthreads; i++) {
    myid[i] = i;
    Pthread_create(&tid[i], NULL, thread_fun, &myid[i]);
}
for (i = 0; i < nthreads; i++)
    Pthread_join(tid[i], NULL);
result = global_sum;

/* Add leftover elements */
for (e = nthreads * nelems_per_thread; e < nelems; e++)
    result += e;
```

Thread Function: No Synchronization

```c
void *sum_race(void *vargp)
{
    int myid = *((int *)vargp);
    size_t start = myid * nelems_per_thread;
    size_t end = start + nelems_per_thread;
    size_t i;

    for (i = start; i < end; i++) {
        global_sum += i;
    }
    return NULL;
}
```

Unsynchronized Performance
- N = 2^30
- Best speedup: 2.86X
- Gets the wrong answer when using more than 1 thread

Thread Function: Semaphore / Mutex

Semaphore version:

```c
void *sum_sem(void *vargp)
{
    int myid = *((int *)vargp);
    size_t start = myid * nelems_per_thread;
    size_t end = start + nelems_per_thread;
    size_t i;

    for (i = start; i < end; i++) {
        sem_wait(&semaphore);
        global_sum += i;
        sem_post(&semaphore);
    }
    return NULL;
}
```

Mutex version (inner loop):

```c
pthread_mutex_lock(&mutex);
global_sum += i;
pthread_mutex_unlock(&mutex);
```

Semaphore / Mutex Performance
- Terrible performance: from 2.5 seconds to about 10 minutes
- The mutex is 3X faster than the semaphore
- Clearly, neither is successful

Separate Accumulation
- Method 2: each thread accumulates into a separate variable
  - 2A: accumulate in contiguous array elements
  - 2B: accumulate in spaced-apart array elements
  - 2C: accumulate in registers

```c
/* Partial sum computed by each thread */
data_t psum[MAXTHREADS * MAXSPACING];

/* Spacing between accumulators */
size_t spacing = 1;
```

Separate Accumulation: Operation
```c
nelems_per_thread = nelems / nthreads;

/* Create threads and wait for them to finish */
for (i = 0; i < nthreads; i++) {
    myid[i] = i;
    psum[i * spacing] = 0;
    Pthread_create(&tid[i], NULL, thread_fun, &myid[i]);
}
for (i = 0; i < nthreads; i++)
    Pthread_join(tid[i], NULL);
result = 0;

/* Add up the partial sums computed by each thread */
for (i = 0; i < nthreads; i++)
    result += psum[i * spacing];

/* Add leftover elements */
for (e = nthreads * nelems_per_thread; e < nelems; e++)
    result += e;
```

Thread Function: Memory Accumulation

```c
void *sum_global(void *vargp)
{
    int myid = *((int *)vargp);
    size_t start = myid * nelems_per_thread;
    size_t end = start + nelems_per_thread;
    size_t i;
    size_t index = myid * spacing;

    psum[index] = 0;
    for (i = start; i < end; i++) {
        psum[index] += i;
    }
    return NULL;
}
```