Carnegie Mellon
Thread-Level Parallelism
15-213: Introduction to Computer Systems, 26th Lecture, Nov. 30, 2010
Instructors: Randy Bryant and Dave O'Hallaron

Today
- Parallel computing hardware
  - Multicore: multiple separate processors on a single chip
  - Hyperthreading: replicated instruction execution hardware in each processor
  - Maintaining cache consistency
- Thread-level parallelism
  - Splitting a program into independent tasks
  - Example: parallel summation
  - Some performance artifacts
  - Divide-and-conquer parallelism
  - Example: parallel quicksort

Multicore Processor
- Each core (Core 0, ..., Core n-1) has its own registers, L1 d-cache, L1 i-cache, and unified L2 cache
- An L3 unified cache is shared by all cores, backed by main memory
- E.g., the Intel Nehalem processor in the Shark machines
- Multiple processors operating with a coherent view of memory

Memory Consistency

  int a = 1;
  int b = 100;

  Thread1:           Thread2:
  Wa: a = 2;         Wb: b = 200;
  Rb: print(b);      Ra: print(a);

- Thread consistency constraints: Wa before Rb; Wb before Ra
- What are the possible values printed? Depends on the memory consistency model: the abstract model of how hardware handles concurrent accesses
- Sequential consistency: the overall effect is consistent with each individual thread; otherwise the interleaving is arbitrary

Sequential Consistency Example
- Under the constraints Wa before Rb and Wb before Ra, the possible interleavings and printed values are:
    Wa, Rb, Wb, Ra  ->  prints 100, 2
    Wa, Wb, Rb, Ra  ->  prints 200, 2
    Wa, Wb, Ra, Rb  ->  prints 2, 200
    Wb, Wa, Rb, Ra  ->  prints 200, 2
    Wb, Wa, Ra, Rb  ->  prints 2, 200
    Wb, Ra, Wa, Rb  ->  prints 1, 200
- Impossible outputs: 100, 1 and 1, 100; either would require reaching both Ra and Rb before Wa and Wb

Non-Coherent Cache Scenario
- Write-back caches, without coordination between them
- After both writes: Thread1's cache holds a = 2, b = 100; Thread2's cache holds a = 1, b = 200; main memory still holds a = 1, b = 100
- Each thread then reads the other's stale value: Thread2 prints 1 and Thread1 prints 100, an outcome sequential consistency forbids

Snoopy Caches
- Tag each cache block with a state:
    Invalid:    cannot use value
    Shared:     readable copy
    Exclusive:  writeable copy
- After the writes, Thread1's cache holds a = 2 (E) and Thread2's cache holds b = 200 (E); main memory still holds a = 1, b = 100
- When a cache sees a request for one of its E-tagged blocks, it supplies the value from the cache and sets the tag to S
- The reads therefore see the up-to-date values: Thread1 prints 200 and Thread2 prints 2, with a = 2 and b = 200 now held Shared (S) by both caches

Out-of-Order Processor Structure
- Instruction control: instruction cache, PC, instruction decoder, registers, and an operation queue
- Functional units: two integer arithmetic units, an FP arithmetic unit, and a load/store unit with a data cache
- Instruction control dynamically converts the program into a stream of operations
- Operations are mapped onto the functional units to execute in parallel

Hyperthreading
- Replicate enough instruction control to process K instruction streams (PC A / PC B, op queue A / op queue B, register sets A / B)
- K copies of all registers
- Share the functional units

Summary: Creating Parallel Machines
- Multicore: separate instruction logic and functional units; some shared, some private caches; must implement cache coherency
- Hyperthreading: also called "simultaneous multithreading"; separate program state, but shared functional units and caches; no special control needed for coherency
- Combining: the Shark machines have 8 cores, each with 2-way hyperthreading, for a theoretical speedup of 16X, never achieved in our benchmarks

Summation Example
- Sum the numbers 0, ..., N-1; the total should add up to (N-1)N/2
- Partition into K ranges of floor(N/K) values each; accumulate the leftover values serially
- Method 1: all threads update a single global variable
    1A: no synchronization
    1B: synchronize with a pthread semaphore
    1C: synchronize with a pthread mutex (a "binary" semaphore, taking only the values 0 and 1)

Accumulating in a Single Global Variable
Declarations:

  typedef unsigned long data_t;

  /* Single accumulator */
  volatile data_t global_sum;

  /* Mutex & semaphore for global sum */
  sem_t semaphore;
  pthread_mutex_t mutex;

  /* Number of elements summed by each thread */
  size_t nelems_per_thread;

  /* Keep track of thread IDs */
  pthread_t tid[MAXTHREADS];

  /* Identify each thread */
  int myid[MAXTHREADS];

Accumulating in a Single Global Variable: Operation

  nelems_per_thread = nelems / nthreads;

  /* Set global value */
  global_sum = 0;

  /* Create threads and wait for them to finish */
  for (i = 0; i < nthreads; i++) {
      myid[i] = i;
      Pthread_create(&tid[i], NULL, thread_fun, &myid[i]);
  }
  for (i = 0; i < nthreads; i++)
      Pthread_join(tid[i], NULL);
  result = global_sum;

  /* Add leftover elements */
  for (e = nthreads * nelems_per_thread; e < nelems; e++)
      result += e;

Thread Function: No Synchronization

  void *sum_race(void *vargp)
  {
      int myid = *((int *)vargp);
      size_t start = myid * nelems_per_thread;
      size_t end = start + nelems_per_thread;
      size_t i;
      for (i = start; i < end; i++)
          global_sum += i;
      return NULL;
  }

Unsynchronized Performance
- N = 2^30
- Best speedup = 2.86X
- Gets the wrong answer when using more than 1 thread

Thread Function: Semaphore / Mutex

  Semaphore version:
      for (i = start; i < end; i++) {
          sem_wait(&semaphore);
          global_sum += i;
          sem_post(&semaphore);
      }

  Mutex version:
      for (i = start; i < end; i++) {
          pthread_mutex_lock(&mutex);
          global_sum += i;
          pthread_mutex_unlock(&mutex);
      }

Semaphore / Mutex Performance
- Terrible performance: 2.5 seconds becomes roughly 10 minutes
- Mutex is 3X faster than the semaphore
- Clearly, neither is successful

Separate Accumulation
- Method 2: each thread accumulates into a separate variable
    2A: accumulate in contiguous array elements
    2B: accumulate in spaced-apart array elements
    2C: accumulate in registers

  /* Partial sum computed by each thread */
  data_t psum[MAXTHREADS * MAXSPACING];

  /* Spacing between accumulators */
  size_t spacing = 1;

Separate Accumulation: Operation

  nelems_per_thread = nelems / nthreads;

  /* Create threads and wait for them to finish */
  for (i = 0; i < nthreads; i++) {
      myid[i] = i;
      psum[i * spacing] = 0;
      Pthread_create(&tid[i], NULL, thread_fun, &myid[i]);
  }
  for (i = 0; i < nthreads; i++)
      Pthread_join(tid[i], NULL);

  /* Add up the partial sums computed by each thread */
  result = 0;
  for (i = 0; i < nthreads; i++)
      result += psum[i * spacing];

  /* Add leftover elements */
  for (e = nthreads * nelems_per_thread; e < nelems; e++)
      result += e;

Thread Function: Memory Accumulation

  void *sum_global(void *vargp)
  {
      int myid = *((int *)vargp);
      size_t start = myid * nelems_per_thread;
      size_t