Recitation 7: Memory Access Patterns
Andrew Faulring
15213 Section A
21 October 2002

Andrew Faulring
• [email protected]
• Office hours:
  – NSH 2504 (lab) / 2507 (conference room)
  – Thursday 5–6
• Lab 4
  – due Thursday, 24 Oct @ 11:59pm
  – Submission is online
    • http://www2.cs.cmu.edu/afs/cs/academic/class/15213-f02/www/L4.html

Today's Plan
• Loop Unrolling
• Blocking

Loop Unrolling
• Optimization
  – Combine multiple iterations into a single loop body
  – Amortizes loop overhead across multiple iterations
  – Finish any extra elements at the end

    void combine5(vec_ptr v, int *dest)
    {
        int length = vec_length(v);
        int limit = length - 2;
        int *data = get_vec_start(v);
        int sum = 0;
        int i;

        /* Combine 3 elements at a time */
        for (i = 0; i < limit; i += 3) {
            sum += data[i] + data[i+1] + data[i+2];
        }
        /* Finish any remaining elements */
        for (; i < length; i++) {
            sum += data[i];
        }
        *dest = sum;
    }

Practice Problem
• Problems 5.12 and 5.13

Solution 5.12

    void inner5(vec_ptr u, vec_ptr v, data_t *dest)
    {
        int i;
        int length = vec_length(u);
        int limit = length - 3;
        data_t *udata = get_vec_start(u);
        data_t *vdata = get_vec_start(v);
        data_t sum = (data_t) 0;

        /* Do four elements at a time */
        for (i = 0; i < limit; i += 4) {
            sum += udata[i] * vdata[i] + udata[i+1] * vdata[i+1]
                 + udata[i+2] * vdata[i+2] + udata[i+3] * vdata[i+3];
        }
        /* Finish off any remaining elements */
        for (; i < length; i++)
            sum += udata[i] * vdata[i];
        *dest = sum;
    }

Solution 5.12
A. We must perform two loads per element to read the values for udata and vdata. There is only one unit that can perform these loads, and each load requires one cycle.
B.
The performance for floating point is still limited by the 3-cycle latency of the floating-point adder.

Solution 5.13

    void inner6(vec_ptr u, vec_ptr v, data_t *dest)
    {
        int i;
        int length = vec_length(u);
        int limit = length - 3;
        data_t *udata = get_vec_start(u);
        data_t *vdata = get_vec_start(v);
        data_t sum0 = (data_t) 0;
        data_t sum1 = (data_t) 0;

        /* Do four elements at a time */
        for (i = 0; i < limit; i += 4) {
            sum0 += udata[i]   * vdata[i];
            sum1 += udata[i+1] * vdata[i+1];
            sum0 += udata[i+2] * vdata[i+2];
            sum1 += udata[i+3] * vdata[i+3];
        }
        /* Finish off any remaining elements */
        for (; i < length; i++)
            sum0 = sum0 + udata[i] * vdata[i];
        *dest = sum0 + sum1;
    }

Solution 5.13
• For each element, we must perform two loads with a unit that can load only one value per clock cycle.
• We must also perform one floating-point multiplication with a unit that can perform only one multiplication every two clock cycles.
• Both of these factors limit the CPE to 2.

Summary of Matrix Multiplication

ijk (& jik):
• 2 loads, 0 stores
• misses/iter = 1.25

    for (i=0; i<n; i++) {
        for (j=0; j<n; j++) {
            sum = 0.0;
            for (k=0; k<n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

kij (& ikj):
• 2 loads, 1 store
• misses/iter = 0.5

    for (k=0; k<n; k++) {
        for (i=0; i<n; i++) {
            r = a[i][k];
            for (j=0; j<n; j++)
                c[i][j] += r * b[k][j];
        }
    }

jki (& kji):
• 2 loads, 1 store
• misses/iter = 2.0

    for (j=0; j<n; j++) {
        for (k=0; k<n; k++) {
            r = b[k][j];
            for (i=0; i<n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Improving Temporal Locality by Blocking
• Example: blocked matrix multiplication
  – "block" (in this context) does not mean "cache block"
  – Instead, it means a sub-block within the matrix
  – Example: N = 8; sub-block size = 4

    [A11 A12]   [B11 B12]   [C11 C12]
    [A21 A22] x [B21 B22] = [C21 C22]

    C11 = A11 B11 + A12 B21    C12 = A11 B12 + A12 B22
    C21 = A21 B11 + A22 B21    C22 = A21 B12 + A22 B22

• Key idea: the sub-blocks (i.e., Axy) can be treated just like scalars.

Blocked Matrix Multiply (bijk)

    for (jj=0; jj<n; jj+=bsize) {
        for (i=0; i<n; i++)
            for (j=jj; j < min(jj+bsize,n); j++)
                c[i][j] = 0.0;
        for (kk=0; kk<n; kk+=bsize) {
            for (i=0; i<n; i++) {
                for (j=jj; j < min(jj+bsize,n); j++) {
                    sum = 0.0;
                    for (k=kk; k < min(kk+bsize,n); k++) {
                        sum += a[i][k] * b[k][j];
                    }
                    c[i][j] += sum;
                }
            }
        }
    }

– Provides temporal locality, since each block is reused multiple times
– Cache performance stays roughly constant as the matrices grow

Blocked Matrix Multiply Analysis
– The innermost loop pair multiplies a 1 x bsize sliver of A by a bsize x bsize block of B and accumulates the result into a 1 x bsize sliver of C
– The loop over i steps through n row slivers of A & C, using the same block of B
– The block of B is reused n times in succession; each row sliver is accessed bsize times; successive elements of the C sliver are updated

Innermost loop pair:

    for (i=0; i<n; i++) {
        for (j=jj; j < min(jj+bsize,n); j++) {
            sum = 0.0;
            for (k=kk; k < min(kk+bsize,n); k++) {
                sum += a[i][k] * b[k][j];
            }
            c[i][j] += sum;
        }
    }

Pentium Blocked Matrix Multiply Performance
• Blocking (bijk and bikj) improves performance by a factor of two over the unblocked versions (ijk and jik)
  – relatively insensitive to array size
• [Plot: cycles/iteration vs. array size n, for 25 <= n <= 400, comparing kji, jki, kij, ikj, jik, ijk, and the blocked bijk and bikj with bsize = 25]

Summary
• All systems favor "cache-friendly code"
  – Getting absolute optimum performance is very platform specific
    • cache sizes, line sizes, associativities, etc.
  – Can get most of the advantage with generic code
    • Keep the working set reasonably small (temporal locality)
    • Use small strides (spatial locality)