Analyzing Data Access of Algorithms and How to Make Them Cache-Friendly

Kenjiro Taura

Contents:
1 Introduction
2 Analyzing data access complexity of serial programs (Overview; Model of a machine; An analysis methodology)
3 Applying the methodology to matrix multiply
4 Tools to measure cache/memory traffic (perf command; PAPI library)
5 Matching the model and measurements
6 Analyzing merge sort


  1. Applying the methodology: we will apply the methodology to some of the algorithms we have studied so far. The key is to find subproblems (intervals) that fit in the cache.

  2. Analyzing the triply nested loop:

        for (i = 0; i < n; i++)
          for (j = 0; j < n; j++)
            for (k = 0; k < n; k++)
              C(i,j) += A(i,k) * B(k,j);

     The I loop is the entire computation; an I iteration is one J loop; a J iteration is one K loop. The key question: which iteration fits the cache?

  3. Working sets:

        for (i = 0; i < n; i++)
          for (j = 0; j < n; j++)
            for (k = 0; k < n; k++)
              C(i,j) += A(i,k) * B(k,j);

     level                          working set size
     I loop (entire computation)    3n^2
     a J loop (one I iteration)     2n + n^2
     a K loop (one J iteration)     1 + 2n
     a K iteration                  3


  8. Cases to consider:
     Case 1: the three matrices all fit in the cache (3n^2 ≤ C)
     Case 2: a single i iteration (≈ one matrix) fits in the cache (2n + n^2 ≤ C)
     Case 3: a single j iteration (≈ two vectors) fits in the cache (1 + 2n ≤ C)
     Case 4: none of the above (1 + 2n > C)

  9. Case 1 (3n^2 ≤ C): trivially, each element misses the cache only once. Thus

        R(n) ≤ 3n^2 = (3/n) · n^3.

     Interpretation: each element of A, B, and C is reused n times.

  10. Case 2 (2n + n^2 ≤ C): the maximum number of i-iterations that fit in the cache is a ≈ (C − n^2) / (2n). Each such set of a iterations touches ≈ n^2 + 2an ≈ C words, and there are n/a such sets, so

        R(n) ≤ (n/a) · C ≈ (n/a)(n^2 + 2an) = (1/a + 2/n) · n^3.

      Interpretation: each element of B is reused a times in the cache; each element of A or C many (∝ n) times.

  11. Case 3 (1 + 2n ≤ C): the maximum number of j-iterations that fit in the cache is b ≈ (C − n) / (n + 1). Each such set of b iterations touches ≈ n + b(n + 1) ≈ C words, and there are n^2/b such sets, so

        R(n) ≤ (n^2/b) · C ≈ (n^2/b)(n + b(n + 1)) = (1/b + 1 + 1/n) · n^3.

      Interpretation: each element of B is never reused; each element of A is reused b times; each element of C many (∝ n) times.

  12. Case 4 (1 + 2n > C): the maximum number of k-iterations that fit in the cache is c ≈ (C − 1) / 2. Each such set of c iterations touches ≈ 2c + 1 ≈ C words, and there are n^3/c such sets, so

        R(n) ≤ (n^3/c) · C ≈ (n^3/c)(2c + 1) = (2 + 1/c) · n^3.

      Interpretation: each element of B or A is never reused; each element of C is reused c times.

  13. Summary of R(n)/n^3, the number of misses per multiply-add (0 ∼ 3):

      condition        R(n)/n^3         range
      3n^2 ≤ C         3/n              ∼ 0
      2n + n^2 ≤ C     1/a + 2/n        0 ∼ 1
      1 + 2n ≤ C       1/b + 1 + 1/n    1 ∼ 2
      1 + 2n > C       2 + 1/c          2 ∼ 3


  17. So how to improve it? In general, the traffic increases when the same amount of computation has a large working set. To reduce the traffic, you arrange the computation (reorder subcomputations) so that you do a lot of computation on the same amount of data. The notion is so important that it is variously called compute/data ratio, flops/byte, compute intensity, or arithmetic intensity. The key is to identify units of computation (tasks) whose compute intensity is high (compute-intensive tasks).

  18. The straightforward loop in light of compute intensity:

      level          flops    working set size    ratio
      I loop         2n^3     3n^2                (2/3)n
      J loop         2n^2     2n + n^2            ∼ 2
      K loop         2n       1 + 2n              ∼ 1
      K iteration    2        3                   2/3

      The outermost loop has an O(n) compute intensity, yet each iteration of it has only an O(1) compute intensity.

  19. Cache blocking (tiling): for matrix multiplication, let l be the maximum number that satisfies 3l^2 ≤ C (i.e., l ≈ √(C/3)) and form a subcomputation that performs an (l × l) matrix multiplication. Ignoring remainder iterations, it looks like:

        l = √(C/3);
        for (ii = 0; ii < n; ii += l)
          for (jj = 0; jj < n; jj += l)
            for (kk = 0; kk < n; kk += l)
              /* the working set of the loops below fits in the cache */
              for (i = ii; i < ii + l; i++)
                for (j = jj; j < jj + l; j++)
                  for (k = kk; k < kk + l; k++)
                    A(i,j) += B(i,k) * C(k,j);


  22. Cache blocking (tiling): each subcomputation performs 2l^3 flops and touches 3l^2 distinct words. It thus has the compute intensity

        2l^3 / (3l^2) = (2/3) l ≈ (2 / (3√3)) √C,

      or, the traffic is

        R(n) ≈ (n/l)^3 · C = (3√3 / √C) · n^3.

  23. Effect of cache blocking, added to the summary table:

      condition        R(n)/n^3         range
      3n^2 ≤ C         3/n              ∼ 0
      2n + n^2 ≤ C     1/a + 2/n        0 ∼ 1
      1 + 2n ≤ C       1/b + 1 + 1/n    1 ∼ 2
      1 + 2n > C       2 + 1/c          2 ∼ 3
      blocking         3√3 / √C

      Assuming a word = 4 bytes (float):

      cache (bytes)    C (words)    l      R(n)/n^3
      32K              8K           52     0.72
      256K             64K          147    0.43
      3MB              768K         886    0.0059

  24. Recursive blocking: the tiling technique just mentioned targets a cache of one particular size (= level). Do we need to do this at every level (a 12-deep nested loop)? We also assumed, for the sake of simplicity, that all matrices are square. For generality, portability, and simplicity, recursive blocking may apply.

  25. Recursively blocked matrix multiply (C += A · B, with C: M × N, A: M × K, B: K × N):

        gemm(A, B, C) {
          if ((M, N, K) == (1, 1, 1)) {
            c11 += a11 * b11;
          } else if (max(M, N, K) == M) {  /* split A, C into A1/A2, C1/C2 */
            gemm(A1, B, C1);
            gemm(A2, B, C2);
          } else if (max(M, N, K) == N) {  /* split B, C into B1/B2, C1/C2 */
            gemm(A, B1, C1);
            gemm(A, B2, C2);
          } else {                         /* max(M, N, K) == K; split A, B */
            gemm(A1, B1, C);
            gemm(A2, B2, C);
          }
        }

      It divides the flops into two halves; it divides two of the three matrices, along the longest axis.


  29. Settings: a single word = a single floating point number; cache size = C words. Let R(M, N, K) be the number of words transferred between the cache and memory when multiplying M × K and K × N matrices (the cache is initially empty). Let R(w) be the maximum number of words transferred for any matrix multiply of up to w words in total:

        R(w) ≡ max over { MK + KN + MN ≤ w } of R(M, N, K).

      We want to bound R(w) from above. To avoid making the analysis tedious, assume all matrices are "nearly square": max(M, N, K) ≤ 2 min(M, N, K).

  30. The largest subproblem that fits in the cache: the working set of gemm(A,B,C) is (MK + KN + MN) words; it fits in the cache if this is ≤ C. Thus we have:

        R(w) ≤ w   (w ≤ C).


  34. Analyzing cases that do not fit in the cache: when MK + KN + MN > C, the interval doing gemm(A,B,C) is two subintervals, each of which does gemm for slightly smaller matrices. Under the "nearly square" assumption, the working set becomes ≤ 1/4 of the original when we divide 3 times. To make the math simpler, we take it that the working set becomes ≤ 1/4^(1/3) (= 2^(−2/3)) of the original size on each recursion step, i.e.,

        R(w) ≤ 2 R(w / 4^(1/3))   (w > C).
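The ≤ 1/4 factor for three divisions can be seen directly (a quick check, assuming each of the three steps halves one axis exactly):

```latex
% halving each of M, N, K once (three recursion steps, one axis each)
% quarters every term of the working set:
\frac{M}{2}\cdot\frac{K}{2}
  + \frac{K}{2}\cdot\frac{N}{2}
  + \frac{M}{2}\cdot\frac{N}{2}
  = \frac{1}{4}\left(MK + KN + MN\right)
```

Averaging the 1/4 shrinkage evenly over the three steps gives the per-step factor 4^(−1/3) = 2^(−2/3) used in the recurrence.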

  35. Combined, we have:

        R(w) ≤ w                     (w ≤ C)
        R(w) ≤ 2 R(w / 4^(1/3))      (w > C)

      When w > C, it takes up to d ≈ log_{4^(1/3)} (w/C) recursion steps until the working set becomes ≤ C. The whole computation is then essentially 2^d consecutive intervals, each transferring ≤ C words.

  36. Illustration (figure: the recursion tree, shrinking the working set by × 1/4^(1/3) per level for log_{4^(1/3)} (w/C) levels, until each leaf interval transfers ≤ C words):

        R(w) < 2^d · C = 2^(log_{4^(1/3)} (w/C)) · C = (w/C)^(3/2) · C = w^(3/2) / √C.

  37. Result: we have

        R(w) ≤ w^(3/2) / √C.

      For square (n × n) matrices (w = 3n^2),

        R(n) = R(3n^2) = (3√3 / √C) · n^3,

      the same as the blocking we have seen before (not surprising), but we achieved this for all cache levels at once.


  41. A practical remark: in practice we stop the recursion when the matrices become "small enough":

        gemm(A, B, C) {
          if (A, B, C together fit in the cache) {
            for (i, j, k) ∈ [0..M] × [0..N] × [0..K]
              c_ij += a_ik * b_kj;
          } else if (max(M, N, K) == M) {
            gemm(A1, B, C1);
            gemm(A2, B, C2);
          } else if (max(M, N, K) == N) {
            gemm(A, B1, C1);
            gemm(A, B2, C2);
          } else { /* max(M, N, K) == K */
            gemm(A1, B1, C);
            gemm(A2, B2, C);
          }
        }

      But how small is small enough? When the threshold is ≤ the level-x cache size, the analysis holds for level x and all lower levels. On the other hand, we would like to make it large, to reduce control overhead.

  42. Contents:
      1 Introduction
      2 Analyzing data access complexity of serial programs (Overview; Model of a machine; An analysis methodology)
      3 Applying the methodology to matrix multiply
      4 Tools to measure cache/memory traffic (perf command; PAPI library)
      5 Matching the model and measurements
      6 Analyzing merge sort

  43. Tools to measure cache/memory traffic: analyzing data access performance is harder than analyzing computational efficiency (which ignores caches). For computation, the code reflects how much work you do, and you can experimentally confirm your understanding by counting cycles (or wall-clock time). Caches are complex and subtle: the same data access expression (e.g., a[i]) may or may not count toward the traffic, and the gaps between our model and real machines are larger (associativity, prefetches, the local variables and stacks we often ignore, etc.). We would like a tool to measure what actually happened on the machine → performance counters.

  44. Performance counters: recent CPUs are equipped with performance counters, which count the number of times various events happen in the processor. The OS exposes them to users (e.g., the Linux perf_event_open system call), and there are tools to access them more conveniently:
      command: Linux perf (man perf)
      library: PAPI http://icl.cs.utk.edu/papi/
      GUI: hpctoolkit http://hpctoolkit.org/, VTune, ...


  46. perf command: perf is particularly easy to use.

        perf stat command line

      will show you cycles, instructions, and some other info. To access the performance counters of your interest (e.g., cache misses), specify them with -e:

        perf stat -e counter -e counter ... command line

      To see the list of available counters:

        perf list

  47. perf command: many interesting counters are not listed by perf list; we often need to resort to "raw" events (defined per CPU model). Consult the Intel manual (Intel 64 and IA-32 Architectures Developer's Manual, Volume 3B: System Programming Guide, Part 2, Chapter 19 "Performance Monitoring Events", http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-vol-3b-part-2-manual.html). If the table says Event Num = 2EH and Umask Value = 41H, then you can access it via perf with -e r412e (umask followed by event num).


  49. PAPI library: a library for accessing performance counters, http://icl.cs.utk.edu/papi/index.html. Basic concepts:
      create an empty "event set"
      add events of interest to the event set
      start counting
      do whatever you want to measure
      stop counting
      Visit http://icl.cs.utk.edu/papi/docs/index.html and see "Low Level Functions".

  51. PAPI minimum example (single thread): a minimum example with a single thread and no error checks:

        #include <stdio.h>
        #include <papi.h>
        int main() {
          PAPI_library_init(PAPI_VER_CURRENT);
          int es = PAPI_NULL;
          PAPI_create_eventset(&es);
          PAPI_add_named_event(es, "ix86arch::LLC_MISSES");
          PAPI_start(es);
          { do_whatever(); }
          long long values[1];
          PAPI_stop(es, values);
          printf("%lld\n", values[0]);
          return 0;
        }

  52. Compiling and running PAPI programs:

        $ gcc ex.c -lpapi
        $ ./a.out
        33

      papi_avail and papi_native_avail list available event names (to pass to PAPI_add_named_event); use perf_raw::rnnnn for raw counters (same as the perf command).

  53. Error checks: be prepared to handle errors (never assume you know what works)! Many routines return PAPI_OK on success and an error code on failure, which can then be passed to PAPI_strerror(code) to convert it into an error message. Encapsulate such function calls with this:

        void check_(int ret, const char * fun) {
          if (ret != PAPI_OK) {
            fprintf(stderr, "%s failed (%s)\n", fun, PAPI_strerror(ret));
            exit(1);
          }
        }
        #define check(call) check_(call, #call)

  54. A complete example with error checks:

        #include <stdio.h>
        #include <stdlib.h>
        #include <papi.h>

        void check_(int ret, const char * fun) {
          if (ret != PAPI_OK) {
            fprintf(stderr, "%s failed (%s)\n", fun, PAPI_strerror(ret));
            exit(1);
          }
        }
        #define check(f) check_(f, #f)

        int main() {
          int ver = PAPI_library_init(PAPI_VER_CURRENT);
          if (ver != PAPI_VER_CURRENT) {
            fprintf(stderr, "PAPI_library_init(%d) failed (returned %d)\n",
                    PAPI_VER_CURRENT, ver);
            exit(1);
          }
          int es = PAPI_NULL;
          check(PAPI_create_eventset(&es));
          check(PAPI_add_named_event(es, "ix86arch::LLC_MISSES"));
          check(PAPI_start(es));
          { do_whatever(); }
          long long values[1];
          check(PAPI_stop(es, values));
          printf("%lld\n", values[0]);
          return 0;
        }

  55. Multithreaded programs: you must call PAPI_thread_init(id_fun) in addition to PAPI_library_init(PAPI_VER_CURRENT), where id_fun is a function that returns the identity of a thread (e.g., pthread_self, omp_get_thread_num). Each thread must call PAPI_register_thread. An event set is private to a thread (each thread must call PAPI_create_eventset(), PAPI_start(), and PAPI_stop()).

  56. Multithreaded example:

        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>
        #include <papi.h>
        /* check_ and check omitted (same as single thread) */
        int main() {
          /* error check for PAPI_library_init omitted (same as single thread) */
          PAPI_library_init(PAPI_VER_CURRENT);
          check(PAPI_thread_init((unsigned long (*)()) omp_get_thread_num));
        #pragma omp parallel
          {
            check(PAPI_register_thread());     /* each thread must do this */
            int es = PAPI_NULL;
            check(PAPI_create_eventset(&es));  /* each thread must create its own set */
            check(PAPI_add_named_event(es, "ix86arch::LLC_MISSES"));
            check(PAPI_start(es));
            { do_whatever(); }
            long long values[1];
            check(PAPI_stop(es, values));
            printf("thread %d: %lld\n", omp_get_thread_num(), values[0]);
          }
          return 0;
        }

  57. Several ways to obtain counter values:
      PAPI_stop(es, values): get current values and stop counting
      PAPI_read(es, values): get current values and continue counting
      PAPI_accum(es, values): accumulate current values, reset counters, and continue counting

  58. Useful PAPI commands:
      papi_avail, papi_native_avail: list event counter names
      papi_mem_info: report information about caches and the TLB (size, line size, associativity, etc.)


  60. Matching the model and measurements: warnings. Counters are highly CPU-model-specific; do not expect too much portability. Always check perf list, papi_native_avail, and the Intel manual. Some counters, or combinations thereof, cannot be monitored even if listed (adding them to an event set fails; never forget to check return codes). Virtualized environments have no or limited support for performance counters; the Amazon EC2 environment shows no counters available (I don't know if there is a workaround). The following experiments were conducted on my Haswell (Core i7-4500U) laptop: L1 = 32KB, L2 = 256KB, L3 = 4MB.

  61. Relevant counters:
      L1D:REPLACEMENT
      L2_TRANS:L2_FILL
      MEM_LOAD_UOPS_RETIRED:L3_MISS
      Cache miss counts do not include line transfers that hit thanks to prefetches, so L1D:REPLACEMENT and L2_TRANS:L2_FILL seem closer to what we want to match our model against. I could not find good counters for the L3 cache, so I measure ix86arch::LLC_MISSES.

  62. Matching the model and measurements: the counters give the number of cache lines transferred. A line is 64 bytes and a word is 4 bytes, so we assume:

        words transferred ≈ 16 × cache lines transferred.

      Recall that R(w) ≤ w^(3/2) / √C, where R(w) is the number of words transferred. So we calculate

        (16 × cache lines transferred) / w^(3/2)

      and expect it to be close to 1/√C for w > C.

  63. Values of 1/√C:

      level    C      1/√C
      L1       8K     0.011048...
      L2       64K    0.003906...
      L3       1M     0.000976...

  64. L1 (recursive blocking): (figure: (16 × L1D:REPLACEMENT) / words^(3/2) plotted against M (= N = K) from 0 to 2200 for recursive blocking.) The values are not constant as we expected, and show large spikes.

  65. What are the spikes? It spikes when M is a multiple of a large power of two (128 | M, to be more specific). Analyzing why this happens is a good exercise for you; whatever it is, I told you to avoid it! Let's remove the M's that are multiples of 128.

  66. L1 (remove multiples of 128): (figure: the same plot with multiples of 128 removed; the curve approaches 1/√C ≈ 0.011048.)

      M       value
      1808    0.0187
      1856    0.0178
      1872    0.0177
      1936    0.0170
      1984    0.0159
      2000    0.0167

  67. L1 (compare with and without recursive blocking): (figure: (16 × L1D:REPLACEMENT) / words^(3/2) against M (= N = K), recursive blocking vs. no blocking, y axis up to 0.5.)

  68. L1 (remove multiples of 64): (figure: the same comparison with multiples of 64 removed.)

  69. L1 (remove multiples of 32): (figure: the same comparison with multiples of 32 removed.)
