SLIDE 1

High Performance Combinatorial Algorithm Design on the Cell/B.E.

David A. Bader, Virat Agarwal, Kamesh Madduri, Seunghwa Kang, Sulabh Patel

SLIDE 2

Cell System Features

Heterogeneous multi-core system architecture:
  • Power Processor Element (PPE) for control tasks
  • Synergistic Processor Elements (SPEs) for data-intensive processing
  • Each SPE consists of a Synergistic Processor Unit (SPU) and a Memory Flow Controller (MFC) for data movement & synchronization
  • Interface to the high-performance Element Interconnect Bus

Virat Agarwal, STI Cell workshop 2

SLIDE 3

Cellbuzz @ Georgia Tech.

  • List ranking
  • Fast Fourier Transform
  • Zlib Compression/Decompression
  • RC5 Encryption/Decryption
  • MPEG2

Open-source, can be obtained from: http://sourceforge.net/projects/cellbuzz

SLIDE 4

List Ranking on Cell

  • Cell performs well for applications with predictable memory access patterns [Williams et al. 2006]
  • Conjecture: Can the Cell architecture also perform well for applications that exhibit irregular memory access patterns?
    – Non-contiguous accesses to global data structures with low degrees of locality
  • List ranking is a special case of Parallel Prefix where the values are initially set to 1 (except for the head) and addition is used as the operator.
SLIDE 5

A parallel algorithm for List Ranking

  • SMP algorithm [Helman & JaJa, 1999]
  1. Partition the input list into s sublists, by randomly choosing s sublist head nodes, one from each memory block of n/(s − 1) nodes.
  2. Traverse each sublist, computing the prefix sum of each node within the sublist.
  3. Calculate prefix sums of the sublist head nodes.
  4. Traverse the sublists again, summing the prefix sum value of each node with the value of its sublist head node.

  • Design Issues
    – Frequent DMA transfers required to fetch successor elements.
    – No significant computation in the algorithm, so communication creates a bottleneck.
    – Need to hide DMA latency by overlapping computation with communication.
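The four steps above can be sketched as a sequential simulation (illustrative Python, not the authors' SPE code; the `succ`/`head` list representation and helper names are assumptions):

```python
import random

def list_rank(succ, head, s=4):
    """Helman-JaJa list ranking sketch. succ[i] is the index of node i's
    successor (-1 at the tail); head is the index of the list head.
    Returns rank[i] = distance of node i from the head."""
    n = len(succ)
    # Step 1: choose s sublist heads: the true head plus random splitters,
    # one from each memory block of roughly n/(s-1) nodes.
    block = max(1, n // max(1, s - 1))
    heads = {head}
    for b in range(0, n, block):
        heads.add(random.randrange(b, min(b + block, n)))
        if len(heads) >= s:
            break
    # Step 2: traverse each sublist, computing local prefix sums.
    local = [0] * n          # prefix sum of node within its sublist
    sublist_of = [-1] * n
    next_head = {}           # sublist head -> next sublist head on the list
    sub_total = {}           # number of nodes in each sublist
    for h in heads:
        i, acc = h, 0
        while True:
            local[i] = acc
            sublist_of[i] = h
            j = succ[i]
            acc += 1
            if j == -1 or j in heads:
                next_head[h] = j
                sub_total[h] = acc
                break
            i = j
    # Step 3: prefix sums of the sublist head nodes, in list order.
    head_prefix = {}
    h, acc = head, 0
    while h != -1:
        head_prefix[h] = acc
        acc += sub_total[h]
        h = next_head[h]
    # Step 4: combine each node's local prefix sum with its head's prefix.
    return [head_prefix[sublist_of[i]] + local[i] for i in range(n)]
```

On Cell, each `succ[i]` lookup in step 2 becomes a small DMA get, which is exactly why the latency-hiding technique on the next slide is needed.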

SLIDE 6

A Generic Latency-hiding technique

  • Cell supports non-blocking memory transfers.
  • Requires identification of another level of parallelism within each SPE.
  • Concept of software-managed threads (SM-Threads)
    – SPE computation is distributed among these threads.
    – SM-Threads are scheduled according to a round-robin policy.
  • Instruction-level profiling to determine the minimum number of SM-Threads needed to hide latency.
    – Tradeoff between latency and number of SM-Threads.
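The SM-Thread idea can be illustrated with Python generators standing in for SM-Threads and a toy DMA engine (all names here are hypothetical; real SPE code would use MFC tag groups and `mfc_get`):

```python
class FakeDMA:
    """Toy DMA engine: a transfer 'completes' after `latency` polls."""
    def __init__(self, latency=3):
        self.latency, self.pending, self.next_tag = latency, {}, 0

    def issue(self, item):           # non-blocking "DMA get"
        tag = self.next_tag
        self.next_tag += 1
        self.pending[tag] = self.latency
        return tag

    def done(self, tag):             # poll for completion
        self.pending[tag] -= 1
        return self.pending[tag] <= 0

def sm_thread(tid, items, dma, results):
    """One software-managed thread: instead of stalling on a transfer,
    it yields so another SM-Thread can compute in the meantime."""
    for item in items:
        tag = dma.issue(item)
        while not dma.done(tag):
            yield                    # hand the SPU to the next SM-Thread
        results.append((tid, item))  # "compute" on the fetched data

def round_robin(threads):
    """Round-robin scheduler over the SM-Threads of one SPE."""
    live = list(threads)
    while live:
        for t in list(live):
            try:
                next(t)
            except StopIteration:
                live.remove(t)
```

With enough SM-Threads, every poll that would have stalled is overlapped with another thread's compute, which is the tradeoff profiled on this slide.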

SLIDE 7

List Ranking: Performance Analysis

[Figure: Tuning the DMA parameter: running time (msec) and improvement factor vs. number of DMA buffers (1, 2, 4, 8), for ordered and random lists.]

[Figure: Running time (PPE-only vs. PPE+SPEs): list ranking on Cell for random and ordered lists, Cell (PPE-only) vs. Cell optimized, over log(list size) from 16 to 23.]

SLIDE 8

List Ranking: Performance Analysis

Comparison with other architectures

  • 2.5 times faster than an optimized parallel implementation on a dual-core Woodcrest (Intel Xeon 5150).
  • 4.6 times faster than a single core.
SLIDE 9

ZLIB: Data compression/decompression

  • LZ77 algorithm [J. Ziv & A. Lempel, 1977]

    – Identify the longest repeating string from the previous data.
    – Replace the duplicated string with a reference to its previous occurrence.
    – A reference is represented by a length-distance pair.
    – Length-distance pairs and literals produced by the LZ77 algorithm are Huffman coded to enhance the compression ratio.
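The length-distance replacement can be sketched with a naive tokenizer and matching decoder (illustrative only; zlib's deflate uses hash chains and lazy matching rather than this O(n·window) scan):

```python
def lz77_tokens(data, window=4096, min_len=3):
    """Toy LZ77: emit ('match', distance, length) for repeats found in
    the sliding window, and ('lit', byte) otherwise."""
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):   # scan window for longest match
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1                           # overlapping matches are allowed
            if k > best_len:
                best_len, best_dist = k, i - j
        if best_len >= min_len:
            out.append(("match", best_dist, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

def lz77_decode(tokens):
    """Replay the tokens: copy `length` bytes from `distance` back."""
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, dist, length = t
            for _ in range(length):
                out.append(out[-dist])
    return bytes(out)
```

In the real codec the token stream is then Huffman coded, which is the part moved onto the SPE in the decompression path below.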


SLIDE 10

ZLIB: Optimization for the Cell

  • Optimizing on the SPE (most compute-intensive parts)
    – Compression
      • Finding longest matches in the LZ77 algorithm
    – Decompression
      • Converting Huffman-coded data to length-distance pairs
    – Reduce memory requirement
  • Parallelizing for the SPEs
    – Full flushing to break data dependency
    – Work queue to achieve load balancing
    – Extending the gzip header format to enable faster decompression
      • Include information on flush points
      • Keep it compatible with legacy gzip decompressors

SLIDE 11

GZIP Performance results

[Figure: Speedup of gzip compression and decompression, Cell optimized vs. sequential gzip for a single SPE, at compression levels 1, 5 and 9, for file types: Compressed File, Text File, Bitmap Files 1-3.]

SLIDE 12

GZIP Performance results

[Figure: Speedup of Cell-optimized gzip compression and decompression with varying number of SPEs (1-8); obtained speedup of about 2.9 for compression.]

[Figure: Performance comparison of Cell-optimized gzip with other architectures: running time (sec), Cell Optimized vs. Intel 3.2 GHz.]

SLIDE 13

FFT on Cell

  • Williams et al. analyzed peak performance of FFTs of various types.
  • Green and Cooper showed impressive results for an FFT of size 64K.
  • Chow et al. developed a design for 16 million complex samples.
  • FFTW supports FFTs of various size, type and accuracy.
  • None exhibit good performance for small input sizes.

SLIDE 14

FFT Algorithm used : Cooley Tukey

Out-of-place 1D FFT requires two arrays A & B for computation at each stage; this saves the bit-reversal stage.

[Figure: Butterflies of the ordered DIF FFT algorithm.]
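The two-array, bit-reversal-free scheme described above can be sketched in Stockham (ordered DIF) form; this is an illustrative formulation consistent with the slide, not the authors' vectorized SPE code:

```python
import cmath

def stockham_fft(x):
    """Out-of-place FFT over two ping-pong arrays A and B, one pass per
    stage, with output in natural order (no bit-reversal stage).
    Assumes len(x) is a power of two."""
    n = len(x)
    a, b = [complex(v) for v in x], [0j] * n
    nn, s = n, 1                # nn: current transform size, s: stride
    while nn > 1:
        m = nn // 2
        for p in range(m):
            w = cmath.exp(-2j * cmath.pi * p / nn)   # stage twiddle factor
            for q in range(s):
                u = a[q + s * p]
                v = a[q + s * (p + m)]
                b[q + s * 2 * p] = u + v             # DIF butterfly
                b[q + s * (2 * p + 1)] = (u - v) * w
        a, b = b, a             # ping-pong: this stage's output feeds the next
        nn, s = m, s * 2
    return a
```

Each `while` iteration is one of the log2(N) stages that, in the Cell version, ends with the inter-SPE synchronization described on the next slide.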

SLIDE 15

FFT Algorithm for the Cell/B.E.

Parallelization
  – Number of chunks = 2*p, where p is the number of SPEs.
  – Chunks i and i+p are allocated to SPE i.
  – Each chunk is fetched using DMA get with multibuffering.

Tree Synchronization
  – Synchronization after every stage using inter-SPE DMA communication, achieved in (2*log n) stages.
  – Each synchronization stage takes 1 microsec; PPU-coordinated synchronization takes 20 microsec.
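One way to realize the tree synchronization is a reduce-then-broadcast barrier over the SPEs; the sketch below only enumerates the pairwise signal events and their stages (a model of the pattern, not inter-SPE DMA code):

```python
def tree_sync_stages(p):
    """Tree barrier over p SPEs (p a power of two): children signal
    parents up the tree, then parents release children back down,
    for 2*log2(p) stages in total.
    Returns a list of (stage, src, dst) signal events."""
    events, stage = [], 0
    step = 1
    while step < p:                      # reduce phase: log2(p) stages
        for i in range(0, p, 2 * step):
            events.append((stage, i + step, i))   # child -> parent
        step *= 2
        stage += 1
    step //= 2
    while step >= 1:                     # broadcast phase: log2(p) stages
        for i in range(0, p, 2 * step):
            events.append((stage, i, i + step))   # parent -> child
        step //= 2
        stage += 1
    return events
```

At roughly 1 microsec per stage, 8 SPEs synchronize in about 6 microsec, versus the 20 microsec PPU-coordinated barrier quoted above.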


SLIDE 16

FFT: Optimization for SPE

Loop duplication for Stages 1 & 2
  • For vectorization of these stages we need to use spu_shuffle on the output vector.

Loop duplication for NP < buffersize and otherwise
  • Need to stall for DMA get at different places within the loop.
  • Code size increases, which limits the size of FFT that can be computed.

SLIDE 17

FFT: Design Challenges

Synchronization step after every stage (log N) leads to significant overhead
  • Minimize the sync time by using a tree-based approach with inter-SPE communication.

Limited local store
  • Requires space for twiddle factors and input data.
  • Loop unrolling and duplication increase the size of the code.

Algorithm is memory-bound
  • Use multibuffering, which further increases the required space in the limited local store.

Code is branchy, with a doubly nested for loop within the outer while loop; the lack of a branch predictor compromises performance.

SLIDE 18

FFT: Performance analysis

[Figure: FFT sizes 1K and 8K: running time (microseconds) and performance improvement vs. number of SPEs (1, 2, 4, 8).]

[Figure: Performance comparison (GigaFlop/s) of our optimized FFT implementation with other architectures (IBM Power5, AMD Opteron, Intel Pentium 4, Intel Core Duo, FFTW on Cell) for input sizes 1024 to 16384.]

Operation count: 5*N*log N floating point operations.
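With the 5*N*log2(N) operation-count model above, GigaFlop/s figures like those plotted can be derived from a measured running time:

```python
import math

def fft_gflops(n, time_us):
    """GFlop/s under the 5*N*log2(N) floating-point operation count,
    given the FFT size n and running time in microseconds."""
    flops = 5 * n * math.log2(n)
    return flops / (time_us * 1e-6) / 1e9
```

For example, a 1K-point FFT (5*1024*10 = 51200 flops) completing in 5.12 microseconds corresponds to 10 GFlop/s.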

SLIDE 19

FFT Performance Analysis: Pipeline utilization

Analysis of pipeline utilization using the asmviz tool from IBM.

A few stalls still need to be optimized.

SLIDE 20

RC5: Encryption/Decryption on the Cell/B.E.

  • Symmetric block cipher
    – Used in message encryption, digital signatures, stream encryption, internet e-commerce, file encryption, electronic cash.
  • Three parameters
    – Word size w: 16, 32 or 64 (each block has 2 words)
    – Number of rounds r: 0, 1, 2, …, 255
    – Number of bytes in the key b: 0, 1, 2, …, 255
  • Mixing the secret key
    – Encryption: left rotate operation encrypts the input.
    – Decryption: right rotate operation is used to generate the output.
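For reference, the RC5-32 round structure with its data-dependent rotations can be sketched as follows (a standard textbook formulation of RC5, not the vectorized Cell kernel):

```python
MASK = 0xFFFFFFFF  # w = 32

def rotl(x, r):
    r &= 31
    return ((x << r) | (x >> (32 - r))) & MASK

def rotr(x, r):
    r &= 31
    return ((x >> r) | (x << (32 - r))) & MASK

def rc5_expand(key: bytes, rounds=12):
    """RC5-32/r/b key schedule: mix the secret key into table S."""
    P, Q = 0xB7E15163, 0x9E3779B9        # RC5 magic constants for w = 32
    c = max(1, (len(key) + 3) // 4)
    L = [0] * c
    for i, k in enumerate(key):          # pack key bytes little-endian
        L[i // 4] = (L[i // 4] + (k << (8 * (i % 4)))) & MASK
    t = 2 * (rounds + 1)
    S = [(P + i * Q) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):       # key-mixing loop
        A = S[i] = rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def rc5_encrypt_block(S, a, b, rounds=12):
    a = (a + S[0]) & MASK
    b = (b + S[1]) & MASK
    for i in range(1, rounds + 1):       # left rotates encrypt the input
        a = (rotl(a ^ b, b) + S[2 * i]) & MASK
        b = (rotl(b ^ a, a) + S[2 * i + 1]) & MASK
    return a, b

def rc5_decrypt_block(S, a, b, rounds=12):
    for i in range(rounds, 0, -1):       # right rotates undo each round
        b = rotr((b - S[2 * i + 1]) & MASK, a) ^ a
        a = rotr((a - S[2 * i]) & MASK, b) ^ b
    return (a - S[0]) & MASK, (b - S[1]) & MASK
```

The data-dependent rotation amounts (`b` and `a` themselves) are what the SPE version must vectorize, which is why the input shuffling on the next slide is needed.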


SLIDE 21

RC5: Optimizing for the Cell

  • Divide the input array into p equal chunks, where p is the number of SPEs. Each SPE i is allocated chunk i.
  • The chunks are fetched into the SPEs using double-buffering.
    – The data indices in the fetched buffer are shuffled so as to form two separate chunks of input data. These 2 separate arrays are then used as the 2 blocks for RC5 encryption.
  • The for loop in both RC5 encryption/decryption is vectorized and unrolled for best pipeline utilization.
  • RC5 assumes a little-endian input; for Cell we need to convert to the required convention, which compromises performance.

SLIDE 22

RC5: Performance Analysis

[Figure: Running time (msec) and improvement factor of RC5 encryption and decryption with varying number of SPEs (1, 2, 4, 8).]

[Figure: RC5 encryption and decryption on Cell, PPE-only vs. Cell optimized: running time (msec, log scale from 0.1 to 1000) over log(size) from 16 to 21.]

SLIDE 23

MPEG-2 decoding : Parallel Algorithm

  • SMP algorithm [Bilas, Fritts & Singh, 1996]
    – Authors examined various points of parallelization (Groups of Pictures vs. Slices).
    – Determined that parallelizing on slices is most efficient when running on multiple processors (low-bandwidth communication, high amount of local storage).

  1. Add pictures to the picture task queue as they are encountered in the bitstream.
  2. For each picture, decode the slices and add them to a task queue that worker processors take work from.
  3. Decode the slices in parallel, synchronizing at the end of each picture.
  4. When all slices of a picture are decoded, remove it from the picture task queue and add it to the display queue.

  • Design Considerations
    – Since a slice can contain an arbitrary number of macroblocks, workload imbalance can occur (usually not an issue since the number of slices per picture is limited).
    – Can significantly improve performance by synchronizing at the end of specific pictures.
    – Other points of parallelization are dismissed by the authors due to load imbalance.
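The picture/slice queue scheme of steps 1-4 can be sketched with ordinary threads (a toy simulation; `decode_slice` is a stand-in for the real slice decoder, and names are hypothetical):

```python
import queue
import threading

def decode_slices_parallel(pictures, decode_slice, workers=4):
    """pictures: list of (pic_id, [slices]). Slices of each picture go
    into a task queue drained by worker threads; the join() acts as the
    per-picture synchronization before the picture moves to display."""
    display = []
    for pic_id, slices in pictures:
        tasks = queue.Queue()
        for s in slices:
            tasks.put(s)
        results, lock = [], threading.Lock()

        def worker():
            while True:
                try:
                    s = tasks.get_nowait()
                except queue.Empty:
                    return
                r = decode_slice(s)        # slices decode independently
                with lock:
                    results.append(r)

        ts = [threading.Thread(target=worker) for _ in range(workers)]
        for t in ts:
            t.start()
        for t in ts:
            t.join()                       # barrier: picture fully decoded
        display.append((pic_id, results))  # then add it to the display queue
    return display
```

The per-picture join is the synchronization point whose cost motivates synchronizing only at specific pictures, as noted above.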

SLIDE 24

MPEG-2 Decoding on Cell

  • We parallelize on macroblocks
    – The SMP algorithm authors dismissed other points of parallelization since they created workload imbalance, assuming all workers can perform all tasks at equal efficiency.
    – For macroblock parallelization, all workers must wait while one processor completely decodes a picture.

  1. One PPE thread decodes each picture and adds all macroblocks to a work queue.
  2. Another PPE thread assigns work to the SPEs from the work queue.
  3. Each SPE is used to perform the Inverse Discrete Cosine Transform and scatter/gather operations (computationally expensive operations).

  • Design Considerations
    – SPE performs scatter/gather operations on small amounts of memory, which is inefficient.
    – Workload is divided unevenly between the PPE and SPEs (SPEs are used exclusively as accelerators).

  • Initial performance results
SLIDE 25

Acknowledgment of Support

  • National Science Foundation

    – CSR: A Framework for Optimizing Scientific Applications (06-14915)
    – CAREER: High-Performance Algorithms for Scientific Applications (06-11589; 00-93039)
    – ITR: Building the Tree of Life -- A National Resource for Phyloinformatics and Computational Phylogenetics (EF/BIO 03-31654)
    – ITR/AP: Reconstructing Complex Evolutionary Histories (01-21377)
    – DEB: Comparative Chloroplast Genomics: Integrating Computational Methods, Molecular Evolution, and Phylogeny (01-20709)
    – ITR/AP(DEB): Computing Optimal Phylogenetic Trees under Genome Rearrangement Metrics (01-13095)
    – DBI: Acquisition of a High Performance Shared-Memory Computer for Computational Science and Engineering (04-20513)
  • IBM PERCS / DARPA High Productivity Computing Systems (HPCS)
    – DARPA Contract NBCH30390004
  • IBM Shared University Research (SUR) Grant
  • Sony-Toshiba-IBM (STI)
  • Microsoft Research
  • Sun Academic Excellence Grant

SLIDE 26

Thank you. Questions?