

SLIDE 1

Sorting Algorithms

Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar. To accompany the text “Introduction to Parallel Computing”, Addison Wesley, 2003.

SLIDE 2

Topic Overview

  • Issues in Sorting on Parallel Computers
  • Sorting Networks
  • Bubble Sort and its Variants
  • Quicksort
  • Bucket and Sample Sort
  • Other Sorting Algorithms
SLIDE 3

Sorting: Overview

  • One of the most commonly used and well-studied kernels.
  • Sorting can be comparison-based or noncomparison-based.
  • The fundamental operation of comparison-based sorting is compare-exchange.
  • The lower bound on any comparison-based sort of n numbers is Θ(n log n).
  • We focus here on comparison-based sorting algorithms.
SLIDE 4

Sorting: Basics

What is a parallel sorted sequence? Where are the input and output lists stored?

  • We assume that the input and output lists are distributed.
  • The sorted list is partitioned with the property that each partitioned list is sorted and each element in processor Pi’s list is less than that in Pj’s list if i < j.

SLIDE 5

Sorting: Parallel Compare Exchange Operation


A parallel compare-exchange operation. Processes Pi and Pj send their elements to each other. Process Pi keeps min{ai, aj}, and Pj keeps max{ai, aj}.

SLIDE 6

Sorting: Basics

What is the parallel counterpart to a sequential comparator?

  • If each processor has one element, the compare-exchange operation stores the smaller element at the processor with the smaller id. This can be done in ts + tw time.
  • If we have more than one element per processor, we call this operation a compare-split. Assume each of the two processors has n/p elements.
  • After the compare-split operation, the smaller n/p elements are at processor Pi and the larger n/p elements at Pj, where i < j (see the sketch below).
  • The time for a compare-split operation is Θ(ts + tw n/p), assuming that the two partial lists were initially sorted.
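A minimal sequential sketch of the compare-split step, assuming each partner already holds a sorted block of n/p elements; the function name and test data are illustrative, not from the text.

def compare_split(my_block, partner_block, keep_small):
    """Simulate one compare-split step between two processes.

    Both blocks are assumed to be sorted. Each process would send its block
    to its partner, merge the received block with its own, and retain either
    the smaller or the larger half (chosen by keep_small)."""
    merged = sorted(my_block + partner_block)   # stands in for a linear-time merge
    half = len(my_block)
    return merged[:half] if keep_small else merged[-half:]

# Example: Pi (lower id) keeps the smaller half, Pj keeps the larger half.
pi_block = [1, 6, 8, 11, 13]
pj_block = [2, 7, 9, 10, 12]
print(compare_split(pi_block, pj_block, keep_small=True))   # [1, 2, 6, 7, 8]
print(compare_split(pj_block, pi_block, keep_small=False))  # [9, 10, 11, 12, 13]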

SLIDE 7

Sorting: Parallel Compare Split Operation

A compare-split operation. Each process sends its block of size n/p to the other process. Each process merges the received block with its own block and retains only the appropriate half of the merged block. In this example, process Pi retains the smaller elements and process Pj retains the larger elements.

SLIDE 8

Sorting Networks

  • Networks of comparators designed specifically for sorting.
  • A comparator is a device with two inputs x and y and two outputs x′ and y′. For an increasing comparator, x′ = min{x, y} and y′ = max{x, y}; for a decreasing comparator, the reverse.
  • We denote an increasing comparator by ⊕ and a decreasing comparator by ⊖.
  • The speed of the network is proportional to its depth.
SLIDE 9

Sorting Networks: Comparators


A schematic representation of comparators: (a) an increasing comparator, and (b) a decreasing comparator.

SLIDE 10

Sorting Networks


A typical sorting network. Every sorting network is made up of a series of columns, and each column contains a number of comparators connected in parallel.

SLIDE 11

Sorting Networks: Bitonic Sort

  • A bitonic sorting network sorts n elements in Θ(log² n) time.
  • A bitonic sequence has two tones: increasing and decreasing, or vice versa. Any cyclic rotation of such a sequence is also considered bitonic.
  • 1, 2, 4, 7, 6, 0 is a bitonic sequence, because it first increases and then decreases. 8, 9, 2, 1, 0, 4 is another bitonic sequence, because it is a cyclic shift of 0, 4, 8, 9, 2, 1.
  • The kernel of the network is the rearrangement of a bitonic sequence into a sorted sequence. A small check of the bitonic property follows below.
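As a quick illustration of the definition, a small helper that checks whether some cyclic rotation of a sequence first increases and then decreases; this helper is written only for illustration and is not part of the text.

def is_bitonic(seq):
    """True if some cyclic rotation of seq first increases, then decreases."""
    n = len(seq)
    for r in range(n):
        rot = seq[r:] + seq[:r]
        i = 0
        while i + 1 < n and rot[i] <= rot[i + 1]:   # climb the increasing tone
            i += 1
        while i + 1 < n and rot[i] >= rot[i + 1]:   # descend the decreasing tone
            i += 1
        if i == n - 1:                              # the whole rotation is covered
            return True
    return False

print(is_bitonic([1, 2, 4, 7, 6, 0]))  # True: increases, then decreases
print(is_bitonic([8, 9, 2, 1, 0, 4]))  # True: cyclic shift of 0, 4, 8, 9, 2, 1
print(is_bitonic([3, 1, 4, 1, 5, 9]))  # False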

SLIDE 12

Sorting Networks: Bitonic Sort

  • Let s = a0, a1, . . . , an−1 be a bitonic sequence such that a0 ≤ a1 ≤ . . . ≤ an/2−1 and an/2 ≥ an/2+1 ≥ . . . ≥ an−1.
  • Consider the following subsequences of s:

      s1 = min{a0, an/2}, min{a1, an/2+1}, . . . , min{an/2−1, an−1}
      s2 = max{a0, an/2}, max{a1, an/2+1}, . . . , max{an/2−1, an−1}          (1)

  • Note that s1 and s2 are both bitonic and each element of s1 is less than every element in s2.
  • We can apply the procedure recursively on s1 and s2 to get the sorted sequence (a sketch follows below).
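A minimal recursive sketch of this bitonic merge, assuming the input length is a power of two and the sequence is already bitonic; the function names are illustrative.

def bitonic_split(s):
    """One bitonic split: every element of s1 is <= every element of s2,
    and both halves are again bitonic."""
    half = len(s) // 2
    s1 = [min(s[i], s[i + half]) for i in range(half)]
    s2 = [max(s[i], s[i + half]) for i in range(half)]
    return s1, s2

def bitonic_merge(s):
    """Sort a bitonic sequence by recursive bitonic splits."""
    if len(s) <= 1:
        return s
    s1, s2 = bitonic_split(s)
    return bitonic_merge(s1) + bitonic_merge(s2)

# The 16-element bitonic sequence used on the next slide.
seq = [3, 5, 8, 9, 10, 12, 14, 20, 95, 90, 60, 40, 35, 23, 18, 0]
print(bitonic_merge(seq))
# [0, 3, 5, 8, 9, 10, 12, 14, 18, 20, 23, 35, 40, 60, 90, 95]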

SLIDE 13

Sorting Networks: Bitonic Sort

Original sequence   3  5  8  9 10 12 14 20 95 90 60 40 35 23 18  0
1st Split           3  5  8  9 10 12 14  0 95 90 60 40 35 23 18 20
2nd Split           3  5  8  0 10 12 14  9 35 23 18 20 95 90 60 40
3rd Split           3  0  8  5 10  9 14 12 18 20 35 23 60 40 95 90
4th Split           0  3  5  8  9 10 12 14 18 20 23 35 40 60 90 95

Merging a 16-element bitonic sequence through a series of log 16 bitonic splits.

SLIDE 14

Sorting Networks: Bitonic Sort

  • We can easily build a sorting network to implement this bitonic merge algorithm.
  • Such a network is called a bitonic merging network.
  • The network contains log n columns. Each column contains n/2 comparators and performs one step of the bitonic merge.
  • We denote a bitonic merging network with n inputs by ⊕BM[n].
  • Replacing the ⊕ comparators by ⊖ comparators results in a decreasing output sequence; such a network is denoted by ⊖BM[n].

SLIDE 15

Sorting Networks: Bitonic Sort

A bitonic merging network for n = 16. The input wires are numbered 0, 1, . . . , n − 1, and the binary representation of these numbers is shown. Each column of comparators is drawn separately; the entire figure represents a ⊕BM[16] bitonic merging network. The network takes a bitonic sequence and outputs it in sorted order.
SLIDE 16

Sorting Networks: Bitonic Sort

How do we sort an unsorted sequence using a bitonic merge?

  • We must first build a single bitonic sequence from the given sequence.
  • A sequence of length 2 is a bitonic sequence.
  • A bitonic sequence of length 4 can be built by sorting the first two elements using ⊕BM[2] and the next two using ⊖BM[2].
  • This process can be repeated to generate larger bitonic sequences.

SLIDE 17

Sorting Networks: Bitonic Sort


A schematic representation of a network that converts an input sequence into a bitonic sequence. In this example, ⊕BM[k] and ⊖BM[k] denote bitonic merging networks of input size k that use ⊕ and ⊖ comparators, respectively. The last merging network (⊕BM[16]) sorts the input. In this example, n = 16.

SLIDE 18

Sorting Networks: Bitonic Sort


The comparator network that transforms an input sequence of 16 unordered numbers into a bitonic sequence.

SLIDE 19

Sorting Networks: Bitonic Sort

  • The depth of the network is Θ(log² n).
  • Each stage of the network contains n/2 comparators.
  • A serial implementation of the network would have complexity Θ(n log² n).

SLIDE 20

Mapping Bitonic Sort to Hypercubes

  • Consider the case of one item per processor. The question becomes one of how the wires in the bitonic network should be mapped to the hypercube interconnect.
  • Note from our earlier examples that the compare-exchange operation is performed between two wires only if their labels differ in exactly one bit!
  • This implies a direct mapping of wires to processors. All communication is nearest neighbor!

SLIDE 21

Mapping Bitonic Sort to Hypercubes


Communication during the last stage of bitonic sort. Each wire is mapped to a hypercube process; each connection represents a compare-exchange between processes.

SLIDE 22

Mapping Bitonic Sort to Hypercubes


Communication characteristics of bitonic sort on a hypercube. During each stage of the algorithm, processes communicate along the dimensions shown.

SLIDE 23

Mapping Bitonic Sort to Hypercubes

1. procedure BITONIC_SORT(label, d)
2. begin
3.   for i := 0 to d − 1 do
4.     for j := i downto 0 do
5.       if (i + 1)st bit of label ≠ jth bit of label then
6.         comp_exchange_max(j);
7.       else
8.         comp_exchange_min(j);
9. end BITONIC_SORT

Parallel formulation of bitonic sort on a hypercube with n = 2^d processes.
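A sequential Python sketch of this formulation, assuming one element per process and n = 2^d; each array slot plays the role of a hypercube process and compare-exchanges with the partner whose label differs in bit j. The function name and test data are illustrative.

def bitonic_sort_hypercube(a):
    """One element per hypercube 'process' (array slot). In step (i, j), a
    process exchanges with the partner whose label differs in bit j; it keeps
    the max if bits (i + 1) and j of its label differ, and the min otherwise."""
    n = len(a)
    d = n.bit_length() - 1                     # assumes n == 2**d
    for i in range(d):                         # stages
        for j in range(i, -1, -1):             # steps within a stage
            for label in range(n):
                partner = label ^ (1 << j)
                if partner < label:
                    continue                   # handle each pair once, from its lower end
                # the lower end (bit j = 0) keeps the min exactly when bit (i + 1)
                # of its label is also 0, i.e. the pair lies in an ascending merge
                ascending = ((label >> (i + 1)) & 1) == 0
                if (a[label] > a[partner]) == ascending:
                    a[label], a[partner] = a[partner], a[label]
    return a

print(bitonic_sort_hypercube([10, 20, 5, 9, 3, 8, 12, 14, 90, 0, 60, 40, 23, 35, 95, 18]))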

SLIDE 24

Mapping Bitonic Sort to Hypercubes

  • During each step of the algorithm, every process performs a compare-exchange operation (a single nearest-neighbor communication of one word).
  • Since each step takes Θ(1) time, the parallel time is

      TP = Θ(log² n)                                                    (2)

  • This algorithm is cost optimal w.r.t. its serial counterpart, but not w.r.t. the best sorting algorithm.

SLIDE 25

Mapping Bitonic Sort to Meshes

  • The connectivity of a mesh is lower than that of a hypercube, so we must expect some overhead in this mapping.
  • Consider the row-major shuffled mapping of wires to processors.

SLIDE 26

Mapping Bitonic Sort to Meshes


Different ways of mapping the input wires of the bitonic sorting network to a mesh of processes: (a) row-major mapping, (b) row-major snakelike mapping, and (c) row-major shuffled mapping.

SLIDE 27

Mapping Bitonic Sort to Meshes


The last stage of the bitonic sort algorithm for n = 16 on a mesh, using the row-major shuffled mapping. During each step, process pairs compare-exchange their elements. Arrows indicate the pairs of processes that perform compare-exchange operations.

SLIDE 28

Mapping Bitonic Sort to Meshes

  • In the row-major shuffled mapping, wires that differ at the ith least-significant bit are mapped onto mesh processes that are 2^⌊(i−1)/2⌋ communication links away.
  • The total amount of communication performed by each process is

      Σ_{i=1..log n} Σ_{j=1..i} 2^⌊(j−1)/2⌋ ≈ 7√n,  which is Θ(√n).

  • The total computation performed by each process is Θ(log² n).
  • The parallel runtime is:

      TP = Θ(log² n) [comparisons] + Θ(√n) [communication].

  • This is not cost optimal.
SLIDE 29

Block of Elements Per Processor

  • Each process is assigned a block of n/p elements.
  • The first step is a local sort of the local block.
  • Each subsequent compare-exchange operation is replaced by a compare-split operation.
  • We can effectively view the bitonic network as having (1 + log p)(log p)/2 steps.

SLIDE 30

Block of Elements Per Processor: Hypercube

  • Initially the processes sort their n/p elements (using merge sort) in time Θ((n/p) log(n/p)) and then perform Θ(log² p) compare-split steps.
  • The parallel run time of this formulation is

      TP = Θ((n/p) log(n/p)) [local sort] + Θ((n/p) log² p) [comparisons] + Θ((n/p) log² p) [communication].

  • Comparing to an optimal sort, the algorithm can efficiently use up to p = Θ(2^√(log n)) processes.
  • The isoefficiency function due to both communication and extra work is Θ(p^(log p) log² p).

SLIDE 31

Block of Elements Per Processor: Mesh

  • The parallel runtime in this case is given by:

      TP = Θ((n/p) log(n/p)) [local sort] + Θ((n/p) log² p) [comparisons] + Θ(n/√p) [communication].

  • This formulation can efficiently use up to p = Θ(log² n) processes.
  • The isoefficiency function is Θ(2^√p · √p).

SLIDE 32

Performance of Parallel Bitonic Sort

The performance of parallel formulations of bitonic sort for n elements on p processes.

Architecture   Max. processes for E = Θ(1)   Corresponding parallel run time   Isoefficiency function
Hypercube      Θ(2^√(log n))                 Θ((n/2^√(log n)) log n)           Θ(p^(log p) log² p)
Mesh           Θ(log² n)                     Θ(n/log n)                        Θ(2^√p · √p)
Ring           Θ(log n)                      Θ(n)                              Θ(2^p · p)

SLIDE 33

Bubble Sort and its Variants

The sequential bubble sort algorithm compares and exchanges adjacent elements in the sequence to be sorted:

1. procedure BUBBLE_SORT(n)
2. begin
3.   for i := n − 1 downto 1 do
4.     for j := 1 to i do
5.       compare-exchange(aj, aj+1);
6. end BUBBLE_SORT

Sequential bubble sort algorithm.

SLIDE 34

Bubble Sort and its Variants

  • The complexity of bubble sort is Θ(n²).
  • Bubble sort is difficult to parallelize since the algorithm has no concurrency.
  • A simple variant, though, uncovers the concurrency.
SLIDE 35

Odd-Even Transposition

1.  procedure ODD-EVEN(n)
2.  begin
3.    for i := 1 to n do
4.    begin
5.      if i is odd then
6.        for j := 0 to n/2 − 1 do
7.          compare-exchange(a2j+1, a2j+2);
8.      if i is even then
9.        for j := 1 to n/2 − 1 do
10.         compare-exchange(a2j, a2j+1);
11.   end for
12. end ODD-EVEN

Sequential odd-even transposition sort algorithm.
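A direct Python rendering of this procedure, assuming an even number of elements; the 1-based pairs of the pseudocode become 0-based pairs below. Written here only as an illustration.

def odd_even_transposition(a):
    """Sequential odd-even transposition sort (in place).

    Mirrors the pseudocode: n phases, where odd phases compare-exchange the
    pairs (a1, a2), (a3, a4), ... and even phases the pairs (a2, a3), (a4, a5), ...
    In 0-based indexing, pair (a_{2j+1}, a_{2j+2}) becomes (a[2j], a[2j+1])."""
    n = len(a)
    for i in range(1, n + 1):
        if i % 2 == 1:                           # odd phase
            for j in range(0, n // 2):
                if a[2 * j] > a[2 * j + 1]:
                    a[2 * j], a[2 * j + 1] = a[2 * j + 1], a[2 * j]
        else:                                    # even phase
            for j in range(1, n // 2):
                if a[2 * j - 1] > a[2 * j]:
                    a[2 * j - 1], a[2 * j] = a[2 * j], a[2 * j - 1]
    return a

print(odd_even_transposition([3, 2, 3, 8, 5, 6, 4, 1]))  # [1, 2, 3, 3, 4, 5, 6, 8]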

SLIDE 36

Odd-Even Transposition

Sorting n = 8 elements, using the odd-even transposition sort algorithm. During each phase, n = 8 elements are compared.
SLIDE 37

Odd-Even Transposition

  • After n phases of odd-even exchanges, the sequence is sorted.
  • Each phase of the algorithm (either odd or even) requires Θ(n) comparisons.
  • Serial complexity is Θ(n²).
SLIDE 38

Parallel Odd-Even Transposition

  • Consider the one item per processor case.
  • There are n iterations; in each iteration, each processor does one compare-exchange.
  • The parallel run time of this formulation is Θ(n).
  • This is cost optimal with respect to the base serial algorithm but not with respect to the optimal one.

SLIDE 39

Parallel Odd-Even Transposition

1.  procedure ODD-EVEN_PAR(n)
2.  begin
3.    id := process’s label
4.    for i := 1 to n do
5.    begin
6.      if i is odd then
7.        if id is odd then
8.          compare_exchange_min(id + 1);
9.        else
10.         compare_exchange_max(id − 1);
11.     if i is even then
12.       if id is even then
13.         compare_exchange_min(id + 1);
14.       else
15.         compare_exchange_max(id − 1);
16.   end for
17. end ODD-EVEN_PAR

Parallel formulation of odd-even transposition.

SLIDE 40

Parallel Odd-Even Transposition

  • Consider a block of n/p elements per processor.
  • The first step is a local sort.
  • In each subsequent step, the compare-exchange operation is replaced by the compare-split operation.
  • The parallel run time of the formulation is

      TP = Θ((n/p) log(n/p)) [local sort] + Θ(n) [comparisons] + Θ(n) [communication].

SLIDE 41

Parallel Odd-Even Transposition

  • The parallel formulation is cost-optimal for p = O(log n).
  • The isoefficiency function of this parallel formulation is Θ(p · 2^p).
SLIDE 42

Shellsort

  • Let n be the number of elements to be sorted and p be the number of processes.
  • During the first phase, processes that are far away from each other in the array compare-split their elements.
  • During the second phase, the algorithm switches to an odd-even transposition sort.

SLIDE 43

Parallel Shellsort

  • Initially, each process sorts its block of n/p elements internally.
  • Each process is now paired with its corresponding process in the reverse order of the array. That is, process Pi, where i < p/2, is paired with process Pp−i−1.
  • A compare-split operation is performed.
  • The processes are split into two groups of size p/2 each and the process is repeated in each group (a sketch of this first phase follows below).
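A sequential sketch of the first phase only, simulating the mirrored pairings and compare-split steps on p sorted blocks; it assumes p is a power of two and reuses a compare_split helper like the one shown earlier. All names and data are illustrative.

def compare_split(a, b, keep_small):
    """Merge two sorted blocks and keep the smaller or the larger half."""
    merged = sorted(a + b)
    return merged[:len(a)] if keep_small else merged[-len(a):]

def shellsort_first_phase(blocks):
    """Simulate the first phase of parallel shellsort.

    Each 'process' holds a sorted block. Within a group, process i is paired
    with the process at the mirrored position; the lower-indexed process keeps
    the smaller half. The group then splits in two and the step repeats."""
    p = len(blocks)
    group = p
    while group > 1:
        for start in range(0, p, group):                  # each group of `group` processes
            for k in range(group // 2):
                i, j = start + k, start + group - 1 - k   # mirrored pair
                lo = compare_split(blocks[i], blocks[j], keep_small=True)
                hi = compare_split(blocks[i], blocks[j], keep_small=False)
                blocks[i], blocks[j] = lo, hi
        group //= 2
    return blocks

blocks = [sorted(b) for b in ([9, 12, 3], [1, 20, 7], [15, 2, 8], [5, 11, 4])]
print(shellsort_first_phase(blocks))  # [[1, 2, 3], [4, 5, 7], [8, 9, 11], [12, 15, 20]]

In general the first phase only moves elements close to their final blocks; the odd-even passes of the second phase complete the sort.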

SLIDE 44

Parallel Shellsort


An example of the first phase of parallel shellsort on an eight-process array.

SLIDE 45

Parallel Shellsort

  • Each process performs d = log p compare-split operations.
  • With O(p) bisection width, each communication can be performed in time Θ(n/p), for a total time of Θ((n log p)/p).
  • In the second phase, l odd and even phases are performed, each requiring time Θ(n/p).
  • The parallel run time of the algorithm is:

      TP = Θ((n/p) log(n/p)) [local sort] + Θ((n/p) log p) [first phase] + Θ(l n/p) [second phase].          (3)

SLIDE 46

Quicksort

  • Quicksort is one of the most common sorting algorithms for sequential computers because of its simplicity, low overhead, and optimal average complexity.
  • Quicksort selects one of the entries in the sequence to be the pivot and divides the sequence into two parts, one with all elements less than the pivot and the other with those greater.
  • The process is recursively applied to each of the sublists.
SLIDE 47

Quicksort

1.  procedure QUICKSORT(A, q, r)
2.  begin
3.    if q < r then
4.    begin
5.      x := A[q];
6.      s := q;
7.      for i := q + 1 to r do
8.        if A[i] ≤ x then
9.        begin
10.         s := s + 1;
11.         swap(A[s], A[i]);
12.       end if
13.     swap(A[q], A[s]);
14.     QUICKSORT(A, q, s);
15.     QUICKSORT(A, s + 1, r);
16.   end if
17. end QUICKSORT

The sequential quicksort algorithm.

SLIDE 48

Quicksort


Example of the quicksort algorithm sorting a sequence of size n = 8.

SLIDE 49

Quicksort

  • The performance of quicksort depends critically on the quality of the pivot.
  • In the best case, the pivot divides the list in such a way that the larger of the two lists does not have more than αn elements (for some constant α).
  • In this case, the complexity of quicksort is O(n log n).
SLIDE 50

Parallelizing Quicksort

  • Let's start with recursive decomposition: the list is partitioned serially and each of the subproblems is handled by a different processor.
  • The time for this algorithm is lower-bounded by Ω(n)!
  • Can we parallelize the partitioning step? In particular, if we can use n processors to partition a list of length n around a pivot in O(1) time, we have a winner.
  • This is difficult to do on real machines, though.
SLIDE 51

Parallelizing Quicksort: PRAM Formulation

  • We assume a CRCW (concurrent read, concurrent write) PRAM with concurrent writes resulting in an arbitrary write succeeding.
  • The formulation works by creating pools of processors. Every processor is assigned to the same pool initially and has one element.
  • Each processor attempts to write its element to a common location (for the pool).
  • Each processor tries to read back the location. If the value read back is greater than the processor's value, it assigns itself to the ‘left’ pool; else, it assigns itself to the ‘right’ pool.
  • Each pool performs this operation recursively.
  • Note that the algorithm generates a tree of pivots. The depth of the tree is the expected parallel runtime. The average value is O(log n). (A small simulation follows below.)
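A small sequential simulation of the pool mechanism, where the "arbitrary" winning write is modeled by a random choice within each pool and the input values are assumed distinct; it reports the depth of the resulting tree of pivots. Everything here is an illustrative sketch, not the text's formulation.

import random

def pram_quicksort_depth(values, seed=0):
    """Simulate the CRCW pool partitioning and return the pivot-tree depth.

    Each 'processor' holds one value. In every round, one processor per pool
    wins the concurrent write (picked at random here); processors with smaller
    values join the left pool and the remaining ones join the right pool."""
    rng = random.Random(seed)
    pools = [list(values)]                    # initially a single pool of all elements
    depth = 0
    while pools:
        depth += 1
        next_pools = []
        for pool in pools:
            pivot = rng.choice(pool)          # the write that happens to succeed
            left = [v for v in pool if v < pivot]
            right = [v for v in pool if v > pivot]
            next_pools += [child for child in (left, right) if len(child) > 1]
        pools = next_pools
    return depth

print(pram_quicksort_depth(list(range(1, 129))))  # grows as O(log n) on average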

SLIDE 52

Parallelizing Quicksort: PRAM Formulation

A binary tree generated by the execution of the quicksort algorithm. Each level of the tree represents a different array-partitioning iteration. If pivot selection is optimal, then the height of the tree is Θ(log n), which is also the number of iterations.

SLIDE 53

Parallelizing Quicksort: PRAM Formulation


The execution of the PRAM algorithm on the array shown in (a).

SLIDE 54

Parallelizing Quicksort: Shared Address Space Formulation

  • Consider a list of size n equally divided across p processors.
  • A pivot is selected by one of the processors and made known to all processors.
  • Each processor partitions its list into two, say Li and Ui, based on the selected pivot.
  • All of the Li lists are merged and all of the Ui lists are merged separately.
  • The set of processors is partitioned into two (in proportion to the sizes of lists L and U). The process is recursively applied to each of the lists.
slide-55
SLIDE 55

Shared Address Space Formulation

[Figure: an example of the shared-address-space formulation on 20 elements across five processes (P0–P4), showing pivot selection, local rearrangement, and global rearrangement in each of four steps.]

SLIDE 56

Parallelizing Quicksort: Shared Address Space Formulation

  • The only thing we have not described is the global reorganization (merging) of local lists to form L and U.
  • The problem is one of determining the right location for each element in the merged list.
  • Each processor computes the number of elements locally less than and greater than the pivot.
  • It computes two sum-scans (prefix sums) to determine the starting location for its elements in the merged L and U lists.
  • Once it knows the starting locations, it can write its elements safely (see the sketch below).
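A sequential sketch of this rearrangement for one split step, assuming each process holds a local block and the pivot is already known; exclusive prefix sums over the |Li| and |Ui| counts give each process the offsets at which it can safely write. Names and data are illustrative.

from itertools import accumulate

def global_rearrange(blocks, pivot):
    """One split step of the shared-address-space quicksort.

    blocks[i] is the local block of process Pi. Each process partitions its
    block around the pivot; exclusive prefix sums of the L- and U-counts tell
    it where to write its pieces into the globally rearranged array."""
    locals_L = [[x for x in b if x <= pivot] for b in blocks]
    locals_U = [[x for x in b if x > pivot] for b in blocks]
    counts_L = [len(l) for l in locals_L]
    counts_U = [len(u) for u in locals_U]
    start_L = [0] + list(accumulate(counts_L))[:-1]            # exclusive prefix sum
    total_L = sum(counts_L)
    start_U = [total_L + s for s in [0] + list(accumulate(counts_U))[:-1]]
    out = [None] * sum(len(b) for b in blocks)
    for i in range(len(blocks)):
        out[start_L[i]:start_L[i] + counts_L[i]] = locals_L[i]  # write into the L region
        out[start_U[i]:start_U[i] + counts_U[i]] = locals_U[i]  # write into the U region
    return out

blocks = [[7, 13, 18, 2], [17, 1, 14, 20], [6, 10, 15, 9], [3, 19, 16, 12], [4, 11, 5, 8]]
print(global_rearrange(blocks, pivot=7))  # all elements <= 7 now precede all elements > 7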

SLIDE 57

Parallelizing Quicksort: Shared Address Space Formulation


Efficient global rearrangement of the array.

SLIDE 58

Parallelizing Quicksort: Shared Address Space Formulation

  • The parallel time depends on the split and merge time, and on the quality of the pivot.
  • The latter is an issue independent of parallelism, so we focus on the first aspect, assuming ideal pivot selection.
  • The algorithm executes in four steps: (i) determine and broadcast the pivot; (ii) locally rearrange the array assigned to each process; (iii) determine the locations in the globally rearranged array that the local elements will go to; and (iv) perform the global rearrangement.
  • The first step takes time Θ(log p), the second Θ(n/p), the third Θ(log p), and the fourth Θ(n/p).
  • The overall complexity of splitting an n-element array is Θ(n/p) + Θ(log p).

SLIDE 59

Parallelizing Quicksort: Shared Address Space Formulation

  • The process recurses until there are p lists, at which point the lists are sorted locally.
  • Therefore, the total parallel time is:

      TP = Θ((n/p) log(n/p)) [local sort] + Θ((n/p) log p) [array splits] + Θ(log² p).          (4)

  • The corresponding isoefficiency is Θ(p log² p) due to broadcast and scan operations.

SLIDE 60

Parallelizing Quicksort: Message Passing Formulation

  • A simple message passing formulation is based on the recursive halving of the machine.
  • Assume that each processor in the lower half of a p-processor ensemble is paired with a corresponding processor in the upper half.
  • A designated processor selects and broadcasts the pivot.
  • Each processor splits its local list into two lists, one less than (Li) and the other greater than (Ui) the pivot.
  • A processor in the low half of the machine sends its list Ui to the paired processor in the other half. The paired processor sends its list Li.
  • It is easy to see that after this step, all elements less than the pivot are in the low half of the machine and all elements greater than the pivot are in the high half (a sketch follows below).
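A sequential simulation of the recursive halving, assuming p is a power of two, each "processor" holds a Python list, and elements equal to the pivot travel with the L piece; the exchange is modeled by list concatenation rather than actual messages. Illustrative only.

def quicksort_recursive_halving(blocks):
    """Simulate the message-passing quicksort on p = len(blocks) 'processors'.

    A pivot is broadcast within the group, low-half and high-half partners
    exchange their U and L pieces, and the group is halved; when a group has
    a single processor left, that processor sorts its list locally."""
    p = len(blocks)
    if p == 1:
        return [sorted(blocks[0])]
    pivot = blocks[0][0] if blocks[0] else 0        # designated processor's choice
    half = p // 2
    new_blocks = []
    for i in range(p):
        partner = i + half if i < half else i - half
        mine_L = [x for x in blocks[i] if x <= pivot]
        mine_U = [x for x in blocks[i] if x > pivot]
        theirs_L = [x for x in blocks[partner] if x <= pivot]
        theirs_U = [x for x in blocks[partner] if x > pivot]
        # the low half keeps the L pieces, the high half keeps the U pieces
        new_blocks.append(mine_L + theirs_L if i < half else mine_U + theirs_U)
    low = quicksort_recursive_halving(new_blocks[:half])
    high = quicksort_recursive_halving(new_blocks[half:])
    return low + high

blocks = [[33, 21, 13], [54, 82, 40], [72, 1], [97, 65, 7]]
print(quicksort_recursive_halving(blocks))  # lists in the low half precede lists in the high half

As the output shows, a poor pivot leaves the per-processor lists unbalanced, which is why pivot quality matters just as in the serial case.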

SLIDE 61

Parallelizing Quicksort: Message Passing Formulation

  • The above process is recursed until each processor has its own local list, which is sorted locally.
  • The time for a single reorganization is Θ(log p) for broadcasting the pivot element, Θ(n/p) for splitting the locally assigned portion of the array, and Θ(n/p) for the exchange and local reorganization.
  • We note that this time is identical to that of the corresponding shared address space formulation.
  • It is important to remember that the reorganization of elements is a bandwidth-sensitive operation.

SLIDE 62

Bucket and Sample Sort

  • In bucket sort, the range [a, b] of input numbers is divided into m equal-sized intervals, called buckets.
  • Each element is placed in its appropriate bucket.
  • If the numbers are uniformly distributed over the range, the buckets can be expected to hold roughly identical numbers of elements.
  • Elements in the buckets are locally sorted.
  • The run time of this algorithm is Θ(n log(n/m)).
SLIDE 63

Parallel Bucket Sort

  • Parallelizing bucket sort is relatively simple. We can select m = p.
  • In this case, each processor has a range of values it is responsible for.
  • Each processor runs through its local list and assigns each of its elements to the appropriate processor.
  • The elements are sent to the destination processors using a single all-to-all personalized communication.
  • Each processor sorts all the elements it receives.
SLIDE 64

Parallel Bucket and Sample Sort

  • The critical aspect of the above algorithm is the assignment of ranges to processors. This is done by suitable splitter selection.
  • The splitter selection method divides the n elements into m blocks of size n/m each, and sorts each block by using quicksort.
  • From each sorted block it chooses m − 1 evenly spaced elements.
  • The m(m − 1) elements selected from all the blocks represent the sample used to determine the buckets.
  • This scheme guarantees that the number of elements ending up in each bucket is less than 2n/m (a sketch follows below).
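A sequential sketch of the whole scheme with m = p buckets, following the splitter-selection idea just described; the function and variable names are illustrative, and the samples are only roughly evenly spaced.

from bisect import bisect_right

def sample_sort(elements, p):
    """Sample sort with p buckets: sort p blocks, sample each, pick global
    splitters from the combined sample, then bucket and sort."""
    n = len(elements)
    size = n // p
    blocks = [sorted(elements[i * size:(i + 1) * size]) for i in range(p - 1)]
    blocks.append(sorted(elements[(p - 1) * size:]))          # last block takes the remainder
    sample = []
    for b in blocks:
        step = max(len(b) // p, 1)
        sample.extend(b[step::step][:p - 1])                  # p - 1 samples per block
    sample.sort()
    step = max(len(sample) // p, 1)
    splitters = sample[step::step][:p - 1]                    # p - 1 global splitters
    buckets = [[] for _ in range(p)]
    for x in elements:
        buckets[bisect_right(splitters, x)].append(x)         # find x's bucket by binary search
    return [sorted(b) for b in buckets]

data = [13, 1, 10, 14, 20, 17, 18, 2, 6, 7, 22, 24, 3, 19, 16, 15, 23, 4, 11, 12, 5, 8, 21, 9]
print(sample_sort(data, p=3))
# [[1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13], [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]]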

SLIDE 65

Parallel Bucket and Sample Sort

[Figure steps: initial element distribution; local sort and sample selection; sample combining; global splitter selection; final element assignment.]

An example of the execution of sample sort on an array with 24 elements on three processes.

SLIDE 66

Parallel Bucket and Sample Sort

  • The splitter selection scheme can itself be parallelized.
  • Each processor generates the p − 1 local splitters in parallel.
  • All processors share their splitters using a single all-to-all broadcast operation.
  • Each processor sorts the p(p − 1) elements it receives and selects p − 1 uniformly spaced splitters from them.

SLIDE 67

Parallel Bucket and Sample Sort: Analysis

  • The internal sort of n/p elements requires time Θ((n/p) log(n/p)), and the selection of p − 1 sample elements requires time Θ(p).
  • The time for an all-to-all broadcast is Θ(p²), the time to internally sort the p(p − 1) sample elements is Θ(p² log p), and selecting p − 1 evenly spaced splitters takes time Θ(p).
  • Each process can insert these p − 1 splitters in its local sorted block of size n/p by performing p − 1 binary searches, in time Θ(p log(n/p)).
  • The time for reorganization of the elements is O(n/p).
SLIDE 68

Parallel Bucket and Sample Sort: Analysis

  • The total time is given by:

      TP = Θ((n/p) log(n/p)) [local sort] + Θ(p² log p) [sort sample] + Θ(p log(n/p)) [block partition] + Θ(n/p) [communication].          (5)

  • The isoefficiency of the formulation is Θ(p³ log p).