Parallel Numerical Algorithms – Chapter 6 Matrix Models, Section 6.2 – Low Rank Approximation


SLIDE 1

Low Rank Approximation by SVD · Computing Low Rank Approximations · Randomness and Approximation · Hierarchical Low-Rank Structure
Parallel Numerical Algorithms

Chapter 6 – Matrix Models
Section 6.2 – Low Rank Approximation

Edgar Solomonik

Department of Computer Science University of Illinois at Urbana-Champaign

CS 554 / CSE 512

Edgar Solomonik Parallel Numerical Algorithms 1 / 37

SLIDE 2

Outline

1. Low Rank Approximation by SVD: Truncated SVD; Fast Algorithms with Truncated SVD
2. Computing Low Rank Approximations: Direct Computation; Indirect Computation
3. Randomness and Approximation: Randomized Approximation Basics; Structured Randomized Factorization
4. Hierarchical Low-Rank Structure: HSS Matrix–Vector Multiplication; Parallel HSS Matrix–Vector Multiplication

SLIDE 3

Rank-k Singular Value Decomposition (SVD)

For any matrix A ∈ R^{m×n} of rank k there exists a factorization

A = U D V^T

where
- U ∈ R^{m×k} is a matrix of orthonormal left singular vectors
- D ∈ R^{k×k} is a nonnegative diagonal matrix of singular values in decreasing order σ_1 ≥ · · · ≥ σ_k
- V ∈ R^{n×k} is a matrix of orthonormal right singular vectors

SLIDE 4

Truncated SVD

Given A ∈ R^{m×n}, seek its best approximation B of rank k < rank(A):

B = argmin_{B ∈ R^{m×n}, rank(B) ≤ k} ||A − B||_2

Eckart–Young theorem: given the SVD

A = [U_1 U_2] [D_1 0; 0 D_2] [V_1 V_2]^T

where D_1 is k × k, the minimizer is B = U_1 D_1 V_1^T. This U_1 D_1 V_1^T is the rank-k truncated SVD of A, and

||A − U_1 D_1 V_1^T||_2 = min_{B ∈ R^{m×n}, rank(B) ≤ k} ||A − B||_2 = σ_{k+1}
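As a concrete check of the Eckart–Young theorem, the following NumPy sketch (the helper `truncated_svd` is illustrative, not from the slides) forms the rank-k truncated SVD from a full SVD and verifies that the 2-norm error equals σ_{k+1}:

```python
import numpy as np

def truncated_svd(A, k):
    """Rank-k truncated SVD obtained by keeping the k leading
    singular value/vector pairs of a full SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
k = 5
U1, d1, V1t = truncated_svd(A, k)
B = U1 @ np.diag(d1) @ V1t          # best rank-k approximation of A
sigma = np.linalg.svd(A, compute_uv=False)
err = np.linalg.norm(A - B, 2)      # Eckart-Young: equals sigma_{k+1}
```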

SLIDE 5

Computational Cost

Given a rank-k truncated SVD A ≈ U D V^T of A ∈ R^{m×n} with m ≥ n:
- computing y = Ax approximately via y ≈ U(D(V^T x)) requires O(mk) work
- solving Ax = b approximately via x ≈ V D^{−1} U^T b requires O(mk) work

SLIDE 6

Computing the Truncated SVD

- Reduction to bidiagonal form via two-sided orthogonal updates enables computing the full SVD
- Given the full SVD, the truncated SVD is obtained by keeping only the largest k singular value/vector pairs
- Given the set of transformations Q_1, . . . , Q_s so that U = Q_1 · · · Q_s, the leading k columns of U can be obtained by computing U_1 = Q_1 · · · Q_s [I; 0]
- This method requires O(mn^2) work for the computation of the singular values and O(mnk) work for k singular vectors

SLIDE 7

Computing the Truncated SVD by Krylov Subspace Methods

- Seek the k ≪ m, n leading right singular vectors of A
- Find a basis for a Krylov subspace of B = A^T A
- Rather than forming B, compute the products Bx = A^T(Ax)
- For instance, do k′ ≥ k + O(1) iterations of Lanczos and compute k Ritz vectors to estimate the right singular vectors V
- The left singular vectors can then be obtained via AV = UD
- This method requires O(mnk) work for k singular vectors
- However, Θ(k) sparse-matrix–vector multiplications are needed (high latency and low flop/byte ratio)
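A minimal Golub–Kahan–Lanczos bidiagonalization sketch in NumPy (an assumed illustration with full reorthogonalization for stability; production codes use restarting and sparse matvecs). It touches A only through products A v and A^T u:

```python
import numpy as np

def gk_lanczos_svd(A, k, steps):
    """Estimate the k leading singular triplets of A using `steps` >= k
    Golub-Kahan-Lanczos bidiagonalization iterations.
    A is touched only via A @ v and A.T @ u products."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    Us, Vs = np.zeros((m, steps)), np.zeros((n, steps))
    alphas, betas = [], []
    beta, u_prev = 0.0, np.zeros(m)
    for j in range(steps):
        Vs[:, j] = v
        u = A @ v - beta * u_prev
        u -= Us[:, :j] @ (Us[:, :j].T @ u)           # full reorthogonalization
        alpha = np.linalg.norm(u)
        u /= alpha
        Us[:, j] = u
        w = A.T @ u - alpha * v
        w -= Vs[:, :j + 1] @ (Vs[:, :j + 1].T @ w)   # full reorthogonalization
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-14:                             # Krylov space exhausted
            break
        v = w / beta
        u_prev = u
    j += 1
    # Projected bidiagonal matrix B, satisfying A Vs = Us B up to the residual
    B = np.diag(alphas[:j]) + np.diag(betas[:j - 1], 1)
    P, s, Qt = np.linalg.svd(B)
    return Us[:, :j] @ P[:, :k], s[:k], Vs[:, :j] @ Qt.T[:, :k]

A = np.random.default_rng(7).standard_normal((60, 40))
U, s, V = gk_lanczos_svd(A, k=5, steps=40)   # steps = n here, so exact
```

In practice steps ≈ k + O(1) suffice when the leading singular values are well separated; the full-steps call above is only to make the result exact for checking.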

SLIDE 8

Generic Low-Rank Factorizations

A matrix A ∈ R^{m×n} has rank k if A = X Y^T for some X ∈ R^{m×k}, Y ∈ R^{n×k} with k ≤ min(m, n)

If A = X Y^T (an exact low-rank factorization), we can obtain the reduced SVD A = U D V^T via

1. [U_1, R] = QR(X)
2. [U_2, D, V] = SVD(R Y^T)
3. U = U_1 U_2

with cost O(mk^2), using an SVD of a small k × n matrix rather than the full m × n matrix

If instead ||A − X Y^T||_2 ≤ ε, then ||A − U D V^T||_2 ≤ ε, so we can obtain a truncated SVD given an optimal generic low-rank approximation
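The three steps above can be sketched directly in NumPy (dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 200, 150, 8
X = rng.standard_normal((m, k))
Y = rng.standard_normal((n, k))
A = X @ Y.T                     # exact rank-k matrix

U1, R = np.linalg.qr(X)                                   # step 1: O(m k^2)
U2, d, Vt = np.linalg.svd(R @ Y.T, full_matrices=False)   # step 2: k x n SVD
U = U1 @ U2                                               # step 3: O(m k^2)
```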

SLIDE 9

Rank-Revealing QR

If A is of rank k and its first k columns are linearly independent,

A = Q [R_11 R_12; 0 0]

where R_11 is upper-triangular and k × k, and Q = I − Y T Y^T is in compact WY form with an m × k matrix Y

For arbitrary A we need a column permutation P so that AP = QR

QR with column pivoting (due to Gene Golub) is an effective method for this:
- pivot so that the leading column has the largest 2-norm
- the method can break in the presence of roundoff error (see the Kahan matrix), but is very robust in practice

SLIDE 10

Low Rank Factorization by QR with Column Pivoting

QR with column pivoting can be used to either
- determine the (numerical) rank of A, or
- compute a low-rank approximation with a bounded error

Stopped after k pivots, it performs only O(mnk) rather than the O(mn^2) work of a full QR or SVD
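A sketch of the idea (an assumed, simplified implementation using modified Gram–Schmidt and the Businger–Golub largest-norm pivot rule; production codes use Householder reflectors and BLAS-3 blocking):

```python
import numpy as np

def qrcp_lowrank(A, k):
    """QR with column pivoting, stopped after k steps. Returns Q (m x k),
    R (k x n), and the pivot order piv such that A[:, piv] ~= Q @ R when
    A has (numerical) rank <= k. Assumes rank(A) >= k."""
    W = A.astype(float).copy()
    m, n = W.shape
    Q = np.zeros((m, k))
    R = np.zeros((k, n))
    piv = np.arange(n)
    for j in range(k):
        # pivot: move the column with largest remaining 2-norm to position j
        p = j + np.argmax(np.linalg.norm(W[:, j:], axis=0))
        W[:, [j, p]] = W[:, [p, j]]
        R[:, [j, p]] = R[:, [p, j]]
        piv[[j, p]] = piv[[p, j]]
        nrm = np.linalg.norm(W[:, j])
        Q[:, j] = W[:, j] / nrm
        R[j, j:] = Q[:, j] @ W[:, j:]        # includes R[j, j] = nrm
        W[:, j:] -= np.outer(Q[:, j], R[j, j:])  # deflate remaining columns
    return Q, R, piv
```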

SLIDE 11

Parallel QR with Column Pivoting

- In distributed memory, column pivoting poses further challenges
- At least one message is needed to decide on each pivot column, which leads to Ω(k) synchronizations
- Existing work tries to pivot many columns at a time by finding subsets of columns that are sufficiently linearly independent
- Randomized approaches provide alternatives and flexibility

SLIDE 12

Randomization Basics

Intuition: consider a random vector w of dimension n; all of the following hold with high probability in exact arithmetic
- Given any basis Q for the n-dimensional space, a random w is not orthogonal to any row of Q^T
- Let A = U D V^T where V^T ∈ R^{k×n}; the vector w is at a random angle with respect to the rows of V^T, so z = V^T w is a random vector
- Aw = U D z is a random linear combination of the columns of U D
- Given k random vectors, i.e., a random matrix W ∈ R^{n×k}, the columns of B = AW give k random linear combinations of the columns of U D
- B then has the same span as U!

SLIDE 13

Using the Basis to Compute a Factorization

If B has the same span as the range of A:
- [Q, R] = QR(B) gives an orthogonal basis Q for B = AW
- Q Q^T A = Q Q^T U D V^T = (Q Q^T U) D V^T; now Q^T U is orthogonal, so Q Q^T U is a basis for the range of A
- so compute H = Q^T A with H ∈ R^{k×n}, then compute [U_1, D, V] = SVD(H)
- finally compute U = Q U_1, and we have a rank-k truncated SVD of A: A = U D V^T
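These steps are the core of the randomized range-finder algorithm of Halko, Martinsson, and Tropp; a NumPy sketch (the `oversample` and `seed` parameters are illustrative additions, with oversampling guarding against perturbations):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Randomized range-finder SVD sketch: sample B = A W with a Gaussian
    W, orthogonalize B, then take the SVD of the small projection
    H = Q^T A."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ W)        # orthogonal basis for the sample
    H = Q.T @ A                       # (k + oversample) x n projection
    U1, d, Vt = np.linalg.svd(H, full_matrices=False)
    return (Q @ U1)[:, :k], d[:k], Vt[:k, :]
```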

SLIDE 14

Cost of the Randomized Method

- The matrix multiplications, e.g., AW, all require O(mnk) operations
- The QR and SVD require O((m + n)k^2) operations
- If k ≪ min(m, n), the bulk of the computation is in matrix multiplication, which can be done with fewer synchronizations and higher efficiency than QR with column pivoting or Arnoldi

SLIDE 15

Randomized Approximate Factorization

- Now let's consider the case A = U D V^T + E, where D ∈ R^{k×k} and E is a small perturbation
- E may be noise in the data or numerical error
- To obtain a basis for U, it is insufficient to multiply by a random W ∈ R^{n×k}, due to the influence of E
- However, oversampling, for instance l = k + 10, with a random W ∈ R^{n×l}, gives good results
- A Gaussian random distribution provides particularly good accuracy
- So far the dimension of W has assumed knowledge of the target approximate rank k; to determine it dynamically, generate the columns of W one at a time or a block at a time, which results in a provably accurate basis

SLIDE 16

Cost Analysis of Randomized Low-Rank Factorization

The cost of the randomized algorithm is

T_p^{MM}(m, n, k) + T_p^{QR}(m, k, k)

which means the work is O(mnk) and the algorithm is well-parallelizable

This assumes we factorize the basis by QR, followed by an SVD of R

SLIDE 17

Fast Algorithms via Pseudo-Randomness

- We can lower the number of operations needed by the randomized algorithm by generating W so that AW can be computed more rapidly
- Generate W as a pseudo-random matrix W = D F R, where
  - D is diagonal with random elements
  - F can be applied to a vector in O(n log(n)) operations, e.g., a DFT or the Hadamard matrix defined recursively by H_{2n} = [H_n H_n; H_n −H_n]
  - R is p ≈ k columns of the n × n identity matrix
- This computes AW with O(mn log(n)) operations (if m > n)
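A sketch of such a structured sample using the FFT (assumptions: NumPy's FFT plays the role of the fast transform F, and D uses random unit-modulus phases; this is a subsampled randomized Fourier transform, SRFT):

```python
import numpy as np

def srft_sample(A, l, seed=0):
    """Compute B = A (D F R) in O(m n log n) time: D scales columns by
    random phases, F is applied via row-wise FFTs, and R keeps l randomly
    chosen columns (an SRFT sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    d = np.exp(2j * np.pi * rng.random(n))        # D: random diagonal phases
    cols = rng.choice(n, size=l, replace=False)   # R: l columns of identity
    AF = np.fft.fft(A * d, axis=1) / np.sqrt(n)   # (A D) F with unitary scaling
    return AF[:, cols]
```

The returned B can replace AW in the randomized algorithm; its column span still captures the range of A with high probability.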

SLIDE 18

Cost of Pseudo-Randomized Factorization

- Instead of matrix multiplication, apply m FFTs of dimension n
- Each FFT is independent, so it suffices to perform a single transpose
- So we have the following overall cost, assuming m > n:

O((mn log(n)/p) · γ) + T_p^{all−to−all}(mn/p) + T_p^{QR}(m, k, k)

- This is lower than that of the unstructured/randomized version; however, this idea does not extend well to the case when A is sparse

SLIDE 19

Hierarchical Low Rank Structure

- Consider a two-way partitioning of the vertices of a graph
- The connectivity within each partition is given by a block-diagonal matrix with diagonal blocks A_1, A_2
- If the graph is nicely separable, there is little connectivity between vertices in the two partitions
- Consequently, it is often possible to approximate the off-diagonal blocks by low-rank factorizations:

A ≈ [A_1, U_1 D_1 V_1^T; U_2 D_2 V_2^T, A_2]

- Doing this recursively to A_1 and A_2 yields a matrix with hierarchical low-rank structure

SLIDE 20

HSS Matrix, Two Levels

Hierarchically semi-separable (HSS) matrix, space padded

SLIDE 21

HSS Matrix, Three Levels

SLIDE 22

HSS Matrix Formal Definition

The l-level HSS factorization is described by

H_l(A) = { {U, V, T_12, T_21, A_11, A_22}                   : l = 1
         { {U, V, T_12, T_21, H_{l−1}(A_11), H_{l−1}(A_22)} : l > 1

The low-rank representation of the off-diagonal blocks is given by

A_21 = Ū_2 T_21 V̄_1^T,   A_12 = Ū_1 T_12 V̄_2^T

where for a ∈ {1, 2},

Ū_a = U_a(H_l(A)) = { U_a                                                  : l = 1
                    { [U_1(H_{l−1}(A_aa)), 0; 0, U_2(H_{l−1}(A_aa))] U_a   : l > 1

V̄_a = V_a(H_l(A)) = { V_a                                                  : l = 1
                    { [V_1(H_{l−1}(A_aa)), 0; 0, V_2(H_{l−1}(A_aa))] V_a   : l > 1

SLIDE 23

HSS Matrix–Vector Multiplication

We now consider computing y = Ax

With H_1(A) we would just compute

y_1 = A_11 x_1 + U_1(T_12(V_2^T x_2))
y_2 = A_22 x_2 + U_2(T_21(V_1^T x_1))

For general H_l(A), perform an up-sweep and a down-sweep:
- the up-sweep computes w = [V̄_1^T x_1; V̄_2^T x_2] at every tree node
- the down-sweep computes a tree sum of [Ū_1 T_12 w_2; Ū_2 T_21 w_1]
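The one-level (H_1) case can be written out directly; this NumPy sketch uses hypothetical block dimensions and can be verified against the dense product:

```python
import numpy as np

def hss_matvec_1level(A11, A22, U1, V1, U2, V2, T12, T21, x):
    """One-level HSS matrix-vector product:
    y1 = A11 x1 + U1 (T12 (V2^T x2)),  y2 = A22 x2 + U2 (T21 (V1^T x1))."""
    n1 = A11.shape[0]
    x1, x2 = x[:n1], x[n1:]
    y1 = A11 @ x1 + U1 @ (T12 @ (V2.T @ x2))
    y2 = A22 @ x2 + U2 @ (T21 @ (V1.T @ x1))
    return np.concatenate([y1, y2])
```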

SLIDE 24

HSS Matrix Vector Product

SLIDE 25

HSS Matrix Vector Product

SLIDE 26

HSS Matrix–Vector Multiplication, Up-Sweep

The up-sweep is performed using the nested structure of V̄:

w = W(H_l(A), x) = { [V_1^T, 0; 0, V_2^T] x                                               : l = 1
                   { [V_1^T, 0; 0, V_2^T] [W(H_{l−1}(A_11), x_1); W(H_{l−1}(A_22), x_2)]  : l > 1
SLIDE 27

HSS Matrix–Vector Multiplication, Down-Sweep

Use w = W(H_l(A), x) from the root to the leaves to get

y = Ax = [Ū_1 T_12 w_2; Ū_2 T_21 w_1] + [A_11 x_1; A_22 x_2]
       = [Ū_1, 0; 0, Ū_2] [0, T_12; T_21, 0] w + [A_11, 0; 0, A_22] x

Using the nested structure of Ū_a and v = [U_1, 0; 0, U_2] [0, T_12; T_21, 0] w,

y_a = [U_1(H_{l−1}(A_aa)), 0; 0, U_2(H_{l−1}(A_aa))] v_a + A_aa x_a   for a ∈ {1, 2}

which gives the down-sweep recurrence

y = Ax + z = Y(H_l(A), x, z) = { [U_1 q_1; U_2 q_2] + [A_11 x_1; A_22 x_2]                          : l = 1
                               { [Y(H_{l−1}(A_11), x_1, U_1 q_1); Y(H_{l−1}(A_22), x_2, U_2 q_2)]   : l > 1

where q = [0, T_12; T_21, 0] w + z

SLIDE 28

Prefix Sum as HSS Matrix–Vector Multiplication

We can express the n-element (exclusive) prefix sum y(i) = Σ_{j=1}^{i−1} x(j) as y = Lx, where

L = [L_11, 0; L_21, L_22]

is the strictly lower-triangular matrix of ones

L is an H-matrix, since L_21 = 1 1^T is a block of all ones

L also has rank-1 HSS structure; in particular

H_l(L) = { {1_2, 1_2, 0, 1, L_11, L_22}                    : l = 1
         { {1_2, 1_2, 0, 1, H_{l−1}(L_11), H_{l−1}(L_22)}  : l > 1

so each U, V, Ū, V̄ is a vector of 1s, T_12 = 0, and T_21 = 1

SLIDE 29

Prefix Sum HSS Up-Sweep

We can use the HSS structure of L to compute the prefix sum of x

Recall that the up-sweep recurrence has the general form

w = W(H_l(A), x) = { [V_1^T, 0; 0, V_2^T] x                                               : l = 1
                   { [V_1^T, 0; 0, V_2^T] [W(H_{l−1}(A_11), x_1); W(H_{l−1}(A_22), x_2)]  : l > 1

For the prefix sum this becomes

w = W(H_l(L), x) = { x                                                                 : l = 1
                   { [1 1 0 0; 0 0 1 1] [W(H_{l−1}(L_11), x_1); W(H_{l−1}(L_22), x_2)]  : l > 1

so the up-sweep computes w = [S(x_1); S(x_2)], where S(y) = Σ_i y_i

SLIDE 30

Prefix Sum HSS Down-Sweep

The down-sweep has the general structure

y = Y(H_l(A), x, z) = { [U_1, 0; 0, U_2] q + [A_11, 0; 0, A_22] x                          : l = 1
                      { [Y(H_{l−1}(A_11), x_1, U_1 q_1); Y(H_{l−1}(A_22), x_2, U_2 q_2)]   : l > 1

where q = [0, T_12; T_21, 0] W(H_l(A), x) + z. For the prefix sum,

[0, T_12; T_21, 0] W(H_l(L), x) = [0, 0; 1, 0] [S(x_1); S(x_2)] = [0; S(x_1)] = q − z

so

y = Y(H_l(L), x, z) = { [z_1; x_1 + z_2]                                                           : l = 1
                      { [Y(H_{l−1}(L_11), x_1, 1_2 z_1); Y(H_{l−1}(L_22), x_2, 1_2(S(x_1) + z_2))]  : l > 1

Initially the prefix z = 0, and it will always be the case that z_1 = z_2
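The two recurrences above specialize to a short recursive prefix-sum routine (a sketch assuming n is a power of 2; for clarity the up-sweep partial sum S(x_1) is recomputed at each node rather than shared through the up-sweep tree, which the real algorithm does):

```python
def prefix_hss(x, z=0):
    """Exclusive prefix sum y[i] = sum_{j < i} x[j] via the HSS
    up-sweep/down-sweep recurrences for L (len(x) a power of 2, >= 2)."""
    n = len(x)
    if n == 2:
        # leaf (l = 1): y = [z1, x1 + z2] with z1 = z2 = z
        return [z, x[0] + z]
    half = n // 2
    s1 = sum(x[:half])                    # up-sweep: S(x1)
    left = prefix_hss(x[:half], z)        # prefix z passed down unchanged
    right = prefix_hss(x[half:], s1 + z)  # right half offset by S(x1) + z
    return left + right
```

For example, `prefix_hss([1, 2, 3, 4])` returns `[0, 1, 3, 6]`.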

SLIDE 31

Cost of HSS Matrix–Vector Multiplication

- The down-sweep and the up-sweep perform small dense matrix–vector multiplications at each recursive step
- Let's assume k is the dimension of the leaf blocks and the rank at each level (the number of columns in each U_a, V_a)
- The work for both the down-sweep and the up-sweep satisfies

Q(n, k) = 2Q(n/2, k) + O(k^2 · γ),   Q(k, k) = O(k^2 · γ)

so Q(n, k) = O(nk · γ)

- The depth of the algorithm scales as D = Θ(log(n)) for fixed k

SLIDE 32

Parallelization of HSS Matrix–Vector Multiplication

If we assign each tree node in the first log_2(p) levels to a single processor, and execute each leaf subtree on a different processor,

T_p(n, k) = 2T_{p/2}(n, k) + O(k^2 · γ + k · β + α)
          = O((nk/p + k^2 log(p)) · γ + k log(p) · β + log(p) · α)

SLIDE 33

Synchronization-Efficient HSS Multiplication

- The leaf subtrees can be computed independently:

T_p^{leaf−subtrees}(n, k) = O(nk/p · γ + k · β + α)

- Focus on the up-sweep and down-sweep over the remaining log_2(p) levels
- Executing the root subtree sequentially takes time

T_p^{root−subtree}(pk, k) = O(pk^2 · γ + pk · β + α)

- Instead, have p^r (r < 1) processors compute subtrees with p^{1−r} leaves each, then recurse on the p^r roots:

T_p^{rec−tree}(pk, k) = T_{p^r}^{rec−tree}(p^r k, k) + O(p^{1−r} k^2 · γ + p^{1−r} k · β + α)
                      = O(p^{1−r} k^2 · γ + p^{1−r} k · β + log_{1/r}(log(p)) · α)

SLIDE 34

Synchronization-Efficient HSS Multiplication

- Focus on the top tree with p leaves (the leaf subtrees)
- Assign each processor a unique path from a leaf to the root
- Given w = W(H_l(A), x) at every node, each processor can compute a down-sweep path in the subtree independently
- For the up-sweep, realize that the tree applies a linear transformation, so we can sum the results computed along each path
- For each tree node, there is a contribution from every processor assigned a leaf of that node's subtree
- Therefore, there are p − 1 sums over a total of O(p log(p)) contributions, i.e., O(kp log(p)) elements
- Do these with min(p, k log_2(p)) processors, each obtaining max(p, k log_2(p)) contributions, so

T_p^{root−paths}(k) = O(k^2 log(p) · γ + (k log(p) + p) · β + log(p) · α)

SLIDE 35

HSS Multiplication by Multiple Vectors

Consider the multiplication C = AB, where A ∈ R^{n×n} is HSS and B ∈ R^{n×b}
- let's consider the case p ≤ b ≪ n
- if we assign each processor all of A, each can compute a column of C simultaneously, but this requires a prohibitive amount of memory
- instead, perform the leaf-level multiplications, processing n/p rows of B with each processor (call the intermediate result C̄)
- then transpose C̄ and apply the log_2(p) root levels of the HSS tree to columns of C̄ independently
- this algorithm requires replication of only the root O(log(p)) levels of the HSS tree, O(pb) data
- for larger k or larger p, different algorithms may be desirable

SLIDE 36

References

Hochstenbach, M. E. (2001). A Jacobi–Davidson type SVD method. SIAM Journal on Scientific Computing, 23(2), 606-628.

Chan, T. F. (1987). Rank revealing QR factorizations. Linear Algebra and Its Applications, 88, 67-82.

Businger, P., and Golub, G. H. (1965). Linear least squares solutions by Householder transformations. Numerische Mathematik, 7(3), 269-276.

SLIDE 37

References

Quintana-Ortí, G., Sun, X., and Bischof, C. H. (1998). A BLAS-3 version of the QR factorization with column pivoting. SIAM Journal on Scientific Computing, 19(5), 1486-1494.

Demmel, J. W., Grigori, L., Gu, M., and Xiang, H. (2015). Communication avoiding rank revealing QR factorization with column pivoting. SIAM Journal on Matrix Analysis and Applications, 36(1), 55-89.

Bischof, C. H. (1991). A parallel QR factorization algorithm with controlled local pivoting. SIAM Journal on Scientific and Statistical Computing, 12(1), 36-57.

Halko, N., Martinsson, P. G., and Tropp, J. A. (2011). Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2), 217-288.
