ECS231 Low-rank approximation – revisited
(Introduction to Randomized Algorithms)
May 23, 2019
Outline
1. Review: low-rank approximation
2. Prototype randomized SVD algorithm
3. Accelerated randomized SVD algorithms
4. CUR decomposition
◮ The SVD of an m × n matrix A (m ≥ n) is defined by
      A = UΣVᵀ,
  where U (m × n) and V (n × n) have orthonormal columns and
  Σ = diag(σ1, . . . , σn) with σ1 ≥ σ2 ≥ · · · ≥ σn ≥ 0.
◮ Computational cost: O(mn²), assuming m ≥ n.
◮ Rank-k truncated SVD of A:
      Ak = U(:,1:k) Σ(1:k,1:k) V(:,1:k)ᵀ.
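As a concrete illustration, here is a minimal numpy sketch of the rank-k truncated SVD; the function name `truncated_svd` and the test sizes are ours, not from the slides:

```python
import numpy as np

def truncated_svd(A, k):
    """Rank-k truncated SVD of an m x n matrix A (m >= n)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD, O(mn^2)
    # Keep the k leading singular triplets: U(:,1:k), sigma_1..sigma_k, V(:,1:k)
    return U[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
Uk, sk, Vtk = truncated_svd(A, 2)
Ak = Uk @ np.diag(sk) @ Vtk   # best rank-2 approximation of A
print(Ak.shape)  # (8, 5)
```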
◮ Eckart–Young theorem. The rank-k truncated SVD Ak is a best rank-k
  approximation of A:
      min rank(B)≤k ‖A − B‖2 = ‖A − Ak‖2 = σk+1,
      min rank(B)≤k ‖A − B‖F = ‖A − Ak‖F = (σ²k+1 + · · · + σ²n)^{1/2}.
◮ Theorem A. Let Q be a matrix with orthonormal columns. Then
      min B ‖A − QB‖²F = ‖A − QQᵀA‖²F,
  attained at B = QᵀA.
Remark: Given an m × n matrix A = (aij), the Frobenius norm of A is
defined by ‖A‖F = (Σᵢ Σⱼ a²ij)^{1/2} = (trace(AᵀA))^{1/2}.
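The Eckart–Young identities can be checked numerically; the test matrix and sizes below are arbitrary choices for illustration:

```python
import numpy as np

# Numerical check of the Eckart-Young theorem: the rank-k truncated SVD
# attains error sigma_{k+1} in the 2-norm and (sum_{j>k} sigma_j^2)^{1/2}
# in the Frobenius norm.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

err2 = np.linalg.norm(A - Ak, 2)       # spectral-norm error
errF = np.linalg.norm(A - Ak, 'fro')   # Frobenius-norm error
assert np.isclose(err2, s[k])                       # equals sigma_{k+1}
assert np.isclose(errF, np.sqrt(np.sum(s[k:]**2)))  # equals the tail sum
```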
◮ Input: m × n matrix A with m ≥ n, integers k > 0 and k < ℓ < n
◮ Steps:
   1. Draw an n × ℓ random sketch matrix Ω
   2. Form the m × ℓ sample matrix Y = AΩ
   3. Compute the QR factorization Y = QR
   4. Form the ℓ × n matrix B = QᵀA
   5. Compute the SVD of the small matrix B = ÛΣVᵀ
   6. Set U = QÛ
◮ Output: rank-k truncated SVD factors U(:,1:k), Σ(1:k,1:k), V(:,1:k)
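The steps above can be sketched in a few lines of numpy; the Gaussian choice of the sketch Ω is an assumption (the slides fix only the shapes), and the function name is ours:

```python
import numpy as np

def randomized_svd(A, k, ell, rng=None):
    """Prototype randomized SVD sketch (after Halko et al., 2011)."""
    if rng is None:
        rng = np.random.default_rng()
    m, n = A.shape
    Omega = rng.standard_normal((n, ell))   # random Gaussian sketch (assumption)
    Y = A @ Omega                           # sample the range of A
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis of Y
    B = Q.T @ A                             # small ell x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                              # lift back to m-dimensional space
    return U[:, :k], s[:k], Vt[:k, :]
```

All expensive work happens on the small ℓ × n matrix B; A itself is touched only through the two products AΩ and QᵀA.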
◮ Theorem. With a proper choice of an n × O(k/ǫ) sketch Ω,
      ‖A − QQᵀA‖²F ≤ (1 + ǫ)‖A − Ak‖²F.
◮ Reading: Halko et al., SIAM Rev., 53:217–288, 2011.
◮ Input: m × n matrix A with m ≥ n, n × ℓ starting matrix Ω, and a
  number of power iterations q ≥ 0
◮ Steps:
   1. Form Y = (AAᵀ)^q AΩ
   2. Compute the QR factorization Y = QR
◮ Output: m × ℓ matrix Q whose columns are an orthonormal basis for
  the range of Y
◮ The orthonormal basis Q of Y = (AAᵀ)^q AΩ should be stably
  computed: in floating point arithmetic, forming Y explicitly washes
  out the information associated with the small singular values, so one
  orthonormalizes between each application of A and Aᵀ.
◮ Convergence results: powering drives the singular-value ratios
  (σj/σk+1)^(2q+1), so the approximation error of QQᵀA approaches the
  optimal value σk+1 rapidly as q grows.
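A sketch of the stabilized computation of Q, interleaving QR factorizations with the applications of A and Aᵀ rather than forming Y = (AAᵀ)^q AΩ explicitly (the function name is ours):

```python
import numpy as np

def subspace_iteration_basis(A, Omega, q):
    """Orthonormal basis for range((A A^T)^q A Omega), computed stably.

    A QR factorization after every product keeps the iterate
    well-conditioned, so small singular directions are not lost
    to rounding as they would be if Y were formed directly.
    """
    Q, _ = np.linalg.qr(A @ Omega)
    for _ in range(q):
        W, _ = np.linalg.qr(A.T @ Q)   # re-orthonormalize after A^T
        Q, _ = np.linalg.qr(A @ W)     # re-orthonormalize after A
    return Q
```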
◮ Input: m × n matrix A with m ≥ n, positive integers k, ℓ, q with
  k < ℓ < n
◮ Steps:
   1. Draw an n × ℓ random sketch matrix Ω
   2. Compute an orthonormal basis Q of Y = (AAᵀ)^q AΩ by
      randomized subspace iteration
   3. Form B = QᵀA and compute its SVD B = ÛΣVᵀ
   4. Set U = QÛ
◮ Output: rank-k truncated SVD factors U(:,1:k), Σ(1:k,1:k), V(:,1:k)
◮ CUR decomposition: approximate A ≈ CXR, where C and R consist of
  selected columns and rows of A. The optimal middle factor
      X = C⁺AR⁺
  minimizes ‖A − CXR‖F over all X.
Remark: Let A = UΣVᵀ be the SVD of an m × n matrix A with m ≥ n.
Then the pseudo-inverse (also called generalized inverse) A⁺ of A is
given by A⁺ = VΣ⁺Uᵀ, where Σ⁺ = diag(σ⁺1, . . .) and σ⁺j = 1/σj if
σj ≠ 0, otherwise σ⁺j = 0. If A has full column rank, then
A⁺ = (AᵀA)⁻¹Aᵀ.
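A small numpy sketch of a CUR approximation with the optimal middle factor X = C⁺AR⁺; the uniform column/row sampling below is purely illustrative (practical schemes bias the sampling, e.g. by leverage scores), and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 12))
c = r = 8
J = rng.choice(A.shape[1], size=c, replace=False)  # sampled column indices
I = rng.choice(A.shape[0], size=r, replace=False)  # sampled row indices
C, R = A[:, J], A[I, :]                            # m x c and r x n factors

# Optimal middle factor: X = C^+ A R^+ minimizes ||A - C X R||_F over X.
X = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
err = np.linalg.norm(A - C @ X @ R, 'fro')
```

With the optimal X, the residual is the part of A outside the column space of C and the row space of R, so err never exceeds ‖A‖F.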
◮ Theorem. With c = O(k/ǫ) columns and r = O(k/ǫ) rows selected,
      min X ‖A − CXR‖²F ≤ (1 + ǫ)‖A − Ak‖²F.
◮ Reading: Boutsidis and Woodruff, STOC, pp. 353–362, 2014.