Structured random measurements in compressed sensing
Holger Rauhut, Lehrstuhl C für Mathematik (Analysis), RWTH Aachen
Winter School on Compressed Sensing, TU Berlin, December 3–5, 2014
Too Few Data
Often it is hard, expensive, or even impossible to acquire as many measurements as classical sampling theory demands.
Key ingredients:
◮ Compressibility / sparsity (small complexity of the relevant information)
◮ Efficient algorithms (convex optimization)
◮ Randomness (random matrices)
◮ Standard compressive sensing: sparsity (small number of nonzero coefficients)
◮ Low rank matrix recovery
◮ Phase retrieval
◮ Low rank tensor recovery
◮ Only partial results for tensor recovery available so far.
◮ coefficient vector: $x \in \mathbb{C}^N$, $N \in \mathbb{N}$
◮ support of $x$: $\operatorname{supp} x := \{j : x_j \neq 0\}$
◮ $s$-sparse vectors: $\|x\|_0 := |\operatorname{supp} x| \le s$
◮ $\ell_q$-quasinorm: $\|x\|_q := \big(\sum_{j=1}^N |x_j|^q\big)^{1/q}$
◮ the unit balls $B_p = \{x \in \mathbb{C}^N : \|x\|_p \le 1\}$, $0 < p \le 1$, are good models for compressible vectors
Given measurements $y = Ax$ with $A \in \mathbb{C}^{m\times N}$, $m \ll N$, the sparsest solution is
$$\min_{x \in \mathbb{C}^N} \|x\|_0 \quad \text{subject to } Ax = y,$$
which is NP-hard in general. Convex relaxation ($\ell_1$-minimization / basis pursuit; see the sketch below):
$$\min_{x \in \mathbb{C}^N} \|x\|_1 \quad \text{subject to } Ax = y.$$
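Not from the slides: a minimal numerical sketch of basis pursuit using the cvxpy modeling package; the sizes N, m, s below are arbitrary illustrative choices.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, m, s = 200, 60, 8          # ambient dimension, measurements, sparsity

# s-sparse ground truth and Gaussian measurement matrix
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit: minimize ||x||_1 subject to Ax = y
x = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == y]).solve()
print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```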
◮ Uniform recovery
  ◮ Null space property
  ◮ Restricted isometry property
◮ Nonuniform recovery
  ◮ Tangent cone (descent cone) of the norm at $x$ intersects $\ker A$ only trivially
  ◮ Dual certificates
Recovery via the descent cone: $x$ is the unique minimizer of $\|z\|_1$ subject to $Az = Ax$ if and only if $\ker A \cap T(x) = \{0\}$, where $T(x) = \operatorname{cone}\{z - x : \|z\|_1 \le \|x\|_1\}$.
Nonuniform recovery via dual certificates: if $A_S$ is injective and
$$\max_{\ell \notin S} \big|\langle A_S^\dagger a_\ell, \operatorname{sgn}(x_S)\rangle\big| < 1,$$
then $x$ with $\operatorname{supp} x = S$ is the unique $\ell_1$-minimizer. Here $A_S^\dagger = (A_S^* A_S)^{-1} A_S^*$ is the Moore–Penrose pseudo-inverse of $A_S$, and injectivity is guaranteed once
$$\|A_S^* A_S - I\|_{2\to2} \le \delta < 1.$$
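This condition is easy to test numerically; a small sanity check added here (not the slides' code), using `numpy.linalg.pinv` for the pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, s = 120, 50, 6

A = rng.standard_normal((m, N)) / np.sqrt(m)
S = rng.choice(N, s, replace=False)
x = np.zeros(N)
x[S] = rng.standard_normal(s)

# Dual certificate condition: max_{l not in S} |<pinv(A_S) a_l, sgn(x_S)>| < 1
pinv_AS = np.linalg.pinv(A[:, S])              # shape (s, m)
sgn = np.sign(x[S])
off_support = np.setdiff1d(np.arange(N), S)
vals = np.abs((pinv_AS @ A[:, off_support]).T @ sgn)
print("max correlation:", vals.max(),
      "-> recovery of this x guaranteed" if vals.max() < 1 else "-> no guarantee")
```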
Restricted isometry property (RIP): the restricted isometry constant $\delta_s$ is the smallest $\delta \ge 0$ such that
$$(1-\delta_s)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta_s)\|x\|_2^2 \quad \text{for all } s\text{-sparse } x.$$
If $\delta_{2s}$ is sufficiently small, then every $s$-sparse $x$ is the unique solution of
$$\min_z \|z\|_1 \quad \text{subject to } Az = Ax.$$
Theorem (subgaussian random matrices): let $A \in \mathbb{R}^{m\times N}$ have independent subgaussian entries. If
$$m \ge C\delta^{-2}\big(s\ln(eN/s) + \ln(2\varepsilon^{-1})\big),$$
then with probability at least $1-\varepsilon$ the matrix $\frac{1}{\sqrt m}A$ satisfies $\delta_s \le \delta$.
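To make this concrete, a small Monte Carlo sketch added here: computing $\delta_s$ exactly is NP-hard, so sampling random supports only yields a lower bound on it.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, s, trials = 128, 64, 5, 2000

A = rng.standard_normal((m, N)) / np.sqrt(m)

# For each sampled support S, ||A_S^* A_S - I||_{2->2} is the restricted
# isometry constant over S; the maximum lower-bounds delta_s.
delta_lb = 0.0
for _ in range(trials):
    S = rng.choice(N, s, replace=False)
    G = A[:, S].T @ A[:, S]
    delta_lb = max(delta_lb, np.linalg.norm(G - np.eye(s), 2))
print(f"Monte Carlo lower bound on delta_{s}: {delta_lb:.3f}")
```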
Why structured random matrices?
◮ Applications impose structure due to physical constraints, so fully random (Gaussian) matrices usually cannot be realized in practice.
◮ Structure enables fast matrix-vector multiplies (FFT) in recovery algorithms.
◮ Unstructured matrices cause storage problems at large scale.
Random sampling of sparse trigonometric polynomials: given random sampling points $(t_\ell)_{\ell=1}^m \subset [0,1]$, the sampling matrix $A \in \mathbb{C}^{m\times N}$ is given as
$$A_{\ell,k} = e^{2\pi i k t_\ell}.$$
Theorem: if the sampling points are drawn independently at random and
$$m \ge C\delta^{-2}\, s \log^4(N),$$
then with high probability $\frac{1}{\sqrt m}A$ satisfies $\delta_s \le \delta$.
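A sketch of recovery from randomly subsampled rows of an orthonormal transform, added here for illustration; I use a DCT rather than the DFT so that everything stays real-valued (a simplifying assumption, not the slides' exact setup).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
N, m, s = 256, 80, 8

# Orthonormal DCT-II matrix as a real-valued bounded orthonormal system
k = np.arange(N)
F = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N)) * np.sqrt(2 / N)
F[0, :] /= np.sqrt(2)

# Subsample m rows at random -> structured random measurement matrix
rows = rng.choice(N, m, replace=False)
A = np.sqrt(N / m) * F[rows, :]

x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

x = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == y]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```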
[Figure: compressed sensing MRI example. Image courtesy of Michael Lustig and Shreyas Vasanawala, Stanford University.]
Bounded orthonormal systems: let $D$ be a measurable space with probability measure $\nu$, and let $\phi_1,\dots,\phi_N : D \to \mathbb{C}$ be orthonormal,
$$\int_D \phi_j(t)\overline{\phi_k(t)}\, d\nu(t) = \delta_{j,k}, \quad j,k \in [N],$$
and uniformly bounded, $\sup_{t\in D}\max_{j\in[N]} |\phi_j(t)| \le K$. Sampling points $t_1,\dots,t_m$ are drawn i.i.d. from $\nu$, and the sampling matrix is $A_{\ell,k} = \phi_k(t_\ell)$.
Theorem (RIP for bounded orthonormal systems): if
$$m \ge C K^2 \delta^{-2}\, s \log^4(N),$$
then with high probability $\frac{1}{\sqrt m}A$ satisfies $\delta_s \le \delta$.
Preconditioning: suppose $\{\phi_j\}$ is orthonormal with respect to $\nu$ but unbounded, and let $w$ be a weight function with $\sup_t |w(t)\phi_j(t)| \le K$ for all $j$ and $\int_D w^{-2}(t)\, d\nu(t) = 1$. Then $d\mu(t) = w^{-2}(t)\, d\nu(t)$ defines a probability measure, and $\{w\phi_j\}$ is a bounded orthonormal system with respect to $\mu$; sampling the $t_\ell$ from $\mu$ and preconditioning the samples by $w(t_\ell)$ restores the RIP bound above.

Example (Legendre polynomials $L_j$ on $[-1,1]$, $\nu$ uniform): take $w(x) = \sqrt{\pi/2}\,(1-x^2)^{1/4}$. Then $d\mu(x) = \frac{dx}{\pi\sqrt{1-x^2}}$ is the Chebyshev measure and $\{wL_j\}$ is uniformly bounded.
Example: coefficients with polynomial decay, $x_{j,k} = \frac{c}{1+j^2+k^2}$, $j,k \in \{-N,\dots,N\}$. If suitable weights are chosen, such coefficient vectors are well approximated by weighted-sparse vectors.
Weighted sparsity: for weights $\omega_j \ge 1$,
$$\|x\|_{\omega,0} := \sum_{j \in \operatorname{supp}(x)} \omega_j^2, \qquad \|x\|_{\omega,1} := \sum_{j=1}^N \omega_j |x_j|.$$
[Figure: recovery of the Runge function $f(x) = \frac{1}{1+25x^2}$ from few samples. Panels: original function; least squares solution; unweighted $\ell_1$ minimizer; weighted $\ell_1$ minimizer; each with its residual error.]

[Figure: the same comparison (original function, least squares, unweighted $\ell_1$, weighted $\ell_1$, with residual errors) for a second test function.]
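A sketch reproducing this type of experiment (a reconstruction added here, not the original code): interpolate the Runge function by weighted $\ell_1$-minimization in a Chebyshev basis, with sampling points drawn from the Chebyshev measure. The weight choice $\omega_j = \sqrt{j}$ is an illustrative assumption.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
N, m = 64, 30                              # max degree, number of samples

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)    # Runge function

# Sampling points drawn from the Chebyshev measure on [-1, 1]
t = np.cos(np.pi * rng.random(m))
A = np.polynomial.chebyshev.chebvander(t, N - 1)   # A[l, k] = T_k(t_l)
y = f(t)

w = np.sqrt(np.arange(1, N + 1, dtype=float))      # weights omega_j (assumed)

c = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, c), 1)), [A @ c == y]).solve()

grid = np.linspace(-1, 1, 400)
approx = np.polynomial.chebyshev.chebval(grid, c.value)
print("max error on grid:", np.max(np.abs(approx - f(grid))))
```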
Partial random circulant matrices: circular convolution is
$$(b * x)_\ell = \sum_{j=1}^N b_{\ell - j \bmod N}\, x_j,$$
and the measurements are $y = R_\Theta(b * x)$, where $R_\Theta$ restricts to a fixed subset $\Theta \subset [N]$ of size $m$.
◮ Rademacher $b = \epsilon$: independent $\pm 1$ entries
◮ Gaussian $b = g$: standard Gaussian random vector
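A numerical sketch of recovery from a partial random circulant matrix, added here; the convolution is implemented via the FFT, which is exactly the computational advantage at stake.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
N, m, s = 128, 50, 6

# Rademacher generator and FFT-based circular convolution
b = rng.choice([-1.0, 1.0], size=N)
def circ_conv(b, x):
    return np.real(np.fft.ifft(np.fft.fft(b) * np.fft.fft(x)))

Theta = rng.choice(N, m, replace=False)    # fixed subsampling set

x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = circ_conv(b, x_true)[Theta]

# Explicit matrix form of the same operator, for the cvxpy model
C = np.stack([circ_conv(b, e) for e in np.eye(N)], axis=1)
A = C[Theta, :]

x = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == y]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```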
◮ Use a coded mask instead of a single pinhole
◮ The observed coded aperture image is the convolution of the scene with the (random) mask pattern
[Figure: empirical recovery rate as a function of sparsity (sparsity 5–40 on the x-axis, recovery rate 0–1 on the y-axis).]
Theorem (Krahmer, Mendelson, R. 2014): if
$$m \ge c\delta^{-2}\, s\, (\log s)^2 (\log N)^2,$$
then with high probability the restricted isometry constants of the partial random circulant matrix $\frac{1}{\sqrt m}\Phi_\Theta(\xi)$ satisfy $\delta_s \le \delta$.
The measurement matrix in this remote sensing model is built from (normalized) Green's functions of the Helmholtz equation, $4\pi z_0\, G(a, r)$ with
$$G(a,r) = \frac{e^{2\pi i |a-r|/\lambda}}{4\pi|a-r|},$$
where $a$ ranges over antenna positions and $r$ over the $N$ points of a grid in the object domain at distance $z_0$.
Under the compatibility condition $\sqrt{\lambda z_0}/h \in \mathbb{N}$ we have $\mathbb{E} A^* A = I$.
The recovery guarantees hold under the same condition $\sqrt{\lambda z_0}/h \in \mathbb{N}$, where $h$ is the mesh size of the grid in the object domain.
Many of the proofs reduce to bounding the supremum $\sup_{t\in T} |X_t|$ of a random process over a set $T$, e.g.
◮ $S^{N-1} = \{x \in \mathbb{R}^N : \|x\|_2 = 1\}$ (operator norm)
◮ $T_s = \{x \in \mathbb{R}^N : \|x\|_2 \le 1,\ \|x\|_0 \le s\}$ (RIP constants)
◮ the descent cone $T(x)$ (nonuniform recovery)
Tools:
◮ Empirical processes (subgaussian matrices, random partial Fourier matrices)
◮ Chaos processes (partial random circulant matrices)
Recall the RIP: $(1-\delta_s)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta_s)\|x\|_2^2$ for all $s$-sparse $x$. Equivalently,
$$\delta_s = \sup_{x\in T_s}\big|\|Ax\|_2^2 - \|x\|_2^2\big| = \max_{S\subset[N],\,\#S\le s}\ \sup_{x\in B_S}\big|\|Ax\|_2^2 - \|x\|_2^2\big| = \max_{S\subset[N],\,\#S\le s}\|A_S^* A_S - I\|_{2\to2},$$
where $B_S$ is the set of unit-norm vectors supported on $S$. Proof strategy for subgaussian matrices:
◮ Fix $x \in T_s$ and estimate $\mathbb{P}\big(\big|\|Ax\|_2^2 - \|x\|_2^2\big| \ge u\big)$
◮ Take a union bound over a sufficiently fine $\varepsilon$-net of $T_s$
◮ Extend the bound over all of $T_s$
Concentration (Gaussian case): for fixed $x$ with $\|x\|_2 = 1$ and $0 < u \le 1$,
$$\mathbb{P}\big(\big|\|Ax\|_2^2 - \|x\|_2^2\big| \ge u\big) \le 2\exp(-cmu^2).$$
◮ $\|Ax\|_2^2 = \sum_{j=1}^m |\langle A_j, x\rangle|^2$ with $A_1,\dots,A_m$ the rows of $A$
◮ The random variables $\langle A_j, x\rangle$ are Gaussian with variance $\frac{1}{m}\|x\|_2^2$, so the $Z_j = |\langle A_j, x\rangle|^2 - \frac{1}{m}\|x\|_2^2$ are independent, mean-zero, and subexponential
◮ $\|Ax\|_2^2 - \|x\|_2^2 = \sum_{j=1}^m Z_j$ is a sum of independent mean-zero subexponential variables, so
$$\mathbb{P}\big(\big|\|Ax\|_2^2 - \|x\|_2^2\big| \ge u\big) = \mathbb{P}\Big(\Big|\sum_{j=1}^m Z_j\Big| \ge u\Big)$$
can be bounded via Bernstein's inequality
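This can be checked empirically (a sketch added here): for fixed unit-norm $x$, $m\|Ax\|_2^2$ has a chi-square distribution with $m$ degrees of freedom, so the tail can be sampled without forming $A$ at all.

```python
import numpy as np

rng = np.random.default_rng(6)
m, trials = 400, 10000

# For fixed unit-norm x and A with N(0, 1/m) entries, m*||Ax||_2^2 is
# chi-square with m degrees of freedom; sample the deviation directly.
g = rng.standard_normal((trials, m))
dev = np.abs((g ** 2).mean(axis=1) - 1.0)

for u in (0.05, 0.10, 0.20):
    print(f"P(| ||Ax||^2 - 1 | >= {u:.2f}) ~= {(dev >= u).mean():.4f}")
```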
Covering numbers: the set $B_S$ for fixed $S$, $\#S = s$, admits an $\varepsilon$-net of cardinality $n \le (1+2/\varepsilon)^s$. Union bound over the net:
$$\mathbb{P}\Big(\max_{j=1,\dots,n}\big|\|Ax_j\|_2^2 - \|x_j\|_2^2\big|\ge u\Big) \le n\max_{j}\mathbb{P}\big(\big|\|Ax_j\|_2^2 - \|x_j\|_2^2\big|\ge u\big) \le 2n\exp(-cmu^2).$$
Extending from the net to all of $B_S$ gives
$$\mathbb{P}\Big(\sup_{x\in B_S}\big|\|Ax\|_2^2 - \|x\|_2^2\big|\ge u\Big) \le 2\exp(-c'mu^2 + c''s).$$
A final union bound over all $\binom{N}{s}$ supports yields
$$\mathbb{P}\Big(\max_{S\subset[N],\,\#S\le s}\ \sup_{x\in B_S}\big|\|Ax\|_2^2 - \|x\|_2^2\big|\ge u\Big) \le \binom{N}{s}\, 2\exp(-c'mu^2 + c''s) \le \varepsilon$$
provided $m \ge Cu^{-2}\big(s\ln(eN/s) + \ln(2\varepsilon^{-1})\big)$.
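Plugging in numbers shows the familiar $m \asymp s\log(N/s)$ scaling. The constants below are placeholders chosen for illustration only, not values from the slides:

```python
import numpy as np
from scipy.special import gammaln

# Illustrative evaluation of the union bound; c1, c2 are assumed constants
# (the true values depend on the concentration step).
N, s, u, eps = 1000, 10, 0.25, 1e-3
c1, c2 = 1 / 8, 2.0

log_binom = gammaln(N + 1) - gammaln(s + 1) - gammaln(N - s + 1)

# Need log( C(N,s) * 2 * exp(-c1*m*u^2 + c2*s) ) <= log(eps)
m = (log_binom + c2 * s + np.log(2 / eps)) / (c1 * u ** 2)
print(f"m >= {int(np.ceil(m))} measurements suffice (with these constants)")
print(f"compare s*log(eN/s)/u^2 = {s * np.log(np.e * N / s) / u**2:.0f}")
```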
Random partial Fourier matrices: let $\phi_j = (e^{-2\pi i jk/N})_{k=1}^N \in \mathbb{C}^N$, $j = 1,\dots,N$, denote the rows of the DFT matrix, so that $\langle \phi_j, x\rangle = \sum_{k=1}^N x_k e^{-2\pi i jk/N}$. Set $\tilde A = \frac{1}{\sqrt m}A$, where the rows of $A$ are $\phi_{j_1},\dots,\phi_{j_m}$ with the $j_\ell$ drawn uniformly at random. Then
$$\|\tilde A x\|_2^2 - \|x\|_2^2 = \frac{1}{m}\sum_{\ell=1}^m \big(|\langle \phi_{j_\ell}, x\rangle|^2 - \|x\|_2^2\big)$$
is again a sum of independent mean-zero random variables.
Fix an $s$-sparse $x$ with $\|x\|_2 = 1$ and write
$$\|\tilde A x\|_2^2 - \|x\|_2^2 = \frac{1}{m}\sum_{\ell=1}^m Z_\ell, \qquad Z_\ell = |\langle \phi_{j_\ell}, x\rangle|^2 - 1.$$
Bernstein's inequality: for independent mean-zero $Z_k$ with $|Z_k| \le K$ and $\sigma^2 = \sum_{k=1}^m \sigma_k^2$, $\sigma_k^2 = \mathbb{E} Z_k^2$,
$$\mathbb{P}\Big(\Big|\sum_{k=1}^m Z_k\Big| \ge t\Big) \le 2\exp\Big(-\frac{t^2/2}{\sigma^2 + Kt/3}\Big) \quad \text{for all } t > 0.$$
Here
$$|Z_\ell| \le \|\phi_{j_\ell}\|_\infty^2 \|x\|_1^2 - \|x\|_2^2 \le s, \qquad \mathbb{E} Z_\ell^2 \le \mathbb{E}\, s\,|\langle \phi_{j_\ell}, x\rangle|^2 - 1 = s - 1 \le s.$$
Passing from fixed $x$ to $\sup_{x\in B_S}\big|\|Ax\|_2^2 - \|x\|_2^2\big|$: choose a $1/4$-net $x_1,\dots,x_n$ of $B_S$ with $n \le 9^s$, so that the supremum is at most twice the maximum over the net; by the union bound and Bernstein's inequality,
$$\mathbb{P}\Big(\max_{j=1,\dots,n}\big|\|Ax_j\|_2^2 - \|x_j\|_2^2\big| \ge u\Big) \le 2n\exp\Big(-\frac{mu^2/2}{s(1+u/3)}\Big).$$
For nonuniform recovery the key quantity is the conditioning $\|A_S^* A_S - I\|_{2\to2}$ of the submatrix on the support $S$. Noncommutative (matrix) Bernstein inequality: for independent mean-zero self-adjoint random matrices $X_\ell \in \mathbb{C}^{s\times s}$ with $\|X_\ell\|_{2\to2} \le K$ a.s. and $\sigma^2 = \big\|\sum_{\ell=1}^m \mathbb{E}(X_\ell^2)\big\|_{2\to2}$,
$$\mathbb{P}\Big(\Big\|\sum_{\ell=1}^m X_\ell\Big\|_{2\to2} \ge t\Big) \le 2s\,\exp\Big(-\frac{t^2/2}{\sigma^2 + Kt/3}\Big).$$
Consequence: for a bounded orthonormal system with constant $K$, the submatrix $\tilde A_S = \frac{1}{\sqrt m}A_S$ satisfies, for $0 < u \le 1$,
$$\mathbb{P}\big(\|\tilde A_S^* \tilde A_S - I\|_{2\to2} \ge u\big) \le 2s\exp\Big(-c\,\frac{m u^2}{K^2 s}\Big).$$
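A quick simulation of this conditioning, added here, for the subsampled DCT system from the earlier sketch; the quantiles illustrate the concentration that matrix Bernstein quantifies.

```python
import numpy as np

rng = np.random.default_rng(7)
N, m, s, trials = 256, 120, 10, 500

# Orthonormal DCT-II matrix (real-valued bounded orthonormal system)
k = np.arange(N)
F = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N)) * np.sqrt(2 / N)
F[0, :] /= np.sqrt(2)

S = rng.choice(N, s, replace=False)        # fixed support
devs = np.empty(trials)
for i in range(trials):
    rows = rng.integers(0, N, m)           # i.i.d. uniform row sampling
    AS = np.sqrt(N / m) * F[np.ix_(rows, S)]
    devs[i] = np.linalg.norm(AS.T @ AS - np.eye(s), 2)

print(f"median ||A_S^* A_S - I|| = {np.median(devs):.3f}, "
      f"95% quantile = {np.quantile(devs, 0.95):.3f}")
```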
For a structured random matrix, the $\ell$-th entry of $\frac{1}{\sqrt m}(Ax)$ is given by $f_x(t_\ell) = \langle \frac{1}{\sqrt m}X_\ell, x\rangle$ for a family of random vectors $X_\ell$. Hence
$$\delta_s = \sup_{x\in T_s}\big|\|\tilde A x\|_2^2 - \|x\|_2^2\big| = \sup_{x\in T_s}\Big|\sum_{\ell=1}^m f_x(t_\ell)^2 - \|x\|_2^2\Big|,$$
i.e., $\delta_s$ is the supremum of an empirical process indexed by $T_s$.
Symmetrization: with independent Rademacher variables $\varepsilon_\ell$,
$$(\mathbb{E}\,\delta_s^p)^{1/p} \le 2\Big(\mathbb{E}\sup_{x\in T_s}\Big|\sum_{\ell=1}^m \varepsilon_\ell f_x(t_\ell)^2\Big|^p\Big)^{1/p}.$$
Conditionally on the sampling points, $x \mapsto \sum_{\ell=1}^m \varepsilon_\ell f_x(t_\ell)^2$ is a subgaussian process with respect to the (pseudo)metric
$$d(x,y) = 2\sup_{z\in T_s}\Big(\sum_{\ell=1}^m f_z(t_\ell)^2\Big)^{1/2}\max_{\ell=1,\dots,m}|f_x(t_\ell) - f_y(t_\ell)|.$$
Dudley's inequality: if $(X_t)_{t\in T}$ is a subgaussian process with respect to a metric $d$, then
$$\mathbb{E}\sup_{t\in T} X_t \le C\int_0^\infty \sqrt{\log N(T, d, u)}\, du,$$
where $N(T, d, u)$ denotes the covering numbers of $T$ at scale $u$.
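Dudley's bound can be evaluated numerically for $T_s$ using the covering estimate $\log N(T_s, \|\cdot\|_2, u) \le \log\binom{N}{s} + s\log(1+2/u)$; a sketch added here, which recovers the $\sqrt{s\log(eN/s)}$ scale:

```python
import numpy as np
from scipy.special import gammaln

# Numerical Dudley entropy integral for T_s; T_s has radius 1, so the
# integrand vanishes for u > 1 and we integrate over (0, 1].
N, s = 1000, 10
log_binom = gammaln(N + 1) - gammaln(s + 1) - gammaln(N - s + 1)

u = np.linspace(1e-4, 1.0, 100000)
integrand = np.sqrt(log_binom + s * np.log1p(2.0 / u))
dudley = integrand.sum() * (u[1] - u[0])
print(f"entropy integral ~= {dudley:.1f},  sqrt(s*log(eN/s)) = "
      f"{np.sqrt(s * np.log(np.e * N / s)):.1f}")
```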
Talagrand's $\gamma_2$-functional: for admissible sequences of subsets $T_r \subset T$ with $\#T_0 = 1$ and $\#T_r \le 2^{2^r}$,
$$\gamma_2(T, d) = \inf_{(T_r)}\sup_{t\in T}\sum_{r=0}^\infty 2^{r/2}\, d(t, T_r), \qquad d(t, T_r) = \min_{t_r\in T_r} d(t, t_r).$$
Generic chaining: $\mathbb{E}\sup_{t\in T} X_t \le C\gamma_2(T, d)$, and $\gamma_2(T,d)$ is bounded by the Dudley entropy integral.
Combining symmetrization with generic chaining,
$$(\mathbb{E}\,\delta_s^p)^{1/p} \le 2\Big(\mathbb{E}\Big[\Big(\sup_{z\in T_s}\sum_{\ell=1}^m f_z(t_\ell)^2\Big)^{p/2}\Big]\Big)^{1/p}\cdot C\,\gamma_2(T_s, d_\infty),$$
with $d_\infty(x,y) = \max_{\ell=1,\dots,m}|f_x(t_\ell) - f_y(t_\ell)|$. Since $\sup_{z\in T_s}\sum_{\ell} f_z(t_\ell)^2 \le 1 + \delta_s$, this yields a self-improving bound of the form $(\mathbb{E}\,\delta_s^p)^{1/p} \lesssim \sqrt{1 + \mathbb{E}\,\delta_s}\;\gamma_2(T_s, d_\infty)$, and estimating the $\gamma_2$-functional via covering numbers introduces the constant $K$ and the logarithmic factors in the final RIP bound.
Theorem (Krahmer, Mendelson, R. 2014, restated): if $m \ge c\delta^{-2}\, s\, (\log s)^2(\log N)^2$, then with high probability the restricted isometry constants of $\frac{1}{\sqrt m}\Phi_\Theta(\xi)$ satisfy $\delta_s \le \delta$.
For the partial random circulant matrix write $\frac{1}{\sqrt m}R_\Theta(x * \xi) = V_x\xi$ for a matrix $V_x$ depending linearly on $x$. Then $\mathbb{E}\|V_x\xi\|_2^2 = \|x\|_2^2$, and
$$\delta_s = \sup_{x\in T_s}\big|\|V_x\xi\|_2^2 - \|x\|_2^2\big| = \sup_{x\in T_s}\big|\|V_x\xi\|_2^2 - \mathbb{E}\|V_x\xi\|_2^2\big|,$$
a chaos process of order 2 in $\xi$.
Theorem (Krahmer, Mendelson, R. 2014): let $\mathcal{B}$ be a set of matrices and $\xi$ a Rademacher (or subgaussian) vector. Then
$$\mathbb{E}\sup_{B\in\mathcal{B}}\big|\|B\xi\|_2^2 - \mathbb{E}\|B\xi\|_2^2\big| \lesssim \gamma_2\big(\mathcal{B}, \|\cdot\|_{2\to2}\big)\Big(\gamma_2\big(\mathcal{B}, \|\cdot\|_{2\to2}\big) + d_F(\mathcal{B})\Big) + d_F(\mathcal{B})\, d_{2\to2}(\mathcal{B}),$$
where $d_F(\mathcal{B}) = \sup_{B\in\mathcal{B}}\|B\|_F$ and $d_{2\to2}(\mathcal{B}) = \sup_{B\in\mathcal{B}}\|B\|_{2\to2}$.
Proof ingredients: decoupling replaces $\sup_{B\in\mathcal{B}}\big|\|B\epsilon\|_2^2 - \mathbb{E}\|B\epsilon\|_2^2\big|$ by a decoupled chaos $\sup_{B\in\mathcal{B}}|\langle B\epsilon, B\epsilon'\rangle|$ with an independent copy $\epsilon'$ of $\epsilon$, which is then controlled by generic chaining over the induced set of matrices $\mathcal{D} = \{B^*B' : B, B' \in \mathcal{B}\}$.
Application: for $\tilde A x = \frac{1}{\sqrt m}R_\Theta(x * \xi)$ we have $\tilde A x = V_x\xi$, so the theorem applies with $\mathcal{B} = \{V_x : x \in T_s\}$; computing $d_F(\mathcal{B})$ and $d_{2\to2}(\mathcal{B})$ and estimating $\gamma_2(\mathcal{B}, \|\cdot\|_{2\to2})$ yields the RIP bound stated above.
Summary: for many structured random measurement ensembles one obtains bounds of the form
$$\mathbb{E}\sup_{x\in T}\big|\|\tilde A x\|_2^2 - \|x\|_2^2\big| \lesssim \frac{1}{m}\gamma_2\big(T, \|\cdot\|_2\big)^2 + \frac{1}{\sqrt m}\gamma_2\big(T, \|\cdot\|_2\big)\sup_{x\in T}\|x\|_2,$$
so the required number of measurements is governed by $\gamma_2(T, \|\cdot\|_2)^2$.
References
◮ H. Rauhut. Random sampling of sparse trigonometric polynomials. Appl. Comput. Harmon. Anal., 22(1):16–42, 2007.
◮ H. Rauhut. Stability results for random sampling of sparse trigonometric polynomials. IEEE Trans. Information Theory, 54(12):5661–5670, 2008.
◮ H. Rauhut. Compressive sensing and structured random matrices. In Theoretical Foundations and Numerical Methods for Sparse Recovery, volume 9 of Radon Series Comp. Appl. Math., pages 1–92. deGruyter, 2010.
◮ M. Fornasier, H. Rauhut. Compressive sensing. In Handbook of Mathematical Methods in Imaging, p. 187–228. Springer, 2011.
◮ H. Rauhut, R. Ward. Sparse Legendre expansions via ℓ1-minimized interpolation. J. Approx. Theory, 164(5):517–533, 2012.
◮ M. Hügel, H. Rauhut, T. Strohmer. Remote sensing via ℓ1-minimization. Found. Comput. Math., 14:115–150, 2014.
◮ F. Krahmer, S. Mendelson, H. Rauhut. Suprema of chaos processes and the restricted isometry property. Comm. Pure Appl. Math., 67(11):1877–1904, 2014.
◮ F. Krahmer, R. Ward. Stable and robust sampling strategies for compressive imaging. IEEE Trans. Image Proc., 23(2):612–622, 2014.
◮ … Analysis, volume 2116 of Lecture Notes in Mathematics, pages 65–70. Springer, 2014.
◮ H. Rauhut, R. Ward. Interpolation via weighted ℓ1-minimization. Appl. Comput. Harmon. Anal., 2015.
◮ S. Foucart, H. Rauhut. A Mathematical Introduction to Compressive Sensing. Birkhäuser, ANHA series, 2013.