Phase Retrieval using Partial Unitary Sensing Matrices
Rishabh Dudeja, Milad Bakhshizadeh, Junjie Ma, Arian Maleki
◮ Recover unknown x⋆ from y = |Ax⋆|
◮ x⋆ ∈ C^n: signal vector.
◮ y ∈ R^m: measurements.
◮ A: sensing matrix.
◮ δ = m/n: sampling ratio.
2
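The measurement model above can be sketched numerically. A minimal illustration, assuming a complex Gaussian sensing matrix with variance-1/n entries; the dimension n and the sampling ratio δ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta = 100, 4.0          # signal dimension and sampling ratio
m = int(delta * n)           # number of measurements

# Complex signal vector x_star in C^n
x_star = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Gaussian sensing matrix with i.i.d. entries of variance 1/n
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)

# Phaseless measurements: the phase of A @ x_star is lost, only magnitudes remain
y = np.abs(A @ x_star)       # y in R^m
```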
◮ Popular sensing matrices:
◮ Gaussian: Aij i.i.d. N(0, 1/n).
◮ Coded diffraction pattern (CDP):
◮ Dl = Diag(e^{iφ1^{(l)}}, …, e^{iφn^{(l)}})
◮ F: Fourier matrix
◮ φ1, …, φn: independent uniform phases
◮ Objective: Which matrix performs better from a purely theoretical standpoint?
3
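A single coded diffraction pattern block F Dl applies a random phase mask and then a Fourier transform. A minimal sketch; the number of patterns L and all variable names are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 64, 4                           # signal length, number of diffraction patterns

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Each block is F @ D_l: a random phase mask followed by a Fourier transform
blocks = []
for _ in range(L):
    phases = rng.uniform(0, 2 * np.pi, size=n)       # independent uniform phases
    D_l = np.exp(1j * phases)                        # diagonal of D_l
    blocks.append(np.fft.fft(D_l * x) / np.sqrt(n))  # unitary-normalized FFT

y = np.abs(np.concatenate(blocks))     # m = L * n phaseless measurements
```

Since each F/√n is unitary and |Dl| = 1 entrywise, every block preserves the energy of x.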
◮ Performance of partial orthogonal versus Gaussian on LASSO:
◮ Noiseless measurements: same phase transition.
◮ Noisy measurements: partial orthogonal (Fourier) is better.
◮ Originally observed by Donoho, Tanner (2009).
◮ Phase transition analysis of Gaussian matrices: Donoho, Tanner (2006).
◮ Mean square error calculation of Gaussian matrices: Donoho, Maleki, Montanari (2011); Bayati, Montanari (2011); Thrampoulidis, Oymak, Hassibi (2015).
◮ Mean square error calculation of partial orthogonal matrices: Tulino, Verdú, Caire (2013); Thrampoulidis, Hassibi (2015).
4
◮ The spectral estimator x̂: the leading eigenvector of
M ≜ (1/m) Σ_{i=1}^m T(yi) ai ai^H, where ai^H is the i-th row of A
◮ T: R≥0 → [0, 1] is a continuous trimming function
◮ Population behaviour: E M = λ1 x⋆x⋆^H + λ2 (In − x⋆x⋆^H)
◮ λ1 = E[T·|Z|²], λ2 = E[T], where Z ∼ CN(0, 1) and T = T(|Z|/√δ)
5
[Figure: asymptotic squared correlation ρ² versus sampling ratio δ (1 to 10) for three trimming functions: T(y) = δy²/(δy² + 0.1), T(y) = δy²/(δy² + √δ − 1), and T(y) = δy²·I(…).]
6
◮ Compare the spectral estimator on:
◮ Gaussian: Aij ∼ N(0, 1/n).
◮ Oversampled Haar: A = H S_{m,n}, with H an m × m Haar unitary.
◮ S_{m,n}: selects n of the m columns randomly.
◮ We use the asymptotic framework m, n → ∞ with m/n → δ fixed.
7
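The oversampled Haar model can be sampled via the QR decomposition of a complex Gaussian matrix, with a phase correction so the orthogonal factor is exactly Haar-distributed. A sketch under assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, delta = 50, 4.0
m = int(delta * n)

# Haar-distributed m x m unitary via QR with phase correction
Z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
H = Q * (np.diag(R) / np.abs(np.diag(R)))

# S_{m,n}: keep n randomly selected columns -> A has orthonormal columns
cols = rng.choice(m, size=n, replace=False)
A = H[:, cols]

# A^H A = I_n exactly (partial unitary), unlike a Gaussian matrix
print(np.allclose(A.conj().T @ A, np.eye(n)))   # True
```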
◮ For δ > 1: ∃ a spectral estimator x̂ that attains a positive correlation ρ with x⋆.
◮ Lu and Li also showed how ρ can be calculated.
8
◮ Sharp asymptotics: |x⋆^H x̂|²/n → ρ²_T(δ), where ρ²_T(δ) has an explicit expression involving the ratio δ/(δ − 1) and scalar expectations of the form E[1/(τ − T)] and E[T/(τ − T)].
9
◮ Weak recovery threshold of T:
δ_T ≜ inf{δ ≥ 1 : ρ²_T(δ) > 0}
◮ For the oversampled Haar measurement matrix, the optimal trimming function achieves δ_{T⋆} = 2.
◮ For Gaussian sensing: δ_{T⋆} = δ_IT = 1. Luo, Alghamdi and Lu (2018).
10
◮ Classical probability theory:
◮ Consider two independent random variables X ∼ fX(x), Y ∼ fY(y):
◮ f_{X+Y}(t) = (fX ∗ fY)(t) = ∫ fX(x) fY(t − x) dx
◮ f_{XY}(z) = ∫ fX(x) fY(z/x) (1/|x|) dx
◮ Free probability theory (for random matrices):
◮ Let X and Y be “freely independent”
◮ Let µX(z) denote the empirical spectral distribution of X
◮ µ_{X+Y}(z) = µX(z) ⊞ µY(z) (free “additive” convolution)
◮ µ_{XY}(z) = µX(z) ⊠ µY(z) (free “multiplicative” convolution)
12
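Free additive convolution can be checked numerically: conjugating one matrix by an independent Haar unitary makes two large matrices approximately free, so the spectrum of A + U B U^H approximates µ_A ⊞ µ_B. A sketch with two Wigner matrices, whose semicircle laws convolve to a semicircle of doubled variance; all names and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

def haar_unitary(n, rng):
    # QR of a complex Gaussian matrix, with phases fixed so Q is exactly Haar
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def wigner(n, rng):
    # Symmetric Gaussian matrix; spectrum ~ semicircle on [-2, 2], variance 1
    g = rng.standard_normal((n, n))
    return (g + g.T) / np.sqrt(2 * n)

a, b = wigner(n, rng), wigner(n, rng)
u = haar_unitary(n, rng)

# The random rotation makes a and u @ b @ u^H approximately freely independent
eigs = np.linalg.eigvalsh(a + u @ b @ u.conj().T)

# Semicircle ⊞ semicircle = semicircle of variance 1 + 1 = 2,
# i.e. support approximately [-2*sqrt(2), 2*sqrt(2)]
empirical_variance = np.var(eigs)
```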
◮ Rotational invariance ⇒ WLOG x⋆ = e1.
◮ Partition M (A1: column of A aligned with x⋆; A−1: remaining columns; T = Diag(T(y1), …, T(ym))):
M = [ A1^H T A1    A1^H T A−1
      A−1^H T A1   A−1^H T A−1 ]
◮ |x⋆^H v1|² = ∂a λ1(M(a))
◮ L(µ) ≜ λ1(A−1^H (T + µ T A1 (T A1)^H) A−1)
13
◮ Analyze L(µ) = λ1(A−1^H (T + µ T A1 (T A1)^H) A−1).
◮ A−1, A1 are dependent.
14
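One way to see the dependence: the columns of a partial Haar unitary are exactly orthonormal, which independent random vectors would essentially never be. A quick check; dimensions and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 120, 30

# Partial Haar unitary: n orthonormal columns of an m x m Haar unitary
Z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
H = Q * (np.diag(R) / np.abs(np.diag(R)))
A = H[:, :n]

A1 = A[:, :1]        # column aligned with x_star (after the rotation step)
A_rest = A[:, 1:]    # remaining columns, playing the role of A_{-1}

# Exact orthogonality A_rest^H A1 = 0 (up to rounding): impossible for
# independent Gaussian vectors, which are only orthogonal on average
overlap = np.linalg.norm(A_rest.conj().T @ A1)
```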
◮ Compared the oversampled Haar sensing matrix with Gaussian sensing.
◮ Oversampled Haar sensing matrix with optimal trimming: δ = 2.
◮ Gaussian matrix with optimal trimming: δ = 1.
◮ Oversampled Haar approximates the CDP sensing matrices.
[Figure: ρ² = |x⋆^H x̂|²/n versus δ for the trimming functions T(y) = δy²/(δy² + 0.1), T(y) = δy²/(δy² + √δ − 1), and T(y) = δy²·I(…).]
15