slide-1
SLIDE 1

Analysis of Compressive Sensing in Radar

Holger Rauhut
Lehrstuhl C für Mathematik (Analysis), RWTH Aachen
Compressed Sensing and Its Applications, TU Berlin, December 8, 2015

1 / 44

slide-2
SLIDE 2

Overview

Several radar setups with compressive sensing approaches

◮ Range-Doppler resolution via compressive sensing
◮ Sparse MIMO radar
◮ Antenna arrays with randomly positioned antennas

2 / 44

slide-3
SLIDE 3

Time-Frequency Structured Random Matrices:
Resolution of Range-Doppler in Radar

3 / 44

slide-4
SLIDE 4

Resolution of Range-Doppler

The received signal is a superposition of delayed and modulated (Doppler-shifted) versions of the sent signal. Task: determine the delays (corresponding to distances; range) and Doppler shifts (corresponding to radial speed) from the subsampled received signal!

4 / 44

slide-6
SLIDE 6

Gabor Systems in Finite Dimensions

Translation and modulation on C^m:

    (T^k g)_j = g_{(j−k) mod m}   and   (M^ℓ g)_j = e^{2πiℓj/m} g_j.

Time-frequency shifts π(λ) = M^ℓ T^k, λ = (k, ℓ) ∈ {0, . . . , m − 1}².

For g ∈ C^m define the Gabor synthesis matrix (ω = e^{2πi/m})

    Ψ_g = (π(λ)g)_{λ∈{0,...,m−1}²} ∈ C^{m×m²},

whose m² columns are all modulated cyclic shifts M^ℓ T^k g of the generator: the first m columns are the pure shifts g, Tg, . . . , T^{m−1}g, and the modulation M^ℓ multiplies row j by ω^{ℓj}.

Use of Ψ_g ∈ C^{m×m²} as measurement matrix in compressive sensing.

5 / 44
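The construction of Ψ_g can be sketched numerically. A minimal NumPy version follows; the column ordering (shifts in the outer loop, modulations in the inner loop) is one possible convention, not fixed by the slide:

```python
import numpy as np

def gabor_synthesis_matrix(g):
    """Columns are all m^2 time-frequency shifts pi(lambda) g = M^l T^k g,
    with (T^k g)_j = g_{(j-k) mod m} and (M^l g)_j = e^{2 pi i l j / m} g_j."""
    m = len(g)
    # mods[l, j] = e^{2 pi i l j / m}, the diagonal of the modulation M^l
    mods = np.exp(2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m)
    cols = []
    for k in range(m):                  # translation T^k: cyclic shift by k
        shifted = np.roll(g, k)
        for l in range(m):              # modulation M^l
            cols.append(mods[l] * shifted)
    return np.array(cols).T             # shape (m, m^2)

g = np.array([1.0, 2.0, 3.0, 4.0])
Psi = gabor_synthesis_matrix(g)
```

For m = 4 this yields a 4 × 16 matrix whose first four columns are the unmodulated cyclic shifts of g.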


slide-8
SLIDE 8

Radar model (Herman, Strohmer 2008)

Emitted signal: g ∈ C^m. Objects scatter g, and the radar device receives the contribution x_λ π(λ)g = x_{k,ℓ} M^ℓ T^k g:
◮ T^k corresponds to the delay, i.e., the distance of the object
◮ M^ℓ corresponds to the Doppler shift, i.e., the speed of the object
◮ x_{k,ℓ} is the reflectivity of the object

The received signal is the superposition of the contributions of all scatterers:

    y = Σ_{λ∈Λ} x_λ π(λ)g = Ψ_g x.

Usually there are few scatterers, so that x ∈ C^{m²} can be assumed sparse. We will choose g as a random vector below.

6 / 44


slide-10
SLIDE 10

Reconstruction via compressive sensing

Reconstruction of x from y = Ax via ℓ1-minimization:

    min ‖z‖₁ subject to Az = y
    min ‖z‖₁ subject to ‖Az − y‖₂ ≤ η

Alternatives:
◮ Matching pursuits
◮ Iterative hard thresholding (pursuit)
◮ Iteratively reweighted least squares
◮ ...

7 / 44
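The listed alternatives are easy to prototype. Below is a minimal sketch of one of them, orthogonal matching pursuit, on a generic random matrix; none of the solver details are from the slides, and the instance sizes are illustrative:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then refit by least squares on the current support."""
    support = []
    residual = y.astype(complex)
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1], dtype=complex)
    x_hat[support] = coef
    return x_hat, residual

rng = np.random.default_rng(0)
m, N, s = 30, 60, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x
x_hat, residual = omp(A, y, s)
```

Since each step refits by least squares on a growing support, the residual norm never exceeds ‖y‖₂.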


slide-14
SLIDE 14

Uniform vs. nonuniform recovery

Often recovery results are for random matrices A ∈ R^{m×N}; choose the generator g ∈ C^m for Ψ_g at random.

◮ Uniform recovery
With high probability on A, every sparse vector is recovered:
    P(∀ s-sparse x: recovery of x is successful using A) ≥ 1 − ε.
Recovery conditions on A:
◮ Null space property
◮ Restricted isometry property

◮ Nonuniform recovery
A fixed sparse vector is recovered with high probability using A ∈ R^{m×N}:
    ∀ s-sparse x: P(recovery of x is successful using A) ≥ 1 − ε.
Recovery conditions on A:
◮ Tangent cone (descent cone) of the norm at x intersects ker A trivially.
◮ Dual certificates

8 / 44

slide-15
SLIDE 15

Restricted isometry property (RIP)

Definition

The restricted isometry constant δ_s of a matrix A ∈ C^{m×N} is defined as the smallest δ such that

    (1 − δ) ‖x‖₂² ≤ ‖Ax‖₂² ≤ (1 + δ) ‖x‖₂²

for all s-sparse x ∈ C^N.

9 / 44
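The definition is equivalent to δ_s = max over supports S of size s of ‖A_S* A_S − I‖_{2→2}, so for tiny dimensions δ_s can be computed exactly by enumeration. A brute-force sketch (exponential in N, toy examples only):

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(A, s):
    """Exact delta_s by brute force over all supports of size s (tiny N only):
    delta_s = max over |S| = s of ||A_S* A_S - I||_{2->2}."""
    N = A.shape[1]
    delta = 0.0
    for S in combinations(range(N), s):
        gram = A[:, list(S)].conj().T @ A[:, list(S)]
        eigenvalues = np.linalg.eigvalsh(gram)       # gram is Hermitian
        delta = max(delta, float(np.abs(eigenvalues - 1.0).max()))
    return delta

I5 = np.eye(5)                                  # orthonormal columns: delta_s = 0
B2 = np.array([[1.0, 1.0], [0.0, 0.0]])         # two identical unit columns: delta_2 = 1
```

A matrix with orthonormal columns has δ_s = 0, while two identical unit-norm columns force δ₂ = 1 (the Gram matrix has eigenvalues 0 and 2).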

slide-16
SLIDE 16

Stable and robust recovery

Theorem (Candès, Romberg, Tao ’04 – Cai, Zhang ’13)

Let A ∈ C^{m×N} with δ_{2s} < 1/√2 ≈ 0.7071. Let x ∈ C^N, and assume that noisy data are observed, y = Ax + e with ‖e‖₂ ≤ τ. Let x# be a solution of

    min_z ‖z‖₁ subject to ‖Az − y‖₂ ≤ τ.

Then

    ‖x − x#‖₂ ≤ C σ_s(x)₁/√s + Dτ,
    ‖x − x#‖₁ ≤ C σ_s(x)₁ + D√s τ

for constants C, D > 0 that depend only on δ_{2s}. Here σ_s(x)₁ = inf_{z: ‖z‖₀ ≤ s} ‖x − z‖₁.

Implies exact recovery in the s-sparse and noiseless case.

10 / 44


slide-18
SLIDE 18

Dual certificate

Theorem (Fuchs 2004, Tropp 2005)

For A ∈ C^{m×N}, a vector x ∈ C^N with support S is the unique solution of min ‖z‖₁ subject to Az = Ax if A_S is injective and there exists a dual vector h ∈ C^m such that

    (A*h)_j = sgn(x_j), j ∈ S,    |(A*h)_ℓ| < 1, ℓ ∉ S.

Corollary

Let a₁, . . . , a_N be the columns of A ∈ C^{m×N}. For x ∈ C^N with support S, if the matrix A_S is injective and if

    |⟨A_S† a_ℓ, sgn(x_S)⟩| < 1

for all ℓ ∉ S, then the vector x is the unique ℓ1-minimizer with y = Ax. Here A_S† is the Moore-Penrose pseudo-inverse of A_S.

One ingredient: check that ‖A_S* A_S − I‖_{2→2} ≤ δ < 1.

11 / 44
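The corollary's condition can be checked mechanically for a given sparse x; the function below is an illustrative sketch (name and test matrices are not from the slides):

```python
import numpy as np

def fuchs_condition_holds(A, x, tol=1e-12):
    """Sufficient condition for x to be the unique l1-minimizer of Az = Ax:
    A_S injective and |<A_S^+ a_l, sgn(x_S)>| < 1 for every l outside S."""
    S = np.flatnonzero(np.abs(x) > tol)
    A_S = A[:, S]
    if np.linalg.matrix_rank(A_S) < len(S):
        return False                                # A_S not injective
    pinv = np.linalg.pinv(A_S)                      # Moore-Penrose pseudo-inverse A_S^+
    sgn = x[S] / np.abs(x[S])                       # (complex) sign pattern on S
    off_support = np.setdiff1d(np.arange(A.shape[1]), S)
    values = [abs(np.vdot(pinv @ A[:, l], sgn)) for l in off_support]
    return max(values, default=0.0) < 1
```

For the identity matrix the condition trivially holds, while two identical columns make the inner product exactly 1, so the strict inequality fails.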


slide-20
SLIDE 20

Stability and robustness via dual certificate

Theorem

Let x ∈ C^N and A ∈ C^{m×N} with ℓ2-normalized columns. Denote by S ⊂ [N] the indices of the s largest absolute entries of x. Assume that
(i) there is a dual certificate u = A*h ∈ C^N with h ∈ C^m such that

    u_S = sgn(x)_S,    ‖u_{S^c}‖_∞ ≤ 1/2,    ‖h‖₂ ≤ 3√s;

(ii) ‖A_S* A_S − I‖_{2→2} ≤ 1/2.

Given noisy measurements y = Ax + e ∈ C^m with ‖e‖₂ ≤ τ, the solution x̂ ∈ C^N of noise-constrained ℓ1-minimization satisfies

    ‖x − x̂‖₂ ≤ 52√s τ + 16 σ_s(x)₁.

Remark: This error bound is worse by a factor of √s than the one obtained from the RIP. The factor can be removed again by additionally requiring the weak RIP.

12 / 44


slide-22
SLIDE 22

Random choice of generator g

Recall the Gabor synthesis matrix Ψ_g = (M^ℓ T^k g)_{(k,ℓ)∈[m]²} ∈ C^{m×m²}.

Choice of g as a subgaussian random vector: the entries of g are independent, mean-zero, variance-one and subgaussian,

    P(|g_j| ≥ t) ≤ 2e^{−Kt²}  for some K > 0.

Examples:
◮ Rademacher: entries ±1 with equal probability
◮ Steinhaus: entries uniformly distributed on the complex torus {z ∈ C : |z| = 1}
◮ Gaussian: entries are standard real or complex Gaussian variables

13 / 44
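The three example distributions can be sampled directly; a short sketch (the complex Gaussian normalization below is one common convention, E|g_j|² = 1):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 1000

rademacher = rng.choice([-1.0, 1.0], size=m)            # +/-1 with equal probability
steinhaus = np.exp(2j * np.pi * rng.uniform(size=m))    # uniform on the torus |z| = 1
gaussian = rng.standard_normal(m)                       # standard real Gaussian
complex_gaussian = (rng.standard_normal(m)
                    + 1j * rng.standard_normal(m)) / np.sqrt(2)  # E|g_j|^2 = 1
```

All four choices have unit-variance entries; the Rademacher and Steinhaus entries additionally have modulus exactly 1.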


slide-24
SLIDE 24

RIP estimate for random generator (Krahmer, Mendelson, Rauhut 2014)

Theorem

Let Ψ_g ∈ C^{m×N}, N = m², be generated by a subgaussian random vector g. If, for δ ∈ (0, 1),

    m ≥ C δ^{−2} s max{log² s log² N, log(ε^{−1})},

then with probability at least 1 − ε the restricted isometry constant of (1/√m) Ψ_g satisfies δ_s ≤ δ.

Implies stable and robust recovery via ℓ1-minimization with high probability if m ≥ C s log²(s) log²(N).

Previous results:
◮ Pfander, R, Tropp 2012: m ≥ C_δ s^{3/2} log³ N
◮ Nonuniform recovery, Pfander, R 2010: m ≥ C s log(N)

The theorem can be generalized to certain other systems of operators (instead of time-frequency shifts).

14 / 44

slide-25
SLIDE 25

Numerical experiments for Steinhaus g

[Figure: empirical phase transition. Horizontal axis m/m² = 1/m, vertical axis s/m; contours of the success probability, with the curve 1/(2 log(m)) tracking the 93% success rate.]

Numerical experiments suggest that s ≤ m/(2 log(m)) ensures s-sparse recovery.

15 / 44


slide-27
SLIDE 27

Proof ingredient: chaos processes

Recall: δ_s is the smallest constant such that

    (1 − δ_s)‖x‖₂² ≤ ‖Ax‖₂² ≤ (1 + δ_s)‖x‖₂².

Equivalently, with T_s = {x ∈ C^N : ‖x‖₂ ≤ 1, ‖x‖₀ ≤ s},

    δ_s = sup_{x∈T_s} | ‖Ax‖₂² − ‖x‖₂² |.

In our case

    Ax = (1/√m) Ψ_g x = (1/√m) Σ_{k,ℓ=0}^{m−1} x_{k,ℓ} M^ℓ T^k g = V_x g,

with V_x = (1/√m) Σ_{k,ℓ=0}^{m−1} x_{k,ℓ} M^ℓ T^k. Since the entries of g have mean zero and variance one, E‖V_x g‖₂² = ‖x‖₂², so that

    δ_s = sup_{x∈T_s} | ‖V_x g‖₂² − E‖V_x g‖₂² |.

This is a second-order chaos process.

16 / 44
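The reformulation Ax = V_x g, and the identity E‖V_x g‖₂² = ‖x‖₂² in the form ‖V_x‖_F² = ‖x‖₂² (the time-frequency shifts are orthogonal in the Frobenius inner product), can be checked numerically for small m. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
n = np.arange(m)
g = rng.standard_normal(m) + 1j * rng.standard_normal(m)
x = rng.standard_normal((m, m))          # coefficients x_{k,l}

# V_x = (1/sqrt(m)) sum_{k,l} x_{k,l} M^l T^k as an explicit m x m matrix
Vx = np.zeros((m, m), dtype=complex)
for k in range(m):
    Tk = np.roll(np.eye(m), k, axis=0)   # matrix of T^k: (T^k v)_j = v_{(j-k) mod m}
    for l in range(m):
        Ml = np.diag(np.exp(2j * np.pi * l * n / m))
        Vx += x[k, l] * (Ml @ Tk)
Vx /= np.sqrt(m)

# Ax computed directly as (1/sqrt(m)) sum_{k,l} x_{k,l} M^l T^k g
Ax_direct = sum(x[k, l] * np.exp(2j * np.pi * l * n / m) * np.roll(g, k)
                for k in range(m) for l in range(m)) / np.sqrt(m)
```

Both computations agree, and the Frobenius norm of V_x matches the ℓ2-norm of the coefficient array exactly, without any randomness.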

slide-28
SLIDE 28

Generic Chaining for Chaos Processes

Theorem (Krahmer, Mendelson, R 2014)

Let B = −B ⊂ C^{m×N} be a symmetric set of matrices and ξ ∈ C^N a subgaussian random vector. Then

    E sup_{B∈B} | ‖Bξ‖₂² − E‖Bξ‖₂² | ≤ C₁ γ₂(B, ‖·‖_{2→2})² + C₂ Δ_F(B) γ₂(B, ‖·‖_{2→2}).

Here ‖B‖_F = √(tr(B*B)) denotes the Frobenius norm.

The symmetry assumption B = −B can be dropped at the cost of a slightly more complicated bound. Here Δ_{‖·‖}(B) is the diameter of B with respect to ‖·‖, and γ₂(B, ‖·‖) is Talagrand's γ₂-functional, which can be bounded by the entropy integral

    γ₂(B, ‖·‖) ≤ C ∫₀^{Δ(B)} √(log N(B, ‖·‖, u)) du,

where N(B, ‖·‖, u) are the covering numbers of B at radius u.

17 / 44

slide-29
SLIDE 29

Tail bound

Theorem (Krahmer, Mendelson, R ’14 – Dirksen ’15)

Let B = −B ⊂ C^{m×N} and ξ ∈ C^N a subgaussian random vector. Then

    P( sup_{B∈B} | ‖Bξ‖₂² − E‖Bξ‖₂² | ≥ C₁E + t ) ≤ 2 exp( −C₂ min{ t²/V², t/U } ),

where

    E := Δ_F(B) γ₂(B, ‖·‖_{2→2}) + γ₂(B, ‖·‖_{2→2})²,
    V := Δ_{2→2}(B) Δ_F(B),
    U := Δ_{2→2}(B)².

18 / 44

slide-30
SLIDE 30

Sparse MIMO radar

19 / 44

slide-31
SLIDE 31

MIMO Radar

20 / 44

slide-32
SLIDE 32

MIMO Radar in 2D

◮ N_T transmit antennas at locations (0, (k − 1) d_T λ), k = 1, 2, . . . , N_T
◮ N_R receive antennas at locations (0, (j − 1) d_R λ), j = 1, . . . , N_R

Choose d_T = 1/2, d_R = N_T/2. Then the system has similar characteristics as an antenna array with N_T N_R antennas. (Alternatively, d_T = N_R/2, d_R = 1/2.)

21 / 44
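The claim that these N_T + N_R physical antennas emulate an array of N_T·N_R elements can be checked via the set of pairwise transmit-plus-receive positions (the "virtual array"): with d_T = 1/2 and d_R = N_T/2 all sums are distinct and fill a uniform half-wavelength grid. A quick check in plain Python:

```python
N_T, N_R = 4, 3
d_T, d_R = 0.5, N_T / 2                             # spacings in units of the wavelength

tx = [d_T * (k - 1) for k in range(1, N_T + 1)]     # transmit positions
rx = [d_R * (j - 1) for j in range(1, N_R + 1)]     # receive positions
virtual = sorted({t + r for t in tx for r in rx})   # virtual array positions
```

All N_T·N_R sums are distinct and equal to 0, 1/2, 1, . . . , (N_T N_R − 1)/2.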


slide-34
SLIDE 34

Measurement model (Strohmer, Friedlander 2012; Yu, Petropulu, Poor 2011)

◮ Transmit antennas send periodic continuous-time complex Gaussian pulses s₁, . . . , s_{N_T} with period T and bandwidth B.

◮ The echo of a target of unit reflectivity at position (r cos(θ), r sin(θ)) and radial speed v, as seen at receiver j, is

    r_j(t) = Σ_{k=1}^{N_T} e^{2πi c λ^{−1}(t − d_{k,j}(t)/c)} s_k(t − d_{k,j}(t)/c)

with carrier wavelength λ, speed of light c, and combined distance from the k-th transmitter to the target and from the target to the j-th receiver

    d_{k,j}(t) = 2(r + vt) + sin(θ) d_T (k − 1) λ + sin(θ) d_R (j − 1) λ.

◮ Demodulation (multiplication of r_j(t) by e^{−2πi c λ^{−1} t}) and assuming B ≪ λ (narrowband transmit waveforms), v ≪ c (slowly moving targets), and r ≫ λ N_R N_T/2 (far-field scenario) yields the measurements

    y_j(t) ≈ e^{2πi·2λ^{−1} r} e^{2πi sin(θ) d_R (j−1)} Σ_{k=1}^{N_T} e^{2πi·2λ^{−1} v t} e^{2πi sin(θ) d_T (k−1)} s_k(t − 2r/c).

22 / 44


slide-36
SLIDE 36

Discretization

◮ By the Shannon-Nyquist sampling theorem, the band-limited periodic complex Gaussian transmit signals can be represented by their sampled counterparts s_k ∈ C^{N_t} (sampled over one period [0, T]); N_t: number of samples.

◮ A target is described by the triple (θ, r, v) (azimuth, range, velocity). Discretization of (β, τ, f) = (sin(θ), 2r/c, 2λ^{−1}v) with step sizes Δβ = 2/(N_T N_R), Δτ = 1/(2B), Δf = 1/T yields the grid

    G = {(βΔβ, τΔτ, fΔf) : β ∈ [N_R N_T], τ ∈ [N_t], f ∈ [N_t]}

with index set G = [N_T N_R] × [N_t] × [N_t] of size N := N_R N_T N_t² (here [k] = {1, . . . , k}).

23 / 44

slide-37
SLIDE 37

Discretization grid

[Figure: discretization grid over the angle classes β for N_R = N_T = 8, showing the MIMO radar module and a sparse target scene.]

24 / 44

slide-38
SLIDE 38

Measurement Model I

One target with unit reflectivity at the grid point indexed by Θ = (β, τ, f) ∈ G. Discrete time samples at receiver j (j = 1, . . . , N_R):

    y_j = (y_j(Δt), y_j(2Δt), . . . , y_j(N_t Δt))^T
        = e^{2πi c λ^{−1} τΔτ} e^{2πi d_R βΔβ (j−1)} Σ_{k=1}^{N_T} e^{2πi d_T βΔβ (k−1)} M_f T_τ s_k ∈ C^{N_t},

with translation and modulation operators on C^{N_t} defined as

    (T_τ s)_k = s_{k−τ},    (M_f s)_k = e^{2πi f k / N_t} s_k.

For targets on grid points indexed by Θ ∈ G with reflectivities ρ_Θ, setting x_Θ = e^{2πi c λ^{−1} τΔτ} ρ_Θ, the measurements at receiver j are

    y_j = Σ_{Θ∈G} x_Θ e^{2πi d_R βΔβ (j−1)} Σ_{k=1}^{N_T} e^{2πi d_T βΔβ (k−1)} M_f T_τ s_k  =: Σ_{Θ∈G} x_Θ A^j_Θ.

25 / 44

slide-39
SLIDE 39

Measurement Model II

Collection of sampled signals at all receivers:

    y = (y₁^T, . . . , y_{N_R}^T)^T = Ax ∈ C^{N_R N_t}

with measurement matrix

    A = (A^j_Θ)_{j∈[N_R], Θ∈G} ∈ C^{N_R N_t × N_R N_T N_t²},   G = [N_R N_T] × [N_t] × [N_t],
    A^j_Θ = e^{2πi d_R βΔβ (j−1)} Σ_{k=1}^{N_T} e^{2πi d_T βΔβ (k−1)} M_f T_τ s_k ∈ C^{N_t},   Θ = (β, τ, f).

This is a structured random matrix; the s₁, . . . , s_{N_T} are independent subgaussian random vectors, e.g. standard complex Gaussian, Rademacher, or Steinhaus vectors.

Number of measurements: m = N_R N_t; signal dimension N = N_R N_T N_t², i.e., m ≪ N. Recall d_T = 1/2, d_R = N_T/2, Δβ = 2/(N_T N_R).

26 / 44
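The assembly of A can be sketched column by column for toy dimensions (Steinhaus transmit signals; 0-based indices stand in for the (j − 1), (k − 1) above, and the tiny parameter values are illustrative):

```python
import numpy as np

def mimo_matrix(N_T, N_R, N_t, rng):
    """Assemble A in C^{N_R N_t x N_R N_T N_t^2}: for Theta = (beta, tau, f),
    A^j_Theta = e^{2 pi i d_R beta dbeta j} sum_k e^{2 pi i d_T beta dbeta k} M_f T_tau s_k."""
    d_T, d_R = 0.5, N_T / 2
    dbeta = 2 / (N_T * N_R)
    s = np.exp(2j * np.pi * rng.uniform(size=(N_T, N_t)))   # Steinhaus transmit signals
    n = np.arange(N_t)
    cols = []
    for beta in range(N_T * N_R):
        for tau in range(N_t):
            shifted = np.roll(s, tau, axis=1)               # T_tau applied to every s_k
            for f in range(N_t):
                mod = np.exp(2j * np.pi * f * n / N_t)      # diagonal of M_f
                b = sum(np.exp(2j * np.pi * d_T * beta * dbeta * k) * mod * shifted[k]
                        for k in range(N_T))
                cols.append(np.concatenate(
                    [np.exp(2j * np.pi * d_R * beta * dbeta * j) * b for j in range(N_R)]))
    return np.array(cols).T

rng = np.random.default_rng(3)
N_T, N_R, N_t = 2, 2, 3
A = mimo_matrix(N_T, N_R, N_t, rng)
```

Even at these toy sizes the matrix is flat: m = N_R N_t = 6 rows against N = N_R N_T N_t² = 36 columns.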


slide-42
SLIDE 42

Reconstruction via Compressive Sensing

Reconstruction problem of solving Ax = y is underdetermined. In many situations only very few targets are present, i.e., the vector x of reflectivities is sparse! Use compressive sensing for reconstruction. For recovery, we will study ℓ1-minimization

    min_z ‖z‖₁ subject to ‖Az − y‖₂ ≤ τ

and the LASSO

    min_z (1/2)‖Az − y‖₂² + λ‖z‖₁.

27 / 44


slide-44
SLIDE 44

Recovery for random support sets

Strohmer and Friedlander (2013) showed recovery of the correct support via (debiased) LASSO for s-sparse signals with random support (and random signs) with high probability under the condition m = NRNt ≥ Cs log(N) (plus minor additional technical assumptions). Proof is based on an analysis of the coherence of A and a general recovery result for random signals due to Tropp (2008). Question: Can we avoid the assumption of randomness of the support?

28 / 44


slide-47
SLIDE 47

The RIP for MIMO radar measurements

Theorem (Dorsch, R 2015)

If

    N_t ≥ C δ^{−2} s max{log²(s) log²(N), log(ε^{−1})},

then the rescaled random radar measurement matrix (1/√(N_R N_T N_t)) A ∈ C^{N_R N_t × N_R N_T N_t²} satisfies δ_s ≤ δ with probability at least 1 − ε. Implies stable and robust sparse recovery via ℓ1-minimization.

The proof uses generic chaining estimates for suprema of chaos processes (Krahmer, Mendelson, R 2014).

Compared to other random matrix constructions in compressed sensing (where m ≍ s log(eN/s)), the result requires more measurements because here m = N_t N_R; i.e., we suffer an additional factor of N_R.

29 / 44

slide-48
SLIDE 48

Almost optimality of RIP estimate

Theorem (Dorsch, R 2015)

If a realization of the rescaled random MIMO radar measurement matrix (1/√(N_R N_T N_t)) A satisfies δ_s ≤ 0.7 for s ≤ N_t², then necessarily

    N_t ≥ C s log(e N_t²/s).

Proof idea: Introduce S_β := {(β′, τ′, f′) ∈ G : β′ = β}. If x has support in S_β, then one can write Ax = a_R(β) ⊗ B x_{S_β} for a vector a_R(β) ∈ C^{N_R} with entries of magnitude 1 and a matrix B ∈ C^{N_t × N_t²}. Applying lower sparse recovery bounds for B yields the claim.

30 / 44


slide-50
SLIDE 50

Towards nonuniform recovery

Recovery depends on the fine structure of the support set. Define an equivalence of angles β, β′ ∈ [N_R N_T] by

    β ∼ β′  :⟺  β′ − β ≡ 0 mod N_R.

This definition is motivated by the fact that the columns of A satisfy, for Θ = (β, τ, f), Θ′ = (β′, τ′, f′),

    ⟨A_Θ, A_{Θ′}⟩ = N_R if β ∼ β′, and 0 otherwise.

Intuitively, the more elements of the support S whose corresponding β's lie in different equivalence classes, the better conditioned the matrix A_S is.

31 / 44


slide-52
SLIDE 52

Well-balanced support sets

For a support set S ⊂ G = [N_R N_T] × [N_t] × [N_t] let

    S[β] := {Θ′ = (β′, τ′, f′) ∈ S : β′ ∼ β}.

Definition

A support set S ⊂ G is called η-balanced if for all angle classes [β],

    |S[β]| ≤ η |S| / N_R.

The parameter η ranges in [1, N_R]. A small value of η means that the support S is well-distributed over the angle classes, which is favorable for recovery.

32 / 44
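The smallest η for a given support follows directly from the definition: η = N_R · max_{[β]} |S[β]| / |S|. A plain-Python sketch (the example supports are illustrative):

```python
from collections import Counter

def balancedness(support, N_R):
    """Smallest eta with |S[beta]| <= eta |S| / N_R for all angle classes;
    support is a collection of (beta, tau, f) triples, beta ~ beta' iff
    beta' - beta = 0 mod N_R."""
    class_sizes = Counter(beta % N_R for (beta, tau, f) in support)
    return max(class_sizes.values()) * N_R / len(support)

spread = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]     # one target per angle class
clustered = [(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 2, 0)]  # all in the same class
```

A support spread over all classes attains the optimum η = 1, while a support concentrated in one class hits the worst case η = N_R.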

slide-53
SLIDE 53

Nonuniform recovery I

Theorem

Let x ∈ C^N and let S ⊂ G be an index set corresponding to the s largest absolute entries of x. Assume S is η-balanced and that the signs of the coefficients x_S form a Steinhaus sequence. Assume measurements y = Ax + n ∈ C^{N_R N_t} are given, where the signals s₁, s₂, . . . , s_{N_T} generating the measurement matrix A are independent subgaussian random vectors, and ‖n‖₂ ≤ τ. If

    m = N_R N_t ≥ C η s log³(N/ε),

then, with probability at least 1 − ε, the solution x# of constrained ℓ1-minimization satisfies

    ‖x# − x‖₂ ≤ C₁ σ_s(x)₁ + C₂ τ√s / √(N_T N_R N_t),

where C₁ and C₂ are numerical constants. In particular, exact recovery for an s-sparse scene x.

33 / 44

slide-54
SLIDE 54

Nonuniform recovery for LASSO

Theorem (Dorsch, R 2015)

Let x ∈ C^N, N = N_R N_T N_t², be a fixed s-sparse target scene with η-balanced support S such that the phases of the nonzero entries form a random Steinhaus sequence and such that

    min_{Θ∈S} |x_Θ| > 8σ √(2 log(N) / (N_T N_R N_t)).

Draw A at random and let y = Ax + e be noisy measurements with random noise e ∼ CN(0, σ²). Assume that

    m = N_R N_t ≥ C η s log³(N/ε).

Then, with probability at least 1 − 7 max{ε, N^{−3}}, the solution x♯ of

    min_z (1/2)‖Az − y‖₂² + λ‖z‖₁

with λ = 2σ √(2 log(N) / (N_T N_R N_t)) satisfies supp(x♯) = supp(x).

34 / 44

slide-55
SLIDE 55

Remarks about nonuniform recovery

◮ The debiased LASSO estimator x̃ (least squares on supp(x♯), after computing the LASSO solution) satisfies

    ‖x − x̃‖₂ ≤ 2σ √(2 s log(N) / (N_T N_R N_t)).

◮ The randomness in the signs of the nonzero entries of x can likely be removed.

◮ For the optimal balancedness parameter η = 1, we obtain a (near-)optimal bound on the number of measurements: m ≥ C s log³(N/ε).

◮ The RIP result covers the worst case, where η = N_R.

◮ A random support set will be η-balanced for small η with high probability, which explains the result of Strohmer and Friedlander.

35 / 44

slide-56
SLIDE 56

Numerical experiments for Doppler-free scenario

[Figure: probability of success vs. sparsity |S| for various values of η ∈ {1, 2, 4, 8}; the red curve corresponds to randomly chosen support sets with η ∈ [1, N_R].]

N_T = N_R = 8 transmit and receive antennas, N_t = 64 time-domain samples, grid size N = N_T N_R N_t = 4096, m = N_R N_t = 512 measurements.

36 / 44

slide-57
SLIDE 57

Antenna arrays with random antenna positions

37 / 44

slide-58
SLIDE 58

Radar setup

n antenna elements on the square [0, B]² in the plane z = 0. Targets in the plane z = z₀ on a grid of resolution cells r_j ∈ [−L, L]² × {z₀}, j = 1, . . . , N, with mesh size h. x ∈ C^N: vector of reflectivities in the resolution cells (r_j)_{j=1,...,N}.

38 / 44

slide-59
SLIDE 59

Sensing mechanism (Fannjiang, Strohmer, Yan 2010)

The antenna at position a ∈ R³ emits a monochromatic wave (wavelength λ, wavenumber ω) whose amplitude at position r ∈ R³ is given by the Green's function of the Helmholtz equation,

    H(a, r) = exp(2πi ‖r − a‖₂/λ) / (4π ‖r − a‖₂).

Approximation (valid for large z₀): H(a, r) ≈ (e^{iωz₀}/(4πz₀)) G(a, r) with

    G(a, r) = exp( (iω/(2z₀)) (|r₁ − a₁|² + |r₂ − a₂|²) ).

Signal corresponding to emitting antenna a_ℓ and receive antenna a_k (Born approximation):

    y_{(k,ℓ)} = Σ_{j=1}^{N} x_j G(a_ℓ, r_j) G(r_j, a_k) = (Ax)_{(k,ℓ)},   k, ℓ = 1, . . . , n,

giving n² measurements.

39 / 44

slide-60
SLIDE 60

Random scattering matrix

Choose the antenna positions a_j, j ∈ [n], independently and uniformly at random in [0, B]². Then A ∈ C^{n²×N} is a structured random matrix with entries

    A_{(k,ℓ);j} = G(a_k, r_j) G(r_j, a_ℓ),   (k, ℓ) ∈ [n]², j ∈ [N].

Define v(a_k, a_ℓ) = (G(a_k, r_j) G(r_j, a_ℓ))_{j∈[N]} ∈ C^N. Then the rows of A are v(a₁, a₁), v(a₁, a₂), . . . , v(a₂, a₁), . . . , v(a_n, a_n); rows and columns are coupled. Under the condition hB/(λz₀) ∈ N we have EA*A = I.

40 / 44
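Building this scattering matrix for toy parameters is straightforward; all numerical values below are hypothetical, chosen so that hB/(λz₀) = 1 ∈ N. Since G is a pure phase, every entry of A has modulus 1 and every column has norm n:

```python
import numpy as np

lam, z0, B, h = 0.1, 100.0, 10.0, 1.0       # wavelength, range, aperture, mesh: hB/(lam*z0) = 1
omega = 2 * np.pi / lam                     # wavenumber
n = 5

rng = np.random.default_rng(4)
antennas = rng.uniform(0.0, B, size=(n, 2))  # random antenna positions in [0, B]^2

side = h * np.arange(5) - 2.0                # 5 x 5 grid of resolution cells
targets = np.array([(u, v) for u in side for v in side])

def G(a, r):
    """Paraxial factor G(a, r) = exp(i omega/(2 z0) (|r1 - a1|^2 + |r2 - a2|^2))."""
    return np.exp(1j * omega / (2 * z0) * ((r[0] - a[0]) ** 2 + (r[1] - a[1]) ** 2))

# Row (k, l) holds G(a_k, r_j) G(r_j, a_l) over all grid points j (G is symmetric)
A = np.array([[G(antennas[l], r) * G(r, antennas[k]) for r in targets]
              for k in range(n) for l in range(n)])
```

The resulting matrix has n² = 25 rows (one per transmit/receive antenna pair) and one column per resolution cell.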

slide-61
SLIDE 61

Reconstruction via ℓ1-minimization

[Figure: sparse scene (sparsity s = 100, 6400 grid points) and its reconstruction from n = 30 antennas, i.e. 900 noisy measurements, SNR 20 dB.]

41 / 44

slide-62
SLIDE 62

Nonuniform recovery

Theorem (Hügel, R, Strohmer 2014)

Let x ∈ C^N. Choose the n antenna positions independently and uniformly at random in [0, B]². Assume hB/(λz₀) ∈ N, where h is the mesh size and λ the wavelength; further n² ≥ C s ln²(N/ε). Let y = Ax + e ∈ C^{n²} with ‖e‖₂ ≤ ηn. Let x# be the solution of

    min ‖z‖₁ subject to ‖y − Az‖₂ ≤ ηn.

Then with probability at least 1 − ε,

    ‖x − x#‖₂ ≤ C₁ σ_s(x)₁ + C₂ √s η.

Exact recovery when η = 0 and σ_s(x)₁ = 0. An RIP estimate remains open.

42 / 44


slide-65
SLIDE 65

Conclusions

Analysis of compressive sensing in various radar setups may be interesting and challenging!

◮ Time-frequency (range-Doppler) structured random matrices (Pfander, R 2010; Pfander, R, Tropp 2012; Krahmer, Mendelson, R 2014)
◮ MIMO radar with random transmit pulses (Friedlander, Strohmer 2014; Dorsch, R 2015)
◮ Antenna arrays with random antenna positions (Fannjiang, Strohmer 2013; Hügel, R, Strohmer 2014)
◮ Not covered:
  ◮ Subsampled random convolutions (R, Romberg, Tropp 2012; Krahmer, Mendelson, R 2014)
  ◮ MIMO radar with random antenna positions (Strohmer, Wang 2013)
  ◮ ...
◮ More challenging mathematical problems from radar applications:
  ◮ Off-grid compressive sensing
  ◮ ...

43 / 44

slide-66
SLIDE 66

The End

Literature

• D. Dorsch, H. Rauhut, Refined analysis of sparse MIMO radar. Preprint 2015, arXiv:1509.03625.
• M. Hügel, H. Rauhut, T. Strohmer, Remote sensing via ℓ1-minimization. Found. Comput. Math. 14:115-150, 2014.
• F. Krahmer, S. Mendelson, H. Rauhut, Suprema of chaos processes and the restricted isometry property. Comm. Pure Appl. Math. 67(11):1877-1904, 2014.
• T. Strohmer, B. Friedlander, Analysis of sparse MIMO radar. Appl. Comput. Harmon. Anal. 37:361-388, 2014.
• G. Pfander, H. Rauhut, Sparsity in time-frequency representations. J. Fourier Anal. Appl. 16(2):233-260, 2010.
• A. Fannjiang, P. Yan, T. Strohmer, Compressed remote sensing of sparse objects. SIAM J. Imaging Sci. 3(3):596-618, 2010.
• M. Herman, T. Strohmer, High resolution radar via compressed sensing. IEEE Trans. Signal Process. 57(6):2275-2284, 2009.
• G. Pfander, H. Rauhut, J. Tanner, Identification of matrices having a sparse representation. IEEE Trans. Signal Process. 56(11):5376-5388, 2008.

44 / 44