Infinite Dimensional Compressed Sensing


  1. Infinite Dimensional Compressed Sensing. Anders C. Hansen, University of Cambridge. Chemnitz, October 5, 2010.

  2. Compressed Sensing. Let $U \in \mathbb{C}^{n \times n}$, $x_0 \in \mathbb{C}^n$ and consider $y = Ux_0$. We want to recover $x_0$ from $y$. This is immediate if $U$ is invertible and we know $y$. What if we do not know $y$, but rather $P_\Omega y$, where $P_\Omega$ is the projection onto $\mathrm{span}\{e_j\}_{j \in \Omega}$ and $\Omega \subset \{1, \dots, n\}$ with $|\Omega| = m$ is randomly chosen? Can we recover $x_0$ from $P_\Omega y$?
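To make the setup concrete, here is a minimal numerical sketch (not from the slides; the random orthogonal $U$, the dimensions, and the seed are assumptions for illustration) of forming the subsampled measurements $P_\Omega y$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 128, 32, 5                              # illustrative sizes
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a stand-in orthogonal U
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)  # s-sparse signal
y = U @ x0                                        # full measurements y = U x0
Omega = rng.choice(n, m, replace=False)           # random Omega, |Omega| = m
y_Omega = y[Omega]                                # observed entries of P_Omega y
```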

  3. Magnetic Resonance Imaging (MRI). Let $U$ be the discrete Fourier transform and $x_0$ an image of the brain. The question is now: how do we reconstruct $x_0$ from the measurement vector $y$? In particular, we have $y = Ux_0$. [Figure: the brain image $x_0$ alongside its Fourier data $y = Ux_0$.]
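In this setting the previous sketch specialises to the FFT. A toy version (the piecewise-constant "image", the 25% sampling rate, and the seed are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x0 = np.zeros((n, n))
x0[24:40, 24:40] = 1.0                    # toy piecewise-constant "image"
y = np.fft.fft2(x0, norm="ortho")         # y = U x0 with U the unitary 2-D DFT
mask = rng.random((n, n)) < 0.25          # keep roughly 25% of Fourier samples
y_Omega = np.where(mask, y, 0.0)          # the subsampled data P_Omega y
```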

  4. Sparsity. Given $x_0 \in \mathbb{C}^n$, let $\Delta = \{j : \langle x_0, e_j \rangle \neq 0\}$. We want to find a strategy so that $x_0$ can be reconstructed from $P_\Omega U x_0$, where $|\Omega| = m$, with high probability. In particular we would like to know how large $m$ must be as a function of $n$ and $|\Delta|$.

  5. Convex Optimization. We want to recover $x_0$ from $P_\Omega U x_0$ by solving

$$\inf_x \|x\|_{\ell^0}, \quad P_\Omega U x = P_\Omega U x_0, \qquad (1)$$

where $\|x\|_{\ell^0} = |\{j : x_j \neq 0\}|$, or

$$\inf_x \|x\|_{\ell^1}, \quad P_\Omega U x = P_\Omega U x_0, \qquad (2)$$

where $\|x\|_{\ell^1} = \sum_{j=1}^n |x_j|$. Note that (1) is a non-convex optimization problem whereas (2) is convex; a code sketch of (2) follows below.
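A minimal sketch of problem (2), assuming the cvxpy package as the convex solver (any LP/SOCP solver would do); the function name and interface are hypothetical:

```python
import cvxpy as cp
import numpy as np

def l1_recover(U, Omega, y_Omega):
    """Solve  min ||x||_1  subject to  P_Omega U x = P_Omega y  -- problem (2)."""
    n = U.shape[1]
    x = cp.Variable(n, complex=np.iscomplexobj(U))
    constraints = [U[np.asarray(Omega), :] @ x == y_Omega]
    cp.Problem(cp.Minimize(cp.norm1(x)), constraints).solve()
    return x.value
```

With the objects from the slide-2 sketch, `x_hat = l1_recover(U, Omega, y_Omega)` attempts the recovery from the $m$ observed measurements alone.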

  6. Theoretical Results. Theorem (Candès, Romberg, Tao). Let $x_0 \in \mathbb{C}^n$ be a discrete signal supported on an unknown set $\Delta$, and choose $\Omega$ of size $|\Omega| = m$ uniformly at random. For a given accuracy parameter $M$ there is a constant $C_M$ such that if

$$m \geq C_M \cdot |\Delta| \cdot \log(n),$$

then with probability at least $1 - O(n^{-M})$ the minimizer of problem (2) is unique and equal to $x_0$.
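The scaling in the theorem can be probed empirically. A Monte Carlo sketch (all parameter values hypothetical), reusing `l1_recover` from the slide-5 sketch with $U$ the unitary DFT:

```python
import numpy as np

def success_rate(n=128, s=5, m=40, trials=20, tol=1e-5, seed=1):
    """Fraction of random trials in which problem (2) recovers x0 up to tol."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft(np.eye(n), norm="ortho")      # unitary DFT as the matrix U
    hits = 0
    for _ in range(trials):
        x0 = np.zeros(n)
        x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        Omega = rng.choice(n, m, replace=False)
        x = l1_recover(F, Omega, (F @ x0)[Omega])
        hits += np.linalg.norm(x - x0) < tol
    return hits / trials  # rises sharply once m exceeds roughly C * s * log(n)
```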

  7. The Model.
  - We are given a separable Hilbert space $\mathcal{H}$ with an orthonormal set $\{\varphi_k\}_{k \in \mathbb{N}}$.
  - We are given a vector $x_0 = \sum_{k=1}^\infty \beta_k \varphi_k$, $\beta = \{\beta_1, \beta_2, \dots\}$.
  - Suppose also that we are given a set of linear functionals $\{\zeta_j\}_{j \in \mathbb{N}}$ such that we can "measure" the vector $x_0$ by applying them, i.e. we can obtain $\{\zeta_j(x_0)\}_{j \in \mathbb{N}}$.

  8. An Infinite System of Equations. With some appropriate assumptions on the linear functionals $\{\zeta_j\}_{j \in \mathbb{N}}$, we may view the full recovery problem as the infinite-dimensional system of linear equations

$$\begin{pmatrix} \zeta_1(x_0) \\ \zeta_2(x_0) \\ \zeta_3(x_0) \\ \vdots \end{pmatrix} = \begin{pmatrix} u_{11} & u_{12} & u_{13} & \dots \\ u_{21} & u_{22} & u_{23} & \dots \\ u_{31} & u_{32} & u_{33} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \vdots \end{pmatrix}, \qquad u_{ij} = \zeta_i(\varphi_j), \qquad (3)$$

where we refer to $U = \{u_{ij}\}_{i,j \in \mathbb{N}}$ as the "measurement matrix".
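To have something computable, one can form finite sections of (3). A sketch under concrete assumptions of my own choosing, anticipating slide 17: $\varphi_j(x) = e^{i\pi j x/T}/\sqrt{2T}$ on $[-T,T]$ (reindexed over $\mathbb{N}$) and $\zeta_i(f) = (\mathcal{F}f)(\rho(i))$ with $\rho$ enumerating the grid $\epsilon\mathbb{Z}$, in which case $u_{ij}$ has a closed form:

```python
import numpy as np

def measurement_matrix(T=0.5, eps=0.25, n_rows=401, n_cols=41):
    """Finite section of U from (3) for phi_j(x) = exp(i*pi*j*x/T)/sqrt(2T)
    supported on [-T, T], and zeta_i = evaluation of the Fourier transform
    at rho(i), where rho enumerates eps*Z symmetrically. Closed form:
        u_ij = sqrt(2T) * sinc(j - 2*T*rho(i)),  sinc(x) = sin(pi x)/(pi x).
    """
    omega = eps * (np.arange(n_rows) - n_rows // 2)   # sample points rho(i)
    j = np.arange(n_cols) - n_cols // 2               # basis indices in Z
    return np.sqrt(2 * T) * np.sinc(j[None, :] - 2 * T * omega[:, None])
```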

  9. Solution I. If, for example, $U$ forms an isometry on $\ell^2(\mathbb{N})$, we could, for every $K \in \mathbb{N}$, compute an approximation $x = \sum_{k=1}^K \tilde\beta_k \varphi_k$ by solving

$$A \begin{pmatrix} \tilde\beta_1 \\ \tilde\beta_2 \\ \vdots \\ \tilde\beta_K \end{pmatrix} = P_K U^* P_N \begin{pmatrix} \zeta_1(x_0) \\ \zeta_2(x_0) \\ \vdots \end{pmatrix}, \qquad A = P_K U^* P_N U P_K|_{P_K \ell^2(\mathbb{N})},$$

for some appropriately chosen $N \in \mathbb{N}$ (the number of samples). We would then get the following error bound:

$$\|x - x_0\|_{\mathcal{H}} \leq (1 + C_{K,N}) \|P_K^\perp \beta\|_{\ell^2(\mathbb{N})}, \qquad \beta = \{\beta_1, \beta_2, \dots\},$$

  10. Solution II. Here, for fixed $K$, the constant $C_{K,N} \to 0$ as $N \to \infty$. Moreover, $C_{K,N}$ is given explicitly by

$$C_{K,N} = \left\| \left( P_K U^* P_N U P_K|_{P_K \ell^2(\mathbb{N})} \right)^{-1} P_K U^* P_N U P_K^\perp \right\|,$$

and hence we may find, for any $K \in \mathbb{N}$, the appropriate choice of $N \in \mathbb{N}$ (the number of samples) to get the desired error bound. In particular, this can be done numerically, by computing with different sections of the infinite matrix $U$; see the sketch below.
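A numerical sketch of this calibration (function names hypothetical; note that the trailing block $P_K U^* P_N U P_K^\perp$ must itself be truncated, at some column $M \gg K$, to be computable):

```python
import numpy as np

def solve_section(U, samples, K, N):
    """Solve  P_K U* P_N U P_K beta = P_K U* P_N samples  for the first K
    coefficients, given a leading finite section of U."""
    A = U[:N, :K]
    return np.linalg.solve(A.conj().T @ A, A.conj().T @ samples[:N])

def C_KN(U, K, N, M):
    """Truncated estimate of C_{K,N}: spectral norm of
    (P_K U* P_N U P_K)^{-1} P_K U* P_N U P_K^perp, keeping columns K..M only."""
    A, B = U[:N, :K], U[:N, K:M]
    return np.linalg.norm(np.linalg.solve(A.conj().T @ A, A.conj().T @ B), 2)
```

Increasing $N$ until `C_KN(U, K, N, M)` falls below a tolerance then gives the number of samples needed for the error bound on slide 9.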

  11. Infinite Dimensional Compressed Sensing. (i) Are there other ways of approximating (3)? (ii) Could there be ways of reconstructing, with the same accuracy, but using fewer samples from $\{\zeta_j(x_0)\}$?

  12. Infinite Dimensional Compressed Sensing. Let $\Omega \subset \mathbb{N}$ with $|\Omega| = m < \infty$ be randomly chosen, and let $P_\Omega$ denote the projection onto $\mathrm{span}\{e_j\}_{j \in \Omega}$. Now consider the convex (infinite-dimensional) optimization problem

$$\inf_{\eta \in \ell^1(\mathbb{N})} \left\{ \|\eta\|_{\ell^1(\mathbb{N})} : P_\Omega \begin{pmatrix} u_{11} & u_{12} & u_{13} & \dots \\ u_{21} & u_{22} & u_{23} & \dots \\ u_{31} & u_{32} & u_{33} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \vdots \end{pmatrix} = P_\Omega \begin{pmatrix} \zeta_1(x_0) \\ \zeta_2(x_0) \\ \zeta_3(x_0) \\ \vdots \end{pmatrix} \right\}. \qquad (4)$$

  13. Infinite Dimensional Compressed Sensing. (i) How do we randomly choose $\Omega$? It does not make sense to choose $\Omega$ uniformly from the whole set $\mathbb{N}$. (ii) What if we choose an $N \in \mathbb{N}$ and choose $\Omega \subset \{1, \dots, N\}$ uniformly at random with $|\Omega| = m < N$? But how big must $N$ be? (iii) If $\eta$ is a solution to (4) (note that we may not have uniqueness), what is the error $\|\eta - \beta\|_{\ell^2(\mathbb{N})}$, and how does it depend on the choice of $\Omega$? In particular, how big must $m$ be? (Note that we must make the extra assumption that $\beta \in \ell^1(\mathbb{N})$.)

  14. Infinite Dimensional Compressed Sensing. The solution to problem (4) cannot be computed explicitly because it is infinite-dimensional, and thus an approximation must be computed instead. For $M \in \mathbb{N}$, consider the optimization problem

$$\inf_{\eta \in P_M \ell^1(\mathbb{N})} \left\{ \|\eta\|_{\ell^1(\mathbb{N})} : P_\Omega \begin{pmatrix} u_{11} & u_{12} & u_{13} & \dots \\ u_{21} & u_{22} & u_{23} & \dots \\ u_{31} & u_{32} & u_{33} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} P_M \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_M \end{pmatrix} = P_\Omega \begin{pmatrix} \zeta_1(x_0) \\ \zeta_2(x_0) \\ \zeta_3(x_0) \\ \vdots \end{pmatrix} \right\}. \qquad (5)$$

(i) If $\tilde\eta_M = \{\eta_1, \dots, \eta_M\}$ is a minimizer of (5), what is the behavior of $\tilde\eta_M$ as $M \to \infty$? Moreover, what happens to the error $\|\tilde\eta_M - \beta\|_{\ell^2(\mathbb{N})}$ as $M \to \infty$? (ii) Observe that $M$ cannot be too small, since then (5) may not have a solution. However, since $U = \{u_{ij}\}$ is an isometry up to a constant, it follows that (5) is feasible for all sufficiently large $M$. A computational sketch follows below.
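A sketch of the truncated problem (5), again assuming cvxpy; solving it for a growing sequence of $M$ is one way to probe the questions in (i):

```python
import cvxpy as cp
import numpy as np

def solve_truncated_l1(U, Omega, samples, M):
    """Solve (5): minimise ||eta||_1 over eta supported on {1,...,M},
    subject to the measurement constraints indexed by Omega."""
    rows = np.asarray(Omega)
    eta = cp.Variable(M, complex=np.iscomplexobj(U))
    prob = cp.Problem(cp.Minimize(cp.norm1(eta)),
                      [U[rows, :M] @ eta == samples[rows]])
    prob.solve()
    return eta.value  # None if the truncated problem is infeasible (M too small)
```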

  15. The Semi-Infinite-Dimensional Model. We are given an $M \in \mathbb{N}$, and for $x_0 = \sum_{k=1}^\infty \beta_k \varphi_k \in \mathcal{H}$ we have that

$$\mathrm{supp}(x_0) = \{j \in \mathbb{N} : \beta_j \neq 0\} = \Delta \subset \{1, \dots, M\}.$$

We will choose only finitely many of the samples $\{\zeta_j(x_0)\}_{j \in \mathbb{N}}$; in particular, we will choose a set $\Omega \subset \{1, \dots, N\}$ of size $m$ uniformly at random.
  - How large must $N$ be?
  - How large must $m$ be to recover $x_0$ with high probability? Moreover, if $m = N$, will we then get perfect recovery with probability one?

  16. The Full Infinite-Dimensional Model. In the full infinite-dimensional model we consider the problem of recovering a vector $y_0 = \sum_{k=1}^\infty \alpha_k \varphi_k \in \mathcal{H}$, where

$$y_0 = x_0 + h, \qquad h = \sum_{k=1}^\infty c_k \varphi_k, \qquad \mathrm{supp}(x_0) = \Delta \subset \{1, \dots, M\}, \quad \mathrm{supp}(h) = \{1, \dots, \infty\},$$

and where we have some estimate on $\sum_{k=1}^\infty |c_k|$. In other words, we do not know the support of $h$. In this case we may get in trouble if we try to solve (5), since it may not have a solution. Note, however, that (5) will have a solution if we replace $P_M$ with a projection $P_{\widetilde M}$ and let $\widetilde M$ be sufficiently large. But what happens when $\widetilde M \to \infty$?

  17. The Generalized Sampling Theorem. Theorem (Adcock, H '10). Let $\mathcal{F}$ denote the Fourier transform on $L^2(\mathbb{R}^d)$. Suppose that $\{\varphi_j\}_{j \in \mathbb{N}}$ is an orthonormal set in $L^2(\mathbb{R}^d)$ such that there exists a $T > 0$ with $\mathrm{supp}(\varphi_j) \subset [-T, T]^d$ for all $j \in \mathbb{N}$. For $\epsilon > 0$, let $\rho : \mathbb{N} \to (\epsilon \mathbb{Z})^d$ be a bijection. Define the infinite matrix

$$U = \begin{pmatrix} u_{11} & u_{12} & u_{13} & \dots \\ u_{21} & u_{22} & u_{23} & \dots \\ u_{31} & u_{32} & u_{33} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad u_{ij} = (\mathcal{F}\varphi_j)(\rho(i)). \qquad (6)$$

Then, for $\epsilon \leq \frac{1}{2T}$, we have that $\epsilon^{d/2} U$ is an isometry.
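The theorem can be checked numerically with the concrete instance sketched after slide 8 (my own illustrative choice, with $d = 1$ and $T = 1/2$, hence the threshold $\epsilon \leq 1$): the Gram matrix of $\epsilon^{1/2} U$ should approach the identity as the row truncation grows.

```python
import numpy as np

T, eps = 0.5, 0.5                       # eps <= 1/(2T) = 1: oversampling by 2
U = measurement_matrix(T=T, eps=eps, n_rows=8001, n_cols=21)  # slide-8 sketch
G = eps * U.conj().T @ U                # Gram matrix of eps^{1/2} U  (d = 1)
print(np.max(np.abs(G - np.eye(21))))   # small; shrinks as n_rows increases
```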
