Fast Data Driven Compressed Sensing and application to compressed quantitative MRI (PowerPoint presentation)

SLIDE 1

IDCOM, University of Edinburgh

Mike Davies & Mohammad Golbabaee

Joint work with

Zhouye Chen, Yves Wiaux Zaid Mahbub, Ian Marshall

Fast Data Driven Compressed Sensing

and application to

compressed quantitative MRI

SLIDE 2

Outline

  • Iterative Projected Gradients (IPG)
  • Approximate/inexact oracles
  • Robustness of inexact IPG
  • Application to data driven Compressed Sensing
  • Fast MR Fingerprinting reconstruction
  • IPG with Approximate Nearest Neighbour searches
  • Cover trees for fast ANN
  • Numerical results
SLIDE 3

Inverse problems

𝑧=𝐡𝑦+π‘₯βˆˆβ€‹π‘Ίβ†‘ 𝑺↑𝑛 , Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ Β‘ ‘𝐡:​𝑺↑ π‘Ίβ†‘π‘œ →​𝑺↑ 𝑺↑𝑛 where 𝑛β‰ͺπ‘œ

Challenge: missing information. Complete measurements can be costly, time-consuming and sometimes just impossible! Compressed sensing addresses this challenge.

[Donoho’06; CandΓ¨s, Romberg, and Tao,’06]

SLIDE 4

Data models/priors

Manifolds

SLIDE 5

Solving Compressed Sensing/Inv. Problems

Estimating by constrained least squares

$\hat{x} \in \arg\min \big\{ f(x) := \tfrac{1}{2}\|y - Ax\|_2^2 \big\} \quad \text{s.t.} \quad x \in D$

!! NP-hard for most interesting models (e.g. sparsity [Natarajan'95])

Iterative Projected Gradient (IPG). Generally, proximal-gradient algorithms are very popular:

$x^k = \mathcal{P}_D\big(x^{k-1} - \mu\, A^T(Ax^{k-1} - y)\big)$

Gradient $A^T(Ax^{k-1} - y)$, step size $\mu$, Euclidean projection

$\mathcal{P}_D(x) \in \arg\min_{x' \in D} \|x - x'\|_2$

Signal/data model projection (observation) nonlinear approximation (reconstruction)
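To make the iteration concrete, here is a minimal Python sketch of exact IPG on a toy model set D (2-sparse vectors), where the Euclidean projection is just hard thresholding (IHT-style); the matrix sizes, step size and sparsity level are illustrative assumptions, not values from the slides.

```python
import numpy as np

def ipg(y, A, project, mu=1.0, iters=500):
    """IPG: x^k = P_D(x^{k-1} - mu * A^T (A x^{k-1} - y))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project(x - mu * A.T @ (A @ x - y))
    return x

def project_2sparse(v):
    """Exact Euclidean projection onto D = {2-sparse vectors}: keep the 2 largest entries."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-2:]
    out[keep] = v[keep]
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((28, 30)) / np.sqrt(28)   # A: R^30 -> R^28, m < n
x0 = np.zeros(30)
x0[[3, 17]] = [1.0, -2.0]                         # ground truth x_0 in D
y = A @ x0                                        # noiseless measurements
x_hat = ipg(y, A, project_2sparse)
```

Swapping `project_2sparse` for any other exact model projection gives the corresponding constrained solver.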

SLIDE 6

Embedding: key to CS/IPG stability

Model π·βˆˆβ€‹π‘Ί ​𝑺↑𝒐 𝒐 𝐏(𝒏) π‘ž Β‘(unstructured) Β‘points β€‹πœ„β†‘βˆ’2 log(π‘ž) ⋃𝑗↑𝑀▒𝐿 -flats β€‹πœ„β†‘βˆ’2 (𝐿+log(𝑀)) rank 𝑠 Β‘ Β‘(βˆšβ π‘œ Γ—βˆšβ π‘œ ) matrices β€‹πœ„β†‘βˆ’2 π‘ βˆšπ‘œ β€˜smooth’ 𝑒 dim. manifold β€‹πœ„β†‘βˆ’2 𝑒 𝐿 tree-sparse β€‹πœ„β†‘βˆ’2 𝐿

[Johnson, Lindenstrauss'89] [Blumensath, Davies'09] [CandΓ¨s, Recht; Ma et al.'09] [Wakin, Baraniuk'09] [Baraniuk et al.'09]

Bi-Lipschitz embeddable sets: $\forall x, x' \in D$

$\alpha \|x - x'\|_2 \;\le\; \|A(x - x')\|_2 \;\le\; \beta \|x - x'\|_2$

Theorem [Blumensath'11]. For any $(D, A)$, if $\beta \le 1.5\,\alpha$ holds, then

IPG is stable with linear convergence: $\|x^K - x_0\| \le O(\|w\|) + \tau$, with $K \sim O(\log \tau^{-1})$ iterations.

Global optimality even for nonconvex programs! Sample complexity: e.g. $A \sim$ i.i.d. subgaussian.
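The embedding constants can be estimated empirically for a finite model set: a short sketch (all sizes are made up) that draws a random subgaussian $A$ and measures the extreme ratios $\|A(x-x')\|/\|x-x'\|$ over all pairs, i.e. empirical $\alpha$ and $\beta$; the $\beta \le 1.5\,\alpha$ condition can then be checked directly.

```python
import numpy as np

# Empirically estimate the bi-Lipschitz constants (alpha, beta) of a random
# subgaussian A on a finite model set D: a cloud of q points in R^n.
rng = np.random.default_rng(0)
n, m, q = 200, 60, 20
D = rng.standard_normal((q, n))
A = rng.standard_normal((m, n)) / np.sqrt(m)   # rows scaled so E||Ax||^2 = ||x||^2

ratios = []
for i in range(q):
    for j in range(i + 1, q):
        diff = D[i] - D[j]
        ratios.append(np.linalg.norm(A @ diff) / np.linalg.norm(diff))

alpha, beta = min(ratios), max(ratios)   # empirical embedding constants
```

With $m$ this much larger than $\log q$, the distortion concentrates and $\beta/\alpha$ stays close to 1.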

SLIDE 7

A limitation…

Exact oracles might be too expensive, or may not even exist!

Gradient $\nabla f = A^T(Ax^{k-1} - y)$:

  • $A$ too large to fully access, or $\nabla f$ too large to fully compute/update

  • Noise in communication in distributed solvers

Projection $\mathcal{P}_D(x) \in \arg\min_{x' \in D} \|x - x'\|_2$:

  • $\mathcal{P}_D$ may not be analytic and may require solving an auxiliary optimization (e.g. inclusions $D = \bigcap_j D_j$, total-variation ball, low-rank, tree-sparse, …)

  • $\mathcal{P}_D$ might be NP-hard! (e.g. analysis sparsity, low-rank tensor decomposition)

Is IPG robust against inexact/approximate oracles?

SLIDE 8

Inexact oracles I: Fixed Precision

$x^k = \widehat{\mathcal{P}}_D\big(x^{k-1} - \mu\, \widehat{\nabla} f(x^{k-1})\big)$

Fixed Precision (FP) approximate oracles:

$\|\widehat{\nabla} f(\cdot) - \nabla f(\cdot)\|_2 \le \varepsilon_g, \qquad \|\widehat{\mathcal{P}}_D(\cdot) - \mathcal{P}_D(\cdot)\|_2 \le \varepsilon_p, \qquad \big(\widehat{\mathcal{P}}_D(\cdot) \in D\big)$

Examples: TV ball, inclusions (e.g. Dykstra's algorithm), and many more… (in convex settings, duality gap → FP projection)

Progressive Fixed Precision (PFP) oracles:

$\|\widehat{\nabla} f(\cdot) - \nabla f(\cdot)\|_2 \le \varepsilon_g^k, \qquad \|\widehat{\mathcal{P}}_D(\cdot) - \mathcal{P}_D(\cdot)\|_2 \le \varepsilon_p^k$

Examples: any FP oracle with progressive refinement of the approximation levels, e.g. convex sparse CUR factorization with $\varepsilon_p^k \sim O(1/k^3)$ [Schmidt et al.'11]
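One way to see the FP vs. PFP distinction is to simulate the inexact projection oracle directly. The sketch below (toy 2-sparse model; the perturbation mechanism is an assumption for illustration only) perturbs the exact projection, within the model set, by a controlled error $\varepsilon_p^k$ that is either fixed or geometrically decaying.

```python
import numpy as np

def proj_2sparse(v):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-2:]
    out[keep] = v[keep]
    return out

def inexact_ipg(y, A, mu, iters, prec, seed=1):
    """IPG with a simulated inexact projection: at iteration k the oracle output
    is perturbed (on its own support, so it stays in D) by an error of norm prec(k)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for k in range(iters):
        p = proj_2sparse(x - mu * A.T @ (A @ x - y))
        e = np.zeros_like(p)
        supp = np.flatnonzero(p)
        if supp.size:
            e[supp] = rng.standard_normal(supp.size)
            e *= prec(k) / np.linalg.norm(e)   # error of size eps_p^k, inside the model
        x = p + e
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((28, 30)) / np.sqrt(28)
x0 = np.zeros(30)
x0[[3, 17]] = [1.0, -2.0]
y = A @ x0
x_fp  = inexact_ipg(y, A, 1.0, 500, lambda k: 0.1)        # FP: fixed precision
x_pfp = inexact_ipg(y, A, 1.0, 500, lambda k: 0.9 ** k)   # PFP: geometric refinement
```

The PFP run ends up essentially as accurate as exact IPG, while the FP run stalls at an error floor set by the fixed precision.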

SLIDE 9

Inexact oracles II: (1+Ο‘)-optimal projections

(1+πœ—)-approximate projections combined with FP/PFP gradient oracle

$x^k = \mathcal{P}_D^{\vartheta}\big(x^{k-1} - \mu\, \widehat{\nabla}^k f(x^{k-1})\big)$

Gradient: $\|\widehat{\nabla}^k f(\cdot) - \nabla f(\cdot)\|_2 \le \varepsilon_g^k$

Projection: $\|\mathcal{P}_D^{\vartheta}(x) - x\|_2 \le (1 + \vartheta)\,\|\mathcal{P}_D(x) - x\|_2$

Examples: many nonconvex constraints, e.g. cheaper low-rank proxies based on randomized linear algebra [Halko et al.'11], K-tree-sparse signals [Hegde et al.'14], tensor low-rank (Tucker) decompositions [Rauhut et al.'16], … and, shortly, ANN for data driven CS!

​𝑸↑ π‘Έβ†‘πœ— ↓𝐷 (𝑦)∈𝐷

SLIDE 10

Robustness & linear convergence

of the inexact IPG

SLIDE 11

IPG with (P)FP oracles

$x^k = \widehat{\mathcal{P}}_D\big(x^{k-1} - \mu\, \widehat{\nabla} f(x^{k-1})\big)$

Remark:

β€‹πœβ†‘βˆ’π‘— supresses the early stages errors

β‡’ use β€œprogressive” approximations to get as good as exact!

  • Theorem. For any $(x_0 \in D,\, D,\, A)$, if $\beta \le \mu^{-1} < 2\alpha$ then

$\|x^k - x_0\| \le \rho^k \Big( \|x_0\| + \sum_{j=1}^{k} \rho^{-j} e^j \Big) + \frac{2\sqrt{\beta}}{(1-\rho)\,\alpha}\,\|w\|$

where $\rho = \sqrt{1/(\mu\alpha) - 1}$ and $e^j = 2\,\varepsilon_g^j/\alpha + \sqrt{\varepsilon_p^j/(\mu\alpha)}$

SLIDE 12

IPG with (P)FP oracles

$x^k = \widehat{\mathcal{P}}_D\big(x^{k-1} - \mu\, \widehat{\nabla} f(x^{k-1})\big)$

Corollary I. After $K = O(\log \tau^{-1})$ iterations, IPG-FP achieves

$\|x^K - x_0\| \le O\big(\|w\| + \varepsilon_g + \sqrt{\varepsilon_p}\big) + \tau$

with linear convergence at rate $\rho = \sqrt{1/(\mu\alpha) - 1}$.

Corollary II. Assume $\exists\, r < 1$ s.t. $e^j = O(r^j)$; then after $K = O(\log \tau^{-1})$ iterations, IPG-PFP achieves

$\|x^K - x_0\| \le O(\|w\|) + \tau$

with linear convergence at rate $\tilde{\rho} = \max(\rho, r)$ if $\rho \neq r$, and $\tilde{\rho} = \rho + \eta$ (for any small $\eta > 0$) if $\rho = r$.

SLIDE 13

IPG with (1+Ο‘)-approximate projection

$x^k = \mathcal{P}_D^{\vartheta}\big(x^{k-1} - \mu\, \widehat{\nabla}^k f(x^{k-1})\big)$

  • Theorem. Assume for any $(x_0 \in D,\, D,\, A)$ and for $\vartheta \ge 0$ it holds that

$\sqrt{2\vartheta + \vartheta^2} \le \delta \sqrt{\alpha / \|A\|} \qquad \text{and} \qquad \beta \le \mu^{-1} < (2 - 2\delta + \delta^2)\,\alpha.$

Then

$\|x^k - x_0\| \le \rho^k \Big( \|x_0\| + \lambda_g \sum_{j=1}^{k} \rho^{-j} \varepsilon_g^j \Big) + \frac{\lambda_w}{1-\rho}\,\|w\|$

where $\rho = \sqrt{1/(\mu\alpha) - 1} + \delta$, $\lambda_g = 2/\alpha + \sqrt{\mu/\|A\|}\,\delta$ and $\lambda_w = 2\sqrt{\beta}/\alpha + \sqrt{\mu}\,\delta$.

Remarks.

  • Requires stronger embedding cond., slower convergence!
  • Still linear convergence & $O(\|w\|) + \tau$ accuracy after $O(\log \tau^{-1})$ iterations
  • higher noise amplification
SLIDE 14

Application in data driven CS

SLIDE 15

Data driven CS

In the absence of (semi-)algebraic physical models ($\ell_0$, $\ell_1$, rank, …): collect a possibly large dictionary (sample the model)

$D = \bigcup_{j=1,\dots,d} \{\psi_j\}, \qquad \psi_j \ \text{atoms of} \ \Psi \in \mathbb{R}^{n \times d}$

Examples in multi-dimensional imaging: hyperspectral, Mass/Raman/MALDI spectroscopy [Golbabaee et al.'13; Duarte, Baraniuk'12; …]

SLIDE 16

MR Fingerprinting: fast CQ-MRI

Goal: rapidly measuring the NMR properties (relaxation times T1, T2) [Ma et al.'13]

1. Multiple random/optimized excitations (magnetic field rotation) [Cohen'15; Mahbub, Golbabaee, D., Marshall'16]

2. Subsampling the k-space (per excitation)

3. Construct a (huge) dictionary of "fingerprints", i.e. solve the Bloch dynamic equations for all possible parameters (T1, T2)…

4. Use this dictionary to solve the inverse problem

  • Exact IPG [D. et al β€˜15]

β€‹πœ– Β‘m(t)/πœ– ‘𝑒 = Β‘m(t) ‘× ‘γ Β‘hRF(t) ‘– Β‘ Β‘ Β‘(β–ˆβ–‘β€‹m↑𝑦 (𝑒)/T2@​𝑛↑𝑧 (𝑒)/T2@​(𝑛↑𝑨 (𝑒)βˆ’β€‹m↓eq )/T1 ) 𝜍 Ξ¨=​[ℬ]↓𝑗

A continuous model (the Bloch manifold) exists,

  • but it is not explicit, i.e. it requires solving dynamic ODEs
  • MRF sacrifices memory ~ enables the $\mathcal{P}_D$ computation

SLIDE 17

CS in product (tensor) space

Per-pixel data driven model: $D = \bigcup_{j=1,\dots,d}\{\psi_j\}$, $\psi_j$ atoms of $\Psi \in \mathbb{R}^{n \times d}$

Multi-dimensional image: $X \in \mathbb{R}^{n \times P}$ where $X_p \in D$, $\forall p = 1,\dots,P$

Inverse problem: $\min_{X:\, X_p \in D\, \forall p} \|y - A(X)\|^2$

Direct (combinatorial) recovery complexity $O(d^P)$!

IPG (iterate, NN-search, gradient):

$X_p^k = \mathcal{P}_D\big([X^{k-1} - \mu\, A^H(A(X^{k-1}) - y)]_p\big), \quad \forall p$

Complexity = #iterations × (gradient + $O(Pd)$). Sufficient RIP stability: $M = O(P\,m)$ measurements (ignoring inter-channel structures), where $m$ = the RIP complexity of each channel.

Big datasets?

MRF uses large dictionaries d~ 500K

Can we do better?

SLIDE 18

Fast NN searches

Trees: the (historical) approach to fast NN

  • Hierarchical partitioning + branch & bound search, e.g. kd-trees, metric/ball-trees, …

"Curse of dimensionality": exact NN in $\mathbb{R}^n$ cannot be achieved in $o(nd)$ time with a reasonable memory. Approximate NN can, if the dataset is ~ low dimensional!

Navigating nets, cover trees

[Krauthgamer, Lee'04; Beygelzimer et al.'06]

At scale π‘š Β‘= Β‘1,…,𝑀

  • Covering (parent nodes) β€‹πœ2β†‘βˆ’l
  • Separation (nodes appearing at scale l) β€‹πœ2β†‘βˆ’l

Kd-tree Partitions Search

CT builds multi-resolution cover-nets Brand & bound search
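A full cover tree takes more code, but the branch & bound idea with a (1+Ο‘) slack can be sketched on a simpler ball tree: prune any ball whose best possible distance, inflated by $(1+\vartheta)$, cannot beat the current best. (This is an illustrative stand-in, not the cover-tree data structure itself.)

```python
import numpy as np

def build(points, idx=None):
    """A simple ball tree: each node stores a center, a radius covering its
    points, and two children obtained by splitting on the widest coordinate."""
    if idx is None:
        idx = np.arange(len(points))
    c = points[idx].mean(axis=0)
    r = float(np.max(np.linalg.norm(points[idx] - c, axis=1)))
    if len(idx) <= 2:
        return {"c": c, "r": r, "idx": idx, "children": []}
    dim = int(np.argmax(np.ptp(points[idx], axis=0)))
    order = idx[np.argsort(points[idx][:, dim])]
    mid = len(order) // 2
    return {"c": c, "r": r, "idx": idx,
            "children": [build(points, order[:mid]), build(points, order[mid:])]}

def ann(node, points, q, theta=0.0, best=(np.inf, -1)):
    """Branch & bound (1+theta)-ANN: prune a ball whenever even its closest
    possible point cannot beat the current best by more than the slack."""
    gap = np.linalg.norm(q - node["c"]) - node["r"]   # lower bound for the whole ball
    if (1.0 + theta) * max(gap, 0.0) >= best[0]:
        return best                                   # prune; (1+theta) certificate holds
    if not node["children"]:
        for i in node["idx"]:
            d = float(np.linalg.norm(q - points[i]))
            if d < best[0]:
                best = (d, int(i))
        return best
    # visit the closer child first, to tighten `best` early
    for ch in sorted(node["children"], key=lambda n: np.linalg.norm(q - n["c"])):
        best = ann(ch, points, q, theta, best)
    return best

rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 3))
tree = build(pts)
query = np.array([0.1, -0.2, 0.3])
d_exact, _ = ann(tree, pts, query, theta=0.0)   # theta = 0: exact NN
d_fast, _ = ann(tree, pts, query, theta=0.5)    # prunes more aggressively
```

With Ο‘ = 0 the pruning rule is the usual exact branch & bound; any Ο‘ > 0 trades accuracy (within the (1+Ο‘) factor) for fewer visited nodes.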

SLIDE 19

CT provably good for low dim data

Search options with cover trees:

1. (1+Ο‘)-ANN: as proposed by [Beygelzimer et al.'06]

2. FP-ANN: exact NN on a truncated tree, with the depth $L$ chosen so that the covering resolution matches the target precision $\varepsilon_p$ (the complexity could be arbitrarily large, theoretically)

3. PFP-ANN: progressing on the truncation level, $\varepsilon_p^k = O(r^k)$, $r < 1$

Inexact IPG:

$X_p^k = \widehat{\mathcal{P}}_D\big([X^{k-1} - \mu\, A^H(A(X^{k-1}) - y)]_p\big), \quad \forall p$

  • Theorem [Krauthgamer, Lee'04]. Cover tree (1+Ο‘)-ANN complexity:

$2^{O(\dim(D))} \log \Delta + \vartheta^{-O(\dim(D))}$ in time, $O(d)$ space

(typically $\log \Delta = O(\log d)$)

SLIDE 20

Numerical experiments 1: Toy problem

SLIDE 21

2D manifold data

$D = \bigcup_{j=1,\dots,d}\{\psi_j\}$, $\psi_j$ atoms of $\Psi \in \mathbb{R}^{n \times d}$. (Figure: cover tree levels 2, 3, 4, 5.)

SLIDE 22

Solution accuracy vs. Iterations (FP)

Signal: π‘Œβˆˆβ€‹π‘Ίβ†‘

π‘Ίβ†‘π‘œΓ—π‘„ Β‘π‘œ=200 Β‘, ‘𝑄=50 Β‘(randomly chosen ∈C) mπ‘„Γ—π‘œπ‘„ i.i.d. Normal A, CS ratio=m/n (noiseless)

i.i.d. Normal A, CS ratio=m/n (noiseless)

​minβ”¬β€‹π‘Œβ†“π‘€π‘“π‘‘ βˆˆβˆπ‘˜β†‘β–’π· ​||π‘§βˆ’π΅(π‘Œ)||↑2 Β‘ Β‘ Β‘ ‘⇔ Β‘β€‹π‘Œβ†“π‘žβ†‘π‘™ =𝐡𝑂​𝑂↓𝐷 (​[β€‹π‘Œβ†‘π‘™βˆ’1 βˆ’πœˆβ€‹π΅β†‘πΌ (𝐡(β€‹π‘Œβ†‘π‘™ βˆ’1 )βˆ’π‘§)]β†“π‘ž ), Β‘βˆ€π‘ž Swiss roll

SLIDE 23

Solution accuracy vs. Iterations (PFP)

Signal: π‘Œβˆˆβ€‹π‘Ίβ†‘

π‘Ίβ†‘π‘œΓ—π‘„ Β‘π‘œ=200 Β‘, ‘𝑄=50 Β‘(randomly chosen ∈C) mπ‘„Γ—π‘œπ‘„ i.i.d. Normal A, CS ratio=m/n (noiseless)

i.i.d. Normal A, CS ratio=m/n (noiseless)

​minβ”¬β€‹π‘Œβ†“π‘€π‘“π‘‘ βˆˆβˆπ‘˜β†‘β–’π· ​||π‘§βˆ’π΅(π‘Œ)||↑2 Β‘ Β‘ Β‘ ‘⇔ Β‘β€‹π‘Œβ†“π‘žβ†‘π‘™ =𝐡𝑂​𝑂↓𝐷 (​[β€‹π‘Œβ†‘π‘™βˆ’1 βˆ’πœˆβ€‹π΅β†‘πΌ (𝐡(β€‹π‘Œβ†‘π‘™ βˆ’1 )βˆ’π‘§)]β†“π‘ž ), Β‘βˆ€π‘ž Swiss roll β€‹πœ‰β†“π‘žβ†‘π‘™ =​𝑠↑𝑙

SLIDE 24

Solution accuracy vs. Iterations ((1+Ο‘)-ANN)

Signal: π‘Œβˆˆβ€‹π‘Ίβ†‘

π‘Ίβ†‘π‘œΓ—π‘„ Β‘π‘œ=200 Β‘, ‘𝑄=50 Β‘(randomly chosen ∈C) mπ‘„Γ—π‘œπ‘„ i.i.d. Normal A, CS ratio=m/n (noiseless)

i.i.d. Normal A, CS ratio=m/n (noiseless)

​minβ”¬β€‹π‘Œβ†“π‘€π‘“π‘‘ βˆˆβˆπ‘˜β†‘β–’π· ​||π‘§βˆ’π΅(π‘Œ)||↑2 Β‘ Β‘ Β‘ ‘⇔ Β‘β€‹π‘Œβ†“π‘žβ†‘π‘™ =​𝐡𝑂𝑂↓𝐷 (​[β€‹π‘Œβ†‘π‘™βˆ’1 βˆ’πœˆβ€‹π΅β†‘πΌ (𝐡(β€‹π‘Œβ†‘π‘™ βˆ’1 )βˆ’π‘§)]β†“π‘ž ), Β‘βˆ€π‘ž Swiss roll

SLIDE 25

Phase transitions

(1+πœ—)-ANN IPG

PFP-ANN IPG Β‘ Β‘ Β‘β€‹πœ‰β†“π‘žβ†‘π‘™ =​𝑠↑𝑙

Signal: π‘Œβˆˆβ€‹π‘Ίβ†‘

π‘Ίβ†‘π‘œΓ—π‘„ Β‘π‘œ=200 Β‘, ‘𝑄=50 Β‘(randomly chosen ∈C) mπ‘„Γ—π‘œπ‘„ i.i.d. Normal A, CS ratio=m/n (noiseless) ~ averaged 25trials

i.i.d. Normal A, CS ratio=m/n (noiseless) ~ averaged 25trials

Recovery PT: Black/white = low/high sol. nMSE, red curve = recovery region nMSE<10e-4)

SLIDE 26

Numerical experiments 2: MRF

SLIDE 27

MRF cone & EPI acquisition

  • K-space subsampling: Echo Planar Imaging (EPI)
  • Anatomical phantom {grey/white matter, CSF, muscle, skin}
  • Bloch eq. dictionary $\Psi \in \mathbb{C}^{512 \times \sim 50{,}000}$
  • $\min \|y - A(X)\|^2 \quad \text{s.t.} \quad X_p \in \mathrm{cone}(\Psi), \ \forall p$
  • 1. $Y_p = X_p^{k-1} - \mu\, A^H(A(X^{k-1}) - y)_p$
  • 2. $\omega_p = \mathcal{P}_{\mathrm{normalized}\{\Psi\}}\big( Y_p / \|Y_p\| \big)$
  • 3. $\rho_p = \langle Y_p, \omega_p \rangle$
  • 4. $X_p^k = \rho_p\, \omega_p, \quad \forall p$

$\Psi = [\mathcal{B}]_j$

(Figure: PD, T1 and T2 maps)
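Steps 1-4 above amount to a gradient step followed by a projection onto $\mathrm{cone}(\Psi)$: match against unit-norm fingerprints, then rescale by the correlation (the proton-density estimate). A minimal real-valued sketch (the actual MRF dictionary is complex-valued; the toy atoms below are made up):

```python
import numpy as np

def cone_project(Yp, Psi):
    """Project one pixel's time-series Yp onto cone(Psi): nearest unit-norm
    fingerprint direction, scaled by the (nonnegative) correlation rho.
    Assumes at least one atom correlates positively with Yp."""
    Pn = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)  # normalized fingerprints
    corr = Pn.T @ Yp                                       # matched-filter scores
    j = int(np.argmax(corr))                               # NN on the unit sphere
    rho = max(float(corr[j]), 0.0)                         # scale ~ proton density
    return rho * Pn[:, j], j, rho

Psi = np.array([[2.0, 0.0],
                [0.0, 1.0]])          # two toy "fingerprints"
Yp = np.array([3.0, 0.5])
x_p, j, rho = cone_project(Yp, Psi)
```

The matched atom index `j` is what maps back to the (T1, T2) parameters, while `rho` gives the PD map.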

SLIDE 28

Dominant cost → NN/ANN (since $A$ is an FFT). Projection cost = # matches calculated (i.e. visited nodes on the tree)

Accuracy vs. computation

(Figure: brute force vs. the CT's exact NN vs. ANN with Ο‘ ~ 0.2-0.8)

SLIDE 29

Summary

  • IPG robustness to inexact oracles (under an embedding assumption)
  • Linear convergence results:
  • PFP/(1+Ο‘) oracles: same final accuracy as exact IPG
  • PFP: same convergence rate as exact IPG
  • (1+Ο‘): stronger assumptions / sensitive to the conditioning of A
  • Implications in data driven CS (using ANN)
  • Cover trees for fast ANN: complexity ~ intrinsic dim(data)
  • ~$10^3\times$ faster parameter estimation in MRF

Thnx!